According to reports, “Timnit Gebru, a co-leader of the Ethical Artificial Intelligence team at Google, said she was fired for sending an email that management deemed ‘inconsistent with the expectations of a Google manager.’
The email and the firing were the culmination of about a week of wrangling over the company’s request that Gebru retract an AI ethics paper she had co-written with six others, including four Google employees, that was submitted for consideration for an industry conference next year, Gebru said in an interview Thursday. If she wouldn’t retract the paper, Google at least wanted the names of the Google employees removed.
Gebru asked Google Research vice president Megan Kacholia for an explanation and told her that, without more discussion of the paper and the way it was handled, she planned to resign after a transition period. She also wanted to be clear on what would happen with future, similar research projects her team might undertake.
Meanwhile, Gebru had chimed in on an email group for company researchers called Google Brain Women and Allies, commenting on a report others were working on about how few women had been hired by Google during the pandemic. Gebru said that in her experience, such documentation was unlikely to be effective as no one at Google would be held accountable. She referred to her experience with the AI paper submitted for the conference and linked to the report.
The next day, Gebru said, she was fired by email, with a message from Kacholia saying that Google couldn’t meet her demands and respected her decision to leave the company as a result. The email went on to say that ‘certain aspects of the email you sent last night to non-management employees in the brain group reflect behavior that is inconsistent with the expectations of a Google manager.’ Representatives for Mountain View, California-based Google didn’t reply to multiple requests for comment.
The research paper in question deals with possible ethical issues of large language models, a field of research being pursued by OpenAI, Google and others. Gebru said she doesn’t know why Google had concerns about the paper, which she said was approved by her manager and submitted to others at Google for comment.
Google requires that all publications by its researchers be approved before release, Gebru said, and the company told her this paper had not followed the proper procedure. The paper was submitted to the ACM Conference on Fairness, Accountability, and Transparency, to be held in March, a conference Gebru co-founded.
The paper called out the dangers of using large language models to train algorithms that could, for example, write tweets, answer trivia questions and translate poetry, according to a copy of the document. The models are essentially trained by analyzing language from the internet, which doesn’t reflect the large swaths of the global population that are not yet online, according to the paper. Gebru highlighted the risk that the models will reflect only the worldview of people privileged enough to be part of the training data.”