Google pledges changes to research oversight after internal revolt

Alphabet Inc’s Google will change procedures before July for reviewing its scientists’ work, according to a town hall recording heard by Reuters, part of an effort to quell internal tumult over the integrity of its artificial intelligence (AI) research.

FILE PHOTO: The Google name is displayed outside the company’s office in London, Britain November 1, 2018. REUTERS/Toby Melville

In remarks at a staff meeting last Friday, Google Research executives said they were working to regain trust after the company ousted two prominent women and rejected their work, according to an hour-long recording, the content of which was confirmed by two sources.

Teams are already trialing a questionnaire that will assess projects for risk and help scientists navigate reviews, research unit Chief Operating Officer Maggie Johnson said in the meeting. This initial change will roll out by the end of the second quarter, and the majority of papers will not require extra vetting, she said.

Reuters reported in December that Google had introduced a sensitive topics review for studies involving dozens of issues, such as China or bias in its services. Internal reviewers had demanded that at least three papers on AI be modified to refrain from casting Google technology in a negative light, Reuters reported.

Jeff Dean, Google’s senior vice president overseeing the division, said Friday that the sensitive topics review “is and was confusing” and that he had tasked a senior research director, Zoubin Ghahramani, with clarifying the rules, according to the recording.

Ghahramani, a University of Cambridge professor who joined Google in September from Uber Technologies Inc, said during the town hall, “We need to be comfortable with that discomfort of self-critical research.”

Google declined to comment on the Friday meeting.

An internal email, seen by Reuters, offered fresh detail on Google researchers’ concerns, showing exactly how Google’s legal department had modified one of the three AI papers, called “Extracting Training Data from Large Language Models.” (bit.ly/3dL0oQj)

The email, dated Feb. 8, from a co-author of the paper, Nicholas Carlini, went to hundreds of colleagues, seeking to draw their attention to what he called “deeply insidious” edits by company lawyers.
