Google asking scientists to alter AI-related content?

Google, a global mega-corporation that once based its operations on the slogan “don’t be evil,” is under fire once again for what critics call its “sensitive topics” review, a process in which, they contend, top brass pressure scientists to change the way they talk about the company’s technologies.

Reuters reports today on disturbing whistleblower accounts from people who have seen drafts of Google research papers that were eventually rewritten and published in a very different form.

Here’s how Paresh Dave and Jeffrey Dastin at Reuters recount one of the anecdotes at the center of this controversy:

“A senior Google manager reviewing a study on content recommendation technology shortly before publication this summer told authors to ‘take great care to strike a positive tone,’ according to internal correspondence read to Reuters,” the pair report. “The manager added, ‘This doesn’t mean we should hide from the real challenges’ posed by the software.”

In addition to this instance of doublespeak, the duo shows how one document was allegedly altered prior to publication:

“A draft reviewed by Reuters included ‘concerns’ that this technology can promote ‘disinformation, discriminatory or otherwise unfair results’ and ‘insufficient diversity of content,’ as well as lead to ‘political polarization’ … The final publication instead says the systems can promote ‘accurate information, fairness, and diversity of content.’ The published version, entitled ‘What are you optimizing for? Aligning Recommender Systems with Human Values,’ omitted credit to Google researchers. Reuters could not determine why.”

Researchers inside the company have voiced their own concerns.

“If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship,” Google senior research scientist Margaret Mitchell told the reporters.

Analysts elsewhere are casting a wary eye toward the whitewashing of artificial intelligence technologies in general. In an article for MIT Technology Review, Karen Hao suggests Google created a “nominal AI ethics board” that didn’t really have the teeth to promote an ethical standard.

“The need for greater ethical responsibility has only grown more urgent,” Hao writes, ringing the alarm bells in a very direct way. “The same advancements made in GANs in 2018 have led to the proliferation of hyper-realistic deepfakes, which are now being used to target women and erode people’s belief in documentation and evidence. New findings have shed light on the massive climate impact of deep learning, but organizations have continued to train ever larger and more energy-guzzling models. Scholars and journalists have also revealed just how many humans are behind the algorithmic curtain. The AI industry is creating an entirely new class of hidden laborers—content moderators, data labelers, transcribers—who toil away in often brutal conditions.”

Ethical artificial intelligence is a big deal for a number of reasons. If companies can’t get past putting an optimistic spin on assessments of their own creations, we’re likely to run into significant problems down the road. For investors, it’s important to look critically at the industry and evaluate whether the companies and processes you’re invested in will be part of the solution or part of the problem.
