AI is at an inflection point, according to study

New research suggests that the world has reached a turning point on artificial intelligence, driven by the technology’s growing power and reach in our lives.

Alan Boyle at GeekWire reports today on “Gathering Strength, Gathering Storm,” a new report that examines the growing need for ethical AI and for human oversight of increasingly powerful computing applications. The report, he writes, is part of a project called the One Hundred Year Study on Artificial Intelligence, or “AI100.”

“AI100 was initiated by Eric Horvitz, Microsoft’s chief scientific officer, and hosted by the Stanford University Institute for Human-Centered Artificial Intelligence,” Boyle reports. “The project is funded by a gift from Horvitz, a Stanford alumnus, and his wife, Mary.”

Specifically mentioned in the report are deepfakes (AI-generated videos depicting events that never happened), as well as privacy intrusions and types of AI that can manipulate public opinion in fundamental ways.

“In the past five years, AI has made the leap from something that mostly happens in research labs or other highly controlled settings to something that’s out in society affecting people’s lives,” explains Brown University computer scientist Michael Littman in a press statement on the subject, as cited in Boyle’s coverage. “That’s really exciting, because this technology is doing some amazing things that we could only dream about five or ten years ago … But at the same time the field is coming to grips with the societal impact of this technology, and I think the next frontier is thinking about ways we can get the benefits from AI while minimizing the risks.”

Part of the problem, in the view of many experts, is the “black box” issue: AI systems become increasingly opaque to their human operators as they grow in capability and complexity.

“Many of the machine learning techniques that led to the current success of AI are based on artificial neural networks,” write scientists at the U.S. National Institutes of Health. “The features of these approaches that give rise to ethical concerns are opacity, unpredictability and the need for large datasets to train the technologies. Neither the developer, the deployer nor the user … can normally know in advance how the system will react to a given set of inputs. And because the system learns and is thus adaptive and dynamic, past behaviours are not a perfect predictor for future behaviour in identical situations.”
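To make that opacity concrete, here is a minimal, illustrative sketch in Python (using NumPy and scikit-learn; the code, data, and rule are our own assumptions for illustration, not anything drawn from the report or the NIH paper). A small neural network learns a simple hidden rule almost perfectly, yet its trained parameters are just matrices of numbers that nowhere state that rule:

# Illustrative sketch only (not from the report): a tiny neural network
# learns a hidden rule, but its trained weights do not reveal the rule.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(2000, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # hidden rule: the two inputs share a sign

model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
model.fit(X, y)
print("accuracy on training data:", model.score(X, y))

# The model works, yet its "explanation" consists only of weight matrices.
# Nothing printed below states the rule "x0 * x1 > 0"; an outside observer
# can only probe inputs and watch outputs -- the opacity described above.
for i, w in enumerate(model.coefs_):
    print(f"layer {i} weights, shape {w.shape}:")
    print(np.round(w[:2, :4], 2), "...")

The networks behind real-world AI systems carry millions or billions of such parameters rather than a few hundred, which is why, as the NIH authors note, even the developer cannot read the decision rule directly out of the trained system.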

What can companies do?

“Companies need a plan for mitigating risk — how to use data and develop AI products without falling into ethical pitfalls along the way,” writes Reid Blackman at Harvard Business Review. “Just like other risk-management strategies, an operationalized approach to data and AI ethics must systematically and exhaustively identify ethical risks throughout the organization, from IT to HR to marketing to product and beyond.”

Look for the effects of these phenomena in business, in society… and in markets!

 
