In a recent letter to Amazon CEO Jeff Bezos, the Congressional Black Caucus expressed concern about the “profound negative unintended consequences” face surveillance could have for Black people, undocumented immigrants, and protesters. Our results validate this concern: Nearly 40 percent of Rekognition’s false matches in our test were of people of color, even though they make up only 20 percent of Congress.
Nature talks to Brent Hecht, who says peer reviewers must ensure that researchers consider negative societal consequences of their work.
Amazon’s Alexa and Google’s Assistant are spearheading a voice-activated revolution, rapidly changing the way millions of people around the world learn new things and plan their lives.
But for people with accents — even the regional lilts, dialects and drawls native to various parts of the United States — the artificially intelligent speakers can seem very different: inattentive, unresponsive, even isolating. For many across the country, the wave of the future has a bias problem, and it’s leaving them behind.
The IEEE Standards Association (IEEE-SA) and the MIT Media Lab are joining forces to launch a global Council on Extended Intelligence (CXI).
CXI was created to promote the ideals of responsible participant design, data agency, and metrics of economic prosperity that prioritize people and the planet over profit and productivity.
Our pop culture visions of A.I. are not helping us. In fact, they’re hurting us. They’re decades out of date. And to make matters worse, we keep using the old clichés in order to talk about emerging technologies today. They make it harder for us to understand A.I. — what it is, what it isn’t, and what impact it will have on our lives. When we don’t understand A.I., then we don’t understand the power differentials at play. We won’t learn to ask questions that could lead to better A.I. in the future—and better clichés today. Let’s lay the ghosts and cyborgs to rest and find a real way to communicate about A.I.
Philosophically, intellectually—in every way—human society is unprepared for the rise of artificial intelligence.
Ultimately, the term artificial intelligence may be a misnomer. To be sure, these machines can solve complex, seemingly abstract problems that had previously yielded only to human cognition. But what they do uniquely is not thinking as heretofore conceived and experienced. Rather, it is unprecedented memorization and computation. Because of its inherent superiority in these fields, AI is likely to win any game assigned to it. But for our purposes as humans, the games are not only about winning; they are about thinking. By treating a mathematical process as if it were a thought process, and either trying to mimic that process ourselves or merely accepting the results, we are in danger of losing the capacity that has been the essence of human cognition.
Source: How the Enlightenment Ends
We’re announcing seven principles to guide our work in AI.
– “Be socially beneficial”.
– “Avoid creating or reinforcing unfair bias”.
– “Be built and tested for safety”.
– “Be accountable to people”.
– “Incorporate privacy design principles”.
– “Uphold high standards of scientific excellence”.
– “Be made available for uses that accord with these principles”.
Source: AI at Google: our principles
In “The Efficiency Paradox,” Edward Tenner considers why technologies intended to improve our lives often end up complicating them instead.
“Silicon Valley’s mistake is not in developing efficient algorithms from which we all benefit, but in encouraging the illusion that algorithms can and should function in the absence of human skills.”
The hotness of the moment is machine learning, a subfield of AI.
Even worse, when you look under the rock at all the machine learning, you see a horrible nest of mathematics: Squiggling brackets and functions and matrices scatter. Software FAQs, PDFs, Medium posts all spiral into equations. Do I need to understand the difference between a sigmoid function and tanh? Can’t I just turn a dial somewhere?
It all reminds me of Linux and the web in the 1990s: a sense of wonderful possibility if you could just scale the wall of jargon. And of course it’s worth learning, because it works.
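As it happens, the two activation functions the author shrugs at differ less than the jargon suggests: tanh is just a rescaled sigmoid. A minimal sketch in Python (standard library only, no ML framework assumed):

```python
import math

def sigmoid(x):
    # Logistic sigmoid: squashes any real number into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Hyperbolic tangent: squashes into (-1, 1). It is a scaled,
    # shifted sigmoid: tanh(x) = 2 * sigmoid(2x) - 1
    return math.tanh(x)
```

In practice the choice mostly affects the output range and gradient behavior, not whether the network works at all, which is part of why "just turn a dial" is closer to the truth than the equations make it look.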
But while the home crowd cheered enthusiastically at how capable Google had seemingly made its prototype robot caller — with Pichai going on to sketch a grand vision of the AI saving people and businesses time — the episode is worryingly suggestive of a company that views ethics as an after-the-fact consideration, one it does not allow to trouble the trajectory of its engineering ingenuity.