If you could choose one photo to represent "machine learning", what would it be?
I'm sick of pulsing brains of 1s and 0s or people standing around a chalk board.
— Hilary Mason (@hmason) November 28, 2017
The best way to maximize the impact of any technology is to make it as accessible as possible. Only then will AI begin to creep into ordinary offices and workplaces. DataRobot is already being used in some of those settings.
Yet all the advancements exhibited at the event, the World Internet Conference, in the picturesque eastern Chinese city of Wuzhen, also offered reason for caution. The technology enabling a full techno-police state was on hand, giving a glimpse into how new advances in things like artificial intelligence and facial recognition can be used to track citizens — and how they have become widely accepted here.
At the moment, only about 12–15% of the engineers who are building the internet and its software are women.
“Siri and Alexa remain either evasive, grateful, or flirtatious, while Cortana and Google Home crack jokes in response to the harassments they comprehend.”
Artificial intelligence is already transforming charity, humanitarian relief, and international development. Since people donate more to causes that receive more attention, the AIs that rank news on social media already influence the flow of charitable donations.
Law enforcement’s embrace of A.I. and its built-in racial biases terrifies me.
I’m terrified for what these advances mean for my two young children. The same technology that’s the source of so much excitement in my career is being used in law enforcement in ways that could mean that in the coming years, my son, who is 7 now, is more likely to be profiled or arrested — or worse — for no reason other than his race and where we live.
Source: Opinion | ‘Intelligent’ Policing and My Innocent Children – New York Times
Facebook first introduced its AI-based suicide prevention tools earlier this year. But until now, those tools still required a user, or one of their friends, to seek help. Now the social network says the tech has advanced to the point that it can proactively intervene when it detects that someone may be at risk of self harm or suicide — even if no one else has made a report.
As machine learning becomes more powerful, the field’s researchers increasingly find themselves unable to account for what their algorithms know — or how they know it.
To create a neural net that can reveal its inner workings, the researchers in Gunning’s portfolio are pursuing a number of different paths. Some of these are technically ingenious — for example, designing new kinds of deep neural networks made up of smaller, more easily understood modules, which can fit together like Legos to accomplish complex tasks. Others involve psychological insight: One team at Rutgers is designing a deep neural network that, once it makes a decision, can then sift through its data set to find the example that best demonstrates why it made that decision. (The idea is partly inspired by psychological studies of real-life experts like firefighters, who don’t clock in for a shift thinking, These are the 12 rules for fighting fires; when they see a fire before them, they compare it with ones they’ve seen before and act accordingly.) Perhaps the most ambitious of the dozen different projects are those that seek to bolt new explanatory capabilities onto existing deep neural networks. Imagine giving your pet dog the power of speech, so that it might finally explain what’s so interesting about squirrels. Or, as Trevor Darrell, a lead investigator on one of those teams, sums it up, “The solution to explainable A.I. is more A.I.”
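The Rutgers approach described above — explaining a decision by retrieving the most similar case the model has seen before — can be sketched in a few lines. This is a minimal, hypothetical illustration, not the team's actual system: it assumes the model exposes an embedding function, and uses plain nearest-neighbor distance in that embedding space to pick the "example that best demonstrates why" a decision was made.

```python
import numpy as np

def explain_by_example(embed, train_inputs, query):
    """Example-based explanation sketch: return the index and value of the
    training instance whose embedding lies closest to the query's embedding.

    `embed` is assumed to map an input to a fixed-length vector (e.g. the
    activations of a model's penultimate layer).
    """
    emb_train = np.array([embed(x) for x in train_inputs])  # (n, d) matrix
    emb_query = embed(query)
    # Euclidean distance from the query to every training embedding.
    dists = np.linalg.norm(emb_train - emb_query, axis=1)
    idx = int(np.argmin(dists))
    return idx, train_inputs[idx]

# Toy demo: the "embedding" is just the identity on 2-D points.
train = [np.array([0.0, 0.0]), np.array([5.0, 5.0]), np.array([1.0, 1.0])]
idx, nearest = explain_by_example(lambda x: x, train, np.array([0.9, 1.2]))
# idx is 2: the point [1.0, 1.0] is the closest precedent to [0.9, 1.2].
```

The design echoes the firefighter analogy: rather than articulating rules, the system answers "which prior case does this most resemble?" — a form of explanation people already find natural.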
Source: Mapping’s Intelligent Agents
Technologists once told us that social bots would change our lives forever. They were right — but not in the way they expected.
Less transparent social bots — primarily on Twitter and other social platforms — posed a risk to media and discourse. “Platforms, governments and citizens must step in and consider the purpose — and future — of bot technology before manipulative anonymity becomes a hallmark of the social bot,” the authors cautioned.
This warning wasn’t just a prediction; it was based on observation. Anonymous bots masquerading as citizen and political actors had been a creeping feature in foreign elections for years. The 2012 election of President Enrique Peña Nieto of Mexico was supported by armies of automated social-media accounts, which flooded Twitter with supportive messages. “Peñabots” remained a feature of online Mexican political discourse through at least 2015. But bots hadn’t yet run rampant on American tech companies’ home turf. Manipulation by A.I. was typically seen as “something that was happening somewhere else,” M.C. Elish, a researcher at Data & Society who contributed to the report, told me. “We only notice something when it’s arrived on our doorstep.”
Source: Not the Bots We Were Looking For – New York Times