But what if we’re worrying about the wrong thing, as we have almost every single time before? What if the real danger of AI were far removed from the “superintelligence” and “singularity” narratives that many are panicking about today? In this post, I’d like to raise awareness about what really worries me when it comes to AI: the highly effective, highly scalable manipulation of human behavior that AI enables, and its malicious use by corporations and governments.
Prime Minister Édouard Philippe tasked Cédric Villani with a mission on artificial intelligence, with the goal of laying the foundations of an ambitious French strategy in the AI field.
Given the fast-changing nature of AI technologies and practices, our society has a collective duty to be aware of and discuss the issues this raises. This is especially relevant for fragile populations and groups already excluded from the digital sector, for whom AI represents an even greater danger.
AI could lead to a better, fairer and more efficient society, or it could lead to wealth being concentrated in the hands of a very small group of digital elites. Therefore, in the AI field, inclusive policies must seek to attain two goals: ensure that the development of these technologies does not increase social and economic inequalities, and use AI to help reduce these inequalities.
Source: AI for humanity
Many developers are involved in the increasing role of machine learning and artificial intelligence in the world today, so we asked developers what they think is dangerous and exciting about these technologies. There is not much consensus among developers about what is most dangerous; each answer was chosen roughly equally. The top choice for what is exciting about increasing AI is that jobs can be automated.
And via Import AI:
People who identified as technical specialists tended to say they were more concerned about issues of fairness than the singularity, whereas designers and mobile developers tended to be more concerned about the singularity.
What’s interesting, though, is that modern AI techniques like deep neural networks aren’t actually that well-suited for projects like Sheldon County. Ryan says he mainly uses what’s sometimes called symbolic AI or, pejoratively, “good old-fashioned AI.” This approach is less about mining data to look for patterns, as with deep learning, and more about creating sets of rules and logical instructions that guide a process.
There are some simple reasons modern AI techniques don’t work for tasks like this, says Riedl. It’s partly that techniques like deep learning still aren’t good at generating coherent text (even the most advanced chatbots today rely on preprogrammed phrases), and partly that older techniques give programmers more control over the output.
Academics, economists, and AI researchers often undervalue the role of intuition in science. Here’s why they’re wrong.
Source: Can AI Ever Learn To Follow Its Gut? — Wired
AI is unlike any of our previous art-making technologies. Working with AI, artists can harness chaos and complexity to find unexpected signals and beauty in the noise. We can parse, recode, and connect to values and patterns that exceed our grasp. AI can provide extraordinary precision tools for artists who are, on the whole, perhaps better suited to tangential and divergent thinking.
Google is providing assistance to the Defense Department’s new algorithmic warfare initiative to apply artificial intelligence solutions to drone targeting.
This reading list is made for engineers, scientists, designers, policy makers, and anyone interested in machine learning and AI. It’s an open-ended document that examines machine learning as a sociotechnical system and contextualises its critical discourse.
In the coming months, NYC Mayor Bill de Blasio will announce a new task force on “Automated Decision Systems” — the first of its kind in…
While these systems are already influencing important decisions, there is still no clear framework in the US to ensure that they are monitored and held accountable.¹ Indeed, even many simple systems operate as “black boxes,” as they are outside the scope of meaningful scrutiny and accountability. This is worrying. If governments continue on this path, they and the public they serve will increasingly lose touch with how decisions have been made, thus rendering them unable to know or respond to bias, errors, or other problems. The urgency of this concern is why AI Now has called for an end to the use of black box systems in core public agencies. Black boxes must not prevent agencies from fulfilling their responsibility to protect basic democratic values, such as fairness and due process, and to guard against threats like illegal discrimination or deprivation of rights.
This is a rather difficult “entity” to design an identity for — it’s not an identity for a restaurant, a company selling shoes, or even a telco — as there are few points of reference or comparison for anyone involved (from client, to designer, to audience). It is similar to IBM Watson’s identity in that it has to give voice and personality to an ambiguous thinking brain making decisions, while trying to make it marketable at a time when there aren’t many mass-market artificial intelligence platforms to compare against.