Developers of all types are involved in the increasing role of machine learning and artificial intelligence in the world today, so we asked developers what they think is dangerous and exciting about these technologies. There is not much consensus about what is most dangerous; each answer was chosen roughly equally often. The top choice for what is exciting about increasingly capable AI is the automation of jobs.
And, via Import AI:
People who identified as technical specialists tended to say they were more concerned about issues of fairness than about the singularity, whereas designers and mobile developers tended to be more concerned about the singularity.
What’s interesting, though, is that modern AI techniques like deep neural networks aren’t actually that well-suited for projects like Sheldon County. Ryan says he mainly uses what’s sometimes called symbolic AI or, pejoratively, “good old-fashioned AI.” This approach is less about mining data to look for patterns, as with deep learning, and more about creating sets of rules and logical instructions that guide a process.
There are some simple reasons modern AI doesn’t work for tasks like this, says Riedl. It’s partly that techniques like deep learning still aren’t good at generating coherent text (even the most advanced chatbots today rely on preprogrammed phrases), and partly that older techniques give programmers more control over the output.
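To make the contrast concrete, here is a minimal sketch of the “good old-fashioned” rule-based approach the excerpt describes: instead of learning patterns from data, the author hand-writes expansion rules (a small context-free grammar), which gives direct control over every possible output. The grammar contents, names, and structure below are hypothetical illustrations, not part of the Sheldon County system.

```python
import random

# Hand-written rules: each symbol maps to a list of possible expansions.
# Uppercase tokens are nonterminals that get recursively expanded;
# everything else is emitted as-is. All content here is invented for
# illustration only.
GRAMMAR = {
    "STORY": ["In Sheldon County, CHARACTER EVENT."],
    "CHARACTER": ["a farmer", "the new sheriff", "an old preacher"],
    "EVENT": ["discovered OBJECT", "argued with CHARACTER about OBJECT"],
    "OBJECT": ["a hidden well", "a stolen ledger", "an unpaid debt"],
}

def expand(token: str, rng: random.Random) -> str:
    """Recursively replace grammar symbols with randomly chosen expansions."""
    core = token.rstrip(".,")          # keep trailing punctuation separate
    suffix = token[len(core):]
    if core not in GRAMMAR:
        return token                   # terminal word: emit unchanged
    rule = rng.choice(GRAMMAR[core])
    return " ".join(expand(t, rng) for t in rule.split()) + suffix

print(expand("STORY", random.Random(0)))
```

Because every sentence the generator can produce is reachable only through these explicit rules, the programmer retains the control over output that the excerpt attributes to symbolic AI, at the cost of having to author the rules by hand.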
Academics, economists, and AI researchers often undervalue the role of intuition in science. Here’s why they’re wrong.
Source: Can AI Ever Learn To Follow Its Gut? — Wired
AI is unlike any of our previous art-making technologies. Working with AI, artists can harness chaos and complexity to find unexpected signals and beauty in the noise. We can parse, recode, and connect to values and patterns that exceed our grasp. AI can provide extraordinary precision tools for artists who are, on the whole, perhaps better suited to tangential and divergent thinking.
Google is providing assistance to the Defense Department’s new algorithmic warfare initiative to apply artificial intelligence solutions to drone targeting.
This reading list is made for engineers, scientists, designers, policy makers, and anyone interested in machine learning and AI. It’s an open-ended document that examines machine learning as a sociotechnical system and contextualises its critical discourse.
In the coming months, NYC Mayor Bill de Blasio will announce a new task force on “Automated Decision Systems” — the first of its kind in…
While these systems are already influencing important decisions, there is still no clear framework in the US to ensure that they are monitored and held accountable.¹ Indeed, even many simple systems operate as “black boxes,” as they are outside the scope of meaningful scrutiny and accountability. This is worrying. If governments continue on this path, they and the public they serve will increasingly lose touch with how decisions have been made, thus rendering them unable to know or respond to bias, errors, or other problems. The urgency of this concern is why AI Now has called for an end to the use of black box systems in core public agencies. Black boxes must not prevent agencies from fulfilling their responsibility to protect basic democratic values, such as fairness and due process, and to guard against threats like illegal discrimination or deprivation of rights.
This is a rather difficult “entity” to design an identity for — it’s not an identity for a restaurant, a company selling shoes, or even a telco — as there are few points of reference or comparison for anyone involved (from client, to designer, to audience). This is similar to IBM Watson’s identity in that it has to give voice and personality to an ambiguous thinking brain making decisions, while trying to make it marketable at a time when there aren’t that many mass-market artificial intelligence platforms to compare against.
Our problem isn’t that Artificial Intelligence is getting better at being human, it’s that human beings are getting worse at it.
Using Google Clips to understand how a human-centered design process elevates artificial intelligence
As was the case with the mobile revolution, and the web before that, machine learning will cause us to rethink, restructure, and reconsider what’s possible in virtually every experience we build. In the Google UX community, we’ve started an effort called “human-centered machine learning” to help focus and guide that conversation. Using this lens, we look across products to see how machine learning (ML) can stay grounded in human needs while solving for them—in ways that are uniquely possible through ML. Our team at Google works across the company to bring UXers up to speed on core ML concepts, understand how to best integrate ML into the UX utility belt, and ensure we’re building ML and AI in inclusive ways.
Source: The UX of AI