While AI impacts a plethora of rights, ARTICLE 19 and Privacy International are particularly concerned about the impact it will have on the right to privacy and the right to freedom of expression and information.
This scoping paper focuses on applications of ‘artificial narrow intelligence’: in particular, machine learning and its implications for human rights.
The aim of the paper is fourfold:
1. Present key technical definitions to clarify the debate;
2. Examine key ways in which AI impacts the right to freedom of expression and the right to privacy and outline key challenges;
3. Review the current landscape of AI governance, including various existing legal, technical, and corporate frameworks and industry-led AI initiatives that are relevant to freedom of expression and privacy; and
4. Provide initial suggestions for rights-based solutions which can be pursued by civil society organisations and other stakeholders in AI advocacy activities.
We believe that policy and technology responses in this area must:
— Ensure protection of human rights, in particular the right to freedom of expression and the right to privacy;
— Ensure accountability and transparency of AI;
— Encourage governments to review the adequacy of any legal and policy frameworks, and regulations on AI with regard to the protection of freedom of expression and privacy;
— Be informed by a holistic understanding of the impact of the technology: case studies and empirical research on the impact of AI on human rights must be collected; and
— Be developed in collaboration with a broad range of stakeholders, including civil society and expert networks.
If our taste is dictated by data-fed algorithms controlled by massive tech corporations, then we must be content to classify ourselves as slavish followers of robots.
Source: What Does The Amazon Echo Look Mean For Personal Style? – Racked
That’s why, at the M.I.T. Media Lab, we are starting to refer to such technology as “extended intelligence” rather than “artificial intelligence.” The term “extended intelligence” better reflects the expanding relationship between humans and society, on the one hand, and technologies like AI, blockchain, and genetic engineering on the other. Think of it as the principle of bringing society or humans into the loop.
Source: A.I. Engineers Must Open Their Designs To Democratic Control
Sci-fi, my reasoning goes, plays an informal and largely unacknowledged role in setting public expectations and understanding about technology in general and AI in particular. That, in turn, affects public attitudes, conversations, behaviors at work, and votes. If we found that sci-fi was telling the public misleading stories over and over, we should make a giant call for the sci-fi creating community to consider telling new stories. It’s not that we want to change sci-fi from being entertainment to being propaganda, but rather to try and take its role as informal opinion-shaper more seriously.
Looks like the start of an interesting set of posts.
Source: Untold AI
But what if we’re worrying about the wrong thing, like we have almost every single time before? What if the real danger of AI was far remote from the “superintelligence” and “singularity” narratives that many are panicking about today? In this post, I’d like to raise awareness about what really worries me when it comes to AI: the highly effective, highly scalable manipulation of human behavior that AI enables, and its malicious use by corporations and governments.
Source: What worries me about AI – François Chollet – Medium
Developers are increasingly involved in the growing role of machine learning and artificial intelligence in the world today, so we asked them what they think is dangerous and exciting about these technologies. There is not much consensus among developers about what is most dangerous; each answer was chosen roughly equally. The top choice for what is exciting about increasing AI is that jobs can be automated.
And via Import AI:
People that identified as technical specialists tended to say they were more concerned about issues of fairness than the singularity, whereas designers and mobile developers tended to be more concerned about the singularity.
Source: Stack Overflow Developer Survey 2018
What’s interesting, though, is that modern AI techniques like deep neural networks aren’t actually that well-suited for projects like Sheldon County. Ryan says he mainly uses what’s sometimes called symbolic AI or, pejoratively, “good old-fashioned AI.” This approach is less about mining data to look for patterns, as with deep learning, and more about creating sets of rules and logical instructions that guide a process.
There are some simple reasons modern AI doesn't work for tasks like this, says Riedl. It's partly that techniques like deep learning still aren't good at generating coherent text (even the most advanced chatbots today rely on preprogrammed phrases), and partly that older techniques give programmers more control over the output.
Source: What an ‘infinite’ AI-generated podcast can tell us about the future of entertainment – The Verge
Academics, economists, and AI researchers often undervalue the role of intuition in science. Here’s why they’re wrong.
Source: Can AI Ever Learn To Follow Its Gut? — Wired
AI is unlike any of our previous art-making technologies. Working with AI, artists can harness chaos and complexity to find unexpected signals and beauty in the noise. We can parse, recode, and connect to values and patterns that exceed our grasp. AI can provide extraordinary precision tools for artists who are, on the whole, perhaps better suited to tangential and divergent thinking.
Source: AI will be the art movement of the 21st century