We’re announcing seven principles to guide our work in AI.
– “Be socially beneficial”.
– “Avoid creating or reinforcing unfair bias”.
– “Be built and tested for safety”.
– “Be accountable to people”.
– “Incorporate privacy design principles”.
– “Uphold high standards of scientific excellence”.
– “Be made available for uses that accord with these principles”.
Source: AI at Google: our principles
In “The Efficiency Paradox,” Edward Tenner considers why technologies intended to improve our lives often end up complicating them instead.
“Silicon Valley’s mistake is not in developing efficient algorithms from which we all benefit, but in encouraging the illusion that algorithms can and should function in the absence of human skills.”
Source: What Silicon Valley Could Use More of: Inefficiency
The hotness of the moment is machine learning, a subfield of AI.
Even worse, when you look under the rock at all the machine learning, you see a horrible nest of mathematics: Squiggling brackets and functions and matrices scatter. Software FAQs, PDFs, Medium posts all spiral into equations. Do I need to understand the difference between a sigmoid function and tanh? Can’t I just turn a dial somewhere?
It all reminds me of Linux and the web in the 1990s: a sense of wonderful possibility if you could just scale the wall of jargon. And of course it’s worth learning, because it works.
Source: I Tried to Get an AI to Write This Story
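The quote above asks whether you really need to know the difference between a sigmoid function and tanh. As a minimal sketch (not from the source article), the two are closely related squashing functions: sigmoid maps any real number into (0, 1), while tanh maps into (-1, 1) and is just a rescaled sigmoid.

```python
import math

def sigmoid(x):
    # Squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Squashes any real number into the range (-1, 1)
    return math.tanh(x)

# The two are related by: tanh(x) = 2 * sigmoid(2x) - 1,
# which is why they are often interchangeable up to scaling.
for x in (-2.0, 0.0, 2.0):
    print(f"x={x:+.1f}  sigmoid={sigmoid(x):.3f}  tanh={tanh(x):+.3f}")
```

So the honest answer to the quote's question is: a little, but the wall of jargon hides fairly small ideas.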
But while the home crowd cheered enthusiastically at how capable Google had seemingly made its prototype robot caller — with Pichai going on to sketch a grand vision of the AI saving people and businesses time — the episode is worryingly suggestive of a company that views ethics as an after-the-fact consideration, one it does not allow to trouble the trajectory of its engineering ingenuity.
Source: Duplex shows Google failing at ethical and creative AI design
…if this technology becomes widespread, it will have other, more subtle effects, the type which can’t be legislated against. Writing for The Atlantic, Alexis Madrigal suggests that small talk — either during phone calls or conversations on the street — has an intangible social value. He quotes urbanist Jane Jacobs, who says “casual, public contact at a local level” creates a “web of public respect and trust.” What do we lose if we give people another option to avoid social interactions, no matter how minor?
Source: Google’s AI sounds like a human on the phone — should we be worried?
Awful AI is a curated list to track current scary usages of AI – hoping to raise awareness of its misuses in society.
Artificial intelligence in its current state is unfair and easily susceptible to attacks. Nevertheless, more and more concerning uses of AI technology are appearing in the wild. This list aims to track all of them. We hope that Awful AI can be a platform to spur discussion for the development of possible contestational technology (to fight back!).
Source: GitHub – daviddao/awful-ai: Awful AI is a curated list to track current scary usages of AI – hoping to raise awareness
The UK is in a strong position to be a world leader in the development of artificial intelligence (AI). This position, coupled with the wider adoption of AI, could deliver a major boost to the economy for years to come. The best way to do this is to put ethics at the centre of AI’s development and use, concludes a report by the House of Lords Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able?, published today.
Source: UK can lead the way on ethical AI, says Lords Committee – News from Parliament
While AI impacts a plethora of rights, ARTICLE 19 and Privacy International are particularly concerned about the impact it will have on the right to privacy and the right to freedom of expression and information.
This scoping paper focuses on applications of ‘artificial narrow intelligence’: in particular, machine learning and its implications for human rights.
The aim of the paper is fourfold:
1. Present key technical definitions to clarify the debate;
2. Examine key ways in which AI impacts the right to freedom of expression and the right to privacy and outline key challenges;
3. Review the current landscape of AI governance, including various existing legal, technical, and corporate frameworks and industry-led AI initiatives that are relevant to freedom of expression and privacy; and
4. Provide initial suggestions for rights-based solutions which can be pursued by civil society organisations and other stakeholders in AI advocacy activities.
We believe that policy and technology responses in this area must:
— Ensure protection of human rights, in particular the right to freedom of expression and the right to privacy;
— Ensure accountability and transparency of AI;
— Encourage governments to review the adequacy of any legal and policy frameworks, and regulations on AI with regard to the protection of freedom of expression and privacy;
— Be informed by a holistic understanding of the impact of the technology: case studies and empirical research on the impact of AI on human rights must be collected; and
— Be developed in collaboration with a broad range of stakeholders, including civil society and expert networks.
If our taste is dictated by data-fed algorithms controlled by massive tech corporations, then we must be content to classify ourselves as slavish followers of robots.
Source: What Does The Amazon Echo Look Mean For Personal Style? – Racked
That’s why, at the M.I.T. Media Lab, we are starting to refer to such technology as “extended intelligence” rather than “artificial intelligence.” The term “extended intelligence” better reflects the expanding relationship between humans and society, on the one hand, and technologies like AI, blockchain, and genetic engineering on the other. Think of it as the principle of bringing society or humans into the loop.
Source: A.I. Engineers Must Open Their Designs To Democratic Control