The hotness of the moment is machine learning, a subfield of AI.
Even worse, when you look under the rock at all the machine learning, you see a horrible nest of mathematics: squiggly brackets, functions, and matrices scattered everywhere. Software FAQs, PDFs, and Medium posts all spiral into equations. Do I need to understand the difference between a sigmoid function and tanh? Can’t I just turn a dial somewhere?
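For what it's worth, the sigmoid-versus-tanh question the author shrugs at has a small answer: sigmoid squashes any number into (0, 1), tanh into (-1, 1), and tanh is just a shifted, rescaled sigmoid. A minimal sketch using only the standard library:

```python
import math

def sigmoid(x):
    # squashes any real number into the interval (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # squashes any real number into the interval (-1, 1)
    return math.tanh(x)

# The two are related by the identity tanh(x) = 2*sigmoid(2x) - 1,
# so choosing between them mostly amounts to choosing an output range.
for x in (-2.0, 0.0, 2.0):
    assert abs(tanh(x) - (2.0 * sigmoid(2.0 * x) - 1.0)) < 1e-12
```

So no, you can't quite turn a dial, but the difference really is that small.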
It all reminds me of Linux and the web in the 1990s: a sense of wonderful possibility if you could just scale the wall of jargon. And of course it’s worth learning, because it works.
But while the home crowd cheered enthusiastically at how capable Google had seemingly made its prototype robot caller — with Pichai going on to sketch a grand vision of the AI saving people and businesses time — the episode is worryingly suggestive of a company that views ethics as an after-the-fact consideration, one it does not allow to trouble the trajectory of its engineering ingenuity.
…if this technology becomes widespread, it will have other, more subtle effects, the type which can’t be legislated against. Writing for The Atlantic, Alexis Madrigal suggests that small talk — either during phone calls or conversations on the street — has an intangible social value. He quotes urbanist Jane Jacobs, who says “casual, public contact at a local level” creates a “web of public respect and trust.” What do we lose if we give people another option to avoid social interactions, no matter how minor?
Awful AI is a curated list that tracks current scary uses of AI, in the hope of raising awareness of its misuses in society.
Artificial intelligence in its current state is unfair and easily susceptible to attacks. Nevertheless, more and more concerning uses of AI technology are appearing in the wild. This list aims to track all of them. We hope that Awful AI can be a platform to spur discussion for the development of possible contestational technology (to fight back!).
The UK is in a strong position to be a world leader in the development of artificial intelligence (AI). This position, coupled with the wider adoption of AI, could deliver a major boost to the economy for years to come. The best way to do this is to put ethics at the centre of AI’s development and use, concludes a report by the House of Lords Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able?, published today.
While AI impacts a plethora of rights, ARTICLE 19 and Privacy International are particularly concerned about the impact it will have on the right to privacy and the right to freedom of expression and information.
This scoping paper focuses on applications of ‘artificial narrow intelligence’: in particular, machine learning and its implications for human rights.
The aim of the paper is fourfold:
1. Present key technical definitions to clarify the debate;
2. Examine key ways in which AI impacts the right to freedom of expression and the right to privacy and outline key challenges;
3. Review the current landscape of AI governance, including various existing legal, technical, and corporate frameworks and industry-led AI initiatives that are relevant to freedom of expression and privacy; and
4. Provide initial suggestions for rights-based solutions which can be pursued by civil society organisations and other stakeholders in AI advocacy activities.
We believe that policy and technology responses in this area must:
— Ensure protection of human rights, in particular the right to freedom of expression and the right to privacy;
— Ensure accountability and transparency of AI;
— Encourage governments to review the adequacy of any legal and policy frameworks, and regulations on AI with regard to the protection of freedom of expression and privacy;
— Be informed by a holistic understanding of the impact of the technology: case studies and empirical research on the impact of AI on human rights must be collected; and
— Be developed in collaboration with a broad range of stakeholders, including civil society and expert networks.
If our taste is dictated by data-fed algorithms controlled by massive tech corporations, then we must be content to classify ourselves as slavish followers of robots.
That’s why, at the M.I.T. Media Lab, we are starting to refer to such technology as “extended intelligence” rather than “artificial intelligence.” The term “extended intelligence” better reflects the expanding relationship between humans and society, on the one hand, and technologies like AI, blockchain, and genetic engineering on the other. Think of it as the principle of bringing society or humans into the loop.
Sci-fi, my reasoning goes, plays an informal and largely unacknowledged role in setting public expectations and understanding about technology in general and AI in particular. That, in turn, affects public attitudes, conversations, behaviors at work, and votes. If we found that sci-fi was telling the public misleading stories over and over, we should make a giant call for the sci-fi creating community to consider telling new stories. It’s not that we want to change sci-fi from being entertainment to being propaganda, but rather to try to take its role as an informal opinion-shaper more seriously.
Looks like the start of an interesting set of posts.
Source: Untold AI
But what if we’re worrying about the wrong thing, as we have almost every single time before? What if the real danger of AI is far removed from the “superintelligence” and “singularity” narratives that many are panicking about today? In this post, I’d like to raise awareness about what really worries me when it comes to AI: the highly effective, highly scalable manipulation of human behavior that AI enables, and its malicious use by corporations and governments.