Is AI an existential threat to humanity?
Hello everyone,
As I write this newsletter I’m thinking about how amazing it is that most email inboxes, including my own, have a spam filter based on artificial intelligence (AI). AI is also behind the shopping, streaming and music recommendations that pop up on my laptop every day.
Because targeted recommendations are not particularly exciting, science fiction prefers to depict AI as super-intelligent robots that overthrow humanity. My favourite sci-fi stories, those in Asimov’s ‘I, Robot’, famously describe humans controlling AI robots using the ‘Three Laws of Robotics’, the most important being that a robot may not injure a human being. Whilst such a scenario may seem far-fetched, many believe that current AI systems will one day become more intelligent than us. Notable figures, including the late Stephen Hawking, have expressed fear that such a future AI could escape our control and threaten humanity.
To address this concern we asked 11 experts in AI and Computer Science “Is AI an existential threat to humanity?” There was an 82% consensus that it is not an existential threat. Here is what we found out…
Each month we investigate a topic voted on by the community by asking the world's top experts to review the evidence. Please vote on which topic you would like us to review next month here:
EXPERT CONSENSUS
Is AI an existential threat to humanity?
82% of 11 experts disagreed
How close are we to making AI that is more intelligent than us?
The AI that currently exists is called ‘narrow’ or ‘weak’ AI. It is widely used for many applications like facial recognition, self-driving cars and internet recommendations. It is defined as ‘narrow’ because these systems can only learn and perform very specific tasks. They often perform these tasks better than humans – famously, Deep Blue was the first AI to beat a world chess champion, in 1997 – however they cannot apply their learning to anything other than that specific task (Deep Blue could only play chess).
Another type of AI is called Artificial General Intelligence (AGI). This is defined as AI that mimics human intelligence, including the ability to think and to apply that intelligence to many different problems. Some people believe that AGI is inevitable and will arrive within the next few years. Matthew O’Brien, robotics engineer at the Georgia Institute of Technology, disagrees: “the long-sought goal of a ‘general AI’ is not on the horizon. We simply do not know how to make a general adaptable intelligence, and it's unclear how much more progress is needed to get to that point.”
How could a future AGI threaten humanity?
Whilst it is not clear when, or if, AGI will come about, can we predict what threat it might pose to us humans? AGI learns from experience and data, as opposed to being explicitly told what to do. This means that, when faced with a situation it has not seen before, we may not be able to fully predict how it will react. Dr Roman Yampolskiy, computer scientist at the University of Louisville, also believes that “no version of human control over AI is achievable”, as it is not possible for an AI to be both autonomous and controlled by humans. Not being able to control super-intelligent systems could be disastrous.
Yingxu Wang, professor of Software and Brain Sciences at the University of Calgary, disagrees, saying that “professionally designed AI systems and products are well constrained by a fundamental layer of operating systems for safeguard[ing] users’ interest and wellbeing, which may not be accessed or modified by the intelligent machines themselves.” Dr O’Brien adds: “just like with other engineered systems, anything with potentially dangerous consequences would be thoroughly tested and have multiple redundant safety checks.”
Could the AI we use today become a threat?
Many of the experts agreed that AI could be a threat in the wrong hands. Dr George Montanez, AI expert from Harvey Mudd College, highlights that “robots and AI systems do not need to be sentient to be dangerous; they just have to be effective tools in the hands of humans who desire to hurt others. That is a threat that exists today.”
Even without malicious intent, today’s AI can be threatening. For example, racial biases have been discovered in algorithms that allocate health care to patients in the US. Similar biases have been found in facial recognition software used for law enforcement. These biases have wide-ranging negative impacts despite the ‘narrow’ ability of the AI.
AI bias comes from the data it is trained on. In the cases of racial bias above, the training data was not representative of the general population. Another example occurred in 2016, when an AI-based chatbot was found to be sending highly offensive and racist content. This happened because people were sending the bot offensive messages, which it learnt from.
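To make that mechanism concrete, here is a minimal sketch with made-up numbers (not real health-care or chatbot data): a toy “model” that simply scores each group by the positive rate it sees during training, showing how an unrepresentative dataset gets reproduced as biased output.

```python
# A minimal sketch (hypothetical data) of how unrepresentative training data
# produces a biased model: the "model" here is just the positive rate the
# system observes for each group during training.
from collections import defaultdict

# Hypothetical training records: (group, outcome). Group "B" is both
# under-represented and mostly labelled negatively in the historical data.
training_data = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 2 + [("B", 0)] * 8

counts = defaultdict(lambda: [0, 0])   # group -> [positives, total]
for group, outcome in training_data:
    counts[group][0] += outcome
    counts[group][1] += 1

# The learned "score" simply reproduces whatever skew the data contains.
for group, (pos, total) in counts.items():
    print(f"group {group}: predicted positive rate = {pos / total:.0%}")
# group A: predicted positive rate = 80%
# group B: predicted positive rate = 20%
```

Real systems use far more sophisticated models than this, but the principle is the same: if the training data is skewed, the behaviour the system learns is skewed too.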
The takeaway:
The AI that we use today is exceptionally useful for many different tasks. That doesn’t mean it is always positive – it is a tool which, if used maliciously or incorrectly, can have negative consequences. Despite this, it currently seems unlikely to become an existential threat to humanity.
May the facts be with you!
Eva
METAFACT REVIEW
Is coffee good for you?
Exclusive reviews for our fact-loving members.
Each month we investigate a topic voted on by the community by asking the world's top experts to review the evidence. Each review gives you what you need to know. You can read all the reviews online here.
Many of us don’t just like our morning cup of coffee – without it, we seem to function poorly. The alertness and stimulation that caffeine provides are well known, making coffee a daily ritual for much of the world and the world’s most popular beverage (excluding water).
For a drink as popular as coffee, people have claimed all sorts of things. Some say coffee fights Alzheimer's, helps weight loss, and makes us live longer! Are these claims true? What is the evidence behind them? Is there a science to making a good cup of coffee? And what level of coffee consumption is safe? This month we asked independent experts from across the globe to review the facts. Here's what we found...
If you like this week's newsletter, share it with your colleagues and friends!