Artificial Intelligence - Review
For some, artificial intelligence (AI) may only exist in futuristic worlds like The Matrix or in malevolent machines such as HAL 9000 or the Terminator. But in fact, AI is in your pocket. It filters your spam, helps you choose what to watch on Netflix and may answer if you say “Siri” or “Alexa” out loud.
AI was first envisioned in 1950 by Alan Turing, who published a scientific article asking whether machines can think. Building on that seminal work, researchers have created computer systems capable of performing tasks that would normally require human intelligence. This is what we call artificial intelligence.
But how does AI work? The broad rule of AI is to learn from examples: gathering information and finding patterns in it to derive general rules. For example, Siri is an AI system that has been given a huge number of different questions (along with their answers), so it can recognize a specific question we ask and select the appropriate answer for us.
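To make that idea concrete, here is a deliberately tiny sketch in Python (purely illustrative, with made-up questions and answers; real assistants like Siri use far more sophisticated language models): it matches an incoming question to the most similar stored example by counting shared words, then returns that example's canned answer.

    # Toy "recognize the question, pick the stored answer" sketch.
    # Hypothetical data; not how any real voice assistant is built.
    qa_examples = {
        "what is the weather today": "Let me check the forecast for you.",
        "set an alarm for 7 am": "Alarm set for 7:00 am.",
        "who wrote the 9th symphony": "Ludwig van Beethoven.",
    }

    def answer(question):
        """Pick the stored question sharing the most words with the one asked."""
        words = set(question.lower().split())
        best = max(qa_examples, key=lambda q: len(words & set(q.split())))
        return qa_examples[best]

    print(answer("Who wrote the ninth symphony?"))  # -> Ludwig van Beethoven.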
More complex AI systems are able to complete tasks using just logic and general data. A self-driving car, for example, uses computer vision and image recognition to gather information from its surroundings and decide whether to turn left, brake or accelerate.
AI can be faster and more consistent than humans at many tasks, and it is always available. This might explain why AI has made its way into healthcare, education, finance, law and manufacturing. However, legal, ethical and security concerns have also arisen.
To better understand the benefits, disadvantages or dangers of AI, this month we have asked experts all about it. Can we trust self-driving cars? Can AI improve our health and our society? How similar can it be to a human being? Could AIs be a real threat to us? Here is what we learned.
Meta-Index
40,000: number of AI-related scientific studies published in 2019.
12 billion: spending on AI in Europe in 2021 (dollars).
77: compounds identified by AI that can halt coronavirus spread.
15.7 trillion: predicted AI contribution to the global economy by 2030 (dollars).
99%: accuracy of AI in detecting metastatic breast cancer.
85 million: jobs predicted to be displaced by AI.
97 million: jobs predicted to be created by AI.
40%: increase in productivity attributed to AI.
BACKGROUND
Beethoven's AI
Beethoven's 9th symphony is considered his greatest work and one of the most celebrated musical compositions ever written. His music kept improving with age and experience, and everyone was eagerly awaiting his 10th symphony, which he was still writing when he died in 1827, aged 56. Could others finish it?
Beethoven scholars have attempted to complete it, but Artificial Intelligence has dared to finish the job. Dr Ahmed Elgammal from Rutgers University started this project in 2019 together with composers, computational music experts and musicologists. But how?
The project followed the general rule of how AI works: acquire a huge amount of data, analyze it and classify the patterns it contains. For each pattern, the AI then creates rules or instructions that can be used to solve specific problems or answer specific questions. In this particular case, researchers took the whole of Beethoven's work, along with the few available sketches from his 10th symphony, and programmed an AI to work out how Beethoven composed.
The 'Beethoven AI' was able to learn how to take a short musical fragment and develop it into a longer, more complex one, just as Beethoven did. Believe it or not, the AI learned how Beethoven built his 5th symphony out of just its initial four notes.
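As an illustration of the underlying idea (and only that: the actual project used far richer musical representations and modern machine learning, not this toy), one can learn which note tends to follow which in a handful of example melodies and then extend a short fragment by repeatedly applying those learned transitions:

    import random
    from collections import defaultdict

    # Hypothetical training melodies, standing in for a composer's works.
    corpus = [
        ["G", "G", "G", "Eb", "F", "F", "F", "D"],
        ["E", "D#", "E", "D#", "E", "B", "D", "C", "A"],
    ]

    # Learn simple "rules": which notes follow which in the training data.
    transitions = defaultdict(list)
    for piece in corpus:
        for current, nxt in zip(piece, piece[1:]):
            transitions[current].append(nxt)

    def continue_fragment(fragment, length=8, seed=0):
        """Extend a short fragment by sampling the learned transitions."""
        rng = random.Random(seed)
        notes = list(fragment)
        for _ in range(length):
            options = transitions.get(notes[-1])
            if not options:      # no rule learned for this note: stop
                break
            notes.append(rng.choice(options))
        return notes

    print(continue_fragment(["G", "G", "G", "Eb"]))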
Of course, many other challenges appeared. The AI had to learn how Beethoven harmonized a melody, how he linked the different parts of a symphony, and even how he assigned different instruments to each part. They literally had a machine learn the genius's creative process inside out, so as to create something that Beethoven himself might have written. The results are impressive.
While voice-assistant AIs (weak AI) can only respond to very specific tasks, this project developed what's known as a strong AI, able to use fuzzy logic to apply knowledge from one domain to another. Some may think that AIs will never get close to Beethoven's irreverent, supreme genius, and that they should not try to replicate or even replace the human creative process. Others see AI instead as a tool to get closer to art. But can it really make art?
CONSENSUS
Can AIs forget?
Our brains can let us down in many ways, and forgetting things is certainly one of the most common. It can be annoying when you leave your keys at home, miss an anniversary, or blank on a sentence while giving a presentation. It can get far worse when illness makes us forget life memories, people's faces or even who we are. Will AIs be able to bypass the passage of time and never forget? Or, on the contrary, will they be as forgetful as we are?
Dr David Tuffley from Griffith University writes that AIs can actually forget if we program them to do so, “to forget information that corresponds to certain defined criteria.” This view is shared by Prof Mark Lee from Aberystwyth University, who stresses that “although computer hardware chips have perfect memory, it is possible to design any computer program to forget, so, in principle, an AI system can forget too.” Still, Dr Tuffley also points out that AIs may be immune to the breakdown in memory that we humans suffer as a consequence of disease or age, since “silicon based neural networks do not suffer the same degradation over time as organic neural tissue.”
However, forgetting can get more complicated than that, and AIs may also forget without being told to. According to Prof Kate Saenko from Boston University, “something like 'forgetting' can happen if an AI program is continuously adjusted on new data but has limited storage, so that the older program becomes overwritten with the new one.” Prof Saenko points out that, of course, “an AI program does not need to forget if there is infinite storage available,” but such infinite space is not currently available to AIs. Prof Saenko uses her own research as an example: “if we keep adjusting AI to new languages, then eventually it might 'forget' how to recognize the earlier language, meaning it would have worse recognition accuracy.”
This is what Prof Scott Fahlman from Carnegie Mellon University calls “catastrophic forgetting.” Fahlman explains that this process occurs when “you train a network on one task or set of data, and then move to a different task.” In such a scenario, “performance on the first task can be degraded or ruined.” Prof Fahlman presents this as one of the most pressing challenges and writes that AI researchers “are working to combat” it. Meanwhile, Prof Lee argues that this process puts a spotlight on “how unlike human memory such AI memories can be.”
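The effect is easy to reproduce. Below is a minimal sketch (a hypothetical experiment using scikit-learn, not any of the experts' own setups): a classifier first learns to recognize the digits 0-4, is then updated only on the digits 5-9, and its accuracy on the first task typically collapses.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import SGDClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    task_a = y < 5                      # "task A": digits 0-4
    task_b = ~task_a                    # "task B": digits 5-9

    Xa_tr, Xa_te, ya_tr, ya_te = train_test_split(X[task_a], y[task_a], random_state=0)
    Xb_tr, _, yb_tr, _ = train_test_split(X[task_b], y[task_b], random_state=0)

    model = SGDClassifier(random_state=0)
    model.partial_fit(Xa_tr, ya_tr, classes=np.arange(10))   # learn task A
    print("Task A accuracy after training on A:", model.score(Xa_te, ya_te))

    for _ in range(20):                 # keep updating on task B only
        model.partial_fit(Xb_tr, yb_tr)
    print("Task A accuracy after training on B:", model.score(Xa_te, ya_te))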
CONSENSUS
Can AIs discriminate?
We all want a fair society in which important decisions (including those made by politicians and judges) are free from subjective opinions and prejudice. The problem is that those decisions are made by human beings who will, most probably, hold opinions that can bias the final verdict. One wonders whether we couldn't use AIs to remove any chance of prejudice and bias. But can AIs really avoid discrimination?
We hate to break the bad news, but AIs seem to discriminate. Prof Zdenka Kuncic from the University of Sydney writes that AIs will discriminate “if the data provided is biased.” Dr David Tuffley agrees, and uses the following example to make this point: “if the data suggest that most doctors are men based on 1970’s data, whereas the reality in 2021 is that there are at least as many female doctors as male, then the AI could be accused of discrimination against women in assuming most doctors today are men.” However, Dr Tuffley stresses that the source of this discrimination is that present in the data the AI is trained with, which is very much human-dependent. He reminds us that “AI is subject to the limitations of its training data” so, at the end of the day, “AI does discriminate in the same way that humans do.”
Prof Mark Lee explains this issue further by bringing us back to the basics of AI. He points out that “one of the oldest and most studied techniques in AI is classification. A classifier system attempts to learn the differences between patterns of data and then sort any new examples into different classes.” According to Prof Lee, “this is discrimination and it can be applied to any data, given suitable criteria for dividing up the classes.” He admits that this is a huge problem with a difficult solution, as “these systems learn their discrimination function through training on very large sets of examples, often millions of examples. The training process is too machine intensive for humans to follow in detail and the end result may contain some forms of unknown bias or partiality.”
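Dr Tuffley's doctors example can be boiled down to a few lines. In this toy sketch (hypothetical numbers, made up purely for illustration), a classifier trained on records in which nine out of ten doctors are men duly 'learns' to predict that any doctor is a man:

    from sklearn.tree import DecisionTreeClassifier

    # Single feature: [is_doctor]; label: gender as recorded in a skewed dataset.
    X_train = [[1]] * 90 + [[1]] * 10 + [[0]] * 50 + [[0]] * 50
    y_train = ["man"] * 90 + ["woman"] * 10 + ["man"] * 50 + ["woman"] * 50

    clf = DecisionTreeClassifier().fit(X_train, y_train)
    print(clf.predict([[1]]))   # -> ['man']: the bias in the data becomes the rule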
Finally, Prof Kay Kirkpatrick from the University of Illinois proposes a way to tackle this unintentional discrimination, which involves humans countering it from the get-go. In her view, we need to assume “that an AI will discriminate by default and to plan on removing the bias in early testing and getting feedback after deployment for continued reduction of bias.”
CONSENSUS
Can AIs feel emotions?
Few things are more human than love, sadness or boredom. Emotions. That is precisely why the idea of AI - a machine, after all - having that sort of feelings can trouble us. But could machines actually feel?
The answer is: probably not. Prof Mark Lee writes that “AI systems can detect emotions when looking at human faces. They can analyze gestures, posture, and tiny facial muscle movements to build up a picture of an emotional state.” However, “they cannot feel emotions themselves,” he argues. Prof Lee admits that “it's always possible to write a program that simulates a type of behavior” but considers that this would be “just a trick.” Dr David Tuffley agrees. He explains that this is so-called “affective computing,” a rapidly developing discipline that aims to create anthropomorphic AIs for use in androids, as well as for disembodied voices, he says, such as the ones you find when you use Google Assistant, Alexa or Cortana. Prof Kate Saenko provides an example: “a synthetic AI-based avatar can produce facial expressions like smiling.”
It’s all very well saying that AIs can display emotions, but how can we be sure? Assessing their emotions is, according to the experts, an issue in itself. Prof Roman Yampolskiy from the University of Louisville writes that “it is not obvious how that [feelings] can be measured.” Along the same lines, Prof Scott Fahlman is uncertain whether AIs can feel emotions, but he also raises another, very important point: even if AIs were to have feelings, we might not regard them as being as genuine as those of fellow humans. Indeed, he argues that this ultimately depends on how we consider others' feelings. Prof Fahlman puts it this way: “I know that I feel emotions - I can feel them. Other humans claim to feel emotions similar to mine, and appear to act on them in ways similar to what I would do.” This is, according to Prof Fahlman, why he likes to give people the benefit of the doubt. Now, would this soften our judgement when it is a machine on the other end of the line? For him, it would be down to us whether we accept whichever emotions we manage to create in AI systems as genuine, even assuming that “it's possible to make an AI system, or even a simple ‘chatbot’, claim and appear to feel similar emotions in similar circumstances” as a fellow human would. Curious about his personal take on it? “I think that these are as real as your emotions (though probably much simpler), but not as real (to me) as my emotions,” Prof Fahlman concludes.
CONSENSUS
Is AI an existential threat to humanity?
The first of Asimov’s Laws reads: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” This is only a rule devised by a science fiction writer, of course, but it has largely influenced the ethics of AI. To what extent is this law in action? Are we in danger?
In reality, it is extremely unlikely that AIs will deliberately harm any of us. Prof Yingxu Wang from the University of Calgary reminds us that “professionally designed AI systems and products are well constrained by a fundamental layer of operating systems.” He points out that international standards are being developed to restrict “permits for AI systems on potentially harmful behaviors to humans or the environment.” Dr George Montanez from Microsoft agrees and he adds that “AI systems are a long way from cracking the hard problem of consciousness and being able to generate their own goals contrary to their programming.” Still, he makes an important remark when it comes to the danger of AI: “it becomes dangerous in the same way a gun is dangerous - not because it is sentient or autonomous, but because of the intentions of the person wielding it.”
Along the same lines, Dr Dipan Pal from Carnegie Mellon University warns that AI may be dangerous “if we apply it to problems prematurely.” Another problem may arise if bad actors take over, of course. We might be in trouble “if people who understand the technology well do not have the best of intentions or do not completely grasp the large scale long term implications of the application,” warns Dr Pal. To minimize those risks, he stresses the importance of really understanding AI, including both the technology itself and its implications. He claims that “once enough people understand it, humanity as a whole is safer since for any AI threat, there is likely an equally powerful AI (or otherwise) solution to neutralize it.”
Dr Matthew O’Brian from Georgia Institute of Technology stresses that such a threat would presuppose that “we have made true general AI, an entity with intelligence that rivals or surpasses our own.” Experts agree that it is people who fail to understand the limitations of AI that we should be wary of, as well as the risk of these systems falling into the wrong hands. They also argue, in the same breath, that we don’t need to fear any harm from AIs themselves. Why? As Prof Scott Fahlman points out, “humans evolved as social animals with instinctual desires for self-preservation, procreation, and (in some of us) a desire to dominate others.” However, “AI systems will not inherently have such instincts and there will be no evolutionary pressure to develop them,” he argues, “since we humans would try to prevent this sort of motivation from emerging.”
Quick Answers
Are driverless cars dangerous to introduce onto our roads at this time? Current crash statistics show that the self-driving AI technology implemented so far actually reduces the risk of accidents. However, this technology is still at an early stage and needs improvements in car-to-car communication and in dedicated infrastructure. Either way, for now, human drivers are still behind the wheel, which helps in situations that call for intuition.
Can AIs lie? AIs may lie, say our experts, but never deliberately. Unintentional lies can be produced if they lack enough data or if the data we give them are wrong, which ends up with AIs giving false or incomplete answers. On top of that, we humans can program an AI to select the wrong answer while knowing which one is correct.
Can AI be used in medicine? Definitely. AI systems are currently used to diagnose diseases or prescribe drugs based on large arrays of medical images, genomic sequencing or combinations of specific symptoms. AI can also assist in surgery or in the testing of biological samples, speeding up diagnoses or the development of new drugs. Of note, although in many of these tasks AI can outperform humans, experts argue that collaboration between AI and humans renders the best results.
Can AI speed up the development of poorer countries? Yes. Although it will require monitoring strategies, control of usage and efficient logistics, AI can help improve the living conditions of people in poorer countries. AI can help find water, monitor crops, spot disease outbreaks, and assist in schools. Still, experts stress that this is not magic, but just a tool that humans have to invest in and use properly for it to provide benefits.
Will we ever make an AI with consciousness? Defining consciousness is a complex matter. If we go with one of the most common definitions (having subjective experiences), some think that we do have mathematical approaches to equip AIs with consciousness. However, others disagree and argue that, even if we could do so, detecting consciousness in AI would not be possible.
TOP ANSWER
Do algorithms outperform us at decision making?
Yes, they certainly can! Consider programs that play games like chess or Go. These AI systems have now reached grandmaster level and have beaten the world's best human players. The search algorithms used in chess can analyze millions of possible moves in seconds, which allows them to see much further ahead in the game than humans can manage. This produces excellent decisions, and many game-playing programs can now learn a game from scratch and quickly become champion players. However, games are a very constrained kind of problem. In a board game everything is well defined and consistent – nothing happens other than pieces being moved!
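For the curious, the heart of such game-tree search fits in a few lines. The sketch below is only a toy (a made-up counting game, not the chess engines described above): two players take turns adding 1 or 2 to a counter and whoever reaches 10 wins; minimax looks ahead through every possible continuation and chooses the move with the best guaranteed outcome.

    def minimax(state, maximizing):
        """Return (score, move): +1 if the maximizing player can force a win, -1 otherwise."""
        if state >= 10:                              # the previous player reached 10 and won
            return (-1 if maximizing else 1), None
        best_score, best_move = None, None
        for move in (1, 2):                          # legal moves: add 1 or add 2
            score, _ = minimax(state + move, not maximizing)
            if best_score is None or (maximizing and score > best_score) \
               or (not maximizing and score < best_score):
                best_score, best_move = score, move
        return best_score, best_move

    print(minimax(0, maximizing=True))   # -> (1, 1): the first player forces a win by adding 1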
In the real world, in our human environment, there are all kinds of problems for algorithms to overcome. Just deciding where to take a holiday, for example, can be influenced by weather forecasts, geographic choices, travel convenience, personal preferences, interactions with family and others, calculations of acceptable budgets of time and cost, and so on. These are not straightforward yes-or-no issues, unlike making a move in chess, but involve uncertainty, missing information and value judgements. Algorithms that attempt to replace general human decision making have not been successful. AI research is working in many areas that help us make decisions, perhaps most usefully in providing relevant information, but the subtlety of the human mind still beats machines in many ways. So, yes, algorithms can massively outperform humans, but only in certain limited contexts and restricted applications.
Emeritus Professor Mark Lee
Professor of Intelligent Systems in the Department of Computer Science at Aberystwyth University
Takeaways
AIs are no existential threat to us - phew. They are predicted to create more jobs than they displace, too.
They are pretty much everywhere, both in “embodied” forms (like the futuristic androids you see on the Internet) and “disembodied” forms (the much more widespread chatbots and voice assistants).
They are not exempt from biases, though. They are programmed by humans and trained on human-selected datasets, so it's better not to give them the final say on important decisions.
They can give us wrong answers as well, whether because they have been programmed to lie or because they have been trained on defective data.
AIs can show emotions, but experts are split on whether they feel them - or indeed, on whether we would give them credibility if they did, knowing that AIs aren’t human.
AI systems can be, and are, used for both good and evil: they have already proven themselves valuable in medicine and agriculture, as well as in mass surveillance.
Experts agree that we should think of AI just as a very powerful tool, and it is for us to decide where and how to apply it. Humans are behind these systems and so it is down to us, in the end, what to make of them.