Can AI understand morality and ethics?

By: Richard van Hooijdonk

Artificial intelligence knows whether your smile is real or fake. It can predict your online behaviour and sell that data to hungry marketers. One day, artificial intelligence could even be smarter than you – and this is just the tip of the iceberg. The AI market is set to reach $190 billion by 2025, and the technology is being applied almost everywhere. From Facebook’s newsfeed to Tesla’s vehicles, from courtrooms to hospital rooms – people rely on AI to make important decisions. And as machines start to control our lives, it has become apparent that we did not really think this through.

What if one day humans are no longer the smartest species on Earth? What if Skynet becomes a reality and no Terminator can save us? What if Elon Musk’s prediction that AI could become “an immortal dictator” comes true? Superintelligent machines could, for example, take control of a nuclear launch system and rain missiles on cities. Or what about private, proprietary algorithms that accidentally reflect existing social biases and deny people jobs, education, and justice? One way to address these problems is to make machines more like humans and teach them ethics.

As Rosalind Picard, the director of the Affective Computing Group at MIT, says, “The greater the freedom of a machine, the more it will need moral standards.” Although this seems like a great idea, it creates more questions than answers. Ethics aren’t a simple line of code, but a complex system of values that even humans can’t fully agree on. So why should we teach AI this imperfect system, and what would the morality of artificial intelligence even look like?

Moral dilemmas

Experts agree that immoral AI is a far bigger threat to people than AI guided by human morality. An ethics-driven machine will act within the boundaries we set, but AI without any value system could make disastrous decisions. “If the car’s designers fail to specify a set of ethical values that could act as decision guides, the AI system may come up with a solution that causes more harm,” write Jane Zavalishina and Dr Vyacheslav Polonski.

This is true for AI in other sectors as well, but the question remains: how do we convey complex values in lines of code? And even if we figure that out, we still need to decide whose values to install in these machines. People are guided by multiple, often competing moral systems, and they fundamentally disagree about which of them should prevail. Which one should guide AI, and on which issues?
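
To make the problem concrete, here is a deliberately naive sketch of what “values in lines of code” could look like: a hypothetical driverless car scoring candidate manoeuvres against hand-coded value weights. Every name and number below is an illustrative assumption, not any real vendor’s system.

```python
# A toy sketch only: hand-coded "value weights" for a hypothetical
# driverless car. All names and numbers are illustrative assumptions.

VALUE_WEIGHTS = {
    "harm_to_humans": -100.0,   # predicted human injury is penalised heavily
    "harm_to_property": -5.0,   # property damage matters far less
    "traffic_violation": -1.0,  # rule-breaking is bad, but not at any cost
}

def score(outcome: dict) -> float:
    """Score a candidate manoeuvre by its predicted consequences."""
    return sum(VALUE_WEIGHTS[key] * value for key, value in outcome.items())

# Two candidate manoeuvres with made-up predicted consequences:
candidates = {
    "swerve": {"harm_to_humans": 0, "harm_to_property": 1, "traffic_violation": 1},
    "brake":  {"harm_to_humans": 1, "harm_to_property": 0, "traffic_violation": 0},
}

best = max(candidates, key=lambda name: score(candidates[name]))
print(best)  # -> "swerve": these weights prefer property damage over injury
```

Note that changing a single weight flips the “ethical” choice – which is exactly why the question of whose values get encoded matters so much.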

Another concern is that AI could be unintentionally corrupted by algorithms that amplify racial and gender biases. Scandals like the unintentional racism of Apple’s face recognition technology and Twitter’s bots illustrate the issue. And despite these challenges, people aren’t allowed to explore and understand the inner workings of such algorithms. The proprietary rights of private companies seem to matter more than the protection of citizens.
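
The mechanics of bias amplification are easy to demonstrate. The toy sketch below uses made-up hiring data in which one group was historically selected less often; a naive model that learns from those decisions simply reproduces the skew, and a standard audit metric (the “four-fifths” disparate-impact ratio) flags it. No real dataset, company, or tool is implied.

```python
# Toy illustration with made-up data: a model trained on biased historical
# decisions reproduces the bias.

historical = [  # (group, hired)
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def selection_rate(group: str) -> float:
    outcomes = [hired for g, hired in historical if g == group]
    return sum(outcomes) / len(outcomes)

# A naive "model" that simply predicts each group's historical rate:
model = {group: selection_rate(group) for group in ("A", "B")}
print(model)  # {'A': 0.75, 'B': 0.25} - the historical skew becomes the prediction

# A basic fairness audit: the disparate-impact ratio ("four-fifths rule")
ratio = model["B"] / model["A"]
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33, far below the 0.8 guideline
```

The audit itself takes three lines – the point of the controversy is that outsiders are rarely allowed to run it.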

Far from these complicated questions, though, some scientists see the issues in a much simpler way. Asked how a driverless car will react if it has to choose between hitting two kids or an approaching motorbike, Jaguar’s Amy Rimmer says: “I don’t have to answer that question to pass a driving test… So why would we dictate that the car has to have an answer to these unlikely scenarios…?” But not all scientists share her opinion, and some are instead working on ways to teach ethics to AI.

Teaching AI like kids

There are several schools of thought on how to teach morality to machines. One approach is advocated by Marek Rosa, the founder and CEO of the Prague-based company GoodAI. He wants to educate AI by slowly exposing it to increasingly complex problems, with the ultimate goal of creating a machine capable of behaving morally in new and unexpected situations. Another approach is to read stories to AI. This technique is used by Mark Riedl at Georgia Tech, who feeds AI sentences that algorithms analyse and use to form conclusions about social norms. But what if AI reads about villains? “I could cherry-pick stories of antiheroes or ones in which bad guys win all the time. But if the agent is forced to read all stories, it becomes very, very hard for any one individual to corrupt the AI,” says Riedl.
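
To get a feel for the story-reading idea, here is a toy sketch that tallies which actions co-occur with approval or disapproval words across a handful of one-line “stories” and turns the tallies into a crude norm score. This is purely an illustration of the underlying intuition, not Riedl’s actual system; all words and stories below are made up.

```python
from collections import Counter

# Purely illustrative: tally which actions co-occur with approval or
# disapproval words in tiny one-line "stories".

APPROVAL = {"thanked", "praised", "rewarded"}
DISAPPROVAL = {"arrested", "scolded", "punished"}

stories = [
    "the knight returned the wallet and was thanked by the merchant",
    "the thief stole the wallet and was arrested by the guards",
    "the child shared the bread and was praised by the baker",
]

norms = Counter()
for story in stories:
    words = story.split()
    action = words[2]  # naive assumption: the verb is always the third word
    if APPROVAL & set(words):
        norms[action] += 1
    if DISAPPROVAL & set(words):
        norms[action] -= 1

print(norms)  # Counter({'returned': 1, 'shared': 1, 'stole': -1})
```

Notice how easily the result skews: feed it only stories in which the thief is rewarded, and “stole” gets a positive score – which is precisely Riedl’s point about reading all the stories rather than a cherry-picked few.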

And as scientists work on ways to teach morality to AI, many wonder where the Silicon Valley giants stand on the topic. One of them was forced to make its position very clear.

The position of tech giants

Google had to cancel its contract with the Pentagon on cutting-edge AI tech after thousands of its engineers rebelled, refusing to help build what they saw as AI killing machines. To calm these employees, CEO Sundar Pichai wrote a blog post detailing the principles the company will follow in its AI projects. He promised that Google will only work on solutions that are socially beneficial, accountable to people, built for safety, and that uphold “high standards of scientific excellence”. But he also noted that AI is a force for good: “Farmers are using it to monitor the health of their herds. Doctors are starting to use AI to help diagnose cancer and prevent blindness.”

Mark Zuckerberg couldn’t agree more. He plans to use AI to fix Facebook’s problems, such as hate speech, terrorist content, and Russian bots. But although many consider these urgent problems, Zuckerberg asked Congress for five to ten years to figure it out.

Which stories will we read?

The discussions on AI are increasingly passionate because people are afraid. Will this tech turn into Skynet or a cancer-prevention tool, a danger on the road or a saviour of human lives? Nobody can tell for sure, but AI keeps developing, and we’ll continue using it in the future. In light of this reality, even imperfect moral standards are better than immoral, unconstrained AI. If machines are anything like humans, they’ll have a good side and a bad side. Which one will prevail? Maybe it depends on which stories we read to them.

All information/views/opinions expressed in this article are that of the author. This Website may or may not agree with the same.

All images provided by writer.


About the author: Richard van Hooijdonk

International keynote speaker, trend watcher, and futurist Richard van Hooijdonk offers inspiring lectures on how technology impacts the way we live, work, and do business. Over 420,000 people have already attended his renowned inspiration sessions, in the Netherlands as well as abroad. He works with RTL television and presents the weekly radio program ‘Mindshift’ on BNR news radio. Van Hooijdonk is also a guest lecturer at Nyenrode and Erasmus Universities. https://www.richardvanhooijdonk.com


Sources: 

Beall, Abigail, “It’s time to address artificial intelligence’s ethical problems,” Wired, 24 August 2018, accessed 27 Sep 2018 https://www.wired.co.uk/article/artificial-intelligence-ethical-framework

Edmonds, David, “Can we teach robots ethics?,” BBC, 15 October 2017, accessed 27 Sep 2018 https://www.bbc.com/news/magazine-41504285

Lee, Justin, “The ethics of artificial intelligence,” GrowthBot, 26 June 2018, accessed 27 Sep 2018 https://blog.growthbot.org/the-ethics-of-artificial-intelligence

Lohrmann, Dan, “Privacy, Ethics and Regulation in Our New World of Artificial Intelligence,” Government Technology, 15 April 2018, accessed 27 Sep 2018 http://www.govtech.com/blogs/lohrmann-on-cybersecurity/privacy-ethics-and-regulation-in-our-new-world-of-artificial-intelligence.html

MarketsandMarkets, “Artificial Intelligence Market Worth 190.61 Billion USD by 2025,” PR Newswire, 14 February 2018, accessed 27 Sep 2018 https://www.prnewswire.com/news-releases/artificial-intelligence-market-worth-19061-billion-usd-by-2025-674053943.html

Parkin, Simon, “Teaching robots right from wrong,” 1843, June 2017, accessed 27 Sep 2018 https://www.1843magazine.com/features/teaching-robots-right-from-wrong

Pichai, Sundar, “AI at Google: our principles,” Google, 7 June 2018, accessed 27 Sep 2018 https://www.blog.google/technology/ai/ai-principles/

Trickey, Erick, “Morality in the machines,” Harvard Law Today, 26 June 2018, accessed 27 Sep 2018 https://today.law.harvard.edu/feature/morality-in-the-machines/

Zavalishina, Jane and Dr Vyacheslav Polonski, “Can we teach morality to machines? Three perspectives on ethics for artificial intelligence,” Oxford Internet Institute, 19 December 2017, accessed 27 Sep 2018 https://www.oii.ox.ac.uk/blog/can-we-teach-morality-to-machines-three-perspectives-on-ethics-for-artificial-intelligence/

 
