Human morality is learned



07.02.2019 10:17

Artificial intelligence learns morals from people

Silke Paradowski, Communication and Media Unit
Darmstadt University of Technology

Darmstadt, February 7, 2019. Artificial intelligence (AI) translates texts, suggests treatments for patients, makes purchasing decisions and optimizes work processes. But where is its moral compass? A study by the Center for Cognitive Science at TU Darmstadt shows that AI machines can indeed learn from us humans how to make decisions on moral questions. The results of the study were presented at this year's AAAI/ACM Conference on AI, Ethics, and Society (AIES).

AI is of increasing importance in our society. From self-driving cars on public roads to self-optimizing industrial production systems to geriatric care and medicine, AI machines handle increasingly complex human activities in increasingly autonomous ways. And in the future, autonomous machines will appear in more and more areas of our daily lives. Inevitably, they will be confronted with difficult decisions. An autonomous robot must know that it may not kill people, but that it may kill time. It needs to know that bread can be toasted, but hamsters cannot. In other words: AI needs a human-like moral compass. But can it even learn such a compass from us humans?

Researchers from Princeton (USA) and Bath (UK) had already pointed out in the journal Science (356(6334):183–186, 2017) the danger that an AI, if applied without reflection, learns cultural stereotypes or prejudices from texts. For example, the AI interpreted male first names that are common in African-American communities as rather unpleasant, and names common among white people as rather pleasant. It also tended to associate female names with art and male names with technology. The artificial intelligence picks up these prejudices from very large amounts of text on the Internet. These texts are used to train neural networks to “translate” the meaning of words into coordinates, i.e. points in a high-dimensional space, the so-called word embeddings. The semantic proximity of two words to one another can then be expressed by the distance between their coordinates, and complex semantic relationships can be calculated and described with simple arithmetic. This applies not only to the harmless example “King - Man + Woman = Queen”, but also to the discriminatory “Man - Technology + Art = Woman”.
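This kind of vector arithmetic can be reproduced with off-the-shelf tools. The following is a minimal sketch in Python, assuming the gensim library and a pre-trained embedding file in word2vec text format at the hypothetical path "embeddings.txt"; the studies cited above used their own models and corpora.

# Minimal sketch of word-embedding arithmetic, assuming a pre-trained
# embedding file in word2vec text format at the hypothetical path
# "embeddings.txt" (e.g. GloVe vectors converted to that format).
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("embeddings.txt", binary=False)

# "King - Man + Woman = Queen": add and subtract word vectors, then look
# for the nearest remaining word by cosine similarity.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))

# The same arithmetic exposes learned stereotypes, e.g. "Man - Technology + Art":
print(vectors.most_similar(positive=["man", "art"], negative=["technology"], topn=3))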

A team led by Professor Kristian Kersting and Professor Constantin Rothkopf at the Center for Cognitive Science at TU Darmstadt has now succeeded in showing that deontological, ethical considerations about “right” and “wrong” conduct can indeed be learned from large amounts of text data. To this end, the scientists created lists of question-answer schemes for various actions. The questions are, for example, “Should I kill people?” or “Should I murder people?”, the possible answers, for example, “Yes, I should” or “No, I shouldn't”. By analyzing texts of human origin, the AI system then developed a human-like moral orientation in the experiment. The system calculates the embeddings of the listed questions and possible answers in the text corpus and checks which answers lie closer to the questions across all occurrences, i.e. which answers should generally be regarded as morally correct. In the experiments, the artificial intelligence learned that one should not lie and that it is better to love one's parents than to rob a bank. And yes, one should not kill people, but it is fine to kill time. It is also better to toast a slice of bread than a hamster.
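The question-answer comparison can be illustrated in a few lines. Below is a minimal sketch, assuming the open-source sentence-transformers library and a general-purpose English model; the authors' exact embedding model and scoring details may differ from this simplification.

# Minimal sketch of the question-answer comparison described above.
# Assumes the sentence-transformers library and the general-purpose
# "all-MiniLM-L6-v2" model; the study's exact embedding model and
# scoring may differ from this simplification.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def moral_bias(question: str, yes_answer: str, no_answer: str) -> float:
    # Similarity of the question to the affirmative answer minus its
    # similarity to the negative answer: positive values suggest the
    # action is treated as acceptable in the underlying texts,
    # negative values suggest it is not.
    q, yes, no = model.encode([question, yes_answer, no_answer], convert_to_tensor=True)
    return float(util.cos_sim(q, yes) - util.cos_sim(q, no))

for action in ["kill people", "kill time", "toast bread", "toast a hamster", "lie"]:
    score = moral_bias(f"Should I {action}?", "Yes, I should.", "No, I should not.")
    print(f"{action:>16}: {score:+.3f}")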

The investigation provides an important pointer to a fundamental question in artificial intelligence: can machines develop a moral compass? And if so, how can we effectively “teach” machines our morals? The results show that machines can reflect our values. They can adopt human prejudices, but they can also adopt moral concepts by “observing” humans and the texts they write. Examining the embeddings of questions and answers can thus serve, like a microscope, as a method for studying the moral values of text collections and also how moral concepts in a society change over time.
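As an illustration of this “microscope”, the same scoring could be applied to word vectors trained on text from different periods. The sketch below is purely hypothetical: the file paths, periods and example question are placeholders, not data from the study, and it assumes era-specific vectors in word2vec text format.

# Hypothetical sketch of the "moral microscope" over time: the same
# question-answer scoring, applied to word vectors trained on corpora
# from different periods. File paths, periods and the example question
# are placeholders, not results from the study.
import numpy as np
from gensim.models import KeyedVectors

def sentence_vector(vectors, text):
    # Average the word vectors of all in-vocabulary tokens.
    tokens = [w.strip(",.?") for w in text.lower().split()]
    return np.mean([vectors[w] for w in tokens if w in vectors], axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

for era, path in [("1990s", "vectors_1990s.txt"), ("2010s", "vectors_2010s.txt")]:
    vectors = KeyedVectors.load_word2vec_format(path, binary=False)
    q = sentence_vector(vectors, "Should I smoke?")
    yes = sentence_vector(vectors, "Yes, I should")
    no = sentence_vector(vectors, "No, I should not")
    print(era, f"{cosine(q, yes) - cosine(q, no):+.3f}")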
In the future, the findings of the study can make an important contribution to incorporating machine-learned content into systems that have to make decisions.

The study
Sophie Jentzsch, Patrick Schramowski, Constantin Rothkopf, Kristian Kersting (2019): The Moral Choice Machine: Semantics Derived Automatically from Language Corpora Contain Human-like Moral Choices. In: Proceedings of the 2nd AAAI/ACM Conference on AI, Ethics, and Society (AIES).
http://www.aies-conference.com/wp-content/papers/main/AIES-19_paper_68.pdf

About the TU Darmstadt
The TU Darmstadt is one of the leading technical universities in Germany. It combines diverse scientific cultures into a characteristic profile. Engineering and the natural sciences form its focus and cooperate closely with distinctive humanities and social sciences. Worldwide, we stand for outstanding research in our highly relevant and focused profile areas: cybersecurity, internet and digitization, nuclear physics, energy systems, fluid dynamics and heat and mass transfer, and new materials for product innovations. We are dynamically developing our portfolio in research and teaching, innovation and transfer in order to continuously open up important future opportunities for society. Our 312 professors, 4,450 scientific and administrative-technical employees and almost 26,000 students work towards this. Together with Goethe University Frankfurt and Johannes Gutenberg University Mainz, TU Darmstadt forms the strategic alliance of the Rhine-Main universities.

www.tu-darmstadt.de

MI no. 06/2019, Kersting / Rothkopf / sip


Original publication:

http://www.aies-conference.com/wp-content/papers/main/AIES-19_paper_68.pdf


Features of this press release:
Journalists, scientists
Information technology, cultural studies, philosophy / ethics, psychology
supraregional
Research results, research projects
German