Machine ethics

Machine ethics is the emerging field that studies how to create machines that consider the moral implications of their actions and act accordingly. That is, it asks how humanity can ensure that the minds created through AI can reason morally about other minds - thus creating Artificial Moral Agents (AMAs).

Historically, the earliest famous attempt at machine ethics was made by Isaac Asimov in a 1942 short story: a set of rules known as the Three Laws of Robotics. Serving as the basis of many of his stories, the laws demonstrated how seemingly airtight rules could so often fail - even before accounting for the errors inevitable in machine comprehension.

Currently, the application of machine ethics is limited to simple machines programmed with narrow AI - various moral philosophies have been implemented using many techniques, all with limited success. Despite this, it has been argued that as the development of a superintelligence approaches, humanity should focus on developing machine ethics for an artificial general intelligence before that moment arrives.

As Wallach and Allen put it, “even if full moral agency for machines is a long way off, it is already necessary to start building a kind of functional morality, in which artificial moral agents have some basic ethical sensitivity”.
