Difference between revisions of "Machine ethics"

From Lesswrongwiki
{{wikilink}}
'''Machine Ethics''' is the emerging field which seeks to create machines that consider the moral implications of their actions and act morally. A machine which does so is called an '''Artificial Moral Agent''' ('''AMA'''). The application of machine ethics is currently limited to simple machines programmed with narrow AI. Various moral philosophies have been implemented, using many techniques, all with limited success.
Today, machine ethics has many practical applications. Drones used in war, though they risk no operator's life, make targeted killing easier. Robots developed to care for the elderly may reduce their human contact, reduce their privacy, and make them feel devalued, but could also permit them greater independence. The development of driverless cars will save lives, but raises conflicts between fuel efficiency and environmental concerns, and may force a solution to the [[Wikipedia:Trolley problem|trolley problem]] to be programmed in. Explicitly programmed AMAs are not yet used in industry, and present computer errors could be eliminated with better programming alone; the moral decisions embedded in such programs, however, are hardly unbiased.
Several attempts have been made to program robots to obey utilitarian and deontological ethics. Programs which analyze a situation, compare it with others in a database, and return an analysis have been created in several narrow ethical fields. Because of the explicitness required to program machines to act ethically, as [[Wikipedia:Daniel Dennett|Daniel Dennett]] put it, "AI makes philosophy honest".
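As a toy illustration of the "compare the situation with cases in a database" approach described above (the cases, features, and similarity score below are invented for illustration, not drawn from any deployed system):

```python
# Toy sketch of a case-database ethical adviser. All cases, features, and
# the similarity score are invented; real programs in narrow ethical fields
# are far more elaborate.
from dataclasses import dataclass

@dataclass
class Case:
    features: dict   # situation as feature name -> strength in [0, 1]
    verdict: str     # stored ethical judgement for this case

def similarity(a, b):
    """Average closeness of two situations across all mentioned features."""
    keys = set(a) | set(b)
    return sum(1.0 - abs(a.get(k, 0.0) - b.get(k, 0.0)) for k in keys) / len(keys)

def advise(situation, database):
    """Return the verdict of the stored case most similar to the situation."""
    best = max(database, key=lambda case: similarity(situation, case.features))
    return best.verdict

database = [
    Case({"harm": 0.9, "consent": 0.1}, "impermissible"),
    Case({"harm": 0.1, "consent": 0.9}, "permissible"),
]
print(advise({"harm": 0.8, "consent": 0.2}, database))  # -> impermissible
```

Such a program returns an analysis only as good as its database and its similarity measure, which is one reason success has been limited.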
A famous early attempt at machine ethics was made by Isaac Asimov in a 1942 short story: a set of rules known as the [[Wikipedia:Three Laws of Robotics|Three Laws of Robotics]]. The basis of many of his stories, they demonstrated how laws that seem watertight can so often fail - even without the errors inevitable in machine comprehension. The zeroth law was a later addition, extrapolated by his robots from the three programmed laws. The laws are:
 
:0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
# A robot may not injure a human being or, through inaction, allow a human being to come to harm.
# A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
# A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
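The laws' strict priority ordering can be sketched as successive filters over candidate actions (a toy model with invented actions and labels; reliably judging what "harms a human", which the laws take for granted, is precisely the hard part):

```python
# Toy sketch of the Three Laws as a strict priority ordering over candidate
# actions. The actions and their boolean labels are invented for illustration.

def choose(actions):
    # First Law: discard any action that injures a human or, through
    # inaction, allows a human to come to harm.
    safe = [a for a in actions if not a["harms_human"]]
    # Second Law: among safe actions, prefer those obeying human orders.
    obedient = [a for a in safe if a["obeys_order"]] or safe
    # Third Law: among those, prefer actions preserving the robot itself.
    surviving = [a for a in obedient if a["preserves_self"]] or obedient
    return surviving[0] if surviving else None

actions = [
    {"name": "flee", "harms_human": True, "obeys_order": True, "preserves_self": True},
    {"name": "shield the human", "harms_human": False, "obeys_order": False, "preserves_self": False},
]
print(choose(actions)["name"])  # -> shield the human
```

Even in this toy form, the robot sacrifices itself whenever every self-preserving action is filtered out by a higher law - the kind of rigid behavior Asimov's stories mined for failure modes.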
The fields of machine ethics and [[friendly artificial intelligence]] are presently disjoint, though informal efforts have been made to bridge the gap. One particular challenge is that a portion of the machine ethics community believes in [[Wikipedia:Moral universalism|moral universalism]]. As the development of a [[superintelligence]] approaches, more of the machine ethics community is expected to focus on developing ethics for an [[artificial general intelligence]].
  
== References ==
 
* [http://commonsenseatheism.com/wp-content/uploads/2011/11/Muehlhauser-Helm-The-Singularity-and-Machine-Ethics-draft.pdf The Singularity and Machine Ethics] by Luke Muehlhauser and Louie Helm
* [http://intelligence.org/upload/machine-ethics-superintelligence.pdf Machine Ethics and Superintelligence] by Carl Shulman, Henrik Jonsson, and Nick Tarleton
* [http://commonsenseatheism.com/wp-content/uploads/2011/02/Powers-Prospects-for-a-Kantian-Machine.pdf Prospects for a Kantian Machine] by Thomas M. Powers
  
== Applications Today ==
 
* [http://www.peterasaro.org/writing/asaro%20just%20robot%20war.pdf How Just Could a Robot War Be?] by Peter M. Asaro
* [http://staffwww.dcs.shef.ac.uk/people/A.Sharkey/sharkey-granny.pdf Granny and the robots: Ethical issues in robot care for the elderly] by Amanda Sharkey and Noel Sharkey

Revision as of 08:31, 21 July 2012
