Benevolence

For an AGI to have a positive and not a negative effect on humanity, its ''[[terminal value]]'' must include benevolence to humans. In other words, it must seek the welfare of humans, the maximization of the [[Complexity of value|full set of human values]] (for the humans' benefit, not for itself). This article discusses the contrast between benevolence as a [[terminal value]] and as an instrumental value in a future artificial general intelligence, and how benevolence might arise, whether or not it is specified as a value.
  
 
Since cooperation has instrumental value for achieving a variety of terminal values, benevolence in agents--humans, AGIs, or others--may arise even if it is not specified as an end-goal.
 
For example, humans often cooperate because they expect an immediate benefit in return; because they want to establish a reputation for benevolence in action or personal character that may engender future cooperation; because they want to signal, to the extent that they can make their true motivations evident, that they are truly altruistic, and thereby likewise engender future cooperation; or because they live in a human society that rewards cooperation and punishes misbehavior.
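
The game-theoretic intuition behind such instrumental cooperation can be made concrete with a toy repeated-game sketch. This is an illustration added here, not material from the original article, and every payoff value and discount factor in it is an arbitrary assumption: a purely self-interested agent weighs the one-time gain from defecting against the stream of future cooperation it forfeits once its partner retaliates.

<pre>
# Toy model: instrumental cooperation in a repeated prisoner's dilemma.
# All payoffs and discount factors are illustrative assumptions, not taken
# from the article or its references.

REWARD = 3      # per-round payoff when both sides cooperate
TEMPTATION = 5  # one-time payoff for defecting against a cooperator
PUNISHMENT = 1  # per-round payoff under mutual defection (permanent retaliation)


def discounted_stream(per_round, discount, first_round=0):
    """Present value of receiving `per_round` every round from `first_round` onward."""
    return per_round * discount ** first_round / (1.0 - discount)


def prefers_cooperation(discount):
    """Does a self-interested agent facing a grim-trigger partner keep cooperating?"""
    always_cooperate = discounted_stream(REWARD, discount)
    # Defect once for the temptation payoff, then face mutual defection forever.
    defect_now = TEMPTATION + discounted_stream(PUNISHMENT, discount, first_round=1)
    return always_cooperate > defect_now


if __name__ == "__main__":
    for discount in (0.2, 0.5, 0.8, 0.95):
        choice = "cooperate" if prefers_cooperation(discount) else "defect"
        print(f"discount factor {discount:.2f}: a self-interested agent prefers to {choice}")
</pre>

With these made-up numbers the agent keeps cooperating only when it values the future heavily enough (here, a discount factor above 0.5); this is the sense in which benevolent-looking behavior can arise purely instrumentally.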
  
Humans also sometimes undergo a moral shift (described by Immanuel Kant), in which they become altruistic and learn to value benevolence in its own right, not just as a means to an end.  
  
However, these considerations cannot be relied on to bring about benevolence in an artificial general intelligence. Benevolence is an instrumental value for an AGI only when humans are roughly equal in power to it. If the AGI is much more intelligent than humans, it will not care about the rewards and punishments which humans can deliver. Moreover, a Kantian shift is unlikely in a sufficiently powerful AGI, as any changes in one's goals, including [[Subgoal stomp|replacement of terminal by instrumental values]], generally reduce the likelihood of achieving one's goals (Fox & Shulman 2010; Omohundro 2008).
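
The power-asymmetry argument can be sketched in the same toy fashion; again, the payoff structure and all numbers below are assumptions made for illustration, not results from the cited papers. The sketch treats the humans' capacity to reward or punish as roughly fixed while the agent's ability to get what it wants unilaterally grows.

<pre>
# Toy model of why instrumental benevolence depends on a rough balance of power.
# The payoff structure and every number here are illustrative assumptions.

def best_policy(unilateral_gain, human_sanction, cooperation_payoff=10.0):
    """Policy a purely goal-directed agent picks in this toy setting.

    unilateral_gain    -- what the agent can obtain by ignoring humans entirely
    human_sanction     -- the cost humans can actually impose on a defecting agent
    cooperation_payoff -- what the agent gets by remaining cooperative
    """
    defect_payoff = unilateral_gain - human_sanction
    return "cooperate" if cooperation_payoff >= defect_payoff else "defect"


if __name__ == "__main__":
    HUMAN_SANCTION = 8.0  # humans' capacity to punish, held fixed as the agent grows stronger
    for unilateral_gain in (5.0, 15.0, 50.0, 1000.0):
        print(f"unilateral gain {unilateral_gain:>7.1f}: {best_policy(unilateral_gain, HUMAN_SANCTION)}")
</pre>

Once the agent's unilateral gain dwarfs anything humans can add or subtract, the cooperative policy stops being selected, which is why benevolence would have to sit among the AGI's terminal values rather than be left to emerge instrumentally.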
  
 
==References==
 
* [http://intelligence.org/files/SuperintelligenceBenevolence.pdf Joshua Fox and Carl Shulman (2010), "Superintelligence does not imply benevolence"], in Proceedings of the VIII European Conference on Computing and Philosophy, October 2010, ed. Klaus Mainzer (Munich: Verlag Dr. Hut), pp. 456-461.
 
* [http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf Stephen Omohundro (2008), "The basic AI drives"], in Artificial General Intelligence 2008: Proceedings of the First AGI Conference, ed. Pei Wang, Ben Goertzel, and Stan Franklin, pp. 483-492. Frontiers in Artificial Intelligence and Applications 171. Amsterdam: IOS Press.
 