A '''Nonperson Predicate''' is a theorized test for distinguishing between a person and anything that is not a person. The need for such a test arises from the possibility that when an [[Artificial General Intelligence]] predicts a person's actions, it may develop a model of them so complete that the model itself qualifies as a person. Because that model exists only for the AGI's use, it would actually experience every possibility, good or bad, that the AGI simulates while investigating outcomes; the negative situations alone would generate a large amount of negative [[utility]]. Simulating a sufficiently complex model of a person is therefore a [[computational hazard]], one that may be avoidable by limiting the complexity of any model of a person that an AGI is permitted to create, as discussed in Computational Hazards.
 
Any practical implementation would likely consist of a large number of nonperson predicates of increasing complexity, composed as sketched below. For most nonpersons, some cheap predicate will quickly certify that the candidate is not a person and conclude the test. Any number of predicates may run before the test declares something a nonperson, but it is crucial that no predicate in the test ever claims that a person is not a person. If errors are unavoidable, it is preferable for the AGI to treat a nonperson as a person (a false positive) than to treat a person as a nonperson (a false negative).
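
A minimal sketch of how such a cascade might be composed, in Python. The individual predicates here (<code>is_small_lookup_table</code>, <code>lacks_self_model</code>) are hypothetical placeholders, not predicates anyone has actually proposed; the point is only the structure: each predicate answers ''definitely not a person'' only when certain, and the composition fails safe when none of them can.

 # A sketch of a nonperson-predicate cascade. The individual predicates
 # below are hypothetical placeholders; any real predicate would need
 # its own justification.
 
 def is_small_lookup_table(model) -> bool:
     """Cheap check: a tiny finite lookup table clearly cannot be a person."""
     # Objects without a known state count are NOT certified (conservative).
     return getattr(model, "state_count", float("inf")) < 1000
 
 def lacks_self_model(model) -> bool:
     """More expensive check: assume a model with no representation of
     itself is not a person."""
     # If we cannot tell whether it has a self-model, do not certify it.
     return not getattr(model, "has_self_model", True)
 
 # Ordered from cheapest to most expensive. Each predicate is conservative:
 # it returns True ("definitely not a person") only when certain.
 NONPERSON_PREDICATES = [is_small_lookup_table, lacks_self_model]
 
 def definitely_not_a_person(model) -> bool:
     """True only if some predicate certifies the model is not a person.
     If every predicate is unsure, fail safe: treat it as a possible person."""
     return any(pred(model) for pred in NONPERSON_PREDICATES)

Under this composition, a false positive (a genuine nonperson that no predicate can certify) merely costs resources, while the guarantee that matters, never calling a person a nonperson, holds as long as every individual predicate holds it.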
 
=== See Also ===
* [[Philosophical zombie]]
  
 
=== Blog Posts ===
 
* [http://lesswrong.com/lw/x4/nonperson_predicates/  Nonperson Predicates] by Eliezer Yudkowsky
 
* [http://lesswrong.com/lw/d2f/computation_hazards/ Computational Hazards] by Alex Altair
 