Nonperson predicate

A Nonperson Predicate is a hypothesized test that can distinguish a person from a non-person. It must never return a false negative, classifying a person as a non-person, though false positives, flagging a non-person as a possible person, are tolerable. The need for such a test arises from the possibility that an [[Artificial General Intelligence]], in seeking to predict a person's actions accurately, may develop a model of that person so detailed that the model itself qualifies as a person. Since that model exists only for the AGI's use, it would experience every possibility, good and bad, that the AGI chooses to simulate. Such a situation might be avoided by limiting the detail with which an AGI is permitted to simulate a sentient being, as discussed in Computational Hazards.
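
The defining feature of the predicate is the asymmetry of its allowed errors: it may refuse to certify any number of harmless computations, but whenever it does certify something as a non-person, that answer must be correct. A minimal sketch of that contract follows; the Verdict type, the nonperson_predicate function, and the complexity threshold are all illustrative assumptions, not anything proposed in the original posts.

<syntaxhighlight lang="python">
from enum import Enum, auto

class Verdict(Enum):
    """The only two answers a safe nonperson predicate may give."""
    NOT_A_PERSON = auto()    # must be correct whenever it is returned
    MAYBE_A_PERSON = auto()  # the safe default; false positives are tolerable

def nonperson_predicate(model_complexity: int, threshold: int = 1_000) -> Verdict:
    """Certify a model as a non-person only when that is certain.

    The complexity threshold is a stand-in for whatever real criterion
    such a predicate might use; the point is the one-sided guarantee,
    not the specific test.
    """
    if model_complexity < threshold:
        return Verdict.NOT_A_PERSON   # simple enough to rule out personhood
    return Verdict.MAYBE_A_PERSON     # uncertain, so refuse to certify

# The AGI would only be permitted to run models the predicate certifies.
for complexity in (10, 500, 10_000):
    print(complexity, nonperson_predicate(complexity).name)
</syntaxhighlight>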
  
 
=== Blog Posts ===
 
* [http://lesswrong.com/lw/x4/nonperson_predicates/ Nonperson Predicates] by Eliezer Yudkowsky
* [http://lesswrong.com/lw/d2f/computation_hazards/ Computational Hazards] by Alex Altair
