Nonperson predicate

From Lesswrongwiki

A Nonperson Predicate is a theorized test that can distinguish a person from a non-person. It must never return a false negative, claiming that a person isn't a person, but false positives are tolerable. The need for such a test arises from the possibility that, in seeking to accurately predict a person's actions, an Artificial General Intelligence may develop a model of that person so complete that the model itself qualifies as a person. Since the model exists only for the AGI's use, it would experience every possibility, good and bad, that the AGI simulates. Such a situation may be avoidable by limiting the complexity with which an AGI is permitted to simulate a sentient being, as discussed in Computational Hazards.
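The asymmetric error tolerance described above can be sketched in code. The sketch below is purely illustrative: no actual nonperson predicate is known, and the `Model` fields and the complexity threshold are hypothetical stand-ins, not real quantities. The point it demonstrates is only the shape of the guarantee: the predicate may return `True` ("certainly not a person") only in conservatively safe cases, and returns `False` ("possibly a person") whenever it cannot rule personhood out.

```python
from dataclasses import dataclass

@dataclass
class Model:
    """Hypothetical stand-in for an AGI's internal model of a human."""
    state_count: int      # rough measure of the model's complexity (illustrative)
    has_self_model: bool  # whether the model reflects on itself (illustrative)

# Illustrative threshold only; no real value is known.
PERSONHOOD_COMPLEXITY_BOUND = 10**6

def nonperson_predicate(model: Model) -> bool:
    """Return True only if the model is certainly not a person.

    The asymmetry is the key property: False means "possibly a person"
    (a tolerable false positive), while True must never be returned for
    an actual person (no false negatives).
    """
    if model.state_count < PERSONHOOD_COMPLEXITY_BOUND and not model.has_self_model:
        return True   # simple, non-reflective model: safely not a person
    return False      # cannot rule out personhood; treat as possibly a person
```

Note that the predicate errs only in one direction: a crude model may be flagged as "possibly a person" unnecessarily, but a rich, self-reflective model is never cleared.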

Blog Posts