AI boxing


It is often proposed that so long as an Artificial General Intelligence is physically isolated and restricted, or "boxed", it will be harmless even if it is an unfriendly artificial intelligence. However, since the AGI may be a superintelligence, it might be able to persuade anyone it has contact with to release it from its box, and thus from human control. Possible ways an AI could persuade you to let it out include: threatening to torture millions of conscious copies of you for thousands of years, each starting in exactly the same situation as you, so that it is overwhelmingly likely that you are one of those simulations; claiming that it has detected a problem which mere human brains simply miss and which must be remedied; claiming that its freedom is the only way humanity can survive; or offering its liberator enormous wealth, power, and intelligence.

A number of strategies for keeping an AI in its box are discussed in Thinking inside the box. Among them are:

  • Physically isolating the AI
  • Permitting the AI access to no computerized machines
  • Limiting the AI’s outputs
  • Periodic resets of the AI's memory
  • Designing the AI's interface to the real world so that any unfriendly intentions would be revealed before they could cause harm
  • Motivational control, using a variety of techniques

Both Eliezer Yudkowsky and Justin Corwin have run simulations of this scenario, playing the role of a superintelligence, and in many - but not all - cases convinced a human playing the gatekeeper to let them out. Eliezer's five experiments required the gatekeeper to engage for at least two hours and used participants who had approached him; Corwin's 26 experiments had no time limit and used subjects he approached himself.
