Regulation and AI risk

'''Regulation and AI risk''' is the debate over whether regulation could be used to reduce the risks of Unfriendly AI, and over what forms of regulation would be appropriate.

Several authors have advocated that AI research be regulated, but have been vague on the details. Yampolskiy & Fox (2012) note that university research programs in the social and medical sciences are overseen by institutional review boards, and propose setting up analogous review boards to evaluate potential AGI research. To be successful, AI regulation would have to be global, and there is the potential for an AI arms race between nations. Partly because of this, McGinnis (2010) argues that the government should not attempt to regulate AGI development, but should instead concentrate on providing funding to research projects intended to create safe AGI.

While Shulman & Armstrong (2009) argue that the unprecedentedly destabilizing effect of AGI could give world leaders cause to cooperate more than usual, the opposite argument can be made as well. Gubrud (1997) argues that molecular nanotechnology could make countries more self-reliant and international cooperation considerably harder, and that AGI could contribute to such a development. AGI technology is also much harder to detect than, for example, nuclear technology: AGI research can be done in a garage, while nuclear weapons require substantial infrastructure (McGinnis 2010).

Goertzel & Pitt (2012) suggest that for regulation to be enacted, there might need to be an AGI Sputnik moment: a technological achievement that makes the possibility of AGI evident to the public and to policy makers. They note that after such a moment, full human-level AGI might be developed within a relatively short time, while the negotiations required to enact new kinds of arms control treaties would take considerably longer.

== References ==

* '''Ben Goertzel & Joel Pitt (2012):''' Nine Ways to Bias Open-Source AGI Toward Friendliness. Journal of Evolution and Technology.
* '''Mark Gubrud (1997):''' Nanotechnology and International Security. Fifth Foresight Conference on Molecular Nanotechnology.
* '''John McGinnis (2010):''' Accelerating AI. Northwestern University Law Review.
* '''Carl Shulman & Stuart Armstrong (2009):''' [http://intelligence.org/files/ArmsControl.pdf Arms control and intelligence explosions]. European Conference on Computing and Philosophy.
* '''Roman Yampolskiy & Joshua Fox (2012):''' [http://intelligence.org/files/SafetyEngineering.pdf Safety Engineering for Artificial General Intelligence]. Topoi.

== See also ==

* [[AI arms race]]
* [[AGI Sputnik moment]]
* [[Existential risk]]
* [[Unfriendly artificial intelligence]]