Regulation and AI risk

From Lesswrongwiki
'''Regulation and AI risk''' is the debate on whether regulation could be used to reduce the risks of [[Unfriendly artificial intelligence|Unfriendly AI]], and what forms of regulation would be appropriate.
Several authors have advocated that AI research be regulated, but have been vague on the details. Yampolskiy & Fox (2012) note that university research programs in the social and medical sciences are overseen by institutional review boards, and propose setting up analogous review boards to evaluate potential AGI research. To be successful, AI regulation would have to be global, and there is the potential for an [[AI arms race]] between different nations. Partly because of this, McGinnis (2010) argues that the government should not attempt to regulate AGI development; rather, it should concentrate on funding research projects intended to create safe AGI. Kaushal & Nolan (2015) point out that regulations on AGI development would give a speed advantage to any project willing to flout them, and instead propose government funding (possibly in the form of an "AI Manhattan Project") for AGI projects meeting particular criteria.
  
 
While Shulman & Armstrong (2009) argue that the unprecedentedly destabilizing effect of AGI could drive world leaders to cooperate more than usual, the opposite argument can also be made. Gubrud (1997) argues that molecular nanotechnology could make countries more self-reliant and international cooperation considerably harder, and that AGI could contribute to such a development. AGI technology is also much harder to detect than, for example, nuclear technology: AGI research can be done in a garage, while nuclear weapons require substantial infrastructure (McGinnis 2010). On the other hand, Scherer (2015) argues that artificial intelligence could nevertheless be susceptible to regulation, due to the increasing prominence of governmental entities and large corporations in AI research and development.

Goertzel & Pitt (2012) suggest that for regulation to be enacted, there might need to be an AGI Sputnik moment - a technological achievement that makes the possibility of AGI evident to the public and to policy makers. They note that after such a moment, it might not take very long for full human-level AGI to be developed, while the negotiations required to enact new kinds of arms control treaties would take considerably longer.

== References ==

* '''Carl Shulman & Stuart Armstrong (2009):''' [http://intelligence.org/files/ArmsControl.pdf Arms control and intelligence explosions]. European Conference on Computing and Philosophy.

* '''Roman Yampolskiy & Joshua Fox (2012):''' [http://intelligence.org/files/SafetyEngineering.pdf Safety Engineering for Artificial General Intelligence]. Topoi.

* '''Mohit Kaushal & Scott Nolan (2015):''' [http://www.brookings.edu/blogs/techtank/posts/2015/04/14-understanding-artificial-intelligence Understanding Artificial Intelligence]. Brookings.

== See also ==

Revision as of 23:10, 11 June 2015