{{wikilink|Technological singularity}}
The '''Singularity''' or '''Technological Singularity''' is a term with a number of different meanings, ranging from a period of rapid change to the creation of greater-than-human intelligence.
  
==Three Singularity schools==
  
Eliezer Yudkowsky has observed that the varying perspectives on the Singularity can be broadly split into three "major schools": Accelerating Change (Ray Kurzweil), the Event Horizon (Vernor Vinge), and the Intelligence Explosion (I.J. Good).
  
'''The Accelerating Change School''' observes that, contrary to our intuitive linear expectations about the future, the rate of change of information technology grows exponentially. In the last 200 years we have seen more [[technological revolution|technological revolutions]] than in the 20,000 years before that. Clear examples of this exponential growth include, but are not restricted to: Moore’s law, Internet speed, gene sequencing and the spatial resolution of brain scanning. By projecting these growth trends forward it becomes possible to estimate what it will be feasible to engineer in the future (see the sketch below). [[Wikipedia:Ray Kurzweil|Ray Kurzweil]] specifically predicts that the Singularity will arrive in 2045.
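
As a rough illustration of this kind of projection (a sketch only, not a calculation taken from Kurzweil or Moore), the snippet below assumes some quantity such as compute per dollar doubles every two years and extrapolates it forward. Both the doubling time and the starting value are arbitrary assumptions chosen to show the shape of exponential growth.

<pre>
# Toy extrapolation of exponential technology growth (illustrative only).
# Assumptions: the quantity doubles every 2 years and starts at 1.0 in
# arbitrary units; neither number comes from Kurzweil or Moore directly.

def project(start_value, doubling_time_years, years_ahead):
    """Return the projected value after `years_ahead` years of steady doubling."""
    return start_value * 2 ** (years_ahead / doubling_time_years)

if __name__ == "__main__":
    for years in (10, 20, 30, 40):
        print(f"after {years:2d} years: x{project(1.0, 2.0, years):,.0f}")
    # after 10 years: x32
    # after 20 years: x1,024
    # after 30 years: x32,768
    # after 40 years: x1,048,576
</pre>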
  
'''The Event Horizon School''' asserts that for the entirety of Earth’s history all technological and social progress has been the product of the human mind. However, [[Wikipedia:Vernor Vinge|Vernor Vinge]] asserts that technology will soon improve on human intelligence, whether via brain-computer interfaces, Artificial Intelligence, or both. Vinge argues that, since a predictor must be at least as smart as the agent it is predicting, once we create smarter-than-human agents technological progress will be beyond the comprehension of anything a mere human can imagine now. He called this point in time the Singularity.
  
'''The [[Intelligence explosion]] School''' asserts that a positive feedback loop could be created in which an intelligence makes itself smarter, thereby getting better at making itself even smarter. A strong version of this idea suggests that once the positive feedback starts to play a role, it will lead to a dramatic leap in capability very quickly (see the toy model sketched below). This scenario does not necessarily rely on a purely digital substrate for the explosion to occur: humans with computer-augmented brains, or humans whose intelligence has been genetically enhanced, could also set off an Intelligence Explosion. It is this interpretation of the Singularity that Less Wrong broadly focuses on.
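
The feedback-loop claim can be illustrated with a deliberately crude toy model (an assumption-laden sketch, not a model endorsed by Good or Yudkowsky): let each generation's capability determine how large an improvement it can make to its successor. With the parameters assumed below, growth looks modest for roughly twenty generations and then abruptly becomes explosive.

<pre>
# Toy model of recursive self-improvement (illustrative assumptions only).
# capability[t+1] = capability[t] * (1 + gain * capability[t]):
# a smarter system makes proportionally larger improvements to itself.

def run_explosion(initial_capability=1.0, gain=0.05, generations=30):
    """Yield (generation, capability) pairs for the toy recurrence."""
    c = initial_capability
    for t in range(generations):
        yield t, c
        c = c * (1.0 + gain * c)

if __name__ == "__main__":
    for t, c in run_explosion():
        print(f"generation {t:2d}: capability {c:.3g}")
    # Capability roughly reaches 10x after 20 generations,
    # then exceeds 1e30 within the next ten.
</pre>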
  
==Chalmers' analysis==
  
Philosopher David Chalmers published a [http://consc.net/papers/singularity.pdf significant analysis of the Singularity], focusing on intelligence explosions, in the ''Journal of Consciousness Studies''. He gives a careful analysis of the main premises and arguments for the existence of the Singularity. According to him, the main argument is:
  
*1. There will be AI (before long, absent defeaters).
*2. If there is AI, there will be AI+ (soon after, absent defeaters).
*3. If there is AI+, there will be AI++ (soon after, absent defeaters).

Therefore:

*4. There will be AI++ (before too long, absent defeaters).
  
He then examines the arguments that can be given for these three premises. Premise 1 seems to be grounded in either the [[Evolutionary argument for human-level AI]] or the [[Emulation argument for human-level AI]]. Premise 2 is grounded in the existence and feasibility of an [[Extensibility argument for greater-than-human intelligence|extensibility method for greater-than-human intelligence]]. Premise 3 is a more general version of premise 2. His analysis of how the singularity could occur defends the likelihood of an intelligence explosion. He also discusses the nature of general intelligence and possible obstacles to a singularity. A good deal of discussion is given to the dangers of an intelligence explosion, and Chalmers concludes that we must negotiate it very carefully by building the correct values into the initial AIs.
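
Schematically (the notation below is illustrative rather than Chalmers' own), the argument is just two chained applications of modus ponens, with each step qualified by "absent defeaters":

:<math>
\begin{array}{rl}
1. & \text{AI} \\
2. & \text{AI} \rightarrow \text{AI}^{+} \\
3. & \text{AI}^{+} \rightarrow \text{AI}^{++} \\
\hline
4. & \text{AI}^{++}
\end{array}
</math>

The substantive work in the paper therefore goes into defending premises 1–3 and their timing qualifiers, rather than the validity of the inference itself.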
 
==Blog posts==
 
*[http://yudkowsky.net/singularity/schools Three Major Singularity Schools] by Eliezer Yudkowsky
*[http://lesswrong.com/lw/wf/hard_takeoff/ Hard Takeoff] by Eliezer Yudkowsky

==References==

*[http://www.stat.vt.edu/tech_reports/2005/GoodTechReport.pdf Speculations Concerning the First Ultraintelligent Machine] by I.J. Good
*[http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html The Coming Technological Singularity] essay by Vernor Vinge
*[http://web.archive.org/web/20110716035716/http://agi-conf.org/2010/wp-content/uploads/2009/06/agi10singmodels2.pdf An overview of models of technological singularity] by Anders Sandberg
*[http://consc.net/papers/singularity.pdf The Singularity: A Philosophical Analysis] by David J. Chalmers
*[http://www.kurzweilai.net/artificial-superintelligence-a-futuristic-approach Artificial Superintelligence: A Futuristic Approach] by Roman V. Yampolskiy

==External links==

*[http://www.youtube.com/watch?v=IfbOyw3CT6A Singularity TED Talk] by Ray Kurzweil (YouTube)
*[http://www.youtube.com/watch?v=mDhdt58ySJA The Singularity: Three Major Schools of Thought] Singularity Summit talk by Eliezer Yudkowsky
*[http://cser.org/resources.html Centre for the Study of Existential Risk], University of Cambridge

==See also==

*[[Intelligence explosion]]
*[[Event horizon thesis]]
*[[Hard takeoff]], [[Soft takeoff]]
