Difference between revisions of "Seed AI"

From Lesswrongwiki
{{wikilink}}
'''Seed AI''' is a term coined by [[Eliezer Yudkowsky]] for an [[AGI]] designed to act as the starting point for recursive self-improvement. Initially this program may have sub-human intelligence. The key to a successful [[AI takeoff]] lies in creating adequate starting conditions.
The capabilities of a Seed AI may be contrasted with those of a human. While humans can increase their intelligence by, for example, learning mathematics, they cannot ''increase their ability to learn''. That is, humans cannot currently produce drugs that make us learn faster, nor implant intelligence-increasing chips into our brains; we are therefore not recursively self-improving. This is because we evolved: brains arose before deliberative thought, and evolution cannot go back and refactor its method of creating intelligence.
An AI, on the other hand, is created by humans' deliberative intelligence. We can therefore, in theory, program a simple but general AI that has access to all of its own programming. While it is true that any sufficiently intelligent being could determine how to recursively self-improve, some architectures, such as neural networks or evolutionary algorithms, may have a much harder time doing so. A Seed AI is distinguished by being built to self-modify from the start.
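As an illustrative sketch only (a toy Python program, not a real AGI design; all names here are hypothetical and not from the article), the structural difference is that the program's improvement procedure is itself something the program can rewrite:

```python
# Toy illustration of self-modification: the improvement procedure is
# data the program can replace. A fixed learner could apply its
# improver but never rewrite it; this program can do both.

class SeedProgram:
    def __init__(self):
        # The current self-improvement step.
        self.step = lambda skill: skill + 1

    def self_modify(self):
        # Rewrite the improver itself with a stronger version,
        # defined in terms of the old one.
        old = self.step
        self.step = lambda skill: old(skill) * 2

    def run(self, skill):
        return self.step(skill)

p = SeedProgram()
print(p.run(1))   # 2: the original improver
p.self_modify()
print(p.run(1))   # 4: the improver itself has been improved
```

The point of the sketch is only that the object being improved includes the improvement method itself, which is what "access to all of its own programming" enables.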
One critical consideration for a Seed AI is that its goal system must remain stable under self-modification. The architecture must be proven to faithfully preserve its utility function while becoming more intelligent. If the first iteration of the Seed AI has a [[Friendly AI|friendly]] goal and is sufficiently able to make predictions, then it will remain safe indefinitely: if it predicted that a modification would change its goal, it would not want that outcome according to its current goal, and so would not self-modify.
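This argument can be sketched as a toy decision rule (hypothetical names, and it assumes goals can be compared directly and predictions are reliable, which is exactly the hard part in practice): the agent evaluates each candidate modification with its current goal and rejects any that would alter that goal.

```python
# Toy sketch of goal stability under self-modification. A candidate
# modification maps the current agent to a predicted successor; it is
# accepted only if the successor keeps the same goal.

class Agent:
    def __init__(self, goal, capability):
        self.goal = goal              # stands in for the utility function
        self.capability = capability

    def consider(self, modification):
        predicted = modification(self)      # predict the successor
        if predicted.goal == self.goal:     # goal preserved?
            self.capability = predicted.capability
        # Otherwise reject: judged by its *current* goal, becoming an
        # agent with a different goal is an outcome it does not want.

safe = lambda a: Agent(a.goal, a.capability + 10)
unsafe = lambda a: Agent("some other goal", a.capability + 100)

a = Agent("friendly", capability=1)
a.consider(safe)     # accepted: more capable, same goal
a.consider(unsafe)   # rejected despite the larger capability gain
print(a.goal, a.capability)   # friendly 11
```

The unsafe modification is refused even though it offers more capability, mirroring the paragraph's claim that a goal change is undesirable under the current goal.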
  
 
==External Links==
 
*[http://intelligence.org/upload/LOGI/seedAI.html Seed AI] design description by Eliezer Yudkowsky.
 
 
==See Also==
 
*[[Gödel machine]]
*[[AI takeoff]]
*[[Recursive self-improvement]]
*[[Intelligence explosion]]
 

Revision as of 10:01, 30 June 2012
