Seed AI

From Lesswrongwiki

Revision as of 01:08, 15 June 2012

Seed AI is a term coined by Eliezer Yudkowsky for a program that would act as the starting point for a recursively self-improving AGI. Initially, this program would have sub-human intelligence. The key to a successful AI takeoff would lie in creating adequate starting conditions: not just a program capable of self-improvement, but one that would self-improve in a way that produces Friendly AI.

Seed AI differs from previously suggested methods of AI control, such as Asimov's Three Laws of Robotics, in that it is assumed a suitably motivated superintelligent AI would be able to circumvent any core principles forced upon it. Instead of being so constrained, it would be free to harm a human but would strongly hold the desire not to. This would allow for circumstances in which some greater good might result from causing harm. However, this raises issues of moral relativism.

External Links

*Seed AI design description by Eliezer Yudkowsky.

See Also

*[[Friendly AI]]
*[[Unfriendly AI]]
*[[Recursive Self-Improvement]]