Recursive Self-Improvement
From Lesswrongwiki
Revision as of 22:38, 18 June 2012
Recursive Self-Improvement is an approach to Artificial Intelligence in which a system makes adjustments to its own functionality, resulting in improved performance. The system could then feed back on itself, each cycle reaching ever higher levels of intelligence and resulting in either a hard or soft AI takeoff.
Such a system would be bound by the limitations of its hardware; within those limits, the software would search for increasingly optimal cognitive algorithms.
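The feedback cycle described above can be sketched as a toy loop. This is a hypothetical illustration only: the growth rate and the hardware ceiling are made-up numbers, not claims about any real AI system.

```python
# Toy sketch of recursive self-improvement (illustrative assumptions only):
# each cycle, the system's capability determines how large an improvement it
# can find, so gains compound until a fixed hardware ceiling caps progress.

HARDWARE_LIMIT = 1000.0  # assumed hardware-imposed ceiling on capability


def self_improve(capability: float) -> float:
    """One improvement cycle: a more capable system finds bigger gains."""
    gain = capability * 0.5  # assumed: gains scale with current capability
    return min(capability + gain, HARDWARE_LIMIT)


def run(cycles: int, capability: float = 1.0) -> list:
    """Run the feedback loop and record the capability trajectory."""
    history = [capability]
    for _ in range(cycles):
        capability = self_improve(capability)
        history.append(capability)
    return history


trajectory = run(20)
print(trajectory[-1])  # growth compounds, then plateaus at the hardware limit
```

With these assumed numbers the trajectory grows geometrically (a hard-takeoff-like curve) until it hits the hardware limit; a smaller per-cycle gain would give the slower, soft-takeoff-like version of the same loop.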
Blog Posts
- Recursive Self-Improvement by Eliezer Yudkowsky
- Cascades, Cycles, Insight... by Eliezer Yudkowsky
- ...Recursion, Magic by Eliezer Yudkowsky
External Links
- Seed AI description from the Singularity Institute.
- Risks from Artificial Intelligence by Eliezer Yudkowsky.
- Advantages of Artificial Intelligence by Kaj Sotala.
- Speculations Concerning the First Ultraintelligent Machine by I.J. Good.