'''Recursive Self-Improvement''' is an approach to [[Artificial Intelligence]] in which a system modifies its own functionality to improve its performance. Each improvement then feeds back into the next cycle, allowing the system to reach ever higher levels of intelligence.

Assuming a machine of sufficient power were available, once an initial "escape velocity" had been reached, the intelligence of such a system would be expected to accelerate rapidly; according to [[Eliezer Yudkowsky]], it could become capable in a matter of "weeks or hours" of achieving what would take humanity decades.

Ultimately, such a system would be bound by the limitations of its hardware. Within those limits, the software would search for the most optimal cognitive algorithms.
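The feedback cycle described above can be sketched as a toy loop. This is purely illustrative: the proportional growth rate and the hardware ceiling are invented assumptions for the sketch, not a model from the literature.

```python
# Toy sketch of a recursive self-improvement loop (illustrative only).
# Assumptions: each cycle's gain scales with current capability, and a
# hardware ceiling bounds growth, as described in the text above.

def recursive_improvement(capability=1.0, gain=0.5,
                          hardware_limit=100.0, cycles=20):
    """Each cycle, capability grows in proportion to itself
    until it hits the hardware ceiling."""
    history = [capability]
    for _ in range(cycles):
        capability = min(capability * (1 + gain), hardware_limit)
        history.append(capability)
    return history

trajectory = recursive_improvement()
# Growth is exponential at first, then flattens at the hardware limit.
print(trajectory[0], trajectory[-1])
```

The sketch captures the two claims made above: early growth compounds (each cycle's output is the next cycle's input), while the hardware bound eventually dominates.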
==Blog Posts==

*[http://lesswrong.com/lw/we/recursive_selfimprovement/ Recursive Self Improvement] by Eliezer Yudkowsky
*[http://lesswrong.com/lw/w5/cascades_cycles_insight/ Cascades, Cycles, Insight...] by Eliezer Yudkowsky
*[http://lesswrong.com/lw/w6/recursion_magic/ ...Recursion, Magic] by Eliezer Yudkowsky

==External Links==

*[http://commonsenseatheism.com/wp-content/uploads/2011/02/Good-Speculations-Concerning-the-First-Ultraintelligent-Machine.pdf Speculations Concerning the First Ultraintelligent Machine] by I. J. Good

==See Also==

*[[Evolutionary Algorithm]]
*[[Intelligence explosion]]
*[[Singularity]]