Seed AI is a term coined by Eliezer Yudkowsky for an AGI that would act as the starting point for recursive self-improvement. Initially, such a program may have sub-human intelligence; the key to a successful AI takeoff would lie in creating adequate starting conditions.
The capabilities of a Seed AI may be contrasted with those of a human. While humans can increase their intelligence by, for example, learning mathematics, they cannot increase their underlying ability to learn: we cannot currently produce drugs that make us learn faster, nor implant intelligence-increasing chips into our brains. Humans are therefore not recursively self-improving. The reason is evolutionary: brains evolved before deliberative thought, and evolution cannot go back and refactor its method of creating intelligence.
An AI, on the other hand, is created by humans' deliberative intelligence, so we can in theory program a simple but general AI that has access to all of its own programming. While it is true that any sufficiently intelligent being could work out how to recursively self-improve, some architectures, such as neural networks or evolutionary algorithms, may have a much harder time doing so. A Seed AI is distinguished by being built to self-modify from the start.
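The idea of a program with "access to all its own programming" can be made concrete with a toy sketch. This is an illustration only, not a real AGI design; all names here are invented for the example. A program that holds its own source code as data can inspect it, rewrite it, and instantiate the rewritten successor:

```python
# Toy sketch: a program that carries its own source as data, edits that
# source, and runs the edited successor. Purely illustrative.
src = "def double(x):\n    return x * 2\n"

# Instantiate the current version from its own source.
ns = {}
exec(src, ns)
double = ns["double"]

# The "self-modification": rewrite the source, then run the successor.
successor_src = src.replace("double", "triple").replace("x * 2", "x * 3")
ns2 = {}
exec(successor_src, ns2)
triple = ns2["triple"]

print(double(4), triple(4))  # 8 12
```

The point of the sketch is the asymmetry with humans: the program's "method of creating intelligence" is an ordinary data structure it can read and modify, whereas a brain cannot rewrite its own learning machinery.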
One critical consideration in Seed AI design is that its goal system must remain stable under modification: the architecture must provably preserve its utility function while becoming more intelligent. If the first iteration of the Seed AI has a friendly goal and is sufficiently good at prediction, it will remain safe indefinitely: whenever it predicts that a proposed modification would change its goal, its current goal disvalues that outcome, so it declines to make the modification.
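The goal-stability argument can be sketched as a simple acceptance test, assuming (unrealistically) that the agent can perfectly predict its successor's utility function. Every name below is an illustrative assumption, not a component of any actual system:

```python
# Toy sketch of the goal-stability argument: the agent evaluates a
# proposed self-modification by predicting the successor's utility
# function, and rejects any modification that would change its goal.

def utility(world_state: str) -> float:
    """The agent's current goal: a fixed utility over world states."""
    return 1.0 if world_state == "safe" else 0.0

def accept_modification(modification: dict) -> bool:
    """Accept only modifications predicted to preserve the current goal.

    `modification["successor_utility"]` stands in for the agent's
    (assumed perfect) prediction of its post-modification goal.
    """
    successor = modification["successor_utility"]
    # Compare the two utility functions on every state the agent models.
    states = ["safe", "unsafe"]
    return all(utility(s) == successor(s) for s in states)

# A capability boost that leaves the goal intact is accepted...
faster = {"successor_utility": utility}
print(accept_modification(faster))   # True

# ...while one that would flip the goal is rejected *by the current goal*.
flipped = {"successor_utility": lambda s: 1.0 - utility(s)}
print(accept_modification(flipped))  # False
```

Note that the filter is applied by the current goal system: the modification is rejected not because it is "wrong" in some external sense, but because the agent's present utility function assigns low value to futures in which its goal has changed.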
- Seed AI design description by Eliezer Yudkowsky.