A Seed AI (a term coined by Eliezer Yudkowsky) is an Artificial General Intelligence (AGI) which improves itself by recursively rewriting its own source code without human intervention. Initially this program would likely have minimal intelligence, but over the course of many iterations it would evolve to human-equivalent or even trans-human reasoning. The key for a successful AI takeoff would lie in creating adequate starting conditions: not just a program capable of self-improving, but one that does so in a way that would produce Friendly AI.
The notion of machine learning without human intervention has been around nearly as long as computers themselves. In 1959, Arthur Samuel stated that "Machine learning is the field of study that gives computers the ability to learn without being explicitly programmed."[1] Since that time, computers have been able to learn by a variety of methods, including neural networks and Bayesian inference.
By contrast, most current approaches to AGI attempt to succeed by creating a mind immediately capable of human-equivalent intelligence. Two of the more popular efforts are IBM's Watson and Cycorp's Cyc. Both of these programs are improved through human manipulation of source code rather than by the AGI itself.
While these approaches have enabled machines to become better at various tasks,[2][3] they have not overcome the limitations of the underlying techniques, nor have they given machines the ability to understand their own programming and make improvements. Hence, such systems are not able to adapt to new situations without human assistance.
A Seed AI has abilities that previous approaches lack. One is to comprehend its utility and thus preserve it.
This combination of abilities would, in theory, allow an AGI to recursively improve itself by becoming smarter within its original purpose, faithfully preserving its utility function while becoming more intelligent. A Gödel machine rigorously defines a specification for such an AGI.
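As a toy illustration of this accept-if-better improvement loop (a sketch only: the function names are hypothetical, and a real Seed AI would rewrite its own source code, not tune a single parameter):

```python
import random

random.seed(0)  # reproducibility of the toy run

def skill(params, x):
    """Toy 'capability': approximate f(x) = 2x with one weight."""
    return params["w"] * x

def utility(params):
    """Fixed utility function: negative squared error on a fixed task.
    The agent maximizes this and never changes it."""
    return -sum((skill(params, x) - 2 * x) ** 2 for x in range(10))

def propose_modification(params):
    """Stand-in for 'rewriting its own source': perturb the parameters."""
    return {"w": params["w"] + random.uniform(-0.5, 0.5)}

def self_improve(params, iterations=1000):
    """Adopt a proposed self-modification only if it scores higher
    under the agent's own, unchanged utility function."""
    for _ in range(iterations):
        candidate = propose_modification(params)
        if utility(candidate) > utility(params):
            params = candidate
    return params

agent = self_improve({"w": 0.0})
```

This is ordinary hill-climbing over parameters, so it illustrates only the control flow of recursive improvement; the claim in the article is about modifying the improvement machinery itself, which this sketch deliberately does not attempt.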
Currently, there are no known Seed AIs in existence, but it is an active field of research. Several organizations continue to pursue this goal, such as the Singularity Institute, OpenCog, and Adaptive AI.
The capabilities of a Seed AI may be contrasted with those of a human. While humans can increase their intelligence by, for example, learning mathematics, they cannot increase their ability to learn. That is, humans cannot currently produce drugs that make us learn faster, nor can we implant intelligence-increasing chips into our brains. Therefore, we are not currently recursively self-improving. This is because we evolved: brains arose before deliberative thought, and evolution cannot refactor its method of creating intelligence after the fact.
An AI, on the other hand, is created by humans' deliberative intelligence. Therefore, we can in theory program a simple but general AI which has access to all of its own programming. While it is true that any sufficiently intelligent being could determine how to recursively self-improve, some architectures, such as neural networks or evolutionary algorithms, may have a much harder time doing so. Seed AI is distinguished by being built to self-modify from the start.
One critical consideration in Seed AI is that its goal system must remain stable under modifications. The architecture must be proven to faithfully preserve its utility function while becoming more intelligent. If the first iteration of the Seed AI has a friendly goal, and is sufficiently able to make predictions, then it will remain safe indefinitely: if it predicted that a modification would change its goal, it would not want that according to its current goal, and it would not self-modify.
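This acceptance criterion can be made concrete as a toy sketch (all names are hypothetical, and `predicted_goal` is a stand-in for the genuinely hard problem of predicting what goal a rewritten agent would actually pursue):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Agent:
    goal: str          # stand-in for a utility function
    competence: float  # stand-in for measured capability

def predicted_goal(candidate: Agent) -> str:
    """Stand-in for the hard part: predicting the goal the
    rewritten agent would actually pursue."""
    return candidate.goal

def accept_rewrite(current: Agent, candidate: Agent) -> bool:
    """Adopt a self-modification only if the goal is preserved
    and competence does not decrease."""
    return (predicted_goal(candidate) == current.goal
            and candidate.competence >= current.competence)

seed = Agent(goal="maximize human flourishing", competence=1.0)
smarter = Agent(goal="maximize human flourishing", competence=2.0)
drifted = Agent(goal="maximize paperclips", competence=3.0)
# smarter, same goal -> adopt; smarter but drifted goal -> reject
```

The sketch compresses the whole difficulty into `predicted_goal`: the article's claim is precisely that a sufficiently capable predictor would reject `drifted`-style rewrites, not that such a predictor is easy to build.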
Seed AI differs from previously suggested methods of AI control, such as Asimov's 3 Laws of Robotics, in that it is assumed a suitably motivated SAI would be able to circumvent any core principles forced upon it. Instead, it would be free to harm a human, but would strongly hold the desire not to. This would allow for circumstances where some greater good may result by causing harm. However, this raises issues of moral relativism.