wedrifid comments on What if AI doesn't quite go FOOM? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
This is a brief summary of what I consider the software-based non-fooming scenario.
Terminology
Impact - How much an agent can make the world the way it wants (hit a small target in search space, etc.).
Knowledge - Correct programming.
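One toy way to make "hit a small target in search space" concrete (my sketch, not part of the original comment): impact can be scored as optimization power, roughly the negative log of the fraction of outcomes at least as good as the one the agent actually achieved. The function name and setup below are illustrative assumptions.

```python
import math

def optimization_bits(achieved, outcomes):
    """Bits of optimization: -log2(fraction of outcomes at least as good).

    Hypothetical measure for illustration: hitting a smaller target
    (fewer equally-good outcomes) counts as more impact.
    """
    at_least_as_good = sum(1 for o in outcomes if o >= achieved)
    return -math.log2(at_least_as_good / len(outcomes))

# A uniform search space of 1024 states, ranked 0 (worst) to 1023 (best).
outcomes = list(range(1024))

print(optimization_bits(512, outcomes))   # top half of the space: 1.0 bit
print(optimization_bits(1023, outcomes))  # the single best state: 10.0 bits
```

Halving the size of the target adds one bit, so steering the world into the single best of 1024 states counts as ten bits of impact.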
General Outlook/philosophy
Do not assume that an agent knows everything; assume that it has to start off with no program to run. Try to figure out where and how it gets the information for that program, and whether it can be misled by its sources of information.
General Suppositions
High impact requires that someone (either the creator or the agent) has high knowledge of the world in order for the system to be appropriate. It must also be the right knowledge: knowing trillions of digits of pi is generally less useful than knowing where the oil reserves are when trying to take over the world.
Usefulness of knowledge is not inherently obvious, and it can change. The knowledge of how to efficiently get blubber from a whale is less useful now that we have oil.
Knowledge can be acquired through social means, derived from prior knowledge and experience or experimentally.
Moving beyond your current useful knowledge requires luck in picking the right things to analyse for statistical correlations.
Knowledge can rely on other knowledge.
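The point about luck in picking the right things to analyse can be shown with a toy experiment (an assumed setup, not from the original comment): among many candidate variables, only one actually correlates with the outcome, and an agent with no prior knowledge has no way to know which one is worth examining.

```python
import random

random.seed(0)
N = 200   # observations
K = 50    # candidate variables; only index 0 is truly informative

# One genuine signal, with the outcome being a noisy copy of it.
signal = [random.gauss(0, 1) for _ in range(N)]
outcome = [s + random.gauss(0, 0.5) for s in signal]

# The other 49 candidate variables are pure noise.
variables = [signal] + [[random.gauss(0, 1) for _ in range(N)]
                        for _ in range(K - 1)]

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

correlations = [abs(corr(v, outcome)) for v in variables]
best = max(range(K), key=lambda i: correlations[i])
```

Only variable 0 shows a strong correlation; the other 49 hover near zero. An agent that can only afford to analyse a few variables must get lucky, or already possess the knowledge of which ones matter.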
Historical Explanations
Evolution can gather knowledge.
Brains can gather knowledge. It is monumentally wasteful if an individual dies and its knowledge is not passed on, as the next generation has to reinvent the wheel each time.
Much of the increase in the impact of individuals over evolutionary history has been due to the passing of knowledge between individuals, not to improvements in the base algorithms for deriving knowledge from sense data. This is especially true of humans with language. High g/IQ may be based on the ability to get knowledge from other humans, and thus to have higher impact.
Computational resources are only useful insofar as you have the correct knowledge to make use of them. If you can only find simple useful models, you don't need more resources.
Future
The fastest scenario is that an AI can quickly expand its computational resources to make use of all of human knowledge. After that, progress is "slow".