Transfuturist comments on Rationality Quotes Thread September 2015 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
No, but I do downvote people who appear to be completely mind-killed.
Rather, identifying agents using algorithms with reasonable running time is a hard problem.
Also, consider the following relatively uncontroversial beliefs around here:
1) The universe has low Kolmogorov complexity.
2) An AGI is likely to be developed, and when it is, it will take over the universe.
Now let's consider some implications of these beliefs:
3) An AGI has low Kolmogorov complexity since it can be specified as "run this low Kolmogorov complexity universe for a sufficiently long period of time".
Also, to be successful, the AGI is going to have to be good at detecting agents so it can dedicate sufficient resources to defeating/subverting them. Thus detecting agents must have low Kolmogorov complexity.
That's a fundamental misunderstanding of complexity. The laws of physics are simple, but the configurations of the universe that runs on them can be incredibly complex. The amount of information needed to specify the configuration of any single cubic centimeter of space is literally unfathomable to human minds. Running a simulation of the universe until intelligences develop inside of it is not the same as specifying those intelligences, or intelligence in general.
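The gap between simple laws and complex configurations can be made concrete with a toy sketch (my illustration, not part of the original comment): a one-dimensional cellular automaton whose rule table fits in a few bytes, yet whose evolved states need roughly one bit per cell to pin down.

```python
# Toy illustration (hypothetical example, not from the comment): Rule 110,
# a cellular automaton with a tiny rule table, generates configurations
# whose description length grows with the grid, even though the
# generating program itself is only a few lines long.

def rule110_step(cells):
    """Apply one step of the Rule 110 update to a row of 0/1 cells."""
    rule = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
    n = len(cells)
    return [rule[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

width, steps = 64, 64
row = [0] * width
row[width // 2] = 1          # single live cell as the initial condition
history = [row]
for _ in range(steps):
    row = rule110_step(row)
    history.append(row)

# The rule table fits in a handful of bytes, but naming an arbitrary row
# of the evolved pattern takes up to `width` bits: simple laws, complex
# states. The pattern spreads well beyond the single seed cell.
live = sum(sum(r) for r in history)
print(live > steps)
```

The analogy to the original point: the short program above plays the role of the laws of physics, while any particular evolved row plays the role of a configuration. Knowing the program does not hand you a short description of an arbitrary state; you still need to say where and when to look.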
The convenience of some hypothetical property of intelligence does not act as a proof of that property. Please note that we are in a highly specific environment, where humans are the only sapients around and animals are the only immediately recognizable agents. There are sci-fi stories in which your "necessary" condition is exactly false: humans fail to recognize some intelligence because it is not structured in a way that humans are capable of recognizing.