Transfuturist comments on Rationality Quotes Thread September 2015 - Less Wrong

Post author: elharo | 02 September 2015 09:25AM

Comment author: VoiceOfRa 04 October 2015 03:03:35AM, -2 points

> Perhaps it would; your consistent strategy of downvoting everyone who disagrees with you

No, but I do downvote people who appear to be completely mind-killed.

> Identifying game-like interactions is also (so far as I can tell) a problem no one has any inkling how to solve, especially if we don't have the prior ability to identify the agents.

Rather, identifying agents using algorithms with reasonable running time is a hard problem.

Also, consider the following relatively uncontroversial beliefs around here:

1) The universe has low Kolmogorov complexity.

2) An AGI is likely to be developed, and once it is, it will take over the universe.

Now let's consider some implications of these beliefs:

3) An AGI has low Kolmogorov complexity, since it can be specified as "run this low-Kolmogorov-complexity universe for a sufficiently long period of time".

Also, to be successful the AGI is going to have to be good at detecting agents, so it can dedicate sufficient resources to defeating or subverting them. Thus detecting agents must have low Kolmogorov complexity (a sketch of the implied bound follows).
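For concreteness, here is a minimal sketch of the bound this step implicitly appeals to. The three-term decomposition is an assumption about how the argument runs, not something stated in the comment:

```latex
% Sketch of the simulation bound, where
%   U = a shortest program computing the universe's dynamics (premise 1: K(U) is small),
%   t = the number of steps to simulate, and
%   p = a "pointer" locating the AGI inside the resulting state.
% The O(1) term covers the fixed glue code combining the three parts.
\[
  K(\mathrm{AGI}) \;\le\; K(U) + K(t) + K(p) + O(1)
\]
```

The bound yields a *low* K(AGI) only if every term on the right is small; the reply below disputes precisely the pointer term K(p).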

Comment author: Transfuturist 04 October 2015 05:51:34AM, 1 point

> An AGI has low Kolmogorov complexity, since it can be specified as "run this low-Kolmogorov-complexity universe for a sufficiently long period of time".

That's a fundamental misunderstanding of complexity. The laws of physics are simple, but the configurations of the universe that runs on them can be incredibly complex. The amount of information needed to specify the configuration of any single cubic centimeter of space is unfathomable to human minds. Running a simulation of the universe until intelligences develop inside it is not the same as specifying those intelligences, or intelligence in general.
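To make the scale of that configuration information concrete, here is a rough back-of-the-envelope sketch; the molecule count and position precision below are illustrative assumptions, not measured values:

```python
import math

# Rough information cost of specifying one cubic centimeter of matter.
# Illustrative assumptions: ~3e22 molecules (about 1 cm^3 of water), each
# position recorded to ~1e-10 m (atomic scale) inside a 1e-2 m box,
# with velocities and internal states ignored entirely.
molecules = 3e22
positions_per_axis = 1e-2 / 1e-10                      # distinguishable positions per axis
bits_per_molecule = 3 * math.log2(positions_per_axis)  # three spatial axes

total_bits = molecules * bits_per_molecule
print(f"~{total_bits:.2e} bits to pin down one configuration")  # ~2.4e24 bits
```

Even this crude count gives on the order of 10^24 bits for a single cubic centimeter, dwarfing the short description that suffices for the laws of physics; that gap is exactly the pointer term the simulation argument needs to be small.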

> Also, to be successful the AGI is going to have to be good at detecting agents, so it can dedicate sufficient resources to defeating or subverting them. Thus detecting agents must have low Kolmogorov complexity.

The convenience of some hypothetical property of intelligence does not act as a proof of that property. Note that we are in a highly specific environment, where humans are the only sapients around and animals are the only immediately recognizable agents. There are sci-fi stories in which your "necessary" condition is exactly false: humans fail to recognize an intelligence because it is not structured in a way humans are capable of recognizing.