Transfuturist comments on Rationality Quotes Thread September 2015 - Less Wrong

3 Post author: elharo 02 September 2015 09:25AM


Comment author: VoiceOfRa 05 October 2015 02:44:10AM -2 points [-]

For your first premise to be uncontroversial around here, I think you need to either take it as applying only to the form of the laws of physics and not to initial conditions, arbitrary constants, etc. (in which case you can't identify "this universe" and still have it be of low complexity)

Doesn't that undermine the premise of the whole "a godless universe has low Kolmogorov complexity" argument that you're trying to make?

adopt something like Tegmark's MUH that amounts to running every version of the universe (all boundary conditions, all values for the constants, etc.) in parallel (in which case what gets taken over by a superintelligent AI is no longer the whole thing but a possibly-tiny part, and specifying that part costs a lot of complexity).

Well, all the universes that can support life are likely to wind up taken over by AGIs.

unless you are depending on it taking over the whole universe so that you can just point at the whole caboodle and say "that thing" -- but then presumably its agent-detection facilities are a tiny part of the whole (not necessarily a spatially localized part, of course), and singling those out so you can say "agents are things that that identifies as agents" again has a large complexity cost from locating them.

But, the AGI can. Agentiness is going to be a very important concept for it. Thus it's likely to have a short referent to it.

Comment author: Transfuturist 05 October 2015 04:18:39AM *  2 points [-]

Doesn't that undermine the premise of the whole "a godless universe has low Kolmogorov complexity" argument that you're trying to make?

Again, there is a difference between the complexity of the dynamics defining state transitions, and the complexity of the states themselves.
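This distinction is easy to see in a cellular automaton. The sketch below (an illustration of the point, not anything from the thread) uses elementary Rule 110: the transition rule fits in a single byte, yet the states it generates are intricate enough that Rule 110 is known to be Turing-complete. Simple dynamics, complicated states.

```python
# Rule 110: the entire transition dynamics is one byte (8 bits),
# yet the generated states are anything but simple.

RULE = 110  # the full dynamics, as an 8-bit lookup table

def step(cells):
    """Apply Rule 110 to one row (fixed zero boundary)."""
    padded = [0] + cells + [0]
    return [
        (RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

row = [0] * 31 + [1] + [0] * 31  # a maximally simple initial condition
for _ in range(20):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Describing the rule costs almost nothing; describing an arbitrary late-time row costs up to a bit per cell. That gap is exactly the dynamics/state distinction.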

But, the AGI can. Agentiness is going to be a very important concept for it. Thus it's likely to have a short referent to it.

What do you mean by "short referent?" Yes, it will likely be an often-used concept, so the internal symbol signifying the concept is likely to be short, but that says absolutely nothing about the complexity of the concept itself. If you want to say that "agentiness" is a K-simple concept, perhaps you should demonstrate that by explicating a precise computational definition for an agent detector, and show that it doesn't fail on any conceivable edge-cases.
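The symbol/concept gap can be made concrete. In this sketch (hypothetical names throughout; the "definition" string is a stand-in, not a real agent definition), a short internal symbol indexes a long definition, and Kolmogorov complexity tracks the definition, not the symbol:

```python
# A short symbol is cheap; the concept it names may not be.
# Hypothetical sketch: a lookup table mapping a short internal
# symbol to a long, precise definition. K-complexity is about the
# definition's length, not the symbol's.

concept_table = {
    # short symbol -> stand-in for a long, precise definition
    "agent": (
        "a system that maintains a model of its environment, "
        "represents preferences over outcomes, and selects actions "
        "expected to steer the environment toward preferred outcomes"
        # ...a real definition handling every edge case would be
        # far longer still
    ),
}

symbol_cost = len("agent")
definition_cost = len(concept_table["agent"])
print(symbol_cost, definition_cost)  # the symbol is much shorter
```

Making the symbol shorter (say, one bit) changes nothing about how hard the concept itself is to specify.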

Saying that it's important doesn't mean it's simple. "For an AGI to be successful it is going to have to be good at reducing entropy globally. Thus reducing entropy globally must have low Kolmogorov complexity."

Comment author: VoiceOfRa 06 October 2015 12:58:11AM -1 points [-]

Saying that it's important doesn't mean it's simple.

You're confusing the intuitive notion of "simple" with "low Kolmogorov complexity". For example, the Mandelbrot set is "complicated" in the intuitive sense, but has low Kolmogorov complexity since it can be constructed by a simple process.
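The Mandelbrot point is worth spelling out: the set's Kolmogorov complexity is bounded above by the length of any program that generates it, and such a program is tiny. A minimal sketch (parameters like the iteration cap and grid are arbitrary choices for illustration):

```python
# The Mandelbrot set looks intricate, but it is generated by a tiny
# program: iterate z -> z^2 + c and test for escape. Its Kolmogorov
# complexity is bounded by the length of code like this.

def in_mandelbrot(c, max_iter=50):
    """Return True if c appears to lie in the Mandelbrot set."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:  # escape radius; the orbit diverges
            return False
    return True

# Coarse ASCII rendering of the set
for y in range(21):
    im = 1.2 - y * 0.12
    print("".join(
        "#" if in_mandelbrot(complex(-2.0 + x * 0.05, im)) else "."
        for x in range(61)
    ))
```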

What do you mean by "short referent?" Yes, it will likely be an often-used concept, so the internal symbol signifying the concept is likely to be short, but that says absolutely nothing about the complexity of the concept itself.

It does if you look at the rest of my argument.

If you want to say that "agentiness" is a K-simple concept, perhaps you should demonstrate that by explicating a precise computational definition for an agent detector,

Step 1: Simulate the universe for a sufficiently long time.

Step 2: Ask the entity now filling up the universe "is this an agent?".

Thus reducing entropy globally must have low Kolmogorov complexity.

What do you mean by that statement? Kolmogorov complexity is a property of a concept, and "reducing entropy" as a concept does have low Kolmogorov complexity.

Comment author: Transfuturist 06 October 2015 01:32:34AM *  0 points [-]

You're confusing the intuitive notion of "simple" with "low Kolmogorov complexity"

I am using the word "simple" to refer to "low K-complexity." That is the context of this discussion.

It does if you look at the rest of my argument.

The rest of your argument is fundamentally misinformed.

Step 1: Simulate the universe for a sufficiently long time.

Step 2: Ask the entity now filling up the universe "is this an agent?".

Simulating the universe to identify an agent is the exact opposite of a short referent. Anyway, even if simulating a universe were tractable, it does not yield a low-complexity way of identifying agents in the first place. Once you're done specifying all of and only the universes where filling all of space with computronium is both possible and optimal, all of and only the initial conditions in which an AGI will fill the universe with computronium, and all of and only the states of those universes where they are actually filled with computronium, you are then left with the concept of universe-filling AGIs, not agents.

You seem to be attempting to say that a descriptor of agents would be simple because the physics of our universe is simple. Again, the complexity of the transition function and the complexity of the configuration states are different. If you do not understand this, then everything that follows from this is bad argumentation.

What do you mean by that statement? Kolmogorov complexity is a property of a concept. Well "reducing entropy" as a concept does have low Kolmogorov complexity.

It is framed after your own argument, as you must be aware. Forgive me, for I too closely patterned it after your own writing. "For an AGI to be successful it is going to have to be good at reducing entropy globally. Thus reducing entropy globally must be possible." That is false, just as your own argument for a K-simple general agent specification is false. It is perfectly possible that an AGI will not need to be good at recognizing agents to be successful, or that an AGI that can recognize agents generally is not possible. To show that it is, you have to give a simple algorithm, which your universe-filling algorithm is not.