
kilobug comments on My summary of Eliezer's position on free will - Less Wrong Discussion

16 Post author: Solvent 28 February 2012 05:53AM




Comment author: kilobug 28 February 2012 02:16:36PM 4 points [-]

The key argument to me in Eliezer's "Free Will" sequence is that causality doesn't jump from past to future, but goes from past to present and from present to future. For the same reason, there is (usually) no way to know the future from the past without simulating the present.

Now, let's apply that to Free Will. You are in a state S (with a knowledge of the world and a set of inputs), and you run an algorithm that decides what action A you'll take.

It is deterministic, so given the state S, something (Omega) can predict what action A you'll take. But to be sure of always reaching the same conclusion you would, it has to run an algorithm that always maps the same inputs to the same outputs as you do. In other words, because of how determinism works (step by step, not jumping directly from past to future), there is no way to know what you'll do without running an algorithm that is totally equivalent to you - that is, without running you.
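A minimal sketch of this idea (all names and the update rule are made up for illustration): a deterministic step function where, absent any known shortcut, the predictor's only option is to run the same step-by-step computation as the agent.

```python
def step(state):
    """One deterministic update: past -> present, present -> future.
    (Arbitrary toy rule; stands in for the agent's real dynamics.)"""
    return (state * 1103515245 + 12345) % (2 ** 31)

def agent_action(state, n_steps=1000):
    """The action A the agent takes starting from state S."""
    for _ in range(n_steps):
        state = step(state)
    return state % 2  # the decision, reduced to one bit

def omega_predict(state, n_steps=1000):
    """Omega's predictor: with no shortcut available, it runs an
    algorithm step-for-step equivalent to the agent -- i.e. it runs 'you'."""
    return agent_action(state, n_steps)

assert omega_predict(42) == agent_action(42)
```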

Not sure I'm very clear; it's hard to summarize something like that in a few sentences.

Comment author: [deleted] 28 February 2012 03:15:37PM 1 point [-]

That is incorrect. If I tell you to add up the numbers from 1 to 100 and you start counting, I know by a completely different algorithm that you're going to get 5050. This generalizes: Omega need only prove that the output of your algorithm is the same as the output of a simpler algorithm (without, I may note, running it), and run that instead.
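To make the counterexample concrete (function names are mine): Omega can prove that your counting loop is equivalent to Gauss's closed form n(n+1)/2 and run the formula instead, simulating nothing.

```python
def your_algorithm(n):
    """Add up the numbers from 1 to n by counting, as told."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def omegas_shortcut(n):
    """A provably equivalent O(1) formula: same output, no counting."""
    return n * (n + 1) // 2

assert omegas_shortcut(100) == your_algorithm(100) == 5050
```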

Comment author: asr 28 February 2012 04:49:56PM 4 points [-]

Omega cannot do this in general. Given an arbitrary algorithm with some asymptotic complexity, there is no general procedure that can get the same result faster.

Computational complexity puts limits even on superintelligences.
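For contrast with the 1-to-100 case: some computations have no known shortcut at all. The Collatz stopping time is a standard example; as far as anyone knows, even a superintelligence has to iterate the rule.

```python
def collatz_steps(n):
    """Number of steps to reach 1 under the Collatz rule.
    No closed-form shortcut is known: seemingly even Omega must
    run the computation step by step."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps
```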

Comment author: kilobug 28 February 2012 04:01:06PM 3 points [-]

That's true for simple cases, yes, and that's why I added "usually" in "there is (usually) no way to know the future from the past without simulating the present".

But if you have an algorithm able to produce exactly the same output as I would (say exactly the same things, including talk about free will and consciousness) from the same inputs, then it has the same amount of consciousness and free will as I do - or you believe in zombies.

Comment author: [deleted] 28 February 2012 06:08:50PM 2 points [-]

True, but I think you're making a bigger deal of it than it is. Suppose our Omega is the one from Newcomb's problem, and all it wants to know is whether you'll one-box or two-box. It doesn't need to run an algorithm that produces the same output as you in all instances; it needs to determine one specific bit of the output you produce in one specific state S. There is a good chance that a quick scan of your algorithm is enough to figure this out, without needing to simulate anything at all.
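A toy sketch of that "quick scan" (the classes and attribute are entirely hypothetical): the decision-relevant bit sits in the agent's structure, so Omega can read it directly instead of running the full deliberation on state S.

```python
import hashlib

class Agent:
    """Toy agent: the policy is one attribute buried inside an
    otherwise expensive algorithm."""
    def __init__(self, policy):
        self.policy = policy  # "one-box" or "two-box"

    def decide(self, world_state):
        # Expensive, decision-irrelevant deliberation (a stand-in
        # for 'running you' on the full state S)...
        _ = hashlib.sha256(world_state.encode()).hexdigest()
        # ...ending in the one bit Omega cares about.
        return self.policy

def omega_scan(agent):
    """Omega extracts the single decision-relevant bit by inspecting
    the agent's structure, simulating nothing."""
    return agent.policy

you = Agent("one-box")
assert omega_scan(you) == you.decide("state S")
```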

The reason this is a big deal is that "free will" means two things to us. On the one hand, it's this philosophical concept. On the other hand, we think of having free will in opposition to being manipulated or coerced into doing something. These are obviously related, but having free will in the philosophical sense doesn't mean we have free will in the second sense. So it's important to keep them as separate as possible.

Because Omega can totally play you like a fiddle, you know.