
asr comments on My summary of Eliezer's position on free will - Less Wrong Discussion

16 Post author: Solvent 28 February 2012 05:53AM




Comment author: asr 29 February 2012 06:38:39AM 0 points

In general, no. To predict the output of an arbitrary algorithm, you have to maintain state equivalent to the algorithm's own. If I give you a Turing machine and ask what its output is, you can't do better than running it.

You can apply various trivial transforms to it and say "no, I'm running a new machine," but the result is as expensive as running the original and, I suspect, effectively isomorphic to it.
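To make the point concrete, here's a minimal sketch (the instruction set is invented for illustration): a tiny interpreter whose only general way to learn a program's output is to step through it, and which can't distinguish "still running" from "never halts" when its step budget runs out.

```python
# Toy register machine (hypothetical instruction set, for illustration only).
# The interpreter must execute the program step by step; there is no
# shortcut that works for arbitrary programs.

def run(program, steps=10_000):
    """Execute a list of (op, arg) instructions.

    Returns the accumulator on halt, or None if the step budget is
    exhausted -- at which point we can't tell whether it ever halts.
    """
    acc, pc = 0, 0
    for _ in range(steps):
        if pc >= len(program):
            return acc
        op, arg = program[pc]
        if op == "ADD":
            acc += arg
            pc += 1
        elif op == "JNZ":  # jump to `arg` if accumulator is non-zero
            pc = arg if acc != 0 else pc + 1
        elif op == "HALT":
            return acc
    return None  # out of fuel: halting status unknown

print(run([("ADD", 2), ("ADD", 3), ("HALT", 0)]))  # prints 5
print(run([("ADD", 1), ("JNZ", 0)]))               # infinite loop: prints None
```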

Comment author: Jonathan_Graehl 29 February 2012 11:36:43PM 1 point

I agree.

But if you're saying that it's proven that "you have to ...", I wasn't aware of that.

Comment author: asr 01 March 2012 07:47:39PM *  2 points

I don't have a proof of that claim, either, just a strong intuition. I should have specified that more clearly. If some other LWer has a proof in mind, I'd love to see it.

Here are some related things I do have proofs for.

There's no general procedure for deciding whether two programs compute the same thing -- this follows from Rice's theorem. (Equivalence is undecidable even for arbitrary context-free languages, let alone more general programs.)
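A small illustration of why this matters in practice (the functions here are my own examples, not from the thread): testing two programs on sample inputs can refute equivalence, but no finite test set can prove it for programs over unbounded inputs.

```python
# Two programs that happen to compute the same function, and a
# refutation-only check: a True result means "no disagreement found",
# which is evidence of equivalence but never a proof.

def f(n):
    return n * (n + 1) // 2      # closed-form triangular number

def g(n):
    return sum(range(n + 1))     # iterative triangular number

def agree_on(a, b, inputs):
    """True iff a and b agree on every sampled input."""
    return all(a(x) == b(x) for x in inputs)

print(agree_on(f, g, range(100)))              # prints True (evidence, not proof)
print(agree_on(f, lambda n: n * n, range(10))) # prints False (a genuine refutation)
```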

For many problems there are known asymptotically optimal algorithms, in terms of space or time complexity, and a simulator can't improve on that bound by more than a constant factor. So if you have an optimal algorithm for something, no AI can improve on it beyond a constant factor.
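As a concrete instance of this point: comparison-based sorting needs Ω(n log n) comparisons, and merge sort meets that bound. A simulator running merge sort performs the same comparisons, just with constant-factor overhead per step, so it can't beat the bound either. A quick sketch that counts the comparisons:

```python
# Merge sort with an explicit comparison counter. The count stays
# around n log n regardless of who (or what) executes the steps.

def merge_sort(xs, counter):
    """Sort xs, incrementing counter[0] once per element comparison."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid], counter)
    right = merge_sort(xs[mid:], counter)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1  # one comparison
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

counter = [0]
n = 1024
result = merge_sort(list(range(n, 0, -1)), counter)
print(result == sorted(range(1, n + 1)), counter[0])  # prints True and the comparison count
```

The count comes out far below the n² comparisons a naive quadratic sort would make on the same input, which is the constant-factor-versus-asymptotics distinction in miniature.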

Now suppose I give you an arbitrary program and promise that there's a more efficient program that produces the same output. There can't be a general way to find the optimal program that produces the answer.*

  • Proof by reduction from the halting problem: suppose we had an oracle for minimizing programs. For an arbitrary Turing machine T and input I, create a program P that ignores its input and simulates T on I. If we could minimize this program, we could tell whether T halts on I, giving us a halting oracle.

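The construction in that footnote can be sketched in a few lines (the helper name `make_p` is mine, and the "machine" is just a Python callable standing in for a Turing machine):

```python
# Build P from a machine t and input i: P ignores its own input and
# simulates t on i, so P's behavior depends only on whether t halts
# on i. A minimizer for programs like P would decide halting.

def make_p(t, i):
    """Return P: a one-argument program that ignores its input."""
    def p(_ignored):
        return t(i)  # every run of P just simulates t on i
    return p

p = make_p(lambda x: x + 1, 41)
print(p("anything"))  # prints 42 -- the input is irrelevant by construction
```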
Comment author: Jonathan_Graehl 01 March 2012 10:31:37PM 1 point

When you say "optimal", you mean space or time used?

When you say "minimize" you mean "shortest equivalent program"?

I love this list of undecidable problems.

Comment author: asr 02 March 2012 02:12:17PM 0 points

I meant to minimize in terms of asymptotic time or space complexity, not length-of-program.

Comment author: [deleted] 29 February 2012 11:00:12PM 0 points

Oh. That makes sense (since you can't even tell if the machine will halt or not). I'm still not convinced it applies to free will, but I will not argue the point further since I agree with the conclusion anyway.