
Jiro comments on Simulation argument meets decision theory - Less Wrong Discussion

Post author: pallas 24 September 2014 10:47AM

Comment author: Jiro 24 September 2014 07:14:55PM 1 point

If we start to consider real-world constraints, such as being unable to solve the halting problem

What? Being unable to solve the halting problem is a theoretical constraint, not a real-world constraint.

Comment author: lackofcheese 24 September 2014 07:57:53PM 2 points

If it's purely theoretical then why can't I have a hypercomputer? What's wrong with simply solving the halting problem by using an oracle, or by running a Turing machine for infinitely many steps before I make my decision?

If I can't have infinite time, then I might as well have 5 seconds.

Comment author: Jiro 24 September 2014 09:05:23PM 1 point

If it's purely theoretical then why can't I have a hypercomputer? What's wrong with simply solving the halting problem by using an oracle, or by running a Turing machine for infinitely many steps before I make my decision?

You're asking the same question three times.

Anyway, an oracle can determine whether a Turing machine halts. It can't determine whether it itself halts.

Any attempt to use an oracle could lead to X using the oracle to predict Y while Y uses an oracle to predict X. That can be equivalent to the oracle trying to determine whether it itself halts.
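
To make that regress concrete, here is the standard diagonalization sketch in Python-style pseudocode; `halts` stands for the hypothetical oracle and `contrary` is just an illustrative name, so nothing here is a real, implementable function:

```python
def halts(program, arg):
    # Hypothetical oracle: returns True iff program(arg) halts.
    # No implementable function can do this in general.
    raise NotImplementedError("this is the oracle we're pretending to have")

def contrary(program):
    # Do the opposite of whatever the oracle predicts about us.
    if halts(program, program):
        while True:   # the oracle says we halt, so loop forever
            pass
    else:
        return        # the oracle says we loop, so halt immediately

# Asking the oracle about contrary(contrary) asks it to predict a computation
# whose behaviour is defined by that very prediction -- the same regress as X
# consulting the oracle about Y while Y consults it about X.
```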

If I can't have infinite time, then I might as well have 5 seconds.

This is of course true, but it just means that both finite time and 5 seconds are bad.

Comment author: lackofcheese 24 September 2014 09:22:46PM 1 point

OK, I think I've found a source of confusion here.

There are two fundamentally different questions one could ask:
- what is the optimal action for X/X* to perform?
- what computations should X/X* perform in order to work out which action ve should perform?

The first question is the standard decision-theoretic question, and in that context the halting problem is of no relevance because we're solving the problem from the outside, not from the inside.

On the other hand, there is no point in taking the inside or "embedded" view unless we specifically want to consider computational or real-world constraints. In that context, the answer is that it's pretty stupid for the agent to run a simulation of itself, because that obviously won't work.
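
As a toy illustration (the name `naive_decide` is made up for this sketch), an agent that answers "what will I do?" by simulating itself just reproduces the same question one level down:

```python
def naive_decide(situation):
    # "What would I do in this situation?" -- answered by simulating myself,
    # which immediately asks the same question again with no base case.
    predicted_self = naive_decide(situation)
    return predicted_self

# naive_decide("Newcomb's problem") never produces an answer; in practice
# Python aborts it with a RecursionError once the recursion limit is hit.
```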

Any decision-making algorithm in the real world has to be smart enough not to go into infinite loops. Of course, such an algorithm won't be optimal, but it would be very silly to expect it to be optimal except in relatively easy cases.
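
Here is a minimal sketch of that kind of loop-avoidance, with `default_action` and `best_response` as made-up placeholder policies: cap the depth of self- and other-modelling, and fall back to a default once the budget runs out.

```python
def default_action(situation):
    # Placeholder fallback policy used when no further modelling is allowed.
    return "cooperate"

def best_response(situation, predicted_other):
    # Placeholder: pick a reply to the predicted behaviour of the other agent.
    return "defect" if predicted_other == "cooperate" else "cooperate"

def bounded_decide(situation, depth=3):
    # Decide by modelling the other agent (or oneself) with one less level of
    # reflection, so the recursion is guaranteed to bottom out.
    if depth == 0:
        return default_action(situation)
    predicted_other = bounded_decide(situation, depth - 1)
    return best_response(situation, predicted_other)

print(bounded_decide("one-shot prisoner's dilemma"))
```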