
lackofcheese comments on Simulation argument meets decision theory - Less Wrong Discussion

Post author: pallas 24 September 2014 10:47AM


Comment author: Jiro 24 September 2014 05:53:03PM 0 points

Isn't this another case of the halting problem in disguise? When the computer simulates you, it also simulates your attempt to figure out what the computer would do.
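Schematically, the regress looks something like this (a toy Python sketch; the function names are illustrative, not part of the original problem):

```python
# Toy illustration of the regress (hypothetical names): the computer's
# simulation calls the agent's deliberation, and the agent's deliberation
# calls a simulation of the computer, so neither call ever returns.

def computer_predicts(the_agent):
    # The computer predicts by simulating the agent in full.
    return the_agent()

def agent():
    # The agent deliberates by working out what the computer will predict.
    prediction = computer_predicts(agent)
    return prediction

try:
    agent()
except RecursionError:
    print("mutual simulation never bottoms out")
```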

Comment author: lackofcheese 24 September 2014 05:58:16PM 1 point

Problems of that nature are pretty easy to resolve. For example:

You have five seconds to make your decision; if you run out of time, the computer chops your head off.

Comment author: Jiro 24 September 2014 06:24:12PM 1 point

Assuming the subject doesn't want to get his head chopped off, you're no longer asking the question "what does decision theory say you should do"; you're asking "what does decision theory say you should do, given that certain types of analysis to determine which decision is best are not allowed". Such a question may provide an incentive for the person sitting there in front of a homicidal computer, but doesn't really illuminate decision theory much.

Also, the human can't avoid getting his head chopped off by saying "I'll just not make any decisions that trigger the halting problem"--trying to determine if a line of reasoning will trigger the halting problem would itself trigger the halting problem. You can't think of this as "either the human answers in a split second, or he knows he's doing something that won't produce an answer".

(Of course, the human could say "I'll just not make any decisions that are even close to the halting problem", and avoid triggering the halting problem by also avoiding a big halo of other analyses around it. If he does that, then my first objection is even worse.)
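(For reference, the self-reference involved here is essentially Turing's diagonal argument. A standard sketch, assuming for contradiction a hypothetical total `halts` decider:)

```python
def halts(prog, arg):
    """Assumed, for contradiction, to always return True iff prog(arg) halts."""
    ...

def troublemaker(prog):
    if halts(prog, prog):
        while True:      # loop forever exactly when halts says we would stop
            pass
    return "halted"      # halt exactly when halts says we would loop

# troublemaker(troublemaker) halts iff halts(troublemaker, troublemaker)
# returns False -- contradicting what halts is supposed to report, so no
# such decider can exist. A check for "will this line of reasoning ever
# terminate?" runs into the same wall.
```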

Comment author: Lumifer 24 September 2014 06:31:03PM 4 points

but doesn't really illuminate decision theory much.

I don't know about that. The study of making decisions under significant constraints (e.g. time) looks very useful to me.

Comment author: lackofcheese 24 September 2014 06:30:35PM 2 points

You're the one who brought computational constraints into the problem, not me. In the abstract sense, a decision-theoretically optimal agent has to be able to solve the halting problem in order to be optimal.

If we start to consider real-world constraints, such as being unable to solve the halting problem, then real-world constraints like having a limit of five seconds to make a decision are totally reasonable as well.

As for how to avoid getting your head chopped off, it's pretty easy; just press a button within five seconds.
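Put differently, the five-second rule just turns the problem into anytime decision-making: refine your answer while the budget lasts, then commit to whatever you have. A minimal sketch (the deliberation generator and the default action are made up for illustration):

```python
import time

def decide(deliberate, default_action, budget_seconds=5.0):
    # Anytime decision loop: keep the best answer found so far and
    # commit to it (or to the default) when the time budget runs out.
    deadline = time.monotonic() + budget_seconds
    best = default_action
    for candidate in deliberate():
        best = candidate
        if time.monotonic() >= deadline:
            break
    return best

def deliberate():
    # Hypothetical anytime analysis: yields successively better answers.
    yield "rough guess"
    yield "more considered choice"

print(decide(deliberate, default_action="press any button"))
```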

Comment author: Jiro 24 September 2014 07:14:55PM 1 point

If we start to consider real-world constraints, such as being unable to solve the halting problem

What? Being unable to solve the halting problem is a theoretical constraint, not a real-world constraint.

Comment author: lackofcheese 24 September 2014 07:57:53PM 2 points

If it's purely theoretical then why can't I have a hypercomputer? What's wrong with simply solving the halting problem by using an oracle, or by running a Turing machine for infinitely many steps before I make my decision?

If I can't have infinite time, then I might as well have 5 seconds.

Comment author: Jiro 24 September 2014 09:05:23PM 1 point

If it's purely theoretical then why can't I have a hypercomputer? What's wrong with simply solving the halting problem by using an oracle, or by running a Turing machine for infinitely many steps before I make my decision?

You're asking the same question three times.

Anyway, an oracle can determine whether a Turing machine program halts. It can't determine whether it itself halts.

Any attempt to use an oracle could lead to X predicting Y who tries to predict X using an oracle. That can be equivalent to the oracle trying to determine whether it itself can halt.

If I can't have infinite time, then I might as well have 5 seconds.

This is of course true, but it just means that both finite time and 5 seconds are bad.

Comment author: lackofcheese 24 September 2014 09:22:46PM 1 point

OK, I think I've found a source of confusion here.

There are two fundamentally different questions one could ask:
- what is the optimal action for X/X* to perform?
- what computations should X/X* perform in order to work out which action ve should perform?

The first question is the standard decision-theoretic question, and in that context the halting problem is of no relevance because we're solving the problem from the outside, not from the inside.

On the other hand, there is no point to taking the inside or "embedded" view unless we specifically want to consider computational or real-world constraints. In that context, the answer is that it's pretty stupid for the agent to run a simulation of itself because that obviously won't work.

Any decision-making algorithm in the real world has to be smart enough not to go into infinite loops. Of course, such an algorithm won't be optimal, but it would be very silly to expect it to be optimal except in relatively easy cases.
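Concretely, "smart enough not to go into infinite loops" can be as crude as capping the depth of self-simulation and substituting a default guess at the bottom; a toy sketch with made-up names:

```python
def embedded_decide(depth=3):
    # Toy embedded agent: it models "what would I decide?" only to a
    # fixed depth, then substitutes a default instead of recursing forever.
    if depth == 0:
        return "default action"
    prediction_of_self = embedded_decide(depth - 1)
    # A richer agent would compute a best response to the prediction;
    # here we just pass it up to show the bounded recursion.
    return prediction_of_self

print(embedded_decide())  # "default action", reached after three bounded levels
```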