thomblake comments on Rationality is Systematized Winning - Less Wrong

48 Post author: Eliezer_Yudkowsky 03 April 2009 02:41PM



Comment author: Vladimir_Nesov 03 April 2009 07:08:31PM 0 points

I agree with you that people shouldn't drink fatal poison, and that 2+2=4. Should you feel worried because of that?

Comment author: thomblake 03 April 2009 08:02:15PM 4 points

If it were also the case that your friends all agreed with you, but the "mainstream/dominant position in modern philosophy and decision theory" disagreed with you, then yes, you should probably feel a bit worried.

Comment author: Vladimir_Nesov 03 April 2009 09:15:16PM 1 point

Good point; my reply didn't take that into account. It all depends on the depth of understanding, so to answer your remark, consider e.g. the supernatural, or UFOs.

Comment author: timtyler 03 April 2009 09:37:02PM -1 points

Is there really such a disagreement about Newcomb's problem?

The issue seems to be whether agents can convincingly signal to a powerful agent that they will act in some way in the future - i.e. whether it is possible to make credible promises to such an agent.

I think that this is possible - at least in principle. Eliezer also seems to think so. I personally am not sure that such a powerful agent could achieve the proposed success rate on unmodified humans. In the context of artificial agents, though, I see few problems - especially if Omega can leave the artificial agent with the boxes in a controlled environment of its choosing, where Omega can be fairly confident that no interested third parties will interfere.

Do many in "modern philosophy and decision theory" really disagree with that?

More to the point, do they have a coherent counter-argument?

Comment author: cousin_it 04 April 2009 12:57:22AM 1 point

Thanks for mentioning artificial agents. If they can run arbitrary computations, Omega itself isn't implementable as a program, due to the halting problem. Maybe this is relevant to Newcomb's problem in general; I can't tell.
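The halting-problem worry can be made concrete with a toy diagonalization (my own illustration, not from the thread; the names `omega_predict` and `defiant_agent` are hypothetical): if the agent can call whatever predictor Omega uses, it can simply do the opposite of the prediction, so no single computable predictor gets every agent right. And if the predictor tries to simulate the agent back, the two recurse forever - the halting-problem flavor of the same obstacle.

```python
def opposite(choice):
    """Flip a Newcomb choice."""
    return "two-box" if choice == "one-box" else "one-box"

def defiant_agent(predictor):
    # Simulate the predictor's verdict about this very agent, then defy it.
    return opposite(predictor(defiant_agent))

def omega_predict(agent):
    # A stub predictor standing in for Omega; any concrete computable
    # rule Omega picks can be defeated by the same construction.
    return "one-box"

prediction = omega_predict(defiant_agent)  # "one-box"
actual = defiant_agent(omega_predict)      # opposite of that: "two-box"
assert actual != prediction
```

Note the sketch only shows that a program playing Omega can be fooled by agents that can run Omega's own code; it says nothing about agents (such as unmodified humans) who cannot.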

Comment author: timtyler 04 April 2009 04:56:40AM 1 point

Surely not a serious problem: if the agent is going to hang around until the heat death of the universe before picking a box, then Omega's prediction of its actions doesn't matter.