Eliezer_Yudkowsky comments on The Level Above Mine - Less Wrong

Post author: Eliezer_Yudkowsky 26 September 2008 09:18AM

Comment author: Eliezer_Yudkowsky 23 January 2013 01:16:02AM 6 points

Order-dependence and butterfly effects - knew about this and had it in mind when I wrote CEV, I think it should be in the text.

Counterfactual Mugging - check, I don't think I was calling TDT a complete solution before then but the Counterfactual Mugging was a class of possibilities I hadn't considered. (It does seem related to Parfit's Hitchhiker which I knew was a problem.)

Solomonoff Induction - again, I think you may be overestimating how much weight I put on that in the first place. It's not a workable AI answer for at least two obvious reasons I'm pretty sure I knew about from almost day one: (a) it's uncomputable, and (b) it can't handle utility functions over the environment. However, your particular contributions about halting-oracles-shouldn't-be-unimaginable did indeed influence me toward my current notion of second-order logical natural induction over possible models of axioms in which you could be embedded. Albeit I stand by my old reply that Solomonoff Induction would encompass any computable predictions or learning you could do about halting oracles in the environment. (The problem of porting yourself onto any environmental object is something I already knew AIXI would fail at.)
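[For readers unfamiliar with why (a) bites: the Solomonoff prior weights each program p by 2^-|p| and sums the weight of all programs whose output extends the observed string. Deciding which programs ever produce that output is equivalent to the halting problem, so the prior can only be approximated from below. A minimal sketch with a made-up toy machine -- my own illustration, not anything from the thread:]

```python
def run(program):
    """Toy prefix-free 'machine' (illustrative stand-in for a universal
    machine): the valid programs are exactly 0^k 1, and program 0^k 1
    outputs k zeros.  Invalid programs produce no output.  Crucially,
    this toy machine always halts -- a universal machine does not,
    which is exactly where the uncomputability comes from."""
    if program.count("1") != 1 or not program.endswith("1"):
        return None
    return "0" * (len(program) - 1)

def lower_bound_prior(observation, max_len):
    """Sum 2**-len(p) over all programs of up to max_len bits whose
    output extends the observation.  Raising max_len only ever adds
    mass, so this approximates the true prior from below."""
    total = 0.0
    for length in range(1, max_len + 1):
        for code in range(2 ** length):
            program = format(code, "0{}b".format(length))
            output = run(program)
            if output is not None and output.startswith(observation):
                total += 2.0 ** -length
    return total

print(lower_bound_prior("0", max_len=3))  # 0.375
print(lower_bound_prior("0", max_len=6))  # 0.484375, creeping up toward 0.5
```

[Swapping the toy machine for a genuine universal one destroys the always-halts guarantee inside run(), and with it any hope of computing the sum exactly -- that is point (a) above.]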

Comment author: Wei_Dai 23 January 2013 11:28:11PM 3 points

> Order-dependence and butterfly effects - knew about this and had it in mind when I wrote CEV, I think it should be in the text.

Ok, I checked the CEV writeup and you did mention these briefly. But that makes me unsure why you claimed to have solved metaethics. What should you do if your FAI comes back and says that your EV shows no coherence due to order dependence and butterfly effects (assuming it's not some kind of implementation error)? If you're not sure the answer is "nothing", and you don't have another answer, doesn't that mean your solution (about the meaning of "should") is at least incomplete, and possibly wrong?
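[A toy sketch of how order dependence can arise -- my own construction, not anything from the CEV writeup: when the underlying preferences are cyclic, sequential pairwise aggregation returns a different "winner" depending on the order in which the alternatives are compared.]

```python
# Three voters whose rankings form a Condorcet cycle:
# A>B>C, B>C>A, C>A>B.
rankings = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]

def majority_winner(x, y):
    """Return whichever of x, y a strict majority of voters ranks higher."""
    votes_x = sum(1 for r in rankings if r.index(x) < r.index(y))
    return x if votes_x > len(rankings) / 2 else y

def sequential_vote(agenda):
    """Pit the alternatives against each other in agenda order,
    keeping the winner of each pairwise vote."""
    winner = agenda[0]
    for challenger in agenda[1:]:
        winner = majority_winner(winner, challenger)
    return winner

# Same voters, same rule -- three different outcomes:
print(sequential_vote(["A", "B", "C"]))  # C
print(sequential_vote(["B", "C", "A"]))  # A
print(sequential_vote(["C", "A", "B"]))  # B
```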

> Counterfactual Mugging - check, I don't think I was calling TDT a complete solution before then but the Counterfactual Mugging was a class of possibilities I hadn't considered. (It does seem related to Parfit's Hitchhiker which I knew was a problem.)

You said that TDT solves Parfit's Hitchhiker, so I don't know if you would have kept looking for more problems related to Parfit's Hitchhiker and eventually come upon Counterfactual Mugging.
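[For context, the expected-value arithmetic behind Counterfactual Mugging, using the standard illustrative $100/$10,000 payoffs: evaluated before the coin flip, committing to pay is worth more; evaluated after seeing tails, paying looks like a pure loss. That tension is the whole problem.]

```python
# Counterfactual Mugging (standard illustrative payoffs): Omega flips
# a fair coin.  Tails: it asks you for $100.  Heads: it pays you
# $10,000, but only if it predicts you would have paid on tails.

def policy_value(pays_on_tails):
    """Expected value of a policy, computed before the coin flip."""
    heads_payoff = 10_000 if pays_on_tails else 0
    tails_payoff = -100 if pays_on_tails else 0
    return 0.5 * heads_payoff + 0.5 * tails_payoff

print(policy_value(True))   # 4950.0 -- the committed payer does better
print(policy_value(False))  # 0.0

# After actually observing tails, the update-then-decide view sees
# only the -100, which is why an updating agent refuses and an
# updateless one pays.
```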

> Solomonoff Induction - again, I think you may be overestimating how much weight I put on that in the first place. It's not a workable AI answer for at least two obvious reasons I'm pretty sure I knew about from almost-day-one, (a) it's uncomputable and (b) it can't handle utility functions over the environment

Both of these can be solved without also solving halting-oracles-shouldn't-be-unimaginable. For (a), solve logical uncertainty. For (b), switch to UDT-with-world-programs.

Also, here is another problem that you may not already have been aware of.

Comment author: MugaSofer 24 January 2013 10:31:01AM -1 points

> What should you do if your FAI comes back and says that your EV shows no coherence due to order dependence and butterfly effects (assuming it's not some kind of implementation error)?

Wouldn't that kind of make moral reasoning impossible?