Perplexed comments on Morality as Parfitian-filtered Decision Theory? - Less Wrong

24 Post author: SilasBarta 30 August 2010 09:37PM



Comment author: Perplexed 01 September 2010 02:32:09AM 1 point

I'm pretty sure I have read all of the free will sequence. I am a compatibilist, and have been since before EY was born. I am quite happy with analyses that have something assumed free at one level (of reduction) and determined at another level. I still get a very bad feeling about Omega scenarios. My intuition tells me that there is some kind of mind projection fallacy being committed. But I can't put my finger on exactly where it is.

I appreciate that the key question in any form of decision theory is how you handle the counterfactual "surgery". I like Pearl's rules for counterfactual surgery: if you are going to assume that some node is free, and to be modeled as controlled by someone's "free decision" rather than by its ordinary causal links, then the thing to do is to surgically sever the causal links as close to the decision node as possible. This modeling policy strikes me as simply common sense. My gut tells me that something is being done wrong when the surgery is pushed back "causally upstream" - to a point in time before the modeled "free decision".
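The surgery being described can be sketched in a few lines. This is a minimal illustration, not anything from Pearl's actual software: the graph is a toy dict of node → parents, and the node names ("disposition", "decision", "outcome") are hypothetical stand-ins for the Omega setup.

```python
# Toy sketch of Pearl-style counterfactual surgery: to model a node as
# a "free decision", sever its incoming causal links right at the node.
# The graph representation and node names here are hypothetical.

def do_surgery(graph, decision_node):
    """Return a copy of the graph in which the decision node's incoming
    links are cut as close to the node as possible: it keeps no parents
    and is treated as set freely from outside the model."""
    surgered = {node: list(parents) for node, parents in graph.items()}
    surgered[decision_node] = []  # sever all links feeding the decision
    return surgered

# disposition -> decision -> outcome, plus disposition -> outcome
# (the latter standing in for Omega's prediction acting on the outcome).
graph = {
    "disposition": [],
    "decision": ["disposition"],
    "outcome": ["decision", "disposition"],
}

after = do_surgery(graph, "decision")
print(after["decision"])  # [] -- the decision is now modeled as free
print(after["outcome"])   # ['decision', 'disposition'] -- untouched
```

Pushing the surgery "causally upstream" would correspond to cutting the links into "disposition" instead, leaving the decision node still slaved to its parents - which is exactly the move being objected to.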

I understand that if we are talking about the published "decision making" source code of a robot, then the true "free decision" is actually made back there upstream in the past. And that if Omega reads the code, then he can make pretty good predictions. What I don't understand is why the problem is not expressed this way from the beginning.

"A robot in the desert need its battery charged soon. A motorist passes by, checks the model number, looks up the robot specs online, and then drives on, knowing this robot doesn't do reciprocity." A nice simple story. Maybe the robot designer should have built in reciprocity. Maybe he will design differently next time. No muss, no fuss, no paradox.

I suppose there is not much point continuing to argue about it. Omega strikes me as both wrong and useless, but I am not having much luck convincing others. What I really should do is just shut up on the subject and simply cringe quietly whenever Omega's name is mentioned.

Thanks for a good conversation on the subject, though.

Comment author: timtyler 01 September 2010 09:16:38AM *  3 points

What I don't understand is why the problem is not expressed this way from the beginning.

I don't know for sure - but perhaps a memetic analysis of paradoxes might throw light on the issue:

Famous paradoxes are often the ones that cause the most confusion and discussion. Debates and arguments make for good fun and drama - and so are copied around by the participants. If you think about it that way, finding a "paradox" that is confusingly expressed may not be such a surprise.

Another example would be: why does the mirror reverse left and right but not up and down?

There, the wrong way of looking at the problem seems to be built into the question.

(Feynman's answer).

Comment author: Lightwave 01 September 2010 06:46:02PM *  2 points

What I don't understand is why the problem is not expressed this way from the beginning.

Because the point is to explain to the robot why it's not getting its battery charged?

Comment author: Perplexed 01 September 2010 07:11:52PM 2 points

That is either profound, or it is absurd. I will have to consider it.

I've always assumed that the whole point of decision theory is to give normative guidance to decision makers. But in this case, I guess we have two decision makers to consider - robot and robot designer - operating at different levels of reduction and at different times. To say nothing of any decisions that may or may not be made by this Omega fellow.

My head aches. Up to now, I have thought that we don't need to think about "meta-decision theory". Now I am not sure.

Comment author: timtyler 01 September 2010 07:28:26PM -2 points

Mostly we want well-behaved robots - so the moral seems to be to get the robot maker to build a better robot that has a good reputation and can make credible commitments.

Comment author: SilasBarta 01 September 2010 02:51:33AM 0 points

Hm, that robot example would actually be a better way to go about it...