wedrifid comments on Deontology for Consequentialists - Less Wrong

Post author: Alicorn 30 January 2010 05:58PM

Comment author: wedrifid 03 February 2010 06:04:14PM 0 points

Wow. You would try to stop me from saving the world. You are evil. How curious.

Comment author: Alicorn 03 February 2010 06:06:40PM 0 points

Why, what wrong acts do you plan to commit in attempting to save the world?

Do you believe that the world's inhabitants have a right to your protection? Because if they do, that'll excuse some things.

Comment author: wedrifid 03 February 2010 06:49:26PM 1 point

> Why, what wrong acts do you plan to commit in attempting to save the world?

Evil and cunning. No! I shall not reveal my secret anti-diabolical plans. Now is the time for me to assert with the utmost sincerity my devotion to a compatible deontological system of rights (and then go ahead and act like a consequentialist anyway).

> Do you believe that the world's inhabitants have a right to your protection? Because if they do, that'll excuse some things.

Absolutely!

OK, give me some perspective here. Just how many babies' worth of excuse? Consider this counterfactual:

Robin has been working in secret with a crack team of biomedical scientists in his basement. He has fully functioning brain uploading and emulation technology at his fingertips. He believes wholeheartedly that releasing em technology into the world will bring about some kind of economist's utopia, a 'subsistence paradise'. The only chance I have to prevent the release is to beat him to death with a cute little puppy. Would that be wrong?

Perhaps a more interesting question is: would it be wrong for you not to intervene and stop me from beating Robin to death with a puppy?

Does it matter whether you have been warned of my intent? Assume that all you know is that I assign a low utility to the future Robin seeks, that Robin has a puppy weakness, and that I have just discovered Robin has completed his research. Would you be morally obliged to intervene?

Now, Robin is standing with his hand poised over the button, about to turn the future of our species into a hardscrapple dystopia. I'm standing right behind him wielding a puppy in a two-handed grip, and you are right there with me. Would you kill the puppy to save Robin?

Comment author: Alicorn 03 February 2010 06:58:30PM 3 points

> Evil and cunning.

Aw, thanks...?

If there is in fact something morally wrong about releasing the tech (your summary doesn't indicate it clearly, but I'd expect it from most drastic actions Robin seems disposed to take), you can prevent it by, if necessary, murderously wielding a puppy, since attempting to release the tech would be a contextually relevant wrong act. Even if I thought it was obligatory to stop you, I might not do it. I'm imperfect.

Comment author: wedrifid 03 February 2010 07:11:37PM -1 points

> If there is in fact something morally wrong about releasing the tech

I don't know about morals, but I hope it was clear that the consequences were assigned a low expected utility. The potential concern would be that your morals would interfere with my seeking desirable future outcomes for the planet.

Comment author: wedrifid 03 February 2010 07:25:47PM 0 points

> Even if I thought it was obligatory to stop you, I might not do it. I'm imperfect.

That is promising. Would you let me kill Dave too?

Comment author: Alicorn 03 February 2010 08:06:02PM 2 points

If you're in the room with Dave, why wouldn't you just push the AI's reset button yourself?

Comment author: wedrifid 04 February 2010 02:22:36AM -1 points

See link. Depends on how I think he would update. I would kill him too if necessary.