Alicorn comments on Deontology for Consequentialists - Less Wrong

46 Post author: Alicorn 30 January 2010 05:58PM


Comments (247)


Comment author: Alicorn 03 February 2010 06:58:30PM 3 points

Evil and cunning.

Aw, thanks...?

If there is in fact something morally wrong about releasing the tech (your summary doesn't indicate it clearly, but I'd expect it of most drastic actions Robin seems disposed to take), you can prevent it, if necessary, by murderously wielding a puppy, since attempting to release the tech would be a contextually relevant wrong act. Even if I thought it was obligatory to stop you, I might not do it. I'm imperfect.

Comment author: wedrifid 03 February 2010 07:11:37PM *  -1 points

If there is in fact something morally wrong about releasing the tech

I don't know about morals, but I hope it was clear that the consequences were assigned a low expected utility. The potential concern would be that your morals interfered with my seeking desirable future outcomes for the planet.

Comment author: wedrifid 03 February 2010 07:25:47PM 0 points

Even if I thought it was obligatory to stop you, I might not do it. I'm imperfect.

That is promising. Would you let me kill Dave too?

Comment author: Alicorn 03 February 2010 08:06:02PM 2 points

If you're in the room with Dave, why wouldn't you just push the AI's reset button yourself?

Comment author: wedrifid 04 February 2010 02:22:36AM *  -1 points

See link. Depends on how I think he would update. I would kill him too if necessary.