
qwerte comments on Seeking Estimates for P(Hell) - Less Wrong Discussion

4 Post author: Mac 21 March 2015 03:44PM



Comment author: jimrandomh 21 March 2015 07:39:47PM 4 points

While others will probably answer your question as-is, I'd just like to point out that for most people who care about AI and who support MIRI, this is not the line of reasoning that convinced them nor is it the best reason to care. FAI is important because it would fix most of the world's problems and ensure us all very long, fulfilling lives, and because without it, we'd probably fail to capture the stars' potential and wither to death of old age.

Torture mostly comes up because philosophical thought-experiments tend to need a shorthand for "very bad thing not otherwise specified", and it's an instance of that which won't interact with other parts of the thought experiments or invite digressions.

Comment author: Mac 22 March 2015 12:20:40AM 0 points

If you believe my moral system (not the topic of this post) is patently absurd, please PM me the full version of your argument. I promise to review it with an open mind. Note: I am naturally afraid of torture outcomes, but that doesn't mean I'm not excited about FAI. That would be patently absurd.

Torture mostly comes up because philosophical thought-experiments...

To clarify: are you saying there is no chance of torture?

Comment author: jimrandomh 22 March 2015 04:17:31AM 1 point

Yes, I am saying that the scenario you allude to is vanishingly unlikely.

But there's another point, which cuts close to the core of my values, and I suspect it cuts close to the core of your values, too. Rather than explain it myself, I'm going to suggest reading Scott Alexander's Who By Very Slow Decay, which is about aging.

That's the status quo. That's one of the main reasons I, personally, care about AI: because if it's done right, then the thing Scott describes won't be a part of the world anymore.

Comment author: Mac 04 April 2015 08:06:42PM 0 points

Good piece, thank you for sharing it.

I agree with you and Scott Alexander: a painful death from aging is awful.

Comment author: G0W51 21 March 2015 09:38:19PM 0 points

I second this. Mac, if you haven't already, I suggest you read "Existential Risk Prevention as a Global Priority" to further understand why an AI killing all life (even painlessly) would be extremely harmful.