Gurkenglas comments on Open Thread, Jun. 15 - Jun. 21, 2015 - Less Wrong Discussion

5 Post author: Gondolinian 15 June 2015 12:02AM

Comment author: Gurkenglas 16 June 2015 09:57:05AM * 0 points

Let's steelman his argument into "Which is more likely to succeed: actually stopping all research associated with existential risk, or inventing a Friendly AI?" If you find another reason why the first option wouldn't work, include the desperate effort needed to overcome that problem in the calculation.

Comment author: ChristianKl 16 June 2015 11:54:21AM 1 point

I don't think "existential risk research" and "research associated with existential risks" are the same thing.

Comment author: Gurkenglas 16 June 2015 12:06:07PM 0 points

Yes, that's what I meant. Let me edit that.

Comment author: Gurkenglas 25 June 2015 02:05:53AM 0 points

Me, minutes after writing that: "I precommit to posting this at most a week from now. I predict someone will give a clever answer along the lines of driving humanity extinct in order to stop existential risk research."