hairyfigment comments on Why I will Win my Bet with Eliezer Yudkowsky - Less Wrong Discussion

Post author: Unknowns 27 November 2014 06:15AM

Comment author: hairyfigment 05 December 2014 10:48:57PM 0 points

Do you realize you failed to specify any of that? I feel I'm being slightly generous by interpreting "and the world doesn't end" to mean a causal relationship, e.g. the existence of the first AGI has to inspire someone else to create a more dangerous version if the AI doesn't do so itself. (Though I can't pay if the world ends for some other reason, and I might die beforehand.) Of course, you might persuade whatever judge we agree on to rule in your favor before I would consider the question settled.

(In case it's not clear, the comment I just linked comes from 2010 or thereabouts. This is not a worry I made up on the spot.)

Comment author: Unknowns 06 December 2014 02:16:45AM 0 points

Given the fact that the bet is 100 to 1 in my favor, I would be happy to let you judge the result yourself.

Or you could agree to whatever result Eliezer agrees with. However, with Eliezer the conditions are specified, and "the world doesn't end" just means that we're still alive with the artificial intelligence running for a week.