tmosley comments on Superintelligence Reading Group 2: Forecasting AI - Less Wrong Discussion

10 Post author: KatjaGrace 23 September 2014 01:00AM

Comment author: tmosley 24 September 2014 04:12:42AM 3 points

I don't understand this question. The best time for the emergence of a great optimizer would be shortly after you were born (earlier if your existence were assured somehow).

If an AI is a friendly optimizer, then you want it as soon as possible. If it is randomly friendly or unfriendly, then you don't want it at all (the quandary we all face). Asking "when" seems a lot less relevant than asking "what". "What" I want is a friendly AI. "When" I get it matters little, so long as it arrives long enough before my death to grant me "immortality" while maximally (or sufficiently) fulfilling my values.

Comment author: leplen 24 September 2014 03:09:29PM 7 points

The question isn't asking when the best time for the AI to be created is. It's asking when the best time is to *predict* the AI will be created. E.g., what prediction sounds close enough to be exciting and to get me that book deal, yet far enough away not to be obviously wrong, and so that people will have forgotten my prediction by the time it fails to come true? This is an attempt to determine how much such predictions may be influenced by self-interest, bias, etc.