
KatjaGrace comments on Superintelligence Reading Group 2: Forecasting AI - Less Wrong Discussion

Post author: KatjaGrace | 23 September 2014 01:00AM | 10 points




Comment author: KatjaGrace | 23 September 2014 02:11:39AM | 3 points

I agree with the general sentiment. Though if human-level AI is very far away, I think there might be better things to do now than work on very direct safety measures. For instance, improve society's general mechanisms for dealing with existential risks, or get more information about what's going to happen and how to best prepare. I'm not sure if you meant to include these kinds of things.

Comment author: Jeff_Alexander | 23 September 2014 06:57:10AM | 1 point

Though if human-level AI is very far away, I think there might be better things to do now than work on very direct safety measures.

Agreed. That is the meaning I intended by:

estimates comparing this against the value of other existential risk reduction efforts would be needed to determine this [i.e. whether effort might be better used elsewhere]