Comment author: KatjaGrace 31 March 2015 04:35:52AM 3 points [-]

Bostrom quotes a colleague saying that a Fields medal indicates two things: that the recipient was capable of accomplishing something important, and that he didn't. Should potential Fields medalists move into AI safety research?

Comment author: KatjaGrace 31 March 2015 04:32:26AM 3 points [-]

The claim on p257 that we should try to do things that are robustly positive seems contrary to usual consequentialist views, unless this is just a heuristic for maximizing value.

Comment author: KatjaGrace 31 March 2015 04:31:31AM 7 points [-]

Does anyone know of a good short summary of the case for caring about AI risk?

Comment author: KatjaGrace 31 March 2015 04:30:46AM 4 points [-]

Did you disagree with anything in this chapter?

Comment author: KatjaGrace 31 March 2015 04:29:27AM 4 points [-]

Are there things that someone should maybe be doing about AI risk that haven't been mentioned yet?

Comment author: KatjaGrace 31 March 2015 04:28:45AM 5 points [-]

Are you concerned about AI risk? Do you do anything about it?

Comment author: KatjaGrace 31 March 2015 04:27:58AM 5 points [-]

Do you agree with Bostrom that humanity should defer non-urgent scientific questions and work on time-sensitive issues such as AI safety?

Comment author: KatjaGrace 31 March 2015 04:26:38AM 3 points [-]

Did Superintelligence change your mind on anything?

Comment author: KatjaGrace 31 March 2015 04:25:56AM 4 points [-]

This is the last Superintelligence Reading Group. What did you think of it?

Comment author: RobbyRe 25 March 2015 02:42:32PM 1 point [-]

I would be interested to read others' more free-ranging impressions of where Bostrom gets it right in Superintelligence, and what he may have missed or not emphasized enough.

Comment author: KatjaGrace 30 March 2015 07:13:57PM 0 points [-]

Does anyone have suggested instances of this? I actually don't know of many.