The claim on p257 that we should try to do things that are robustly positive seems contrary to usual consequentialist views, unless this is just a heuristic for maximizing value.
Does anyone know of a good short summary of the case for caring about AI risk?
Are there things that someone should maybe be doing about AI risk that haven't been mentioned yet?
Are you concerned about AI risk? Do you do anything about it?
Do you agree with Bostrom that humanity should defer non-urgent scientific questions, and work on time-sensitive issues such as AI safety?
Did Superintelligence change your mind on anything?
This is the last Superintelligence Reading Group. What did you think of it?
I would be interested to read others' more free-ranging impressions of where Bostrom gets it right in Superintelligence – and what he may have missed or not emphasized enough.
Does anyone have suggested instances of this? I actually don't know of many.
Bostrom quotes a colleague saying that a Fields medal indicates two things: that the recipient was capable of accomplishing something important, and that he didn't accomplish it. Should potential Fields medalists move into AI safety research?