Daniel_Burfoot comments on Q&A with experts on risks from AI #1 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Or, if you have reason to believe that things are not going to be fine, it may be appropriate to lower your estimate that humanity will survive the next century. People who are unaware of (or deny) threats are less likely to do what is necessary to prevent them. If we accept XiXiDu's implied premise that these guys are particularly relevant, then their belief that things are fine is itself an existential risk.
(It happens that I don't accept the premise. Narrow AI is a completely different subject from GAI, and experts are notorious for overestimating the extent to which their expertise applies to loosely related areas.)
Okay, but this seems to violate conservation of expected evidence. Either you can be depressed by the answer "we're all going to die" or, less plausibly, by the answer "everything is going to be fine", but not both.
No it doesn't.
I only suggested the latter, never the former. I'd be encouraged if the AI researchers acknowledged more risk (though only slightly, given the lack of importance I have ascribed to these individuals elsewhere).