XiXiDu comments on People neglect small probability events - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I don't think it is clear enough how they are wrong in this case. That is why I wrote the OP: to hint at the possibility that risks from AI are not in and of themselves the problem, but rather something that has to do with risk aversion and the discounting of low-probability events.
What do you think is the underlying reason for the disagreement of organisations like GiveWell, or of people like John Baez, Robin Hanson, Greg Egan, and Douglas Hofstadter?
Eliezer Yudkowsky wrote:
Why don't they accept this line of reasoning? There must be a reason other than the existence of existential risks, because all of them agree that existential risks do exist.
Because they are irrational, or haven't been exposed to it?
If I remember correctly, even Eliezer himself had a hard time biting the bullet on the St. Petersburg'd version. Actually, come to think of it I'm not sure if he ever did...
They have all been exposed to it: John Baez, GiveWell, Robin Hanson, Katja Grace, Greg Egan, Douglas Hofstadter, and many others. John Baez has interviewed Eliezer Yudkowsky (parts 1, 2, 3). Greg Egan wrote a book in which he disses the SIAI. GiveWell interviewed the SIAI. Katja Grace has been a visiting fellow. Robin Hanson started Overcoming Bias with Eliezer. And Douglas Hofstadter spoke at the Singularity Summit. Yet none of them believes that risks from AI are terribly important. There are many other such people; those are just the few who even care to comment on it.
Are they all irrational? If so, how can we fix that?
No idea. Colour me confused.