timtyler comments on AI Risk and Opportunity: A Strategic Analysis - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (161)
Where can we read FHI's analysis of AI risk? Why are they not as worried as you and SIAI people? Has there ever been a debate between FHI and SIAI on this? What threats are they most worried about? What technologies do they want to push or slow down?
AI is high on the list - one of the top risks, even if their objective assessment is lower than SIAI's. Nuclear war, synthetic biology, nanotech, pandemics, social collapse: these are the other ones we're looking at.
Basically, they don't buy the claim that "AI inevitably goes foom and inevitably takes over". They assign definite probabilities to these events, but their estimates are closer to 50% than to 100%.
They estimate it at 50%???
And there are other things they are more concerned about?
What are those other things?
They estimate a variety of conditional statements ("AI possible this century", "if AI then FOOM", "if FOOM then DOOM", etc.) with magnitudes between 20% and 80% (I had the figures somewhere, but can't find them). I think when it was all multiplied out it was in the 10-20% range.
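To show how a chain of conditional estimates multiplies out into an overall figure, here is a minimal sketch. The individual probabilities below are purely illustrative assumptions picked from the stated 20%-80% range; the actual FHI figures are not given in this thread.

```python
# Illustrative only: these values are assumptions, not FHI's actual estimates.
# Overall risk = P(AI this century) * P(FOOM | AI) * P(DOOM | FOOM)
p_ai_this_century = 0.8   # assumed value within the 20%-80% range
p_foom_given_ai = 0.5     # assumed
p_doom_given_foom = 0.4   # assumed

p_doom = p_ai_this_century * p_foom_given_ai * p_doom_given_foom
print(f"{p_doom:.2f}")  # 0.16 - inside the 10-20% range mentioned above
```

Note that multiplying the conditionals this way assumes the chain structure is right; disagreement about any one link (e.g. "if AI then FOOM") moves the final figure substantially.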
And I didn't say they thought other things were more worrying; just that AI wasn't the single overwhelming risk/reward factor that SIAI (and I) believe it to be.
A wild guess: FHI believes that the best that can reasonably be done about existential risks at this point in time is research into existential risks themselves, including possible unknown unknowns, and into strategies for reducing current existential risks. This somewhat agrees with their FAQ:
In other words, FHI seems to focus on meta-level issues - existential risks in general - rather than the associated specifics.