Wei_Dai comments on Safety Culture and the Marginal Effect of a Dollar - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Surely the most existential-risk-reduction-per-buck at this point is not "thinking and writing about AI safety", but thinking up more strategies like it in order to possibly find even better ones? Shouldn't SIAI (or perhaps FHI, depending on the comparative advantage between them) fund and publish some sort of systematic search-and-comparison of existential risk reduction strategies in order to have high confidence that the strategies it ends up pursuing are the optimal ones?
ETA: To be more constructive, has anyone done a similar analysis for "pushing for world-wide safety regulations on AI research" or "spending money directly on building FAI"?
The closest point of comparison for safety regulations is cryptography export regulation. I am pretty sceptical about something similar being attempted for machine intelligence. It is possible to imagine the export of smart robots to "bad" countries being banned - for fear that those countries would reverse-engineer the robots' secrets - but it is not easy to imagine that anyone will bother. Machine intelligence will ultimately be more useful than cryptography was, which makes an effective ban difficult to imagine. So far, I haven't seen any serious proposals to attempt one.
Governments seem likely to continue promoting this kind of thing, not banning it.