JoshuaFox comments on AI risk, new executive summary - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
This is great! For a long time I've been saying that we need summaries at different lengths, and I see that effort is coming together now.
This one works well as an executive summary.
The next step is to produce a short summary with emotional appeal; a call to action. It has been noted that simply stating the problem of AI existential risk does not bring people on board. Staring into the Singularity is an example of an emotionally appealing call to action (though for policies that are now outdated).
But I do not have any specific ideas for how to implement this, and again, this document is excellent for the purpose it was designed for.