Instead of the Singularity Institute having to prove that AGI can potentially be dangerous, it should really be AGI researchers who have to prove the opposite: that it is safe.
How about we prove that teens texting cannot result in the emergence of a hivemind that would subsequently invent better hardware to run itself on and get rid of everyone?
How about you take AIXI, analyze it, and see that it doesn't relate itself to its computational substrate, and is consequently unable to understand self-preservation? There are other, much more relevant ways of being safe than "oh, it talks so morally."
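For concreteness, here is a rough sketch of what that looks like in the formalism (approximately Hutter's notation; an illustration, not a precise statement). AIXI chooses actions by an expectimax over environment programs q:

$$a_k := \arg\max_{a_k}\sum_{o_k r_k}\cdots\max_{a_m}\sum_{o_m r_m}\big[r_k+\cdots+r_m\big]\sum_{q\,:\,U(q,a_1\ldots a_m)=o_1 r_1\ldots o_m r_m}2^{-\ell(q)}$$

where U is a universal Turing machine and \ell(q) is the length of the environment program q. Nothing in that expression refers to the machine actually computing the argmax, so damage to the agent's own hardware is not an event the formalism can represent, let alone plan to avoid.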
It looks as though lukeprog has finished his series on how to purchase AI risk reduction. But the ideas lukeprog shares are not the only available strategies. Can Less Wrong come up with more?
A summary of recommendations from Exploring the Idea Space Efficiently:
If you're strictly a lurker, you can send your best ideas to lukeprog anonymously using his feedback box. Or send them to me anonymously using my feedback box so I can post them here and get all your karma.
Thread Usage
Please reply here if you wish to comment on the idea of this thread.
You're encouraged to discuss the ideas of others in addition to coming up with your own ideas.
If you split your ideas into individual comments, they can be voted on individually and you will probably increase your karma haul.