It looks as though lukeprog has finished his series on how to purchase AI risk reduction. But the ideas lukeprog shares are not the only available strategies. Can Less Wrong come up with more?
A summary of recommendations from Exploring the Idea Space Efficiently:
- Deliberately avoid exposing yourself to existing lines of thought on how to solve a problem. (The idea here is to defeat anchoring and the availability heuristic.) So don't review lukeprog's series or read the comments on this thread before generating ideas.
- Start by identifying broad categories where ideas might be found. If you're trying to think of calculus word problems, your broad categories might be "jobs, personal life, the natural world, engineering, other".
- When picking these initial broad categories, try to include every category that might contain a solution and none that clearly won't.
- Then generate subcategories. Subcategories of "jobs" might include "agriculture, teaching, customer service, manufacturing, research, IT, other". You're also encouraged to generate sub-subcategories and so on.
- Spend more time on those categories that seem promising.
- You may wish to map your categories and subcategories on a piece of paper. (For the programmatically inclined, a toy sketch of this procedure follows the list.)
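Since the procedure is essentially a best-first expansion of a tree, here is a minimal Python sketch of it. Everything in it, from the `Category`/`expansion_order` names to the example promise scores, is an illustrative assumption rather than anything from the linked post.

```python
from dataclasses import dataclass, field

@dataclass
class Category:
    """A node in the idea-space tree: a category plus its subcategories."""
    name: str
    promise: float = 1.0          # subjective estimate of how promising this branch is
    children: list["Category"] = field(default_factory=list)

    def add(self, name: str, promise: float = 1.0) -> "Category":
        child = Category(name, promise)
        self.children.append(child)
        return child

def expansion_order(root: Category) -> list[Category]:
    """Visit categories most-promising-first, so effort tracks promise."""
    frontier = [root]
    order = []
    while frontier:
        frontier.sort(key=lambda c: c.promise, reverse=True)
        node = frontier.pop(0)      # expand the most promising category next
        order.append(node)
        frontier.extend(node.children)
    return order

# Example: categories for calculus word problems, with made-up promise scores.
root = Category("calculus word problems")
jobs = root.add("jobs", promise=0.8)
root.add("personal life", promise=0.5)
root.add("the natural world", promise=0.7)
root.add("engineering", promise=0.9)
root.add("other", promise=0.3)
jobs.add("agriculture", promise=0.6)
jobs.add("research", promise=0.9)

for cat in expansion_order(root):
    print(cat.name, cat.promise)
```

The point of the sketch is just that "spend more time on promising categories" amounts to a priority queue over the tree; the paper version works the same way.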
If you're strictly a lurker, you can send your best ideas to lukeprog anonymously using his feedback box. Or send them to me anonymously using my feedback box so I can post them here and get all your karma.
Thread Usage
- Please reply here if you wish to comment on the idea of this thread.
- You're encouraged to discuss the ideas of others in addition to coming up with your own ideas.
- If you split your ideas into individual comments, they can be voted on individually, and you will probably increase your karma haul.
Make a "moral expert system" contest
Have a set of moral dilemmas, and then:
1) Through an online form, humans report which choice they would make in each situation.
2) Hold a contest to write a program that chooses the way a human would in those situations.
(Or alternatively, a program that, given some of the choices a human made, predicts which choices they made in the other situations. A toy sketch of this variant follows.)
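To make that second variant concrete, here is a sketch of what a deliberately naive contest entry might look like: guess a respondent's answer to a held-out dilemma by copying the respondent whose other answers agree most. The dilemma names, the responses, and the `predict_choice` helper are all made up for illustration.

```python
def agreement(a: dict, b: dict, exclude: str) -> int:
    """Count dilemmas (other than `exclude`) on which two respondents agree."""
    shared = (set(a) & set(b)) - {exclude}
    return sum(a[d] == b[d] for d in shared)

def predict_choice(target: dict, others: list[dict], dilemma: str) -> str:
    """Guess `target`'s choice on `dilemma` from the most similar respondent."""
    best = max(others, key=lambda o: agreement(target, o, dilemma))
    return best[dilemma]

# Illustrative survey data: each dict maps a dilemma to a respondent's choice.
responses = [
    {"trolley": "pull", "footbridge": "don't push", "lifeboat": "draw lots"},
    {"trolley": "pull", "footbridge": "push",       "lifeboat": "draw lots"},
    {"trolley": "don't pull", "footbridge": "don't push", "lifeboat": "strongest rows"},
]
new_respondent = {"trolley": "pull", "footbridge": "don't push"}

print(predict_choice(new_respondent, responses, "lifeboat"))  # -> "draw lots"
```

A real entry would need a much richer model than nearest-neighbor matching, but even a baseline this simple gives contestants something measurable to beat.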
A contest like this could be a nice way to put the rhetorical onus on AI researchers to demonstrate that their approach to AI can be safe. Instead of the Singularity Institute having to prove that AGI is potentially dangerous, AGI researchers would have to prove the opposite.
It's also pretty digestible from a publicity standpoint. You don't have to know anything about the intelligence explosion to notice that robots are being used in warfare and worry about this.
(I suspect that if SI found the right way to communicate their core message, they c...