It looks as though lukeprog has finished his series on how to purchase AI risk reduction. But the ideas lukeprog shares are not the only available strategies. Can Less Wrong come up with more?
A summary of recommendations from Exploring the Idea Space Efficiently:
- Deliberately avoid exposing yourself to existing lines of thought on how to solve a problem. (The idea here is to defeat anchoring and the availability heuristic.) So don't review lukeprog's series or read the comments on this thread before generating ideas.
- Start by identifying broad categories where ideas might be found. If you're trying to think of calculus word problems, your broad categories might be "jobs, personal life, the natural world, engineering, other".
- When choosing these initial broad categories, aim to include every category that might contain a solution and none that cannot.
- Then generate subcategories. Subcategories of "jobs" might include "agriculture, teaching, customer service, manufacturing, research, IT, other". You're also encouraged to generate sub-subcategories and so on.
- Spend more time on those categories that seem promising.
- You may wish to map your categories and subcategories on a piece of paper.
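If you prefer to keep the map on a computer rather than on paper, the category tree above can be sketched as a small data structure. This is purely an illustrative sketch: the `Category` class, the category names, and the subjective "promise" scores are all invented for the example, not anything prescribed by the method.

```python
# A minimal sketch of the category-tree brainstorming method.
# Category names and "promise" scores below are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Category:
    name: str
    promise: float = 1.0          # subjective estimate of how promising this branch is
    children: list["Category"] = field(default_factory=list)

    def add(self, name: str, promise: float = 1.0) -> "Category":
        """Attach a subcategory and return it, so trees can be built fluently."""
        child = Category(name, promise)
        self.children.append(child)
        return child

def most_promising(root: Category) -> list[Category]:
    """Return leaf categories sorted by promise, i.e. where to spend more time."""
    leaves: list[Category] = []
    def walk(node: Category) -> None:
        if not node.children:
            leaves.append(node)
        for c in node.children:
            walk(c)
    walk(root)
    return sorted(leaves, key=lambda c: c.promise, reverse=True)

# Example: mapping part of the "calculus word problems" idea space from above.
root = Category("calculus word problems")
jobs = root.add("jobs", promise=0.8)
jobs.add("agriculture", promise=0.5)
jobs.add("research", promise=0.9)
jobs.add("other", promise=0.3)
root.add("personal life", promise=0.4)

for leaf in most_promising(root):
    print(leaf.name, leaf.promise)
```

Running this lists the leaf categories from most to least promising, which mirrors the advice to spend more time on the branches that seem most likely to contain a solution.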
If you're strictly a lurker, you can send your best ideas to lukeprog anonymously using his feedback box. Or send them to me anonymously using my feedback box so I can post them here and get all your karma.
Thread Usage
Please reply here if you wish to comment on the idea of this thread.
You're encouraged to discuss the ideas of others in addition to coming up with your own ideas.
If you split your ideas into individual comments, they can be voted on individually and you will probably increase your karma haul.
Lobby the government for AI safety regulations
Just as there are safety regulations for bioengineering, chemicals, nuclear power, weapons, etc., there could be regulations on AI, with official auditing of risks. This would create more demand for officially recognized "AI risk" experts and would force projects to pay more attention to these issues (even if only to come up with rationalizations for why their project is safe).
This doesn't have to mean banning "unsafe" research; the existence of a "safe AI" certification means it might become a prerequisite for certain grants, or a marketing argument (even if the certification standards are not sufficient to actually guarantee safety).