It looks as though lukeprog has finished his series on how to purchase AI risk reduction. But the ideas lukeprog shares are not the only available strategies. Can Less Wrong come up with more?
A summary of recommendations from Exploring the Idea Space Efficiently:
- Deliberately avoid exposing yourself to existing lines of thought on how to solve a problem. (The idea here is to defeat anchoring and the availability heuristic.) So don't review lukeprog's series or read the comments on this thread before generating ideas.
- Start by identifying broad categories where ideas might be found. If you're trying to think of calculus word problems, your broad categories might be "jobs, personal life, the natural world, engineering, other".
- With these initial broad categories, try to include all the categories that might contain a solution and none that will not.
- Then generate subcategories. Subcategories of "jobs" might include "agriculture, teaching, customer service, manufacturing, research, IT, other". You're also encouraged to generate subsubcategories and so on.
- Spend more time on those categories that seem promising.
- You may wish to map your categories and subcategories on a piece of paper.
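The category/subcategory structure described above can also be sketched as a simple tree. Here is a minimal, hypothetical Python sketch using the calculus-word-problem categories from the example (only "jobs" is expanded, as in the post; the traversal just enumerates every leaf so each can be brainstormed in turn):

```python
# Hypothetical sketch of the example's idea space:
# each key is a broad category, each value a list of subcategories
# (empty list = not yet expanded).
idea_space = {
    "jobs": ["agriculture", "teaching", "customer service",
             "manufacturing", "research", "IT", "other"],
    "personal life": [],
    "the natural world": [],
    "engineering": [],
    "other": [],
}

def list_leaves(tree):
    """Yield each (category, ...) path so every subcategory
    can be visited and brainstormed in turn."""
    for category, subcategories in tree.items():
        if subcategories:
            for sub in subcategories:
                yield (category, sub)
        else:
            yield (category,)

for leaf in list_leaves(idea_space):
    print(" > ".join(leaf))
```

On paper this is just the mind map the post suggests; the code form only makes explicit that promising categories get expanded into deeper subtrees while the rest stay as leaves.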
If you're strictly a lurker, you can send your best ideas to lukeprog anonymously using his feedback box. Or send them to me anonymously using my feedback box so I can post them here and get all your karma.
Thread Usage
Please reply here if you wish to comment on the idea of this thread.
You're encouraged to discuss the ideas of others in addition to coming up with your own ideas.
If you split your ideas into individual comments, they can be voted on individually and you will probably increase your karma haul.
For example, consider research into (a) how to make an AI relate its computational structure to its substrate (AIXI does not, and so fails to self-preserve), (b) how to prevent wireheading in an AI that does relate its computational structure to its substrate, and (c) how to define real-world goals for an AI to pursue (currently, AIs are just mathematics that makes abstract variables satisfy abstract properties; these properties may be described in real-world terms in a paper's annotations, but they implement no correspondence to the real world).
Such research is clearly dangerous, and also unnecessary for creating practically useful AIs (so it is not done at large; perhaps it is done only by SI, in which case persuading grantmaking organizations not to fund SI may do the trick).