These are quick notes on an idea for an indirect strategy to increase the likelihood of society acquiring robustly safe and beneficial AI.
Motivation:
- Most challenges can be approached by trial and error, so many of our habits and social structures are set up to encourage this. There are some challenges where we may not get that opportunity, and it could be very helpful to know which methods help in tackling a complex challenge that has to be right the first time.
- Giving an artificial intelligence good values may be a particularly important challenge, and one where we need to be right the first time. (This is distinct from creating systems that act intelligently at all, which can be done by trial and error.)
- Building stronger societal knowledge about how to approach such problems may leave us more robustly prepared for these challenges. Having more programmers in the AI field who are familiar with the relevant techniques is likely to be particularly important.
Idea: Develop methods for training people to write code without bugs.
- Trying to teach the skill of getting things right first time.
- Writing or editing code that has to be bug-free without any testing is a fairly easy challenge to set up, and it has several of the right kinds of properties. There are also some parallels between value specification and programming.
- The set-up puts people in scenarios where they get only one chance: no opportunity to test part or all of the code, just close analysis before submitting (see the sketch after this list).
- Interested in personal habits as well as social norms or procedures that help with this.
- Daniel Dewey points to the standards for code on the Space Shuttle as a good example of achieving highly reliable code edits.
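To make the exercise format concrete, here is a minimal sketch of the kind of task a training session might use; the task, function name, and invariant comments are my own illustrative assumptions, not part of the original notes. The trainee would have to convince themselves by inspection alone that the code is correct for every valid input before submitting, since classic binary-search bugs (off-by-one bounds, non-termination) often survive casual testing.

```python
# Hypothetical one-shot exercise: submit without running anything,
# arguing correctness from the stated invariant instead.

def first_index_at_least(xs: list[int], target: int) -> int:
    """Return the smallest i with xs[i] >= target, or len(xs) if none.

    Precondition: xs is sorted in non-decreasing order.
    """
    lo, hi = 0, len(xs)
    # Invariant: xs[i] < target for all i < lo, and
    #            xs[i] >= target for all i >= hi.
    while lo < hi:
        mid = (lo + hi) // 2  # lo <= mid < hi, so the range shrinks
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo
```

Grading would then consist of running a hidden test suite exactly once, with no partial credit for near-misses.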
How to implement:
- Ideal: offer this training to staff at software companies, for profit.
  - Although it's teaching a skill under artificial hardship, it seems plausible that it could teach enough good habits and lines of thinking to noticeably increase productivity, so people would be willing to pay for it.
  - Because such training could create social value in the short run, this might be a good opportunity to launch a business that simultaneously does valuable direct work.
  - Similarly, there might be a market for a consultancy that helps organisations get general tasks right the first time, if we knew how to teach that skill.
- More funding-intensive, less labour-intensive: run competitions with cash prizes.
  - Try to establish it as something like a competitive sport for teams.
  - Outsource the work of determining good methods to the contestants (a possible scoring rule is sketched below).
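To make this concrete, here is a minimal sketch of how an all-or-nothing scoring rule for a one-shot round might look; the names and structure are my own assumptions, not a settled design. A team earns credit only if its submission passes every hidden test on its single permitted run, mirroring the "right first time" constraint.

```python
# Minimal sketch of all-or-nothing scoring for a one-shot round.
# Names and structure here are illustrative assumptions.

from typing import Callable, Sequence

Submission = Callable[..., object]

def score_one_shot(
    submission: Submission,
    hidden_tests: Sequence[Callable[[Submission], bool]],
) -> int:
    """Return 1 if the submission passes every hidden test, else 0."""
    return int(all(test(submission) for test in hidden_tests))

# Example: a squaring function judged on its single permitted run.
tests = [lambda f: f(3) == 9, lambda f: f(-2) == 4, lambda f: f(0) == 0]
print(score_one_shot(lambda x: x * x, tests))  # prints 1
```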
This is all quite preliminary and I'd love to get more thoughts on it. I offer up this idea because I think it would be valuable, but it isn't my comparative advantage. If anyone is interested in a project in this direction, I'm very happy to talk about it.
http://cr.yp.to/cv/activities-20050107.pdf (Daniel J. Bernstein's CV; apparently his accomplishments are legendary in crypto circles)
http://www.fastcompany.com/28121/they-write-right-stuff
Personal experience: I found that I was able to reduce my bug rate pretty dramatically with moderate effort (~6 months of paying attention to what I was doing and trying to improve my workflow, without doing anything advanced like screencasting myself or even setting aside dedicated self-improvement time), and I think the bug rate could probably be reduced even further by adding more layers of process.
In any case, I think it makes sense to favor the development of bug-reduction technologies like version control, testing systems, and type systems as part of a broad program of differential technological development. (I wonder how far you could go by analyzing almost every AGI failure mode as a bug of some sort, in the "do what I mean, not what I say" sense. The key issues are that bugs don't always manifest instantly, and that they sometimes change behavior subtly instead of immediately halting program execution. Maybe the "superintelligence would have tricky bugs" framing would be an easier sell for AI risk to computer scientists. This view would imply that we need to learn to write bug-free code, including anticipating and preventing AGI-specific bugs like wireheading, before building an AGI.)
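As a toy illustration of the type-system point (the example and names below are mine, assuming a standard checker such as mypy): the unit-confusion bug is rejected statically before the program ever runs, even though plain Python would execute it without complaint.

```python
# Toy illustration of bug-reduction tech: a static type checker
# such as mypy flags the unit-confusion bug below before the code
# runs, even though Python itself would execute it silently.

from typing import NewType

Metres = NewType("Metres", float)
Feet = NewType("Feet", float)

def braking_distance(speed: Metres) -> Metres:
    # Placeholder formula; the point is the unit discipline.
    return Metres(speed * 0.5)

altitude = Feet(1000.0)
braking_distance(altitude)  # mypy: incompatible type "Feet"; expected "Metres"
```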
See also: My proposal for how to structure FAI development.
Thanks, this is a great collection of relevant information.
I agree with your framing of this as differential tech development. Do you have any thoughts on the best routes to push on this?
I will want to think more about framing AGI failures as (subtle) bugs. My initial impression is positive, but I have some worry that it would introduce a new set of misconceptions.