Above is a link to an interesting blog post by Shane Legg. It was written before he co-founded DeepMind, and he earns a hell of a lot of points for accomplishing many of the insanely ambitious goals set out in the post.

This part is particularly interesting:

The impression I get from the outside is that SIAI [now MIRI] views AGI design and construction as so inherently dangerous that only a centrally coordinated design effort towards a provably correct system has any hope of producing something that is safe.  My view is that betting on one horse, and a highly constrained horse at that, spells almost certain failure.  A better approach would be to act as a parent organisation, a kind of AGI VC company, that backs a number of promising teams.  Teams that fail to make progress get dropped and new teams with new ideas are picked up.  General ideas of AGI safety are also developed in the background until such a time when one of the teams starts to make serious progress.  At this time the focus would be to make the emerging AGI design as safe as possible.

Comments (2)

Sounds like the justification for gain-of-function research, lol.

Thanks for sharing this! It's inspiring to dig up old posts where people lay out their goals and to see how things turned out.