TheAncientGeek comments on AlphaGo versus Lee Sedol - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I suspect that this dates back to a time when MIRI believed the answer to AI safety was to build an agentive, maximal superintelligence, align its values with ours, and put it in charge of all the other AIs.
The first idea has been effectively shelved, since MIRI has produced about zero lines of code, but the idea that AI safety is value alignment continues with considerable momentum. And value alignment only makes sense if you are building an agentive AI (and have given up on corrigibility).