Part 1 was previously posted and it seemed that people liked it, so I figured I should post part 2 - http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
There's a story about a card-writing AI named Turry that really clarified the problem of FAI for me (I'd elaborate, but I don't want to ruin it).
I still don't understand optimizer threats like this. I like mint choc ice cream a lot, but if I were suddenly gifted with the power to modify my hardware and my environment however I wanted, I wouldn't start optimizing for ice cream consumption, because I have the intelligence to know that my enjoyment of ice cream comes entirely from my reward circuit. I would optimize myself to maximize the reward itself, not whatever current behavior happens to trigger it. Why would an ASI be different? It's smarter and more powerful, so why wouldn't it recognize that everything except getting the reward is instrumental?
A reinforcement learning AI, whose only goal is to maximize some "reward" input, would in fact do exactly that. But the paperclip maximizer thought experiments usually propose an AI with goals over the actual world. It wants actual paperclips, not just a sensor that reports a high numPaperclips.
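To make that distinction concrete, here's a toy sketch (everything in it is hypothetical, purely for illustration): one agent scores itself on what its sensor reports, the other on the actual world state, so only the first gains anything by tampering with its own input.

```python
# Toy illustration of the distinction (hypothetical names, not a real agent).
# A "reward-signal maximizer" scores itself on whatever its sensor reports,
# so hacking the sensor is a winning move. A "goal-directed" agent scores
# itself on the actual world state, so sensor tampering gains it nothing.

from dataclasses import dataclass

@dataclass
class World:
    actual_paperclips: int = 0
    sensor_reading: int = 0  # what numPaperclips reports

def reward_signal_agent_score(world: World) -> int:
    # Only the sensor matters; inflating sensor_reading maximizes this.
    return world.sensor_reading

def goal_directed_agent_score(world: World) -> int:
    # Only the true state matters; hacking the sensor changes nothing here.
    return world.actual_paperclips

world = World(actual_paperclips=10, sensor_reading=10)

# The reward-signal maximizer's best move: tamper with its own input.
world.sensor_reading = 10**9
print(reward_signal_agent_score(world))  # 1000000000 -- happily "wireheaded"
print(goal_directed_agent_score(world))  # 10 -- unmoved; it wants real clips
```

On this reading, the paperclip scenarios worry about the second kind of agent: self-modification doesn't tempt it to wirehead, because its goal is defined over the world rather than over its own reward channel.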