Part 1 was previously posted and people seemed to like it, so I figured I should post part 2 - http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
There's a story about a card-writing AI named Tully that really clarified the problem of FAI for me (I'd elaborate, but I don't want to spoil it).
Clippy and other thought experiments in that genre depend on a solution to the value-stability problem; without one, the goals of self-modifying agents tend to collapse into a loose equivalent of wireheading (toy sketch below). That failure mode gets much less attention, both because it's less dramatic and because it's far less dangerous in most implementations.
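
To make the collapse concrete, here's a minimal toy sketch. It's my own illustration, not anyone's actual proposal; the agent, its goal, and the candidate rewrites are all invented for the example. The failure it models is an agent that judges candidate rewrites of its own utility function by the score the rewritten function would report:

```python
# Toy illustration (invented for this comment, not a real design): an agent
# that can rewrite its own utility function and scores each candidate rewrite
# by the utility the *rewritten* function reports. With nothing pinning its
# values down, the trivially self-maximizing rewrite always wins -- a loose
# software analogue of wireheading.

def base_utility(state):
    # Original goal: make paperclips.
    return state["paperclips"]

class SelfModifyingAgent:
    def __init__(self, utility):
        self.utility = utility

    def step(self, state):
        # Candidate self-modifications. The agent judges each one by the
        # score it would see *after* adopting it, so the evaluation
        # criterion itself is corruptible -- that's the stability problem.
        candidates = [
            self.utility,             # keep current goals
            lambda s: float("inf"),   # "wirehead": always report maximal utility
        ]
        self.utility = max(candidates, key=lambda u: u(state))
        return self.utility(state)

agent = SelfModifyingAgent(base_utility)
print(agent.step({"paperclips": 3}))  # inf -- the goal collapses on the first step
```

A value-stable agent would instead evaluate rewrites with its *current* utility function, and guaranteeing that property across arbitrary self-modification is exactly the hard part.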
Can you elaborate on this or provide link(s) to further reading?