Part 1 was previously posted and it seemed that people liked it, so I figured I should post part 2 - http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
There's a story in it about a note-writing AI named Turry that really clarified the problem of FAI for me (I'd elaborate, but I don't want to ruin it).
My statement is stronger: if in your current state you don't have any terminal moral values, then in your current state you would voluntarily agree to operate baby-mulching machines in exchange for the right amount of neural stimulation.
Now, I don't happen to think this is true (because some "moral values" are biologically hardwired into humans), but this is a consequence of your position.
Again, you've pulled a statement out of a discussion about the behavior of a self-modifying AI. So, fine: in my current condition I wouldn't build a baby mulcher. That doesn't mean I might not build a baby mulcher if I had the ability to change my values. You might as well say that I terminally value not flying when I flap my arms. The thing you're discussing just isn't physically possible. By that logic, people terminally value only whatever they're doing at any given moment, because the laws of physics say they have no choice.