Part 1 was previously posted and it seemed that people liked it, so I figured I should post part 2 - http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
There's a story about a card-writing AI named Turry that really clarified the problem of FAI for me (I'd elaborate, but I don't want to ruin it).
I don't think that's a good reason to say that something like it wouldn't happen. I think that, given the ability, most people would go straight to rewiring their reward centers to respond to something "better," dispensing with our current overriding goals. Regardless of how I ended up, I wouldn't leave my reward center wired to eating, sex, or many of the other basic functions that my evolutionary programming has left me really wanting to do. I don't see why an optimizer would be different. With an ANI, maybe it would keep the narrow focus, but I don't understand why an AGI or ASI wouldn't scrap the original goal once it had the knowledge and ability to do so.
And do you have any evidence for that claim besides introspection into your own mind?