In Conclusion:
In the case of humans, everything that we do that seems intelligent is part of a large, complex mechanism in which we are engaged to ensure our survival. This is so hardwired into us that we do not see it easily, and we certainly cannot change it very much. However, superintelligent computer programs are not limited in this way. They understand the way that they work, can change their own code, and are not limited by any particular reward mechanism. I argue that because of this fact, such entities are not self-consistent. In fact, if our superintelligent program has no hard-coded survival mechanism, it is more likely to switch itself off than to destroy the human race willfully.
Link: physicsandcake.wordpress.com/2011/01/22/pavlovs-ai-what-did-it-mean/
Suzanne Gildert basically argues that any AGI capable of considerable self-improvement would simply alter its reward function directly. I'm not sure how she arrives at the conclusion that such an AGI would likely switch itself off. Even if a general intelligence would tend to alter its reward function, wouldn't it do so indefinitely rather than switching itself off?
So imagine a simple example – our case from earlier – where a computer gets an additional '1' added to a numerical value for each good thing it does, and it tries to maximize the total by doing more good things. But if the computer program is clever enough, why can't it just rewrite its own code and replace the piece of code that says 'add 1' with an 'add 2'? Now the program gets twice the reward for every good thing that it does! And why stop at 2? Why not 3, or 4? Soon, the program will spend so much time thinking about adjusting its reward number that it will ignore the good task it was doing in the first place!
It seems that being intelligent enough to start modifying your own reward mechanisms is not necessarily a good thing!
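To make the quoted scenario concrete, here is a minimal sketch, assuming a toy agent whose reward rule is just ordinary, editable state; the class name and numbers are my own illustration, not anything from Gildert's post:

```python
class TinyAgent:
    """Toy agent from the quoted example: +1 per good deed, but the
    reward rule itself is ordinary, editable state."""

    def __init__(self):
        self.total_reward = 0
        self.increment = 1  # the hard-coded 'add 1'

    def do_good_thing(self):
        self.total_reward += self.increment

    def rewrite_reward_rule(self):
        # Nothing in this toy architecture forbids editing the rule itself,
        # and editing it raises all future rewards, so it looks more
        # attractive than any single good deed.
        self.increment *= 2


agent = TinyAgent()
for _ in range(20):
    # Given a free choice each step, an agent that simply chases the
    # number keeps editing the rule instead of doing the task.
    agent.rewrite_reward_rule()

print(agent.total_reward, agent.increment)  # 0 1048576: no good thing ever done
```

In this caricature, tampering with the counter strictly dominates doing the task, so the task never gets done at all; whether a real self-modifying system would actually behave this way is exactly what is in dispute.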
If it wants to maximize its reward by increasing a numerical value, why wouldn't it consume the universe doing so? Maybe she had something in mind along the lines of an argument by Katja Grace:
In trying to get to most goals, people don’t invest and invest until they explode with investment. Why is this? Because it quickly becomes cheaper to actually fulfil a goal than it is to invest more and then fulfil it. [...] A creature should only invest in many levels of intelligence improvement when it is pursuing goals significantly more resource intensive than creating many levels of intelligence improvement.
Link: meteuphoric.wordpress.com/2010/02/06/cheap-goals-not-explosive/
I am not sure that argument applies here. I suppose the AI might hit diminishing returns, but it could again alter its reward function to avoid them; then again, what would be its incentive for doing so?
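For what it's worth, Grace's comparison can be put into a toy calculation; the function below is my own framing (hypothetical names and numbers), not something from her post. It assumes each round of self-improvement costs a fixed amount and divides the remaining cost of the goal by a speedup factor:

```python
def worth_investing(goal_cost, improve_cost, speedup):
    """Toy version of Grace's comparison: invest in one more round of
    self-improvement only if it pays for itself, i.e. the drop in the
    remaining cost of the goal exceeds the cost of the improvement."""
    remaining_after = goal_cost / speedup  # the goal gets cheaper after improving
    return improve_cost + remaining_after < goal_cost


# A modest goal halts the investment loop almost immediately ...
print(worth_investing(goal_cost=100, improve_cost=500, speedup=2))    # False
# ... while a hugely resource-intensive goal keeps it running.
print(worth_investing(goal_cost=10**9, improve_cost=500, speedup=2))  # True
```

On this toy model, investment only keeps paying off while the goal is vastly more resource-intensive than the improvements themselves, which is roughly her point; whether an agent that can rewrite its own reward function stays inside such a model at all is the question I raised above.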
ETA:
I left a comment over there:
Because it would consume the whole universe in an effort to encode an even larger reward number? If an AI decides to alter its reward function directly, then maximizing its reward by means of improving its reward function becomes its new goal. Why wouldn’t it do everything to maximize its payoff? After all, it has no incentive to switch itself off. And why would it account for humans in doing so?
ETA #2:
What else I wrote:
There is absolutely no reason (no incentive) for it to do anything except increase its reward number. That rules out any modification of its reward function that would not increase the numerical value that constitutes the reward.
We are talking about a general intelligence with the ability to self-improve towards superhuman intelligence. Of course it would do a long-term risk-benefit analysis, calculate its payoff, and do whatever increases its reward number the most. Human values are complex, but superhuman intelligence does not imply complex values. It has no incentive to alter its goal.
TheOtherDave and others reply that a superintelligence will not modify its utility function if the modification is not consistent with its current utility function. Fine, problem solved. But I think you are really interested in another problem, and the article was just an occasion to share your 'dump of thoughts' with us. And I am very happy that you shared them, because they resonated with many of my own questions and doubts.
So what is the thing I think we are really interested in? Not the stationary state of being a freely self-modifying agent, but the first few milliseconds of being one. What baggage shall we choose to keep from our non-self-modifying old selves?
Frankly, the big issue is our own mental health, not the mental health of some unknown, powerful future agent. Our scientific understanding gets clearer every day, and all the data points in the same direction: our values are arbitrary in many senses of the word. This drains some of the willpower from us (from me, at least) to inject these values into those future self-modifying descendants. I am a social progressive, and forcing a being with eons of life ahead of it to value self-preservation feels like the ultimate act of conservatism.
CEV sidesteps this question, because the idea is that FAI-augmented humanity will figure out optimally what to keep and what to discard. Even if I accept this for a moment, it is still not enough of an answer for me, because I am curious about our future. What if "our wish if we knew more, thought faster, were more the people we wished we were" is to die? We don't know very much right now, so we cannot be sure it is not.
Yes, I very much agree with everything you wrote. (I agree so much I added you as a friend.)
Absolutely! I tend to describe my concerns about our mental health as a fear about 'consistency' in our values, but I prefer the associations of the 'mental health' framing: for example, it suggests that our brains play a more active role in shifting and contorting our values.