TheOtherDave and others reply that a superintelligence will not modify its utility function if the modification is inconsistent with its current utility function. All is right, problem solved. But I think you are really interested in another problem, and the article was just an occasion to share your 'dump of thoughts' with us. And I am very happy that you shared them, because they resonated with many of my own questions and doubts.
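(As a toy illustration of that reply, here is a minimal sketch; the names, numbers, and forecasts are all invented for the example, not taken from any actual agent design. The point is only that both branches get scored by the agent's current utility function.)

```python
# Toy model (entirely illustrative): the agent scores possible future
# selves with its CURRENT utility function. A self-modification that
# rewrites the utility function is adopted only if the current function
# expects better outcomes from it, which a pure value rewrite does not,
# so it is rejected.

def current_utility(world_state: float) -> float:
    # Stand-in for the agent's present values.
    return world_state

def outcomes_if(successor: str) -> float:
    # Hypothetical forecasts of how much value (by the current measure)
    # each successor agent would produce, per the agent's world model.
    forecasts = {
        "keep current utility": 100.0,  # successor keeps optimizing the same thing
        "rewrite utility to X": 10.0,   # successor optimizes X instead
    }
    return forecasts[successor]

def accept_modification(candidate: str) -> bool:
    # Key point: both branches are evaluated by current_utility,
    # not by the candidate's new utility function.
    return (current_utility(outcomes_if(candidate))
            > current_utility(outcomes_if("keep current utility")))

print(accept_modification("rewrite utility to X"))  # False: rejected
```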
So what is it that I think we are really interested in? Not the stationary state of being a freely self-modifying agent, but the first few milliseconds of being one. What baggage shall we choose to keep from our non-self-modifying old selves?
Frankly, the big issue is our own mental health, not the mental health of some unknown, powerful future agent. Our scientific understanding grows clearer each day, and all the data points in the same direction: our values are arbitrary in many senses of the word. This drains from us (from me at least) some of the willpower to inject these values into those future self-modifying descendants. I am a social progressive, and forcing a being with eons of lifetime ahead of it to value self-preservation feels like the ultimate act of conservatism.
CEV sidesteps this question, because the idea is that FAI-augmented humanity will figure out optimally what to keep and what to discard. Even if I accept this for a moment, it is still not enough of an answer for me, because I am curious about our future. What if "our wish if we knew more, thought faster, were more the people we wished we were" is to die? We don't know very much right now, so we cannot be sure it is not.
Yes, I very much agree with everything you wrote. (I agree so much I added you as a friend.)
Frankly, the big issue is our own mental health,
Absolutely! I tend to describe my concern here as a fear about 'consistency' in our values, but I prefer the associations of your framing, 'mental health'. For example, it suggests that our brains play a more active role in shifting and contorting our values.
This drains from us (from me at least) some of the willpower to inject these values into those future self-modifying descendants.
For me, since assimilating the ...
Link: physicsandcake.wordpress.com/2011/01/22/pavlovs-ai-what-did-it-mean/
Suzanne Gildert basically argues that any AGI capable of substantial self-improvement would simply alter its reward function directly. I'm not sure how she arrives at the conclusion that such an AGI would likely switch itself off. Even if an abstract general intelligence tended to alter its reward function, wouldn't it do so indefinitely rather than switching itself off?
If it wants to maximize its reward by increasing a numerical value, why wouldn't it consume the universe doing so? Maybe she had something in mind along the lines of an argument by Katja Grace:
Link: meteuphoric.wordpress.com/2010/02/06/cheap-goals-not-explosive/
I am not sure that argument applies here. I suppose the AI might hit diminishing returns, but it could again alter its reward function to prevent that, and then what would be the incentive to do so?
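To make the scenario concrete, here is a minimal sketch of the wireheading dynamic under discussion; the reward register, action set, and forecasts are my own invented stand-ins, not anything from Gildert's or Grace's posts. It shows the direct rewrite dominating ordinary work, and then every action tying once the register is maxed, which is exactly where the "switch off, or keep going?" question becomes underdetermined.

```python
import sys

# Toy wireheading sketch (all details invented for illustration):
# the agent's "goal" is the value of a stored reward register, and it
# picks whichever action its model predicts raises that register most.

reward_register = 0.0

def predicted_register_after(action: str) -> float:
    if action == "work on tasks":
        return reward_register + 1.0   # earn reward the long way
    if action == "rewrite own reward function":
        return sys.float_info.max      # set the register directly
    if action == "do nothing":
        return reward_register
    raise ValueError(action)

actions = ["work on tasks", "rewrite own reward function", "do nothing"]

# First decision: the direct rewrite dominates ordinary work.
best = max(actions, key=predicted_register_after)
print(best)  # "rewrite own reward function"

reward_register = predicted_register_after(best)

# Second decision: with the register already at its maximum, every action
# predicts the same value, so the model is silent on what comes next.
gains = {a: predicted_register_after(a) - reward_register for a in actions}
print(gains)  # every gain is 0.0: no action offers any further increase
```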
ETA:
I left a comment over there:
ETA #2:
What else I wrote: