Jonii comments on Open Thread: December 2009 - Less Wrong
I'm interested in values. Rationality is usually defined as something like an agent trying to maximize its own utility function. But humans, as far as I can tell, don't really have anything like "values" beyond "stay alive, get immediately satisfying sensory input".
This, as far as I can tell, results in lip service to the "greater good": people select some nice values they signal they want to promote, when in reality they haven't done the math by which those selected "values" would derive from the "stay alive"-like ones. And so their actions seem irrational, but only because they signal having values they don't actually have or care about.
This probably boils down to finding something to protect, but overall this issue is really confusing.
So, I've been thinking about this for some time now, and here's what I've got:
First, the point here is to self-reflect your way to wanting what you really want. This presumably converges to some specific set of first-degree desires for each of us. However, now I'm a bit lost on what we call "values": are they the set of first-degree desires we have (or don't?), the set of first-degree desires we would reach after an infinity of self-reflection, or the set of first-degree desires we know we want to have at any given time?
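To make the "converges after an infinity of self-reflection" picture a bit more concrete, here is a minimal toy sketch. It is purely illustrative: it models first-degree desires as a vector of numbers and one step of self-reflection as an arbitrary made-up update rule (a contraction), so that iterating it settles into a fixed point. Nothing here is anyone's actual proposal; the function names and the update rule are assumptions for the example.

```python
def reflect(desires):
    """One hypothetical self-reflection step: nudge each desire toward the
    average of the others (a stand-in for reconciling conflicting wants)."""
    avg = sum(desires) / len(desires)
    return [0.5 * d + 0.5 * avg for d in desires]

def reflective_equilibrium(desires, tol=1e-9, max_steps=10_000):
    """Iterate reflection until the desires stop changing (or we give up)."""
    for _ in range(max_steps):
        updated = reflect(desires)
        if max(abs(u - d) for u, d in zip(updated, desires)) < tol:
            return updated
        desires = updated
    return desires  # no convergence detected within max_steps

print(reflective_equilibrium([1.0, 0.2, -0.7]))
```

With this toy update rule convergence is guaranteed and fast; the worry in the rest of the comment is precisely that real human reflection has no such guarantee, and that we can't compute where it would end up.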
As far as I can tell, akrasia would be a subproblem of this.
So, this should be about right. However, I think it's weird that people here talk a bit about akrasia and how to achieve those n-degree desires, but I haven't seen anything about actually reflecting on and updating what you want. It seems to me that people trust a bit too much in the power of cognitive dissonance to fix the gap between wanting to want and actually wanting, which results in a lack of actual desire to achieve what you know you should want (akrasia).
I really dunno how to overcome this, but this gap seems worth discussing.
Also, since we need an eternity of self-reflection to reach what we really want, this looks kinda bad for FAI: figuring out where our self-reflection would converge after infinitely many steps seems pretty much impossible to compute, so we're left with compromises that can, and probably eventually will, lead to something we really don't want.