lukeprog comments on The Human's Hidden Utility Function (Maybe) - Less Wrong

Post author: lukeprog 23 January 2012 07:39PM




Comment author: Vladimir_Nesov 23 January 2012 10:55:29PM 0 points

OK, in that case I'm confused, since I don't see any connection between the first and second sentences...

Comment author: lukeprog 23 January 2012 10:59:08PM 2 points

Let me try again:

Two-step process = (1) Extract preferences, (2) Extrapolate preferences. This may not work. This is one reason this discovery of three valuation systems in the brain is so weak and preliminary for the purposes of CEV. I'm not sure it will turn out to be relevant to CEV at all.

Comment author: Vladimir_Nesov 23 January 2012 11:31:16PM 5 points

I see, so the two-step thing acts as a precondition. Is it right that you are thinking of descriptive idealization/analysis of the human brain as a path that might lead to a definition of "current" (extracted) preferences, which is then to be corrected by "extrapolation"? If so, that would clarify for me your motivation for hoping to get anything FAI-relevant out of neuroscience: the extrapolation step would correct the fatal flaws of the extraction step.

(I think the extrapolation step (in this context) is magic that can't work, and that analysis of the human brain must instead extract/define the right decision problem "directly", that is, formally/automatically, without losing information during the descriptive idealization performed by humans, which any object-level study of neuroscience requires.)

Comment author: lukeprog 24 January 2012 12:37:02AM 4 points

Extraction + extrapolation is one possibility, though at this stage in the game it still looks incoherent to me. But sometimes things look incoherent before somebody smart comes along and makes them coherent and tractable.

Another possibility is that an FAI uploads some subset of humans and has them reason through their own preferences for a million subjective years and does something with their resulting judgments and preferences. This might also be basically incoherent.

Another possibility is that a single correct response to preferences falls out of game theory and decision theory, as Drescher attempts in Good and Real. This might also be incoherent.

Comment author: Vladimir_Nesov 24 January 2012 12:58:43AM 2 points

In these terms, the plan I see as most promising is that the correct way of extracting preferences from humans, one that doesn't require further "extrapolation", falls out of decision theory.

(Not sure what you meant by Drescher's option (what's a "response to preferences"?): does the book suggest that it's unnecessary to use humans as utility definition material? In any case, this doesn't sound like something he would currently believe.)

Comment author: lukeprog 24 January 2012 01:03:23AM 0 points

As I recall, Drescher still used humans as utility definition material but thought that there might be a single correct response to these utilities — one which falls out of decision theory and game theory.

Comment author: Vladimir_Nesov 24 January 2012 01:19:26AM 1 point

What's "response to utilities" (in grandparent you used "response to preferences" which I also didn't understand)? Response of what for what purpose? (Perhaps, the right question is about what you mean by "utilities" here, as in extracted/descriptive or extrapolated/normative.)

Comment author: lukeprog 24 January 2012 07:28:23AM 1 point

Response of what for what purpose?

Yeah, I don't know. It's kind of like asking what "should" or "ought" means. I don't know.

Comment author: Vladimir_Nesov 24 January 2012 01:40:58PM 3 points

No, it's not a clarifying question about subtleties of that construction; I have no inkling of what you mean (seriously, no irony), and hence fail to parse what you wrote (related to "response to utilities" and "response to preferences") at the most basic level. This is what I see in the grandparent:

Drescher still used humans as utility definition material but thought that there might be a single correct borogove — one which falls out of decision theory and game theory.

Comment author: lukeprog 25 January 2012 01:51:46AM 0 points

For our purposes, how about...

Drescher still used humans as utility definition material but thought that there might be a single, morally correct way to derive normative requirements from values — one which falls out of decision theory and game theory.

Comment author: pjeby 23 January 2012 11:07:15PM 2 points

I think you've also missed the possibility that all three "systems" might just be the observably inconsistent behavior of one system in different edge cases, or at least that the systems are far more entangled and far less independent than they seem.

(I think you may have also ignored the part where, to the extent that the model-based system has values, they are often more satisficing than maximizing.)