Mitchell_Porter comments on A Master-Slave Model of Human Preferences - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
If we are going to have a serious discussion about these matters, at some point we must face the fact that the physical description of the world contains no such thing as a preference or a want - or a utility function. So the difficulty of such extractions or extrapolations is twofold. Not only is the act of extraction or extrapolation itself conditional upon a value system (i.e. normative metamorality is just as "relative" as is basic morality), but there is nothing in the physical description to tell us what the existing preferences of an agent are. Given the physical ontology we have, the ascription of preferences to a physical system is always a matter of interpretation or imputation, just as is the ascription of semantic or representational content to its states.
It's easy to miss this in a decision-theoretic discussion, because decision theory already assumes some concept like "goal" or "utility", always. Decision theory is the rigorous theory of decision-making, but it does not tell you what a decision is. It may even be possible to create a rigorous "reflective decision theory" which tells you how a decision architecture should choose among possible alterations to itself, or a rigorous theory of normative metamorality, the general theory of what preferences agents should have towards decision-architecture-modifying changes in other agents. But meta-decision theory will not bring you any closer to finding "decisions" in an ontology that doesn't already have them.
But to what extent does the result depend on the initial "seed" of interpretation? Maybe very little. For example, predicting the behavior of a given physical system strictly speaking runs into the problem of induction, but that doesn't mean that anything goes, or that what will actually happen is ambiguous to any significant extent.
I'd upvote this comment twice if I could.
p(wedrifid would upvote a comment twice | he upvoted it once) > 0.95
Would other people have a different approach?
I'd use some loose scale where the quality of the comment correlated with the number of upvotes it got. Assuming a user could give up to two upvotes per comment, a funny one-liner or a moderately interesting comment would get one vote, and a truly insightful one two.
p(Kaj would upvote a comment twice | he upvoted it once) would probably be somewhere around [.3, .6]
That's the scale I use. Unfortunately, my ability to (directly) influence how many upvotes it gets is limited to a plus or minus one shift.
I don't think this simple characterisation resembles the truth: the whole point of this enterprise is to make sure things go differently, in a way they just couldn't proceed by themselves. Thus, observing existing "tendencies" doesn't quite capture the idea of preference.
And there's your "opinion or interpretation" --- not just in how you draw the boundary (which didn't exist in the original ontology), but in your choice of the theory that you use to evaluate your counterfactuals.
Of course, such theories can be better or worse, but only with respect to some prior system of evaluation.
Still, probably a question of Aristotelian vs. Newtonian mechanics, i.e. not hard to see who wins.
Agreed, but not responsive to Mitchell Porter's original point. (ETA: . . . unless I'm missing your point.)
I don't hear differently... I even suspect that preference is introspective, that is, it depends on the way the system works "internally", not just on how it interacts with the environment. That is, two agents with different preferences may do exactly the same thing in all contexts. Even if not, it's a long way from how the agent (in its craziness and stupidity) actually changes the environment to how it would prefer (on reflection, if it were smarter and saner) the environment to change.
Yeah, maybe. But it doesn't.
Beware: you are making a common sense-based prediction about what would be the output of a process that you don't even have the right concepts for specifying! (See my reply to your other comment.)
Wow. Too bad I missed this when it was first posted. It's what I wish I'd said when justifying my reply to Wei_Dai's attempted belief/values dichotomy here and here.
One lesson of reductionism, and of the success of simple-laws-based science and technology, is that for real-world systems there might be no simple way of describing them, but there could be a simple way of manipulating their data-rich descriptions. (What's the yield strength of a car? -- Wrong question!) Given a gigabyte's worth of problem statement and the right simple formula, you could get an answer to your query. There is a weak analogy with misapplication of Occam's razor, where one tries to reduce the amount of stuff rather than the amount of detail in the ways of thinking about this stuff.
In the case of beliefs/desires separation, you are looking for a simple problem statement, for a separation in the data describing the person itself. But what you should be looking for is a simple way of implementing the make-smarter-and-better extrapolation on a given pile of data. The beliefs/desires separation, if it's ever going to be made precise, is going to reside in the structure of this simple transformation, not in the people themselves.
I want to point out that in the interpretation of prior as weights on possible universes, specifically as how much one cares about different universes, we can't just replace "incorrect" beliefs with "the truth". In this interpretation, there can still be errors in one's beliefs caused by things like past computational mistakes, and I think fixing those errors would constitute helping, but the prior perhaps needs to be preserved as part of preference.
I agree this is part of the problem, but like others here I think you might be making it out to be harder than it is. We know, in principle, how to translate a utility function into a physical description of an object: by coding it as an AI and then specifying the AI along with its substrate down to the quantum level. So, again in principle, we can go backwards: take a physical description of an object, consider all possible implementations of all possible utility functions, and see if any of them matches the object.
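To make the "go backwards" direction concrete, here is a toy sketch under heavy simplifying assumptions (a fixed three-outcome world, flawless implementations, utility functions reduced to strict preference orderings; all names are illustrative inventions, not anything from the discussion): enumerate candidate utility functions and keep those whose optimal policy reproduces the observed behavior.

```python
# Toy illustration: recovering candidate utility functions from observed
# behavior, assuming a tiny discrete world and flawless implementations.
from itertools import permutations

OUTCOMES = ["A", "B", "C"]

def optimal_policy(utility):
    """The ideal implementation of a utility function: pick the best outcome."""
    return max(OUTCOMES, key=utility.get)

def candidate_utilities():
    """Enumerate all strict preference orderings over the outcomes."""
    for ranking in permutations(OUTCOMES):
        # Later position in the ranking = more preferred.
        yield dict(zip(ranking, range(len(ranking))))

def matching_utilities(observed_choice):
    """Keep every utility function whose optimal policy matches the observation."""
    return [u for u in candidate_utilities()
            if optimal_policy(u) == observed_choice]

# An observed agent that always picks "B":
matches = matching_utilities("B")
```

Even this toy version exhibits the underdetermination worry: more than one utility function is consistent with the same observed behavior, so the match is not unique.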
I think it's enough to consider computer programs and dispense with the details of physics -- everything else can be discovered by the program. You are assuming a "bottom" level of physics, the "quantum level", but there is no bottom, not really; there is only the beginning, where our own minds are implemented, and the process of discovery that defines the way we see the rest of the world.
If you start with an AI design parameterized by preference, you are not going to enumerate all programs, only a small fraction of programs that have the specific form of your AI with some preference, and so for a given arbitrary program there will be no match. Furthermore, you are not interested in finding a match: if a human was equal to the AI, you are already done! It's necessary to explicitly go the other way, starting from arbitrary programs and understanding what a program is, deeply enough to see preference in it. This understanding may give an idea of a mapping for translating a crazy ape into an efficient FAI.
When I said "all possible implementations of all possible utility functions", I meant to include flawed implementations. But then two different utility functions might map onto the same physical object, so we'd also need a theory of implementation flaws that tells us, given two implementations of a utility function, which is more flawed.
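One candidate shape for such a theory of implementation flaws, sketched here as a toy (the metric, names, and numbers are all assumptions for illustration), is to score an implementation by how much utility its choices lose relative to the optimal policy, averaged over situations:

```python
# Toy "flaw metric": expected utility shortfall of an implementation
# relative to the optimal policy, under an assumed true utility function.
OUTCOMES = ["A", "B", "C"]
UTILITY = {"A": 0, "B": 1, "C": 2}  # assumed true utility function

def flaw(implementation, situations):
    """Average utility lost by the implementation's choices across situations."""
    best = max(UTILITY.values())
    losses = [best - UTILITY[implementation(s)] for s in situations]
    return sum(losses) / len(losses)

# Two implementations of the same utility function:
def impl_faithful(situation):
    return "C"  # always picks the optimal outcome

def impl_confused(situation):
    return "B" if situation % 2 else "C"  # suboptimal half the time

situations = range(10)
# flaw(impl_faithful, situations) == 0.0
# flaw(impl_confused, situations) == 0.5
```

On this metric, the implementation whose choices forgo more utility counts as "more flawed", which is one way of breaking the tie when two different utility functions map onto the same physical object.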
This is WAY too hand-wavy an explanation for "in principle, we can go backwards" (from a system to its preference). I believe that in principle, we can, but not via injecting fuzziness of "implementation flaws".
Here's another statement of the problem: One agent's bias is another agent's heuristic. And the "two agents" might be physically the same, but just interpreted differently.