Wei_Dai comments on What Are Probabilities, Anyway? - Less Wrong

Post author: Wei_Dai 11 December 2009 12:25AM

Comment author: Wei_Dai 11 December 2009 08:27:43PM 2 points [-]

The different interpretations suggest different approaches to answer the question of "what is the right prior?" and also different approaches to decision theory. I mentioned that the "caring" interpretation fits well with UDT.

Comment author: DanArmak 11 December 2009 09:47:18PM 0 points [-]

Can't you choose your (arational) preferences to get any behaviour (decision theory) no matter what interpretation you choose?

Comment author: Wei_Dai 11 December 2009 10:01:32PM 1 point [-]

Preferences may be arational, but they're not completely arbitrary. In moral philosophy there are still arguments for what one's preferences should be, even if they are generally much weaker than the arguments in rationality. Different interpretations influence what kinds of arguments apply or make sense to you, and therefore influence your preferences.

Comment author: DanArmak 11 December 2009 10:12:15PM *  0 points [-]

How can there be arguments about what preferences should be? Aren't they, well, a sort of unmoved mover, a primal cause? (To use some erstwhile philosophical terms :-)

I can understand meta-arguments that say your preferences should be consistent in some sense, or that argue about subgoal preferences given some supergoals. But even under strict constraints of that kind, you have a lot of latitude, from humans to paperclip maximizers and beyond. Within that range, does interpreting probabilities differently really give you extra power you can't get by fine-tuning your prefs?

Edit: the reason I'd prefer editing prefs is that talking about the Meaning of Probabilities sets off my materialism sensors. It leads to things like multiple-world theories because they're easy to think about as an interpretation of QM, regardless of whether they actually exist. Then they can actually negatively affect our prefs or behavior.

Comment author: Wei_Dai 11 December 2009 10:24:31PM 0 points [-]

How can there be arguments about what preferences should be?

Well, I don't know what many of my preferences should be. How can I find out except by looking for and listening to arguments?

Aren't they, well, a sort of unmoved mover, a primal cause? (To use some erstwhile philosophical terms :-)

No, not for humans anyway.

Comment author: DanArmak 11 December 2009 10:37:43PM 0 points [-]

Well, I don't know what many of my preferences should be. How can I find out except by looking for and listening to arguments?

That implies there's some objectively definable standard for preferences which you'll be able to recognize once you see it. It also raises the question of what in your current preferences says "I have to go out and get some more/different preferences!" From a goal-driven intelligence's POV, asking others to modify your prefs in unspecified ways is pretty much the anti-rational act.

Comment author: Wei_Dai 12 December 2009 12:23:39AM 1 point [-]

I think we need to distinguish between what a rational agent should do, and what a non-rational human should do to become more rational. Nesov's reply to you also concerns the former, I think, but I'm more interested in the latter here.

Unlike a rational agent, we don't have well-defined preferences, and the preferences that we think we have can be changed by arguments. What to do about this situation? Should we stop thinking up or listening to arguments, and just fill in the fuzzy parts of our preferences with randomness or indifference, in order to emulate a rational agent in the most direct manner possible? That doesn't make much sense to me.

I'm not sure what we should do exactly, but whatever it is, it seems like arguments must make up a large part of it.

Comment author: DanArmak 12 December 2009 12:43:46AM 1 point [-]

Please see my reply to Nesov above, too.

I think we shouldn't try to emulate rational agents at all, in the sense that we shouldn't pretend to have rationality-style preferences and supergoals; as a matter of fact we don't have them.

Up to here we seem to agree; we just use different terminology. I just don't want to conflate rational preferences with human preferences, because the two systems behave very differently.

Just as an example, in signalling theories of behaviour, you may consciously believe that your preferences are very different from what your behaviour is actually optimizing for when no one is looking. A rational agent wouldn't normally have separate conscious/unconscious minds unless only the conscious part was subject to outside inspection. In this example, it makes sense to update signalling-preferences sometimes, because they're not your actual acting-preferences.

But if you consciously intend to act out your (conscious) preferences, and also intend to keep changing them in not-always-foreseeable ways, then that isn't rationality, and when there could be confusion due to context (such as on LW most of the time) I'd prefer not to use the term "preferences" about humans, or to make clear what is meant.

Comment author: Vladimir_Nesov 12 December 2009 12:36:24AM *  1 point [-]

That arguments modify preference means that you are (denotationally) arriving at different preferences depending on the arguments. This means that, from the perspective of a specific given preference (or the "true" neutral preference not biased by specific arguments), you fail to obtain an optimal rational decision algorithm, and thus to achieve a high-preference strategy. But at the same time, "absence of action" is also an action, so not exploring the arguments may well be the worse choice, since you won't be moving forward towards a clearer understanding of your own preference, even if the preference that you come to understand will be somewhat biased compared to the unknown original one.

Thus, there is a tradeoff:

  • Irrational perception of arguments leads to modification of preference, which is bad for the original preference, but
  • Considering moral arguments leads to a clearer understanding of some preference close to the original one, which allows one to make more rational decisions, which is good for the original preference.

Comment author: timtyler 12 December 2009 10:07:34AM *  -1 points [-]

FWIW, my preferences have not been changed by arguments in the last 20 years. So I don't think your "we" includes me.

Comment author: Vladimir_Nesov 11 December 2009 11:13:09PM *  0 points [-]

As an example, consider the arguments in form of proofs/disproofs of the statements that you are interested in. Information doesn't necessarily "change" or "determine arbitrarily" the things you take from it, it may help you to compute an object in which you are already interested, without changing that object, and at the same time be essential in moving forward. If you have an algorithm, it doesn't mean that you know what this algorithm will give you in the end, what the algorithm "means". Resist the illusion of transparency.

Comment author: DanArmak 12 December 2009 12:13:00AM 0 points [-]

I don't understand what you're saying as applied to this argument. That Wei Dai has an algorithm for modifying his preferences and he doesn't know what the end output of that algorithm will be?

Comment author: Vladimir_Nesov 12 December 2009 12:21:09AM 1 point [-]

There will always be something about preference that you don't know, and it's not the question of modifying preference, it's a question of figuring out what the fixed unmodifiable preference implies. Modifying preference is exactly the wrong way of going about this.

If we figure out the conceptual issues of FAI, we'd basically have the algorithm that is our preferences, but not in its infinite and unknowable denotational "execution trace" form.

Comment author: DanArmak 12 December 2009 12:35:48AM 0 points [-]

As Wei says below, we should consider rational agents (who have explicit preferences separate from the rest of their cognitive architecture) separately from humans who want to approximate that in some ways.

I think that if we first define separate preferences, and then proceed to modify them over and over again, this is so different from rational agents that we shouldn't call it preferences at all. We can talk about e.g. morals instead, or about habits, or biases.

On the other hand if we define human preferences as 'whatever human behavior happens to optimize', then there's nothing interesting about changing our preferences, this is something that happens all the time whether we want it to or not. Under this definition Wei's statement that he deliberately makes it happen is unclear (the totality of a human's behaviour, knowledge, etc. is subtly changing over time in any case) so I assumed he was using the former definition.

Comment author: timtyler 12 December 2009 10:03:43AM 0 points [-]

Re: "How can there be arguments about what preferences should be?"

The idea that some preferences are "better" than other ones is known as "moral realism".

Comment author: DanArmak 12 December 2009 02:44:06PM -1 points [-]

Wikipedia says moral realists (in general) claim that moral propositions can be true or false as objective facts but their truth cannot be observed or verified. This doesn't make any sense. Sounds like religion.

Comment author: timtyler 12 December 2009 03:57:49PM *  1 point [-]

Are you looking at http://en.wikipedia.org/wiki/Moral_realism ...?

Care to quote an offending section about moral truths not being observable or verifiable?

Comment author: DanArmak 12 December 2009 04:51:53PM *  -1 points [-]

Under the section "Criticisms":

Others are critical of moral realism because it postulates the existence of a kind of "moral fact" which is nonmaterial and does not appear to be accessible to the scientific method. Moral truths cannot be observed in the same way as material facts (which are objective), so it seems odd to count them in the same category. One emotivist counterargument (although emotivism is usually non-cognitivist) alleges that "wrong" actions produce measurable results in the form of negative emotional reactions, either within the individual transgressor, within the person or people most directly affected by the act, or within a (preferably wide) consensus of direct or indirect observers.

Regarding the emotivist criticism, it begs a lot of questions. Surely not all negative emotional reactions signal wrong moral actions. Besides, emotivism isn't aligned with moral realism.

Comment author: timtyler 12 December 2009 06:15:13PM *  1 point [-]

I see - thanks.

That some criticisms of moral realism appear to lack coherence does not seem to me to be a point that counts against the idea.

I expect moral realists would deny that morality is any more nonmaterial than any other kind of information - and would also deny that it does not appear to be accessible to the scientific method.

Comment author: DanArmak 12 December 2009 07:09:01PM 0 points [-]

If moral realism acts as a system of logical propositions and deductions, then it has to have moral axioms. How are these grounded in material reality? How can they be anything more than "because I said so and I hope you'll agree"? Isn't the choice of axioms done using a moral theory nominally opposed to moral realism, such as emotivism, or (amoral) utilitarianism?

Comment author: Johnicholas 12 December 2009 06:05:37PM 0 points [-]

It might also sound like science - don't scientists generally claim that propositions about the world can be true or false, but cannot be directly observed or verified?

Joshua Greene's thesis "The Terrible, Horrible, No Good, Very Bad Truth about Morality and What to Do About it" might be a decent introduction to moral realism / irrealism. Overall it is an argument for irrealism.

Comment author: DanArmak 12 December 2009 07:15:09PM *  0 points [-]

In science, a proposition about the world can generally be confirmed or disconfirmed to an arbitrary degree of probability, so you can become as sure about it as you like if you invest enough resources.

In moral realism, propositions are purely logical constructs, and can be proven true or false just like a mathematical proposition. Their truth is one with the truth of the axioms used, and the axioms can't be proven or disproven with any degree of certainty; they are simply accepted or not accepted. The morality is internally consistent, but you can't derive it from the real world, and you can't derive any fact about the real world from the morality. That sounds just like theology to me. (The difference between this and ordinary math or logic is that mathematical constructs aren't supposed to lead to should or ought statements about behavior.)

I will read Greene's thesis, but as far as I can tell it argues against moral realism (and does it well), so it won't help me understand why anyone would believe in it.