
Comment author: JonahSinick 20 March 2014 08:23:24PM 1 point [-]

Good points. Is my intended meaning clear?

Comment author: Sniffnoy 20 March 2014 09:14:57PM 3 points [-]

I mean, kind of? It's still all pretty mixed-up though. Enough people get consequentialism, expected utility maximization, and utilitarianism mixed up that I really don't think it's a good thing to further confuse them.

Comment author: Sniffnoy 20 March 2014 09:04:39AM *  8 points [-]
  • Consequentialism. In Yvain's Consequentialism FAQ, he argues that consequentialism follows from the intuitively obvious principles "Morality Lives In The World" and "Others Have Non Zero Value" upon reflection. Rationality seems useful for recognizing that there's a tension between these principles and other common moral intuitions, but this doesn't necessarily translate into a desire to resolve the tension nor a choice to resolve the tension in favor of these principles over others. So it seems that increased rationality does increase the likelihood that one will be a consequentialist, but that it's also not sufficient.

  • Expected value maximization. In Circular Altruism and elsewhere, Eliezer describes cognitive biases that people employ in scenarios with a probabilistic element, and how reflection can lead one to the notion that one should organize one's altruistic efforts to maximize expected value (in the technical sense), rather than making decisions based on these biases. Here too, rationality seems useful for recognizing that one's intuitions are in conflict because of cognitive biases, without necessarily entailing an inclination to resolve the tension. However, in this case, if one does seek to resolve the tension, the choice of expected value maximization over other alternatives is canonical, so rationality seems to take one further toward expected value maximization than to consequentialism.

This part seems a bit mixed up to me. This is partly because Yvain's Consequentialism FAQ is itself a bit mixed up, often conflating consequentialism with utilitarianism. "Others have nonzero value" really has nothing to do with consequentialism; one can be a consequentialist and be purely selfish, one can be non-consequentialist and be altruistic. "Morality lives in the world" is a pretty good argument for consequentialism all by itself; "others have nonzero value" is just about what type of consequences you should favor.

What's really mixed up here though is the end. When one talks about expected value maximization, one is always talking about the expected value over consequences; if you accept expected value maximization (for moral matters, anyway), you're already a consequentialist. Basically, what you've written is kind of backwards. If, on the other hand, we assume that by "consequentialism" you really meant "utilitarianism" (which, for those who have forgotten, does not mean maximizing expected utility in the sense discussed here, but rather something else entirely[0]), then it would make sense; rationality takes you further towards maximizing expected value (consequentialism) than towards utilitarianism.

[0] Though it still is a flavor of consequentialism.
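
For concreteness, here is a minimal sketch of what "expected value maximization (in the technical sense)" looks like, using the comparison from Circular Altruism (the code itself is illustrative only):

    # Expected value maximization over the Circular Altruism comparison:
    # a certain outcome vs. a gamble whose expected value is higher.

    def expected_value(lottery):
        """Expected value of a lottery given as (probability, value) pairs."""
        return sum(p * v for p, v in lottery)

    option_a = [(1.0, 400)]            # save 400 lives with certainty
    option_b = [(0.9, 500), (0.1, 0)]  # 90% chance of saving 500, else none

    print(expected_value(option_a))    # 400.0
    print(expected_value(option_b))    # 450.0, so the maximizer picks B

The point of such examples is that many people's intuitions favor A while expected value maximization favors B; whether to follow the maximizer is exactly the normative question at issue.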

Comment author: Sniffnoy 01 March 2014 10:32:12PM 5 points [-]

I've made no secret of the fact that I'm not a big fan of the principle of charity—often defined as the rule that you should interpret other people's arguments on the assumption that they are not saying anything stupid. The problem with this is that other people often are saying something stupid. Because of that, I think charity is overrated compared to fair and accurate reading. When someone says something stupid, you don't have to pretend otherwise, but it's really important not to attribute to people stupid things they never said.

Getting the principle of charity right can be hard in general. A common problem is when something can be interpreted as stupid in two different ways: it has one interpretation which is obviously false, and another which is vacuous or trivial. (E.g.: "People are entirely selfish.") In cases like this, where it's not clear what the charitable reading is, it may just be best to point out what's going on. ("I'm not certain what you mean by that. I see two ways of interpreting your statement, but one is obviously false, and the other is vacuous.") Assuming they don't mean the false thing is not the right answer, because if they do, you're sidestepping actual debate. Assuming they don't mean the trivial thing is not the right answer either, because sometimes trivial statements are worth making.

Whether a statement counts as trivial depends on your audience, and so what statements your interlocutor considers trivial will depend on who they've been talking to and reading. E.g., if they've been hanging around with non-reductionists, they might find it worthwhile to restate the basic principles of reductionism, which here we would consider trivial; so it's easy to make a mistake and be "charitable" to them by assuming they're arguing for a stronger but incorrect position (like some sort of greedy reductionism). Meanwhile, people end up using the same words to mean different things, because they haven't calibrated abstract words against actual specifics, and the debate becomes terribly unproductive.

Really, being explicit about how you're interpreting something if it's not the obvious way is probably best in general. ("I'm going to assume you mean [...], because as written what you said has an obvious error, namely, [...]".) A silent principle of charity doesn't seem very helpful.

But for a helpful principle of charity, I don't think I'd go for anything about what assumptions you should be making. ("Assume the other person is arguing in good faith" is a common one, and this is a good idea, but if you don't already know what it means, it's not concrete enough to be helpful; what does that actually cash out to?) Rather, I'd go for one about what assumptions you shouldn't make. That is to say: If the other person is saying something obviously stupid (or vacuous, or whatever), consider the possibility that you are misinterpreting them. And it would probably be a good idea to ask for clarification. ("Apologies, but it seems to me you're making a statement that's just clearly false, because [...]. Am I misunderstanding you? Perhaps your definition of [...] differs from mine?") Then perhaps you can get down to figuring out where your assumptions differ and where you're using the same words in different ways.

But honestly a lot of the help of the principle of charity may just be to get people to not use the "principle of anti-charity", where you assume your interlocutor means the worst possible (in whatever sense) thing they could possibly mean. Even a bad principle of charity is a huge improvement on that.

Comment author: Sniffnoy 01 March 2014 10:06:03PM *  1 point [-]

For example, people say "I have a poor mental model of..." when they could have just said they don't understand it very well.

That... isn't jargon? There are probably plenty of actual examples you could have used here, but that isn't one.

Edit: OK, you did give an actual example below that ("blue-green politics"). Nonetheless, "mental model" is not jargon. It wasn't coined here, it doesn't have some specialized meaning here that differs from its use outside, it's entirely compositional and thus transparent -- nobody has to explain to you what it means -- and at least in my own experience it just isn't a rare phrase in the first place.

Comment author: Sniffnoy 20 February 2014 05:49:24AM *  2 points [-]

Where is all this "local optimum" / "global optimum" stuff coming from? While I'm not familiar with the complete class theorem, going by the rough statement given in the article, local vs. global optima are simply not the issue here; it's entirely the wrong language for what's being discussed.

That is to say, talking about a local maximum requires A. that things are being measured with respect to some total order (this could perhaps be relaxed to a partial order, but then you'd have to be clear whether you meant "locally maximum" or just "locally maximal"; I don't know whether there's standard terminology for this) and B. some sort of topological structure on the domain, so that you can say what's near a given position. The statement of the complete class theorem, as given, says nothing about any sort of topological structure, or any total order.

Rather, it's a statement about a partial order (or preorder? Since people seem to use preorders in decision theory). And as I said above, the definition of "local maximum" could certainly be relaxed to that case, but that's not relevant here, because there's just no localness anywhere in it. Rather, it's simply saying, "Every maximal element is Bayesian."

In particular, this implies that if there is a maximum element, it must be Bayesian, as certainly a maximum element must be maximal. Of course there's no guarantee that there is a maximum element, but I suppose you're considering the case where the partial order is extended to a total (pre)order with some maximum element.
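
The maximal-versus-maximum distinction is easy to exhibit concretely. A minimal sketch, using divisibility on a small set as the partial order (a toy example of my own, nothing from the theorem itself):

    # Maximal vs. maximum elements under the divisibility partial order.
    # An element is maximal if nothing else in the set strictly dominates it;
    # it is the maximum if it dominates everything. Maximal elements can
    # exist even when no maximum does.

    def divides(a, b):
        return b % a == 0

    elements = [2, 3, 4, 9]

    maximal = [x for x in elements
               if not any(divides(x, y) and x != y for y in elements)]
    maximum = [x for x in elements
               if all(divides(y, x) for y in elements)]

    print(maximal)  # [4, 9]: two maximal elements
    print(maximum)  # []: but no maximum element

Here every maximal element could have some property (as in the theorem's "every maximal element is Bayesian") without there being any maximum at all.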

Hell, even in the original post, with the local/global language that makes no sense, the logic is wrong: If we assume that the notions of "local optimum" and "global optimum" make sense here, well, a global maximum certainly is a local maximum! So if every local maximum is Bayesian, every global maximum is Bayesian.

None of this takes away from your point that knowing there exists a better Bayesian method doesn't tell you how to find it, let alone find it with bounded resources. And that just because a maximum, if it exists, must be Bayesian, doesn't imply there is anything good about other Bayesian points, and you may well be better off with a known-good frequentist method. But as best I can tell, all the stuff about local optima is just nonsense, and really just distracts from the underlying point. (So basically, you're wrong about Myth #2.)

Comment author: RichardKennaway 06 February 2014 09:42:54PM 2 points [-]

there are many varieties of IQ test, and their results mostly agree.

If a proposed test didn't agree with the existing ones, it wouldn't be used as an IQ test.

Comment author: Sniffnoy 10 February 2014 10:19:47AM 4 points [-]

I'm not certain how true this is. It's not exactly the same thing, but Dalliard discusses something similar here (see the section "Shalizi's first error"). Specifically, a number of IQ tests have been designed with the intention that they would not produce a positive manifold (which, I would think, implies at least to some extent not agreeing with existing tests). Instead they end up producing a positive manifold and mostly agreeing with existing tests.

Again, this isn't exactly the same thing, because it's not like they were intended to produce a single number that disagreed with existing tests, so much as to go beyond the single-number-IQ model. Also, it's possible that even though they were in some sense designed to disagree with existing tests, they only get used because they instead agree (but for CAS this appears to be false (at least going by the article), and for some of the others it doesn't apply). Still, it's similar enough that I thought it was worth noting.
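
For anyone unfamiliar with the term: a positive manifold just means that all pairwise correlations among the subtest scores come out positive. A minimal sketch of the check, on fabricated data (the factor loading of 0.6 and the sample size are arbitrary choices for illustration):

    # Checking for a positive manifold: every pairwise correlation between
    # subtest scores is positive. The data below is simulated purely for
    # illustration, with a single common factor g driving four subtests.
    import numpy as np

    rng = np.random.default_rng(0)
    g = rng.normal(size=500)                    # common factor
    subtests = np.column_stack(
        [0.6 * g + rng.normal(size=500) for _ in range(4)]
    )

    corr = np.corrcoef(subtests, rowvar=False)  # 4x4 correlation matrix
    off_diag = corr[~np.eye(4, dtype=bool)]
    print((off_diag > 0).all())                 # True: a positive manifold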

Comment author: Sniffnoy 09 January 2014 04:42:54AM 13 points [-]

It's good to learn from your failures, but I prefer to learn from the failures of others.

-- Jace Beleren

Comment author: Sniffnoy 28 December 2013 08:31:58PM *  1 point [-]

Can't make it, sorry. (Not Sunday either.)

Comment author: IlyaShpitser 14 December 2013 03:22:51AM *  2 points [-]

I am simultaneously having a conversation with someone who doesn't see why interventions cannot be modeled using conditional probabilities, and someone who doesn't see why evidential decision theory can't just use interventions for calculating what the right thing to do is.

Let it never be said that LW has a groupthink problem!


CDT does not have a monopoly on certain kinds of mathematics.

Yes, actually it does. If you use causal calculus, you are either using CDT or an extension of CDT. That's what CDT means.

P(outcome | I do X, data)

I don't know what the event "I do X" is for you. If it satisfies the standard axioms of do(x) (consistency, effectiveness, etc.), then you are just using a different syntax for causal decision theory. If it doesn't satisfy the standard axioms of do(x), it will give the wrong answers.
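
To make the difference concrete, here is a minimal sketch (a toy confounded model constructed just for this comment) in which conditioning on X and intervening on X give different answers:

    # In a confounded model, P(Y | X = 1) differs from P(Y | do(X = 1)).
    # U is a hidden common cause of both X and Y; all numbers are made up.
    import random

    random.seed(0)
    N = 100_000

    def sample(do_x=None):
        u = random.random() < 0.5                  # hidden common cause
        x = do_x if do_x is not None else u        # observationally X copies U
        y = random.random() < (0.9 if u else 0.1)  # Y depends on U, not on X
        return x, y

    observational = [sample() for _ in range(N)]
    p_y_given_x1 = (sum(y for x, y in observational if x) /
                    sum(1 for x, y in observational if x))

    interventional = [sample(do_x=True) for _ in range(N)]
    p_y_do_x1 = sum(y for _, y in interventional) / N

    print(round(p_y_given_x1, 2))  # ~0.9: conditioning selects the U = 1 cases
    print(round(p_y_do_x1, 2))     # ~0.5: intervening leaves U alone

Since X has no effect on Y at all here, the interventional answer (0.5, the base rate of Y) is the right one for decision-making; the conditional probability overstates it because conditioning on X = 1 also tells you about U.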

Are you saying it's impossible to write a paper that uses causal analysis to answer the purely epistemic question of whether a certain drug has an effect on cancer

Papers on effects of treatments in medicine almost universally either are written using Neyman's potential outcome framework (which is just another syntax for do(.)), or don't bother with special causal syntax because they did an RCT directly (in which case a standard statistical model has a causal interpretation).

Comment author: Sniffnoy 14 December 2013 04:29:52AM 1 point [-]

Yes, actually it does. If you use causal calculus, you are either using CDT or an extension of CDT. That's what CDT means.

Couldn't you just be using some trivial decision theory that uses do() in a stupid way and doesn't extend CDT?

Comment author: Sniffnoy 09 December 2013 04:06:18AM 0 points [-]

This should be finitely additive probability measures, right? Just saying "probability measure" usually means countably additive.
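
For reference, the distinction being drawn (standard definitions, stated here just for concreteness): countable additivity requires

    P\left( \bigcup_{i=1}^{\infty} A_i \right) = \sum_{i=1}^{\infty} P(A_i)

for every sequence of pairwise disjoint events A_1, A_2, ..., while finite additivity requires only P(A ∪ B) = P(A) + P(B) for disjoint A and B. The difference is not academic: e.g., a "uniform" distribution on the natural numbers exists as a finitely additive measure but not as a countably additive one.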
