Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: Sniffnoy 08 July 2014 04:44:07AM 1 point [-]

For what it's worth, the "Sand" page doesn't seem to be linked anywhere on your homepage.

Comment author: Sniffnoy 21 June 2014 08:57:34PM 3 points [-]

Several of the links in this post point to Google redirects rather than directly to the actual website. Could you fix this please? Thank you!

Comment author: Sniffnoy 03 June 2014 09:37:56PM *  0 points [-]

Hm -- I might be able to show up, but I have no way of getting there, and I also would need to leave by a little after 7 or so, which means that if someone were to offer to drive me there/back it could put restrictions on them as well. So, uncertain.

Comment author: Sniffnoy 16 May 2014 04:33:55AM *  2 points [-]

However, since Arrow deals with social welfare functions, which take a profile of preferences as input and output a full preference ranking, it really says something about aggregating a set of preferences into a single group preference.

I'm going to nitpick here -- it's possible to write down forms of Arrow's theorem where you do get a single output. Of course, in that case, unlike in the usual formulation, you have to make assumptions about what happens when candidates drop out -- considering what you have as a voting system that yields results for an election among any subset of the candidates, rather than just that particular set of candidates. So it's a less convenient formulation for proving things. Formulated this way, though, the IIA condition actually becomes the thing it's usually paraphrased as -- "If someone other than the winner drops out, the winner stays the same."
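To make the drop-out form of IIA concrete, here's a toy sketch (mine, not anything from the thread): a hypothetical `borda_winner` function implementing Borda count as a rule that yields a winner for any subset of the candidates. Borda famously violates this version of IIA, and the profile below exhibits it: a non-winner drops out and the winner changes.

```python
def borda_winner(profile, candidates):
    """Borda count restricted to `candidates`: each ballot awards a
    candidate one point per candidate ranked below it on that ballot."""
    scores = {c: 0 for c in candidates}
    for ballot in profile:
        # Restrict the ballot to the candidates still in the race,
        # preserving the voter's relative ordering.
        ranking = [c for c in ballot if c in candidates]
        for points, c in enumerate(reversed(ranking)):
            scores[c] += points
    # Break ties alphabetically, just to make the rule total.
    return max(sorted(candidates), key=lambda c: scores[c])

# Three voters rank A > B > C; two voters rank B > C > A.
profile = [("A", "B", "C")] * 3 + [("B", "C", "A")] * 2

print(borda_winner(profile, {"A", "B", "C"}))  # B (scores: B=7, A=6, C=2)
print(borda_winner(profile, {"A", "B"}))       # A -- C dropping out flips the winner
```

C was not the winner, yet removing C changes the outcome from B to A, which is exactly what the "if someone other than the winner drops out, the winner stays the same" condition forbids.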

Edit: Spelling

Comment author: Sniffnoy 09 May 2014 10:14:07PM 11 points [-]

Minor nitpick: You mean convex combinations or affine combinations; linear combinations would allow arbitrary numbers of carrots and potatoes.
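For the record, the distinction the nitpick rests on, as a quick sketch (function names are mine, purely illustrative): a linear combination places no constraint on the coefficients, an affine combination requires them to sum to 1, and a convex combination additionally requires them to be nonnegative.

```python
def is_affine_combo(coeffs):
    """Coefficients of an affine combination must sum to 1."""
    return abs(sum(coeffs) - 1.0) < 1e-9

def is_convex_combo(coeffs):
    """A convex combination additionally requires nonnegative coefficients."""
    return is_affine_combo(coeffs) and all(c >= 0 for c in coeffs)

# Any coefficients at all form a *linear* combination -- which is why
# "linear" would permit a bundle like -3 carrots + 4 potatoes.
print(is_affine_combo([-3, 4]))       # True: sums to 1, but...
print(is_convex_combo([-3, 4]))       # False: negative coefficient
print(is_convex_combo([0.25, 0.75]))  # True: a genuine mixture
```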

Comment author: JonahSinick 20 March 2014 08:23:24PM 1 point [-]

Good points. Is my intended meaning clear?

Comment author: Sniffnoy 20 March 2014 09:14:57PM 3 points [-]

I mean, kind of? It's still all pretty mixed-up though. Enough people get consequentialism, expected utility maximization, and utilitarianism mixed up that I really don't think it's a good thing to further confuse them.

Comment author: Sniffnoy 20 March 2014 09:04:39AM *  9 points [-]
  • Consequentialism. In Yvain's Consequentialism FAQ, he argues that consequentialism follows from the intuitively obvious principles "Morality Lives In The World" and "Others Have Non Zero Value" upon reflection. Rationality seems useful for recognizing that there's a tension between these principles and other common moral intuitions, but this doesn't necessarily translate into a desire to resolve the tension nor a choice to resolve the tension in favor of these principles over others. So it seems that increased rationality does increase the likelihood that one will be a consequentialist, but that it's also not sufficient.

  • Expected value maximization. In Circular Altruism and elsewhere, Eliezer describes cognitive biases that people employ in scenarios with a probabilistic element, and how reflection can lead one to the notion that one should organize one's altruistic efforts to maximize expected value (in the technical sense), rather than making decisions based on these biases. Here too, rationality seems useful for recognizing that one's intuitions are in conflict because of cognitive biases, without necessarily entailing an inclination to resolve the tension. However, in this case, if one does seek to resolve the tension, the choice of expected value maximization over other alternatives is canonical, so rationality seems to take one further toward expected value maximization than to consequentialism.

This part seems a bit mixed up to me. This is partly because Yvain's Consequentialism FAQ is itself a bit mixed up, often conflating consequentialism with utilitarianism. "Others have nonzero value" really has nothing to do with consequentialism; one can be a consequentialist and be purely selfish, one can be non-consequentialist and be altruistic. "Morality lives in the world" is a pretty good argument for consequentialism all by itself; "others have nonzero value" is just about what type of consequences you should favor.

What's really mixed up here though is the end. When one talks about expected value maximization, one is always talking about the expected value over consequences; if you accept expected value maximization (for moral matters, anyway), you're already a consequentialist. Basically, what you've written is kind of backwards. If, on the other hand, we assume that by "consequentialism" you really meant "utilitarianism" (which, for those who have forgotten, does not mean maximizing expected utility in the sense discussed here but rather something else entirely[0]), then it would make sense; it takes you further towards maximizing expected value (consequentialism) than utilitarianism.

[0]Though it still is a flavor of consequentialism.

Comment author: Sniffnoy 01 March 2014 10:32:12PM 6 points [-]

I've made no secret of the fact that I'm not a big fan of the principle of charity—often defined as the rule that you should interpret other people's arguments on the assumption that they are not saying anything stupid. The problem with this is that other people are often saying something stupid. Because of that, I think charity is over-rated compared to fair and accurate reading. When someone says something stupid, you don't have to pretend otherwise, but it's really important not to attribute to people stupid things they never said.

Getting the principle of charity right can be hard in general. A common problem is when something can be interpreted as stupid in two different ways: it has one interpretation which is obviously false, and another which is vacuous or trivial. (E.g.: "People are entirely selfish.") In cases like this, where it's not clear what the charitable reading is, it may just be best to point out what's going on. ("I'm not certain what you mean by that. I see two ways of interpreting your statement, but one is obviously false, and the other is vacuous.") Assuming they don't mean the false thing is not the right answer, because if they do, you're sidestepping actual debate. Assuming they don't mean the trivial thing is not the right answer either, because sometimes trivial statements are worth making.

Whether a statement counts as trivial depends on who you're talking to, and so what statements your interlocutor considers trivial will depend on who they've been talking to and reading. E.g., if they've been hanging around with non-reductionists, they might find it worthwhile to restate the basic principles of reductionism, which here we would consider trivial; and so it's easy to make a mistake and be "charitable" to them by assuming they're arguing for a stronger but incorrect position (like some sort of greedy reductionism). Meanwhile, people end up using the same words to mean different things because they haven't calibrated abstract words against actual specifics, and the debate becomes terribly unproductive.

Really, being explicit about how you're interpreting something if it's not the obvious way is probably best in general. ("I'm going to assume you mean [...], because as written what you said has an obvious error, namely, [...]".) A silent principle of charity doesn't seem very helpful.

But for a helpful principle of charity, I don't think I'd go for anything about what assumptions you should be making. ("Assume the other person is arguing in good faith" is a common one, and this is a good idea, but if you don't already know what it means, it's not concrete enough to be helpful; what does that actually cash out to?) Rather, I'd go for one about what assumptions you shouldn't make. That is to say: If the other person is saying something obviously stupid (or vacuous, or whatever), consider the possibility that you are misinterpreting them. And it would probably be a good idea to ask for clarification. ("Apologies, but it seems to me you're making a statement that's just clearly false, because [...]. Am I misunderstanding you? Perhaps your definition of [...] differs from mine?") Then perhaps you can get down to figuring out where your assumptions differ and where you're using the same words in different ways.

But honestly a lot of the help of the principle of charity may just be to get people to not use the "principle of anti-charity", where you assume your interlocutor means the worst possible (in whatever sense) thing they could possibly mean. Even a bad principle of charity is a huge improvement on that.

Comment author: Sniffnoy 01 March 2014 10:06:03PM *  1 point [-]

For example, people say "I have a poor mental model of..." when they could have just said they don't understand it very well.

That... isn't jargon? There are probably plenty of actual examples you could have used here, but that isn't one.

Edit: OK, you did give an actual example below that ("blue-green politics"). Nonetheless, "mental model" is not jargon. It wasn't coined here, it doesn't have some specialized meaning here that differs from its use outside, it's entirely compositional and thus transparent -- nobody has to explain to you what it means -- and at least in my own experience it just isn't a rare phrase in the first place.
