
Comment author: blumsha 02 September 2014 08:03:14PM 2 points [-]

Where from? I'll be coming from Ann Arbor.

Comment author: Sniffnoy 03 September 2014 04:42:57AM 1 point [-]

Oh, thanks! I'm in Ann Arbor too! Since the comment above, though, torekp has offered me a ride, so it's no longer necessary. Thanks again, though!

Comment author: Sniffnoy 02 September 2014 07:09:15AM 2 points [-]

I'd like to go, but at present I don't have any way of getting there. Would anyone be willing to give me a ride like last time?

Comment author: Sniffnoy 22 July 2014 05:15:19AM *  3 points [-]
Comment author: Sniffnoy 08 July 2014 04:44:07AM 1 point [-]

For what it's worth, the "Sand" page doesn't seem to be linked anywhere on your homepage.

Comment author: Sniffnoy 21 June 2014 08:57:34PM 3 points [-]

Several of the links in this post point to Google redirects rather than directly to the actual website. Could you fix this please? Thank you!

Comment author: Sniffnoy 03 June 2014 09:37:56PM *  0 points [-]

Hm -- I might be able to show up, but I have no way of getting there, and I also would need to leave by a little after 7 or so, which means that, if someone were to offer to drive me there and back, it could put restrictions on them as well. So, uncertain.

Comment author: Sniffnoy 16 May 2014 04:33:55AM *  2 points [-]

However, since Arrow deals with social welfare functions which take a profile of preferences as input and output a full preference ranking, it really says something about aggregating a set of preferences into a single group preference.

I'm going to nitpick here -- it's possible to write down forms of Arrow's theorem where you do get a single output. Of course, in that case, unlike in the usual formulation, you have to make assumptions about what happens when candidates drop out -- that is, you have to treat what you have as a voting system that yields a result for an election among any subset of the candidates, rather than just that particular set of candidates. So it's a less convenient formulation for proving things. Formulated this way, though, the IIA condition actually becomes the thing it's usually paraphrased as -- "If someone other than the winner drops out, the winner stays the same."

Edit: Spelling
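
To make the restated IIA condition concrete, here is a minimal sketch (my own illustration, not anything from Arrow's paper or the post above) using plurality voting; the ballot counts are hypothetical, chosen so that a non-winner dropping out changes the winner, which is exactly the failure the condition forbids.

```python
from collections import Counter

def plurality_winner(ballots, candidates):
    """Return the plurality winner of an election restricted to `candidates`.

    Each ballot is a preference order (best first); a ballot counts toward
    its highest-ranked candidate who is still in the running.
    """
    tallies = Counter(
        next(c for c in ballot if c in candidates) for ballot in ballots
    )
    return tallies.most_common(1)[0][0]

# Hypothetical electorate of 100 voters with a split opposition to A.
ballots = (
    [("A", "B", "C")] * 35    # 35 voters: A > B > C
    + [("B", "C", "A")] * 33  # 33 voters: B > C > A
    + [("C", "B", "A")] * 32  # 32 voters: C > B > A
)

full_winner = plurality_winner(ballots, {"A", "B", "C"})
reduced_winner = plurality_winner(ballots, {"A", "B"})  # C (not the winner) drops out

print(full_winner, reduced_winner)  # prints: A B -- the winner changed
```

Plurality fails the condition here because C acts as a spoiler; a system satisfying the winner-form IIA condition would have to rule out exactly this kind of reversal.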

Comment author: Sniffnoy 09 May 2014 10:14:07PM 11 points [-]

Minor nitpick: You mean convex combinations or affine combinations; linear combinations would allow arbitrary (even negative) numbers of carrots and potatoes.
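
For reference, here are the standard definitions (my gloss, not part of the original comment), writing x for carrots, y for potatoes, and a, b for the coefficients:

```latex
% Combinations of bundles x and y:
\begin{align*}
  \text{linear combination:} \quad & a\,x + b\,y, && a, b \in \mathbb{R} \\
  \text{affine combination:} \quad & a\,x + b\,y, && a + b = 1 \\
  \text{convex combination:} \quad & a\,x + b\,y, && a + b = 1,\ a, b \ge 0
\end{align*}
```

Only the convex case guarantees nonnegative amounts of each vegetable.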

Comment author: JonahSinick 20 March 2014 08:23:24PM 1 point [-]

Good points. Is my intended meaning clear?

Comment author: Sniffnoy 20 March 2014 09:14:57PM 3 points [-]

I mean, kind of? It's still all pretty mixed-up though. Enough people get consequentialism, expected utility maximization, and utilitarianism mixed up that I really don't think it's a good thing to further confuse them.

Comment author: Sniffnoy 20 March 2014 09:04:39AM *  9 points [-]
  • Consequentialism. In Yvain's Consequentialism FAQ, he argues that consequentialism follows from the intuitively obvious principles "Morality Lives In The World" and "Others Have Non Zero Value" upon reflection. Rationality seems useful for recognizing that there's a tension between these principles and other common moral intuitions, but this doesn't necessarily translate into a desire to resolve the tension nor a choice to resolve the tension in favor of these principles over others. So it seems that increased rationality does increase the likelihood that one will be a consequentialist, but that it's also not sufficient.

  • Expected value maximization. In Circular Altruism and elsewhere, Eliezer describes cognitive biases that people employ in scenarios with a probabilistic element, and how reflection can lead one to the notion that one should organize one's altruistic efforts to maximize expected value (in the technical sense), rather than making decisions based on these biases. Here too, rationality seems useful for recognizing that one's intuitions are in conflict because of cognitive biases, without necessarily entailing an inclination to resolve the tension. However, in this case, if one does seek to resolve the tension, the choice of expected value maximization over other alternatives is canonical, so rationality seems to take one further toward expected value maximization than to consequentialism.

This part seems a bit mixed up to me. This is partly because Yvain's Consequentialism FAQ is itself a bit mixed up, often conflating consequentialism with utilitarianism. "Others have nonzero value" really has nothing to do with consequentialism; one can be a consequentialist and purely selfish, or a non-consequentialist and altruistic. "Morality lives in the world" is a pretty good argument for consequentialism all by itself; "others have nonzero value" is just about what type of consequences you should favor.

What's really mixed up here, though, is the end. When one talks about expected value maximization, one is always talking about the expected value over consequences; if you accept expected value maximization (for moral matters, anyway), you're already a consequentialist. Basically, what you've written is kind of backwards. If, on the other hand, we assume that by "consequentialism" you really meant "utilitarianism" (which, for those who have forgotten, does not mean maximizing expected utility in the sense discussed here, but rather something else entirely[0]), then it would make sense: rationality takes you further toward maximizing expected value (consequentialism) than toward utilitarianism.

[0]Though it still is a flavor of consequentialism.
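
As a concrete illustration of "expected value (in the technical sense)" from the quoted bullet, here is a minimal sketch using the two gambles from Circular Altruism (save 400 lives with certainty, versus a 90% chance of saving 500); the function name and layout are mine.

```python
def expected_value(outcomes):
    """Expected value of a gamble given as (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# The two options from Circular Altruism, as (probability, lives saved) pairs.
certain_option = [(1.0, 400)]            # save 400 lives with certainty
risky_option = [(0.9, 500), (0.1, 0)]    # 90% chance of saving 500, 10% of saving none

print(expected_value(certain_option))  # 400.0
print(expected_value(risky_option))    # 450.0 -- higher expected value
```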
