
Open thread, September 15-21, 2014

6 gjm 15 September 2014 12:24PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, not in Main.

4. Open Threads should start on Monday and end on Sunday.

Proportional Giving

10 gjm 02 March 2014 09:09PM

Executive summary: The practice of giving a fixed fraction of one's income to charity is near-universal but possibly indefensible. I describe one approach that certainly doesn't defend it, speculate vaguely about a possible way of fixing it up, and invite better ideas from others.


Many of us give a certain fraction of our income to charitable causes. This sort of practice has a long history:

Deuteronomy 14:22 Thou shalt truly tithe all the increase of thy seed, that the field bringeth forth year by year.

(note that "tithe" here means "give one-tenth of") and is widely practised today:

GWWC Pledge: I recognise that I can use part of my income to do a significant amount of good in the developing world. Since I can live well enough on a smaller income, I pledge that from today until the day I retire, I shall give at least ten percent of what I earn to whichever organizations can most effectively use it to help people in developing countries. I make this pledge freely, openly, and without regret.

And of course it's roughly how typical taxation systems (which are kinda-sorta like charitable donation, if you squint) operate. But does it make sense? Is there some underlying principle from which a policy of giving away a certain fraction of one's income (not necessarily the traditional 10%, of course) follows?

The most obvious candidate for such a principle would be what we might call

Weighted Utilitarianism: Act so as to maximize a weighted sum of utility, where (e.g.) one's own utility may be weighted much higher than that of random far-away people.

But this can't produce anything remotely like a policy of proportional giving. Assuming you aren't giving away many millions per year (a fair assumption if you're thinking in terms of a fraction of your salary), the level of utility-per-unit-money achievable by your giving is basically independent of what you give, and so is the weight you attach to the utility of the beneficiaries.

So suppose that when your income, after taking out donations, is $X, your utility (all else equal) is u(X), so that your utility per marginal dollar is u'(X); and suppose you attach weight 1 to your own utility and weight w to that of the people who'd benefit from your donations; and suppose their gain in utility per marginal dollar given is t. Then when your income is S you will choose your giving g to maximize u(S-g) + wtg, and so will set g so that u'(S-g) = wt.

What this says is that a weighted utilitarian should keep a fixed absolute amount S-g of his or her income, and give all the rest away. The fixed absolute amount will depend on the weight w (hence, on exactly which people are benefited by the donations) and on the utility per dollar given t (hence, on exactly what charities are serving them and how severe their need is), but not on the person's pre-donation income S.

(Here's a quick oversimplified example. Suppose that utility is proportional to log(income), that the people your donations will help have an income equivalent to $1k/year, that you care 100x more about your utility than about theirs, and that your donations are the equivalent of direct cash transfers to those people. Then u' = 1/income, so t = 1/1000 and w = 1/100, and the condition u'(S-g) = wt becomes 1/(S-g) = 1/100,000: you should keep everything up to $100k/year and give the rest away. The generalization to other weighting factors and beneficiary incomes should be obvious.)
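To make the threshold behaviour concrete, here's a minimal numerical sketch of that example in Python (assuming the post's illustrative numbers: log utility, beneficiaries at $1k/year, a 100x self-weighting, and donations equivalent to cash transfers). A brute-force grid search over g should land on the same answer as the closed-form condition u'(S-g) = wt:

    # Numerical check of the worked example above. All numbers are the
    # post's illustrative assumptions, not recommendations.
    import numpy as np

    w = 1 / 100    # weight on beneficiaries' utility relative to your own
    t = 1 / 1000   # their utility gain per marginal dollar (log utility at $1k/year)

    def weighted_utility(g, S):
        """Own log-utility on retained income S-g, plus weighted benefit w*t*g of giving g."""
        return np.log(S - g) + w * t * g

    for S in [80_000, 150_000, 500_000]:
        gs = np.linspace(0, S - 1, 200_000)               # candidate donation levels
        g_star = gs[np.argmax(weighted_utility(gs, S))]   # best g on the grid
        print(f"income ${S:,}: give ${g_star:,.0f}, keep ${S - g_star:,.0f}")

    # Approximate output:
    #   income $80,000: give $0, keep $80,000
    #   income $150,000: give $50,000, keep $100,000
    #   income $500,000: give $400,000, keep $100,000

As expected, retained income is pinned at $100k/year once income exceeds the threshold; below it the optimum is to give nothing at all, since the condition u'(S-g) = wt can't be met with non-negative g.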

This argument seems reasonably watertight given its premises, but proportional giving is so well-established a phenomenon that we might reasonably trust our predisposition in its favour more than our arguments against. Can we salvage it somehow?

Here's one possibility. One effect of income is (supposedly) to incentivize work, and maybe (mumble near mode mumble) this effect is governed entirely by anticipated personal utility and not by any benefit conferred on others. Then the policy derived above, which above the threshold makes personal utility independent of effort, would lead to minimum effort and hence maybe less net weighted utility than could be attained with a different policy. Does this lead to anything like proportional giving, at least for some semi-plausible assumptions about the relationship between effort and income?
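For what it's worth, here's a toy version of that incentive story (all specifics here are my own illustrative assumptions, not anything derived above): effort e in [0, 1], pre-donation income rising linearly in effort, an effort cost in utils, and the keep-$100k policy from the earlier example.

    import numpy as np

    KEEP = 100_000   # threshold from the earlier example, where u'(KEEP) = wt
    c = 0.5          # hypothetical disutility per unit of effort

    def income(e):
        return 50_000 + 100_000 * e   # assumed effort-to-income relationship

    def personal_utility(e):
        """Own utility under the keep-everything-up-to-KEEP policy."""
        retained = min(income(e), KEEP)   # everything above the threshold is given away
        return np.log(retained) - c * e

    for e in [0.0, 0.25, 0.5, 0.75, 1.0]:
        print(f"effort {e:.2f}: income ${income(e):,.0f}, personal utility {personal_utility(e):.3f}")

    # Once income(e) crosses the $100k threshold (here at e = 0.5), retained income
    # is constant, so extra effort only subtracts cost: a purely self-interested
    # effort choice stops exactly at the threshold, just as the paragraph above says.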

At the moment, I don't know. I have a page full of scribbled attempts to derive something of the kind, but they didn't work out. And of course there might be some better way to get proportional giving out of plausible ethical principles. Anyone want to do better?

A few remarks about mass-downvoting

16 gjm 13 February 2014 05:06PM

To whoever has for the last several days been downvoting ~10 of my old comments per day:

It is possible that your intention is to discourage me from commenting on Less Wrong.

The actual effect is the reverse. My comments still end up positive on average, and I am therefore motivated to post more of them in order to compensate for the steady karma drain you are causing.

If you are mass-downvoting other people, the effect on some of them is probably the same.

To the LW admins, if any are reading:

Look, can we really not do anything about this behaviour? It's childish and stupid, and it makes the karma system less useful (e.g., for comment-sorting), and it gives bad actors a disproportionate influence on Less Wrong. It seems like there are lots of obvious things that would go some way towards helping, many of which have been discussed in past threads about this.

Failing that, can we at least agree that it's bad behaviour and that it would be good in principle to stop it or make it more visible and/or inconvenient?

Failing that, can we at least have an official statement from an LW administrator that mass-downvoting is not considered an undesirable behaviour here? I really hope this isn't the opinion of the LW admins, but as the topic has been discussed from time to time with never any admin response I've been thinking it increasingly likely that it is. If so, let's at least be honest about it.

To anyone else reading this:

If you should happen to notice that a sizeable fraction of my comments are at -1, this is probably why. (Though of course I may just have posted a bunch of silly things. I expect it happens from time to time.)

My apologies for cluttering up Discussion with this. (But not very many apologies; this sort of mass-downvoting seems to me to be one of the more toxic phenomena on Less Wrong, and I retain some small hope that eventually something may be done about it.)

[Link] False memories of fabricated political events

16 gjm 10 February 2013 10:25PM

Another one for the memory-is-really-unreliable file. Some researchers at UC Irvine (one of them is Elizabeth Loftus, whose name I've seen attached to other fake-memory studies) asked some 5000 subjects about their recollection of four political events. One of the political events never actually happened. About half the subjects said they remembered the fake event. Subjects were more likely to pseudo-remember events congruent with their political preferences (e.g., Bush or Obama doing something embarrassing).

Link to papers.ssrn.com (paper is freely downloadable).

The subjects were recruited from the readership of Slate, which unsurprisingly means they aren't a very representative sample of the US population (never mind the rest of the world). In particular, about 5% identified as conservative and about 60% as progressive.

Each real event was remembered by 90-98% of subjects. Self-identified conservatives remembered the real events a little less well. Self-identified progressives were much more likely to "remember" a fake event in which G. W. Bush took a vacation in Texas while Hurricane Katrina was devastating New Orleans. Self-identified conservatives were somewhat more likely to "remember" a fake event in which Barack Obama shook the hand of Mahmoud Ahmadinejad.

About half of the subjects who "remembered" fake events were unable to identify the fake event correctly when they were told that one of the events in the study was fake.

[LINK] Breaking the illusion of understanding

19 gjm 26 October 2012 11:09PM

This writeup at Ars Technica about a recently published paper in the Journal of Consumer Research may be of interest. Super-brief summary:

  • Consumers with higher scores on a cognitive reflection test are more inclined to buy products when told more about them; for consumers with lower CRT scores it's the reverse.
  • Consumers with higher CRT scores felt that they understood the products better after being told more; consumers with lower CRT scores felt that they understood them worse.
  • If subjects are asked to explain how the products work and are then asked how well they understand them and how much they'd be willing to pay, high-CRT subjects change little on either measure, but low-CRT subjects report feeling that they understand worse and that they're willing to pay less.
  • Conclusion: it looks as if when you give low-CRT subjects more information about a product, they feel they understand it less, don't like that feeling, and become less willing to pay.

If this is right (which seems plausible enough) then it presumably applies more broadly: e.g., to what tactics are most effective in political debate. Though it's hardly news in that area that making people feel stupid isn't the best way to persuade them of things.

Abstract of the paper:

People differ in their threshold for satisfactory causal understanding and therefore in the type of explanation that will engender understanding and maximize the appeal of a novel product. Explanation fiends are dissatisfied with surface understanding and desire detailed mechanistic explanations of how products work. In contrast, explanation foes derive less understanding from detailed than coarse explanations and downgrade products that are explained in detail. Consumers’ attitude toward explanation is predicted by their tendency to deliberate, as measured by the cognitive reflection test. Cognitive reflection also predicts susceptibility to the illusion of explanatory depth, the unjustified belief that one understands how things work. When explanation foes attempt to explain, it exposes the illusion, which leads to a decrease in willingness to pay. In contrast, explanation fiends are willing to pay more after generating explanations. We hypothesize that those low in cognitive reflection are explanation foes because explanatory detail shatters their illusion of understanding.

The Problem of Thinking Too Much [LINK]

5 gjm 27 April 2012 02:31PM

This was linked to twice recently, once in a Rationality Quotes thread and once in the article about mindfulness meditation, and I thought it deserved its own article.

It's a transcript of a talk by Persi Diaconis, called "The problem of thinking too much". The general theme is more or less what you'd expect from the title: often our explicit models of things are wrong enough that trying to think them through rationally gives worse results than (e.g.) just guessing. There are some nice examples in it.

General textbook comparison thread

8 gjm 26 August 2011 01:27PM

We've already had a lengthy (and still active) thread attempting to address the question "What are the best textbooks, and why are they better than their rivals?". That's excellent, but no one is going to post there unless they're prepared to claim: Textbook X is the best on its subject. But surely many of us have read many texts for which we couldn't say that but could say "I've read X and Y, and here's how they differ". A good supply of such comparisons would be extremely useful.

I propose this thread for that purpose. Rules:

  • Each top-level reply should concern two or more texts on a single subject, and provide enough information about how they compare to one another that an interested would-be reader should be able to tell which is likely to be better for his or her purposes.
  • Replies to these offering or soliciting further comparisons in the same domain are encouraged.
  • At least one book in each comparison should either
    • be a very good one, or at least
    • look like a very good one even though it isn't.

If this gets enough responses that simply looking through them becomes tiresome, I'll update the article with (something like) a list of textbooks, arranged by subject and then by author, with links for the comments in which they're compared to other books and a brief summary of what's said about them. (I might include links to comments in Luke's thread too, since anything that deserves its place there would also be acceptable here.)

See also: magfrump's request for recommendations of basic science books; "Recommended Rationalist Reading" (narrower subject focus, and without the element of comparison).

Harry Potter and the Methods of Rationality discussion thread, part 4

3 gjm 07 October 2010 09:12PM

[Update: and now there's a fifth discussion thread, which you should probably use in preference to this one. Later update: and a sixth -- in the discussion section, which is where these threads are living from now on. Also: tag for HP threads in the main section, and tag for HP threads in the discussion section.]

The third discussion thread is above 500 comments now, just like the others, so it's time for a new one. Predecessors: one, two, three. For anyone who's been on Mars and doesn't know what this is about: it's Eliezer's remarkable Harry Potter fanfic.

Spoiler warning and helpful suggestion (copied from those in the earlier threads):

Spoiler Warning:  this thread contains unrot13'd spoilers for Harry Potter and the Methods of Rationality up to the current chapter and for the original Harry Potter series.  Please continue to use rot13 for spoilers to other works of fiction, or if you have insider knowledge of future chapters of Harry Potter and the Methods of Rationality.

A suggestion: mention at the top of your comment which chapter you're commenting on, or what chapter you're up to, so that people can understand the context of your comment even after more chapters have been posted.  This can also help people avoid reading spoilers for a new chapter before they realize that there is a new chapter.
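(An aside, not part of the quoted boilerplate: if you're unfamiliar with rot13, it replaces each letter with the one 13 places along in the alphabet, so encoding and decoding are the same operation. A one-liner via the Python standard library, for illustration:)

    import codecs

    spoiler = "Harry is a wizard"              # hypothetical example text
    encoded = codecs.encode(spoiler, "rot13")  # 'Uneel vf n jvmneq'
    assert codecs.encode(encoded, "rot13") == spoiler   # rot13 is its own inverse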

The uniquely awful example of theism

36 gjm 10 April 2009 12:30AM

When an LW contributor is in need of an example of something that (1) is plainly, uncontroversially (here on LW, at least) very wrong but (2) an otherwise reasonable person might get lured into believing by dint of inadequate epistemic hygiene, there seems to be only one example that everyone reaches for: belief in God. (Of course there are different sorts of god-belief, but I don't think that makes it count as more than one example.) Eliezer is particularly fond of this trope, but he's not alone.

How odd that there should be exactly one example. How convenient that there is one at all! How strange that there isn't more than one!

In the population at large (even the smarter parts of it) god-belief is sufficiently widespread that using it as a canonical example of irrationality would run the risk of annoying enough of your audience to be counterproductive. Not here, apparently. Perhaps we-here-on-LW are just better reasoners than everyone else ... but then again, isn't it strange that there aren't a bunch of other popular beliefs that we've all seen through? In the realm of politics or economics, for instance, surely there ought to be some.

Also: it doesn't seem to me that I'm that much better a thinker than I was a few years ago when (alas) I was a theist; nor does it seem to me that everyone on LW is substantially better at thinking than I am; which makes it hard for me to believe that there's a certain level of rationality that almost everyone here has attained, and that makes theism vanishingly rare.

I offer the following uncomfortable conjecture: We all want to find (and advertise) things that our superior rationality has freed us from, or kept us free from. (Because the idea that Rationality Just Isn't That Great is disagreeable when one has invested time and/or effort and/or identity in rationality, and because we want to look impressive.) We observe our own atheism, and that everyone else here seems to be an atheist too, and not unnaturally we conclude that we've found such a thing. But in fact (I conjecture) LW is so full of atheists not only because atheism is more rational than theism (note for the avoidance of doubt: yes, I agree that atheism is more rational than theism, at least for people in our epistemic situation) but also because

continue reading »

Voting etiquette

7 gjm 05 April 2009 02:28PM

Not all that surprisingly, there's quite a lot of discussion on LW about questions like

  • just what should get voted up or down?
  • what conclusions can one reasonably draw from getting downvoted?
  • should downvotes (or even upvotes) be accompanied by explanations?
  • should the way karma and voting work be changed?

This generally happens in dribs and drabs, typically in response to more specific questions of the form

  • Waaaa, how come my supremely insightful comment above is currently sitting at -69?

and therefore tends to clutter up discussions that are meant to be about something else. So maybe it's worth seeing if we can arrive at some sort of consensus about the general issues, at which point maybe we can write that up and refer newcomers to it.

(The outcome may be that we find that there's no consensus to be had. That would be useful information too.)

I'll kick things off with a few unfocused thoughts.

continue reading »
