Introducing an EA Donation Registry, covering existential risk donors

9 tog 21 October 2014 02:10PM

The idea that being public about your giving can help inspire others is widespread, particularly in the effective altruism movement. Sharing your choice of charities can have a positive influence, particularly when that choice takes effectiveness into account. With this in mind, we’ve created an EA Donation Registry through which people can share plans to donate (of any form, and to any cause), as well as record past donations that they’ve made. We did so partly in response to requests for a cause-neutral venue for donation plans, so if you give or plan to give to organisations which work to alleviate existential risk, or which aim to improve the far future in other ways, then you may be interested in signing up.

You can already see hundreds of people’s past and planned donations on the Registry. There’s some inspiring material there, from the over $40 million that Jim Greenbaum has given over his lifetime, to the many people aiming to donate substantial portions of their income, such as Peter Singer. You can filter people’s donation plans by their cause area so as to see those planning to donate towards existential risk alleviation and other far future causes, as well as to charities working on animal welfare and global poverty.

Donations from members of the effective altruist community

If you’d like to read more about the reasons to share your giving, Peter Hurford’s post To Inspire People to Give, Be Public About Your Giving provides a good summary. As he discusses, being public shows that giving large amounts to effective charities is something people actually do, providing social proof and normalising and encouraging this, particularly among peer groups. We also hope that the EA Donation Registry can serve as a gentle prompt to action and a commitment device, although, understanding that plans change, we’ve given donors the ability to edit them at any time - it'd be both expected and understood that many will do so. This is a registry of plans, not necessarily pledges.

The registry is an open, community-owned project coordinated through .impact, so we’d love to hear of any uses that you might make of it, and you can also send us suggestions or feedback via our contact form. But most of all, we’d encourage you to share your past or planned donations on it for the reasons above. You can share plans of any form and size via a free text field, so take a moment to consider if there are any that you’d like to share - and if you’ve yet to think about where you might donate, we hope that this will provide a great opportunity to do so!

Weekly LW Meetups

3 FrankAdamek 10 October 2014 05:38PM

This summary was posted to LW Main on October 3rd. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.


Eight Short Studies On Excuses

210 Yvain 20 April 2010 11:01PM

The Clumsy Game-Player

You and a partner are playing an Iterated Prisoner's Dilemma. Both of you have publicly pre-committed to the tit-for-tat strategy. By iteration 5, you're going happily along, raking up the bonuses of cooperation, when your partner unexpectedly presses the "defect" button.

"Uh, sorry," says your partner. "My finger slipped."

"I still have to punish you just in case," you say. "I'm going to defect next turn, and we'll see how you like it."

"Well," says your partner, "knowing that, I guess I'll defect next turn too, and we'll both lose out. But hey, it was just a slipped finger. By not trusting me, you're costing us both the benefits of one turn of cooperation."

"True," you respond, "but if I don't do it, you'll feel free to defect whenever you feel like it, using the 'finger slipped' excuse."

"How about this?" proposes your partner. "I promise to take extra care that my finger won't slip again. You promise that if my finger does slip again, you will punish me terribly, defecting for a bunch of turns. That way, we trust each other again, and we can still get the benefits of cooperation next turn."

You don't believe that your partner's finger really slipped, not for an instant. But the plan still seems like a good one. You accept the deal, and you continue cooperating until the experimenter ends the game.

After the game, you wonder what went wrong, and whether you could have played better. You decide that there was no better way to deal with your partner's "finger-slip" - after all, the plan you enacted gave you maximum possible utility under the circumstances. But you wish that you'd pre-committed, at the beginning, to saying "and I will punish finger slips equally to deliberate defections, so make sure you're careful."
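The dynamic in the dialogue can be sketched in a few lines of Python (a toy model with invented names; the post itself gives no code): two tit-for-tat players, one "slipped finger", and the choice between retaliating and letting it go.

```python
def run(rounds, slip_round, a_forgives):
    """Toy iterated PD: both players intend tit-for-tat (copy the
    opponent's previous move). B's finger 'slips' into a defection on
    slip_round; A either retaliates as usual or forgives that one slip."""
    a_last, b_last = "C", "C"  # by convention, everyone starts cooperating
    history = []
    for t in range(rounds):
        a = b_last  # tit-for-tat: copy B's last move
        if a_forgives and t == slip_round + 1:
            a = "C"  # let the single slip go unpunished
        b = "D" if t == slip_round else a_last  # the slip, else tit-for-tat
        history.append((a, b))
        a_last, b_last = a, b
    return history
```

Running this shows why the situation is tricky: naive tit-for-tat turns a single accident into an endless alternating feud (each player punishing the other's last punishment), while simple forgiveness restores cooperation immediately but invites the "finger slipped" excuse. Hence the partner's proposed deal: forgive this slip, but punish the next one harshly.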


An observation on cryocrastination

9 AndrewH 22 July 2009 08:41PM

Why do people cryocrastinate? The most common explanation I’ve heard from intelligent people for not getting cryonics is that the money is better spent on some altruistic cause. By itself there is nothing wrong with this belief, but irrationality lies near.

Before I continue, I am not here to argue whether cryonics works or not. That has been done before. From this point on, I will assume that cryonics has expected utility: it gives a reasonable chance of continuing life past many currently terminal events, and life is a valuable thing.

We begin with a quick overview of the cost of cryonics. Let us break the cost analysis into two parts: acquiring the cryonics and life insurance contracts, and maintaining those contracts.


Generalizing From One Example

259 Yvain 28 April 2009 10:00PM

Related to: The Psychological Unity of Humankind, Instrumental vs. Epistemic: A Bardic Perspective

"Everyone generalizes from one example. At least, I do."

   -- Vlad Taltos (Issola, Steven Brust)

My old professor, David Berman, liked to talk about what he called the "typical mind fallacy", which he illustrated through the following example:

There was a debate, in the late 1800s, about whether "imagination" was simply a turn of phrase or a real phenomenon. That is, can people actually create images in their minds which they see vividly, or do they simply say "I saw it in my mind" as a metaphor for considering what it looked like?

Upon hearing this, my response was "How the stars was this actually a real debate? Of course we have mental imagery. Anyone who doesn't think we have mental imagery is either such a fanatical Behaviorist that she doubts the evidence of her own senses, or simply insane." Unfortunately, the professor was able to parade a long list of famous people who denied mental imagery, including some leading scientists of the era. And this was all before Behaviorism even existed.

The debate was resolved by Francis Galton, a fascinating man who among other achievements invented eugenics, the "wisdom of crowds", and standard deviation. Galton gave people some very detailed surveys, and found that some people did have mental imagery and others didn't. The ones who did had simply assumed everyone did, and the ones who didn't had simply assumed everyone didn't, to the point of coming up with absurd justifications for why they were lying or misunderstanding the question. There was a wide spectrum of imaging ability, from about five percent of people with perfect eidetic imagery [1] to three percent of people completely unable to form mental images [2].

Dr. Berman dubbed this the Typical Mind Fallacy: the human tendency to believe that one's own mental structure can be generalized to apply to everyone else's.


Sayeth the Girl

47 Alicorn 19 July 2009 10:24PM

Disclaimer: If you are prone to dismissing women's complaints of gender-related problems as the women being whiny, emotionally unstable girls who see sexism where there is none, this post is unlikely to interest you.

For your convenience, links to followup posts: Roko says; orthonormal says; Eliezer says; Yvain says; Wei_Dai says

As far as I can tell, I am the most active female poster on Less Wrong.  (AnnaSalamon has higher karma than I, but she hasn't commented on anything for two months now.)  There are not many of us.  This is usually immaterial.  Heck, sometimes people don't even notice in spite of my girly username, my self-introduction, and the fact that I'm now apparently the feminism police of Less Wrong.

My life is not about being a girl.  In fact, I'm less preoccupied with feminism and women's special interest issues than most of the women I know, and some of the men.  It's not my pet topic.  I do not focus on feminist philosophy in school.  I took an "Early Modern Women Philosophers" course because I needed the history credit, had room for a suitable class in a semester when one was offered, and heard the teacher was nice, and I was pretty bored.  I wound up doing my midterm paper on Malebranche in that class because we'd covered him to give context to Mary Astell, and he was more interesting than she was.  I didn't vote for Hillary Clinton in the primary.  Given the choice, I have lots of things I'd rather be doing than ferreting out hidden or less-than-hidden sexism on one of my favorite websites.

Unfortunately, nobody else seems to want to do it either, and I'm not content to leave it undone.  I suppose I could abandon the site and leave it even more masculine so the guys could all talk in their own language, unimpeded by stupid chicks being stupidly offended by completely unproblematic things like objectification and just plain jerkitude.  I would almost certainly have vacated the site already if feminism were my pet issue, or if I were more easily offended.  (In general, I'm very hard to offend.  The fact that people here have succeeded in doing so anyway without even, apparently, going out of their way to do it should be a great big red flag that something's up.)  If you're wondering why half of the potential audience of the site seems to be conspicuously not here, this may have something to do with it.


The Blue-Minimizing Robot

162 Yvain 04 July 2011 10:26PM

Imagine a robot with a turret-mounted camera and laser. Each moment, it is programmed to move forward a certain distance and perform a sweep with its camera. As it sweeps, the robot continuously analyzes the average RGB value of the pixels in the camera image; if the blue component passes a certain threshold, the robot stops, fires its laser at the part of the world corresponding to the blue area in the camera image, and then continues on its way.
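The entire program just described fits in a few lines. Here is an illustrative sketch (the threshold and the callback names are invented, not from the post); note that nothing in it represents a goal:

```python
BLUE_THRESHOLD = 200  # assumed value; the post gives no number

def robot_step(camera_image, fire_laser, move_forward):
    """One step of the robot's whole program: move, sweep, and fire if
    the average blue channel crosses the threshold. `camera_image` is a
    list of (r, g, b) pixels; the two callbacks are invented actuator
    hooks standing in for hardware."""
    move_forward()
    avg_blue = sum(b for _, _, b in camera_image) / len(camera_image)
    if avg_blue > BLUE_THRESHOLD:
        fire_laser()  # fires at whatever registered blue: dye, uniform, or hologram
```

There is no term anywhere in this loop for "amount of blue in the world", which is why, later in the post, holograms and inverted lenses subvert it so easily.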

Watching the robot's behavior, we would conclude that this is a robot that destroys blue objects. Maybe it is a surgical robot that destroys cancer cells marked by a blue dye; maybe it was built by the Department of Homeland Security to fight a group of terrorists who wear blue uniforms. Whatever. The point is that we would analyze this robot in terms of its goals, and in those terms we would be tempted to call this robot a blue-minimizer: a machine that exists solely to reduce the amount of blue objects in the world.

Suppose the robot had human level intelligence in some side module, but no access to its own source code; that it could learn about itself only through observing its own actions. The robot might come to the same conclusions we did: that it is a blue-minimizer, set upon a holy quest to rid the world of the scourge of blue objects.

But now stick the robot in a room with a hologram projector. The hologram projector (which is itself gray) projects a hologram of a blue object five meters in front of it. The robot's camera detects the projector, but its RGB value is harmless and the robot does not fire. Then the robot's camera detects the blue hologram and zaps it. We arrange for the robot to enter this room several times, and each time it ignores the projector and zaps the hologram, without effect.

Here the robot is failing at its goal of being a blue-minimizer. The right way to reduce the amount of blue in the universe is to destroy the projector; instead its beams flit harmlessly through the hologram.

Again, give the robot human level intelligence. Teach it exactly what a hologram projector is and how it works. Now what happens? Exactly the same thing - the robot executes its code, which says to scan the room until its camera registers blue, then shoot its laser.

In fact, there are many ways to subvert this robot. What if we put a lens over its camera which inverts the image, so that white appears as black, red as green, blue as yellow, and so on? The robot will not shoot us with its laser to prevent such a violation (unless we happen to be wearing blue clothes when we approach) - its entire program was detailed in the first paragraph, and there's nothing about resisting lens alterations. Nor will the robot correct itself and shoot only at objects that appear yellow - its entire program was detailed in the first paragraph, and there's nothing about correcting its program for new lenses. The robot will continue to zap objects that register a blue RGB value; but now it'll be shooting at anything that is yellow.

The human-level intelligence version of the robot will notice its vision has been inverted. It will know it is shooting yellow objects. It will know it is failing at its original goal of blue-minimization. And maybe if it had previously decided it was on a holy quest to rid the world of blue, it will be deeply horrified and ashamed of its actions. It will wonder why it has suddenly started to deviate from this quest, and why it just can't work up the will to destroy blue objects anymore.

The robot goes to Quirinus Quirrell, who explains that robots don't really care about minimizing the color blue. They only care about status and power, and pretend to care about minimizing blue in order to impress potential allies.

The robot goes to Robin Hanson, who explains that there are really multiple agents within the robot. One of them wants to minimize the color blue, the other wants to minimize the color yellow. Maybe the two of them can make peace, and agree to minimize yellow one day and blue the next?

The robot goes to Anna Salamon, who explains that robots are not automatically strategic, and that if it wants to achieve its goal it will have to learn special techniques to keep focus on it.

I think all of these explanations hold part of the puzzle, but that the most fundamental explanation is that the mistake began as soon as we started calling it a "blue-minimizing robot". This is not because its utility function doesn't exactly correspond to blue-minimization: even if we try to assign it a ponderous function like "minimize the color represented as blue within your current visual system, except in the case of holograms" it will be a case of overfitting a curve. The robot is not maximizing or minimizing anything. It does exactly what it says in its program: find something that appears blue and shoot it with a laser. If its human handlers (or itself) want to interpret that as goal directed behavior, well, that's their problem.

It may be that the robot was created to achieve a specific goal. It may be that the Department of Homeland Security programmed it to attack blue-uniformed terrorists who had no access to hologram projectors or inversion lenses. But to assign the goal of "blue minimization" to the robot is a confusion of levels: this was a goal of the Department of Homeland Security, which became a lost purpose as soon as it was represented in the form of code.

The robot is a behavior-executor, not a utility-maximizer.

In the rest of this sequence, I want to expand upon this idea. I'll start by discussing some of the foundations of behaviorism, one of the earliest theories to treat people as behavior-executors. I'll go into some of the implications for the "easy problem" of consciousness and philosophy of mind. I'll very briefly discuss the philosophical debate around eliminativism and a few eliminativist schools. Then I'll go into why we feel like we have goals and preferences and what to do about them.

The Best Textbooks on Every Subject

167 lukeprog 16 January 2011 08:30AM

For years, my self-education was stupid and wasteful. I learned by consuming blog posts, Wikipedia articles, classic texts, podcast episodes, popular books, video lectures, peer-reviewed papers, Teaching Company courses, and Cliff's Notes. How inefficient!

I've since discovered that textbooks are usually the quickest and best way to learn new material. That's what they are designed to be, after all. Less Wrong has often recommended the "read textbooks!" method. Make progress by accumulation, not random walks.

But textbooks vary widely in quality. I was forced to read some awful textbooks in college. The ones on American history and sociology were memorably bad, in my case. Other textbooks are exciting, accurate, fair, well-paced, and immediately useful.

What if we could compile a list of the best textbooks on every subject? That would be extremely useful.

Let's do it.

There have been other pages of recommended reading on Less Wrong before (and elsewhere), but this post is unique. Here are the rules:

  1. Post the title of your favorite textbook on a given subject.
  2. You must have read at least two other textbooks on that same subject.
  3. You must briefly name the other books you've read on the subject and explain why you think your chosen textbook is superior to them.

Rules #2 and #3 are to protect against recommending a bad book that only seems impressive because it's the only book you've read on the subject. Once, a popular author on Less Wrong recommended Bertrand Russell's A History of Western Philosophy to me, but when I noted that it was more polemical and inaccurate than the other major histories of philosophy, he admitted he hadn't really done much other reading in the field, and only liked the book because it was exciting.

I'll start the list with three of my own recommendations...


The Costs of Rationality

32 RobinHanson 03 March 2009 06:13PM

The word "rational" is overloaded with associations, so let me be clear: to me [here], more "rational" means better believing what is true, given one's limited info and analysis resources. 

Rationality certainly can have instrumental advantages.  There are plenty of situations where being more rational helps one achieve a wide range of goals.  In those situations, "winners", i.e., those who better achieve their goals, should tend to be more rational.  In such cases, we might even estimate someone's rationality by looking at his or her "residual" belief-mediated success, i.e., after explaining that success via other observable factors.

But note: we humans were designed in many ways not to be rational, because believing the truth often got in the way of achieving goals evolution had for us.  So it is important for everyone who intends to seek truth to clearly understand: rationality has costs, not only in time and effort to achieve it, but also in conflicts with other common goals.

Yes, rationality might help you win that game or argument, get promoted, or win her heart.  Or more rationality for you might hinder those outcomes.  If what you really want is love, respect, beauty, inspiration, meaning, satisfaction, or success, as commonly understood, we just cannot assure you that rationality is your best approach toward those ends.  In fact we often know it is not.

The truth may well be messy, ugly, or dispiriting; knowing it may make you less popular, loved, or successful.  These are actually pretty likely outcomes in many identifiable situations.  You may think you want to know the truth no matter what, but how sure can you really be of that?  Maybe you just like the heroic image of someone who wants the truth no matter what; or maybe you only really want to know the truth if it is the bright shining glory you hope for.

Be warned; the truth just is what it is.  If just knowing the truth is not reward enough, perhaps you'd be better off not knowing.  Before you join us in this quixotic quest, ask yourself: do you really want to be generally rational, on all topics?  Or might you be better off limiting your rationality to the usual practical topics where rationality is respected and welcomed?

Not for the Sake of Happiness (Alone)

48 Eliezer_Yudkowsky 22 November 2007 03:19AM

Followup to: Terminal Values and Instrumental Values

When I met the futurist Greg Stock some years ago, he argued that the joy of scientific discovery would soon be replaced by pills that could simulate the joy of scientific discovery.  I approached him after his talk and said, "I agree that such pills are probably possible, but I wouldn't voluntarily take them."

And Stock said, "But they'll be so much better that the real thing won't be able to compete.  It will just be way more fun for you to take the pills than to do all the actual scientific work."

And I said, "I agree that's possible, so I'll make sure never to take them."

Stock seemed genuinely surprised by my attitude, which genuinely surprised me.
