But Somebody Would Have Noticed

36 Alicorn 04 May 2010 06:56PM

When you hear a hypothesis that is completely new to you, and seems important enough that you want to dismiss it with "but somebody would have noticed!", beware this temptation.  If you're hearing it, somebody noticed.

Disclaimer: I do not believe in anything I would expect anyone here to call a "conspiracy theory" or similar.  I am not trying to "soften you up" for a future surprise with this post.

1. Wednesday

Suppose: Wednesday gets to be about eighteen, and goes on a trip to visit her Auntie Alicorn, who has hitherto refrained from bringing up religion around her out of respect for her parents1.  During the visit, Sunday rolls around, and Wednesday observes that Alicorn is (a) wearing pants, not a skirt or a dress - unsuitable church attire! and (b) does not appear to be making any move to go to church at all, while (c) not being sick or otherwise having a very good excuse to skip church.  Wednesday inquires as to why this is so, fearing she'll find that beloved Auntie has been excommunicated or something (gasp!  horror!).

Auntie Alicorn says, "Well, I never told you this because your parents asked me not to when you were a child, but I suppose now it's time you knew.  I'm an atheist, and I don't believe God exists, so I don't generally go to church."

And Wednesday says, "Don't be silly.  If God didn't exist, don't you think somebody would have noticed?"

The Craigslist Revolution: a real-world application of torture vs. dust specks OR How I learned to stop worrying and create one billion dollars out of nothing

47 Kevin 10 February 2010 03:15AM

...this is the first crazy idea I've ever heard for generating a billion dollars out of nothing that could actually work. I mean, ever.  -Eliezer Yudkowsky

We can reasonably debate torture vs. dust specks when it is one person being tortured versus 3^^^3 people being subjected to motes of dust.
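For readers who haven't seen the notation: 3^^^3 uses Knuth's up-arrow notation, where one arrow is exponentiation and each extra arrow iterates the operator below it. A minimal recursive sketch (only feasible for tiny inputs; 3^^^3 itself is vastly too large to compute):

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ^(n) b: n=1 is plain exponentiation,
    and each additional arrow iterates the operator below it."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# Small cases only: 3^3 = 27, 2^^3 = 2**(2**2) = 16, 2^^^2 = 4.
```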

However, there should be little debate when we are comparing the torture of one person to the minimal suffering of mere millions of people. I propose a way to generate approximately one billion dollars for charity over five years: The Craigslist Revolution.

In 2006, Craigslist's CEO Jim Buckmaster said that if enough users told them to "raise revenue and plow it into charity" that they would consider doing it. I have more recently emailed Craig Newmark and he indicated that they remain receptive to the idea if that's what the users want.

A simple text advertising banner at the top of the Craigslist home or listing pages would generate enormous amounts of revenue. They could put a large "X" next to the ad, allowing you to permanently close it. There seems to be little objection to this idea. The optional banner is harmless, and a billion dollars could be enough to dramatically improve the lives of millions, or to make a serious impact on the causes we take seriously around here. As a moral calculus, the decision seems a no-brainer. It's possible that some or many dollars would go to bad charities, but the marginal impact of supporting some truly good charities makes the whole thing worthwhile.

I don't have access to Craigslist's detailed traffic data, but I think one billion USD over five years is a reasonable estimate for a single optional banner ad. At roughly 20 billion pageviews a month, a Google AdWords banner would bring in about 200 million dollars a year, or a billion dollars over five years. With employees selling the advertising directly rather than going through Google, that number could well be multiplied. An extremely conservative lower bound for the additional revenue that could be trivially generated over five years would be 100 million dollars.
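The estimate above reduces to back-of-envelope arithmetic. A minimal sketch of the numbers (the effective CPM is implied by the post's figures, not stated in it):

```python
pageviews_per_month = 20e9                     # the post's traffic figure
annual_pageviews = pageviews_per_month * 12    # 240 billion views per year

annual_revenue = 200e6                         # the post's $200M/year estimate
# Effective CPM (revenue per 1000 pageviews) implied by those two numbers:
implied_cpm = annual_revenue / (annual_pageviews / 1000)   # about $0.83

five_year_total = 5 * annual_revenue           # $1 billion over five years
```

An implied CPM under a dollar is on the low end for banner advertising, which is why the post treats the billion-dollar figure as conservative.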

Shut Up and Divide?

60 Wei_Dai 09 February 2010 08:09PM

During a recent discussion with komponisto about why my fellow LWers are so interested in the Amanda Knox case, his answers made me realize that I had been asking the wrong question. After all, feeling interest or even outrage after seeing a possible case of injustice seems quite natural, so perhaps a better question to ask is why am I so uninterested in the case.

Reflecting upon that, it appears that I've been doing something like Eliezer's "Shut Up and Multiply", except in reverse. Both of us noticed the obvious craziness of scope insensitivity and tried to make our emotions work more rationally. But whereas he decided to multiply his concern for individual human beings by the population size, arriving at an enormous concern for humanity as a whole, I did the opposite. I noticed that my concern for humanity is limited, and therefore decided that it's crazy to care much about random individuals that I happen to come across. (Although I probably hadn't consciously thought about it in this way until now.)

The weird thing is that both of these emotional self-modification strategies seem to have worked, at least to a great extent. Eliezer has devoted his life to improving the lot of humanity, and I've managed to pass up news and discussions about Amanda Knox without a second thought. It can't be the case that both of these ways to change how our emotions work are the right thing to do, but the apparent symmetry between them seems hard to break.

What ethical principles can we use to decide between "Shut Up and Multiply" and "Shut Up and Divide"? Why should we derive our values from our native emotional responses to seeing individual suffering, and not from the equally human paucity of response at seeing large portions of humanity suffer in aggregate? Or should we just keep our scope insensitivity, like our boredom?

And an interesting meta-question arises here as well: how much of what we take to be our values is actually the result of not thinking things through, of not realizing the implications and symmetries that exist? And if many of our values are just the result of cognitive errors or limitations, have we lived with them long enough that they've become an essential part of us?

Debunking komponisto on Amanda Knox (long)

-5 rolf_nelson 02 February 2010 04:40AM

Rebuttal to: The Amanda Knox Test

If you don't care about Amanda Knox's guilt, or whether you have received unreliable information on the subject from komponisto's post, stop reading now.

[Edit: Let me note that, generally, I agree that discussion of current events should be discouraged on this site. It is only because "The Amanda Knox Test" was a featured post here that I claim this rebuttal is on-topic.]

I shall here make the following claim:

C1. komponisto's post on Amanda Knox was misleading.

I could, additionally, choose to make the following claims:

C2. Amanda Knox is guilty of murder.
C3. The prosecution succeeded in proving Amanda's guilt beyond a reasonable doubt.
C4. Amanda Knox received a fair trial.

I believe claims C2 through C4 are also true; however, time constraints prevent me from laying out the cases and debating them with every single human being on the Internet, so I shall merely focus on C1. (That said, I would be willing to debate komponisto on C2, since I am curious whether I could get him to change his mind on the subject.)

Deontology for Consequentialists

46 Alicorn 30 January 2010 05:58PM

Consequentialists see morality through consequence-colored lenses.  I attempt to prise apart the two concepts to help consequentialists understand what deontologists are talking about.

Consequentialism1 is built around a group of variations on the following basic assumption:

  • The rightness of something depends on what happens subsequently.

It's a very diverse family of theories; see the Stanford Encyclopedia of Philosophy article.  "Classic utilitarianism" could go by the longer, more descriptive name "actual direct maximizing aggregative total universal equal-consideration agent-neutral hedonic act2 consequentialism".  I could even mention less frequently contested features, like the fact that this type of consequentialism doesn't have a temporal priority feature or side constraints.  All of this is a very complicated bag of tricks for a theory whose proponents sometimes claim to like it because it's sleek and pretty and "simple".  But the bottom line is: to get a consequentialist theory, something that happens after the act you judge is the basis of your judgment.

To understand deontology as anything but a twisted, inexplicable mockery of consequentialism, you must discard this assumption.

Deontology judges an act by things that do not happen after it.  This leaves facts about times prior to the act, and the time of the act itself, to determine whether the act is right or wrong.  These may include, but are not limited to:

  • The agent's epistemic state, either actual or ideal (e.g. thinking that some act would have a certain result, or being in a position such that it would be reasonable to think that the act would have that result)
  • The reference class of the act (e.g. it being an act of murder, theft, lying, etc.)
  • Historical facts (e.g. having made a promise, sworn a vow)
  • Counterfactuals (e.g. what would happen if others performed similar acts more frequently than they actually do)
  • Features of the people affected by the act (e.g. moral rights, preferences, relationship to the agent)
  • The agent's intentions (e.g. meaning well or maliciously, or acting deliberately or accidentally)

Advice for AI makers

7 Stuart_Armstrong 14 January 2010 11:32AM

A friend of mine is about to launch himself heavily into the realm of AI programming. The details of his approach aren't important; probabilities dictate that he is unlikely to score a major success. He's asked me for advice, however, on how to design a safe(r) AI. I've been pointing him in the right directions and sending him links to useful posts on this blog and the SIAI.

Do people here have any recommendations they'd like me to pass on? Hopefully, these may form the basis of a condensed 'warning pack' for other AI makers.

Addendum: Advice along the lines of "don't do it" is vital and good, but unlikely to be followed. Coding will nearly certainly happen; is there any way of making it less genocidally risky?

Generalizing From One Example

259 Yvain 28 April 2009 10:00PM

Related to: The Psychological Unity of Humankind, Instrumental vs. Epistemic: A Bardic Perspective

"Everyone generalizes from one example. At least, I do."

   -- Vlad Taltos (Issola, Steven Brust)

My old professor, David Berman, liked to talk about what he called the "typical mind fallacy", which he illustrated through the following example:

There was a debate, in the late 1800s, about whether "imagination" was simply a turn of phrase or a real phenomenon. That is, can people actually create images in their minds which they see vividly, or do they simply say "I saw it in my mind" as a metaphor for considering what it looked like?

Upon hearing this, my response was "How the stars was this actually a real debate? Of course we have mental imagery. Anyone who doesn't think we have mental imagery is either such a fanatical Behaviorist that she doubts the evidence of her own senses, or simply insane." Unfortunately, the professor was able to parade a long list of famous people who denied mental imagery, including some leading scientists of the era. And this was all before Behaviorism even existed.

The debate was resolved by Francis Galton, a fascinating man who among other achievements invented eugenics, the "wisdom of crowds", and standard deviation. Galton gave people some very detailed surveys, and found that some people did have mental imagery and others didn't. The ones who did had simply assumed everyone did, and the ones who didn't had simply assumed everyone didn't, to the point of coming up with absurd justifications for why they were lying or misunderstanding the question. There was a wide spectrum of imaging ability, from about five percent of people with perfect eidetic imagery1 to three percent of people completely unable to form mental images2.

Dr. Berman dubbed this the Typical Mind Fallacy: the human tendency to believe that one's own mental structure can be generalized to apply to everyone else's.

The Wannabe Rational

31 MrHen 15 January 2010 08:09PM

I have a terrifying confession to make: I believe in God.

This post has three prongs:

First: This is a tad meta for a full post, but do I have a place in this community? The abstract, non-religious aspect of this question can be phrased, "If someone holds a belief that is irrational, should they be fully ousted from the community?" I can see a handful of answers to this question and a few of them are discussed below.

Second: I have nothing to say about the rationality of religious beliefs. What I do want to say is that the rationality of a particular irrational person is not completely settled once their irrationality is identified. They may be beneath the sanity waterline, but there are multiple levels of rationality hell, some deeper than others. This part discusses one way to view irrational people in a manner that encourages growth.

Third: Is it possible to make the irrational rational? Is it possible to take those close to the sanity waterline and raise them above? Or, more personally, is there hope for me? I assume there is. What is my responsibility as an aspiring rationalist? Specifically, when the community complains about a belief, how should I respond?

Call for new SIAI Visiting Fellows, on a rolling basis

29 AnnaSalamon 01 December 2009 01:42AM

Last summer, 15 Less Wrongers, under the auspices of SIAI, gathered in a big house in Santa Clara (in the SF bay area), with whiteboards, existential risk-reducing projects, and the ambition to learn and do.

Now, the new and better version has arrived.  We’re taking folks on a rolling basis to come join in our projects, learn and strategize with us, and consider long term life paths.  Working with this crowd transformed my world; it felt like I was learning to think.  I wouldn’t be surprised if it can transform yours.

Information theory and FOOM

6 PhilGoetz 14 October 2009 04:52PM

Information is power.  But how much power?  This question is vital when considering the speed and the limits of post-singularity development.  To address it, consider two other domains in which information accumulates and is translated into an ability to solve problems: evolution and science.
