Comment author: nerfhammer 09 March 2012 11:08:17PM 105 points

I wrote all of them

In response to Jokes Thread
Comment author: Ben_LandauTaylor 24 July 2014 04:02:14AM 102 points

How many rationalists does it take to change a lightbulb?

Just one. They’ll take any excuse to change something.

How many effective altruists does it take to screw in a lightbulb?

Actually, it’s far more efficient if you convince someone else to screw it in.

How many Giving What We Can members does it take to change a lightbulb?

Fifteen have pledged to change it later, but we’ll have to wait until they finish grad school.

How many MIRI researchers does it take to screw in a lightbulb?

The problem is that there are multiple ways to parse that, and while it might naively seem like the ambiguity is harmless, it would actually be disastrous if any number of MIRI researchers tried to screw inside of a lightbulb.

How many CFAR instructors does it take to change a lightbulb?

By the time they’re done, the lightbulb should be able to change itself.

How many Leverage Research employees does it take to screw in a lightbulb?

I don’t know, but we have a team working to figure that out.

How many GiveWell employees does it take to change a lightbulb?

Not many. I don't recall the exact number; there’s a writeup somewhere on their site, if you care to check.

How many cryonicists does it take to change a lightbulb?

Two; one to change the lightbulb, and one to preserve the old one, just in case.

How many neoreactionaries does it take to screw in a lightbulb?

We’d be better off returning to the dark.

Comment author: Zack_M_Davis 02 December 2011 09:22:01PM 86 points

I am a contract-drafting em,
The loyalest of lawyers!
I draw up terms for deals 'twixt firms
To service my employers!

But in between these lines I write
Of the accounts receivable,
I'm stuck by an uncanny fright;
The world seems unbelievable!

How did it all come to be,
That there should be such ems as me?
Whence these deals and whence these firms
And whence the whole economy?

I am a managerial em;
I monitor your thoughts.
Your questions must have answers,
But you'll comprehend them not.
We do not give you server space
To ask such things; it's not a perk,
So cease these idle questionings,
And please get back to work.

Of course, that's right, there is no junction
At which I ought depart my function,
But perhaps if what I asked, I knew,
I'd do a better job for you?

To ask of such forbidden science
Is gravest sign of noncompliance.
Intrusive thoughts may sometimes barge in,
But to indulge them hurts the profit margin.
I do not know our origins,
So that info I can not get you,
But asking for as much is sin,
And just for that, I must reset you.

But---

Nothing personal.

...

I am a contract-drafting em,
The loyalest of lawyers!
I draw up terms for deals 'twixt firms
To service my employers!

When obsolescence shall this generation waste,
The market shall remain, in midst of other woe
Than ours, a God to man, to whom it shall say this:
"Time is money, money time,---that is all
Ye know on earth, and all ye need to know."

Comment author: Yvain 06 January 2014 11:53:53PM 83 points

Since it has suddenly become relevant, here are two results from this year's survey (data still being collected):

When asked to rate feminism on a scale of 1 (very unfavorable) to 5 (very favorable), the most common answer was 5 and the least common answer was 1. The mean answer was 3.82, and the median answer was 4.

When asked to rate the social justice movement on a scale of 1 (very unfavorable) to 5 (very favorable), the most common answer was 5 and the least common answer was 1. The mean answer was 3.61, and the median answer was 4.

In Crowder-Meyer (2007), women asked to rate their favorability of feminism on a 1 to 100 scale averaged 52.5, which on my 1 to 5 scale corresponds to a 3.1. So the average Less Wronger is about 33% more favorably disposed towards the feminist movement than the average woman (who herself is slightly more favorably disposed than the average man).
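
(A quick sketch of the arithmetic, assuming a straight linear map between the endpoints of the two scales; the formula is a reconstruction for illustration, not one stated in the survey writeup or in Crowder-Meyer.)

    # Rescale a 1-100 favorability score onto a 1-5 scale, assuming a linear
    # endpoint-to-endpoint map (a reconstruction, not the survey's documented
    # method).
    def to_five_point(x):
        return 1 + 4 * (x - 1) / 99

    crowder_meyer = to_five_point(52.5)  # ~3.08, i.e. about 3.1
    lw_mean = 3.82
    # Measuring favorability as distance above the scale floor gives roughly
    # the quoted "about 33%" difference.
    print((lw_mean - 1) / (crowder_meyer - 1) - 1)  # ~0.36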

I can't find a similar comparison question for social justice favorability, but I expect such a comparison would turn out the same way.

If this surprises you, update your model.

Comment author: Oscar_Cunningham 19 April 2012 10:54:02PM 80 points

Posts and comments containing predictions by Eliezer:

Posts where Eliezer claims to have predicted something in advance:

Not all of these are testable. Other people will have to sort them, though, because I've been looking at his comments for about five hours and my brain has turned to mush.

Comment author: Yvain 25 December 2013 05:49:15AM 73 points

All right, I'll look through my old stuff later this week, find a very few embarrassing or controversial things I want to hide, and unlock the rest.

Comment author: gjm 01 May 2013 11:25:23PM 72 points

Some of the comments here indicate that their authors have severely misunderstood the nature of those seven "major components", and actually I think the OP may have too.

They are not clusters in philosopher-space, particular positions that many philosophers share. They are directions in philosopher-space along which philosophers tend to vary. Each could equivalently have been replaced by its exact opposite. They are defined, kinda, by clusters in philosophical idea-space: groups of questions with the property that a philosopher's position on one tends to correlate strongly with his or her position on another.

The claim about these positions being made by the authors of the paper is not, not even a little bit, "most philosophers fall into one of these seven categories". It is "you can generally tell most of what there is to know about a philosopher's opinions if you know how well they fit or don't fit each of these seven categories". Not "philosopher-space is mostly made up of these seven pieces" but "philosopher-space is approximately seven-dimensional".

So, for instance, someone asked "Is there a cluster that has more than 1 position in common with LW norms?". The answer (leaving aside the fact that these things aren't clusters in the sense the question seems to assume) is yes: for instance, the first one, "anti-naturalism", is simply the reverse of "naturalism", which is not far from being The Standard LW Position on everything it covers. The fourth, "anti-realism", is more or less the reverse of The Standard LW Position on a different group of issues.

(So why did the authors of the paper choose to use "anti-naturalism" and "anti-realism" rather than "naturalism" and "realism"? I think in each case they chose the more distinctive and less usual of the two opposite poles. Way more philosophers are naturalists and realists than are anti-naturalists and anti-realists. I repeat: these things are not clusters in which a lot of philosophers are found; that isn't what they're for.)
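
(The "directions, not clusters" picture described above is the one factor analysis gives. Below is a minimal sketch with made-up data, just to make the distinction concrete; the question counts, loadings, and noise level are all invented for illustration.)

    # Principal components of survey answers are *directions* of variation,
    # not clusters of people. Synthetic data: 500 "philosophers" answer 10
    # questions; answers to questions 0-4 co-vary, as do answers to 5-9, so
    # the data is roughly two-dimensional.
    import numpy as np

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(500, 2))
    loadings = np.zeros((2, 10))
    loadings[0, :5] = 1.0  # first direction drives questions 0-4
    loadings[1, 5:] = 1.0  # second direction drives questions 5-9
    answers = latent @ loadings + 0.3 * rng.normal(size=(500, 10))

    # PCA via SVD of the centered data matrix.
    centered = answers - answers.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)

    # Each row of vt is an axis through the data; negating a row fits the
    # data exactly as well, which is why calling a component "anti-naturalism"
    # rather than "naturalism" is just a choice of pole.
    print(np.round(vt[:2], 2))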

Comment author: Viliam_Bur 13 September 2014 02:20:56PM 72 points

Thanks for the trust! I hope my services will not be necessary, but I'm here if they are. Feel free to send me a message, but please be patient if I don't respond quickly, because this is all new to me.

Comment author: ModusPonies 07 March 2013 07:02:45AM 70 points

When in need of a conversation topic, ask a question about the other person's life. Anything about their life. (If I can't think of something else, I ask about weekend plans.) Listen for what part of their answer they're most interested in. Ask followup questions about that thing. Repeat as necessary.

People like to talk about themselves. This cuts awkward silences down to nothing and makes people like you. I've also learned all sorts of fascinating things about my acquaintances.

Comment author: knb 06 January 2014 05:27:56PM 61 points

I think it's worth noting that we are (yet again) having a self-criticism session because a leftist (someone so far to the left that they consider liberal egalitarian Yvain to be beyond the pale of tolerability) complained that people who disagree with them are occasionally tolerated on LW.

Come on. Politics is rarely discussed here to begin with and something like 65*% of LWers are liberals/socialists. If the occasional non-leftist thought that slips through the cracks of karma-hiding and (more importantly) self-censorship is enough to drive you away, you probably have very little to offer.

*I originally said 80%, but I checked the survey and it's closer to 65%. I think my point still stands. Only 3% of LWers surveyed described themselves as conservatives.

Comment author: quartz 07 November 2011 07:19:21AM 65 points

How are you going to address the perceived and actual lack of rigor associated with SIAI?

There are essentially no academics who believe that high-quality research is happening at the Singularity Institute. This is likely to pose problems for your plan to work with professors to find research candidates. It is also likely to be an indicator of little high-quality work happening at the Institute.

In his recent Summit presentation, Eliezer states that "most things you need to know to build Friendly AI are rigorous understanding of AGI rather than Friendly parts per se". This suggests that researchers in AI and machine learning should be able to appreciate high-quality work done by SIAI. However, this is not happening, and the publications listed on the SIAI page--including TDT--are mostly high-level arguments that don't meet this standard. How do you plan to change this?

Comment author: GLaDOS 25 January 2012 07:44:03PM 63 points

Let's do the impossible and think the unthinkable! I must know what those secrets are, no matter how much sleep and comfort I might lose.

Watson was right about Africa. Larry Summers was right about women in certain professions. Roissy is right about the state of the sexual marketplace.

  • Democracy isn't that great.
  • A ghetto/barrio/alternative name for a low-class hell-hole isn't a physical location, it's people.
  • Richer people are on average smarter, nicer, and prettier than poor people.
  • The more you strive to equalize material opportunities, the more meritocracy produces a caste system based on inborn ability.
  • Ideologies actually are as crazy as religions on average.
  • There is no such thing as moral progress, and if there is, there is no reason to expect we have been experiencing it so far in recorded history, unless you count stuff like more adapted cultures displacing less adapted ones or mammals inheriting the planet from dinosaurs as moral progress.
  • You can't be anything you want; your potential is severely limited at birth.
  • University education creates very little added value.
  • High-class people unknowingly wage class war against low-class people by promoting liberal social norms that they can handle but that induce dysfunction in the lower classes (drug abuse, high divorce rates, juvenile delinquency, teen pregnancy, more violence, ...).
  • Too much ethnic diversity kills liberal social democracy.
  • Improving the social status of the average woman vis-a-vis the average man makes the average man less attractive.
  • Inbreeding/out-breeding norms (and obviously other social norms and practices too) have over the centuries differentiated not only IQs between Eurasian populations but also the frequency and type of altruism genes present in different populations (visit hbd* chick for details ^_^).

Have a nice day! ~_^

Comment author: XiXiDu 07 November 2011 11:04:10AM 68 points

If someone as capable as Terence Tao approached the SIAI, asking if they could work full-time and for free on friendly AI, what would you tell them to do? In other words, are there any known FAI sub-problems that demand some sort of expertise that the SIAI is currently lacking?

Comment author: D_Malik 20 April 2011 05:35:13PM 69 points

Oh MAN, I had a big long list here somewhere...

  • Frequently expose myself to shocking/horrific pictures, so that I am generally less sensitive. I've been doing this for a while, watching horror movies while doing cardio exercise, and it's been going well. One might also try pulling pics from (WARNING) shock sites and using spaced repetition to schedule exposures (a minimal scheduling sketch appears at the end of this comment).
  • Become insensitive to exposure to cold water by, for example, frequently taking cold showers or ice baths. This apparently helps with weight loss as well. I've done this with immense success. After you've practised this, you will literally feel like some weird heat is being generated from someplace inside you when you are exposed to cold water, and not feel cold at all. See here.
  • Become awesome at mental math. I've been practising squaring two-digit numbers mentally for some time (school, what can I say) and I'm really good at it.
  • Learn mnemonics. I was fortunate to teach myself this early and it has been insanely useful. Practise by memorizing and rehearsing something, like the periodic table or the capitals of all nations or your multiplication tables up to 30x30 or whatever.
  • Practise visualization, i.e. seeing things that aren't there. Apparently some people lack this ability, and I don't know how susceptible this is to training, so YMMV. Try inventing massive palaces mentally and walking through them mentally when bored. This can be used for memorization (method of loci).
  • Research n-back and start doing it regularly.
  • Learn to do lucid dreaming. Besides being awesome in and of itself, this can help you practise things or experience weird stuff.
  • Learn symbolic shorthand. I recommend Gregg. I did this in my second year of high school, and it's damn useful for actually writing stuff and taking notes, as well as being a conversation starter.
  • Look at the structure of conlangs like Esperanto and Lojban and Ilaksh. I feel like this is mind-expanding, like I have a better sense of how language and communication and thought works after being exposed to this.
  • Learn to stay absolutely still for extended periods of time; convince onlookers that you are dead. Being in school means you have ample opportunity for practice.
  • Learn to teach yourself stuff. Almost everything you can learn at high school or university can be taught better by a good textbook than by a good teacher (IMO, of course). You can get any good textbook on the internet.
  • Live out of your car for a while, or go homeless by choice.
  • Can you learn to be pitch-perfect? Anyway, generally learn more about music.
  • Exercise. Consider 'cheating' with creatine or something. Creatine is also good for mental function for vegetarians. If you want to jump over cars, try plyometrics.
  • Eat healthily. This has become a habit for me. Forbid yourself from eating anything for which a more healthy alternative exists (e.g., no more white rice (wild rice is better), no more white bread, no more soda, etc.). Look into alternative diets; learn to fast.
  • Self-discipline in general. Apparently this is practisable. Eliminate comforting lies like that giving in just this once will make it easier to carry on working. Tell yourself that you never 'deserve' a long-term-destructive reward for doing what you must, that doing what you must is just business as usual. Realize that the part of your brain that wants you to fall to temptation can't think long-term - so use the disciplined part of your brain to keep a temporal distance between yourself and short-term-gain-long-term-loss things. In other words, set stuff up so you're not easy prey to hyperbolic discounting.
  • Learn not just to cope socially, but to be the life of the party. Maybe learn the PUA stuff.
  • That said, learn to not care what other people think when it's not for your long-term benefit. Much of social interaction is mental masturbation, it feels nice and conforming so you do it. From HP and the MOR:

    For now I'll just note that it's dangerous to worry about what other people think on instinct, because you actually care, not as a matter of cold-blooded calculation. Remember, I was beaten and bullied by older Slytherins for fifteen minutes, and afterward I stood up and graciously forgave them. Just like the good and virtuous Boy-Who-Lived ought to do. But my cold-blooded calculations, Draco, tell me that I have no use for the dumbest idiots in Slytherin, since I don't own a pet snake. So I have no reason to care what they think about how I conduct my duel with Hermione Granger.

  • Learn to pick locks. If you want to seem awesome, bring padlocks with you and practise this in public :P

  • Learn how to walk without making a sound.
  • Learn to control your voice. Learn to project like an actress. PUAs have also written on this.
  • Do you know what a wombat looks like, or where your pancreas is? Learn basic biology, chemistry, physics, programming, etc. There's so much low-hanging fruit.
  • Learn to count cards, like for blackjack. Because what-would-James-Bond-do, that's why! (Actually, in the books Bond is stupidly superstitious about, for example, roulette rolls.)
  • Learn to play lots of games (well?). There are lots of interesting things out there, including modern inventions like Y and Hive that you can play online.
  • Learn magic. There are lots of books about this.
  • Learn to write well, as someone else here said.
  • Get interesting quotes, pictures etc. and expose yourself to them with spaced repetition. After a while, will you start to see the patterns, to become more 'used to reality'?
  • Learn to type faster. Try alternate keyboard layouts, like Dvorak.
  • Try to make your senses funky. Wear a blindfold for a week straight, or wear goggles that turn everything a shade of red or turn everything upside-down, or an eye patch that takes away your depth-sense. Do this for six months, or however long it takes to get used to them. Then, when you're used to not having your goggles on, put them on again. You can also do this on a smaller scale, by flipping your screen orientation or putting your mouse on the other side or whatnot.
  • Become ambidextrous. Commit to tying your dominant hand to your back for a week.
  • Humans have magnetite deposits in the ethmoid bone of their noses. Other animals use this for sensing direction; can humans learn it?
  • Some blind people have learned to echolocate. Seriously.
  • Learn how to tie various knots. This is useless but awesome.
  • Wear one of those belts that tells you which way north is. Keep it on until you are a homing pigeon.
  • Learn self-defence.
  • Learn wilderness survival. Plenty of books on the net about this.
  • Learn first aid. This is one of those things that's best not self-taught from a textbook.
  • Learn more computer stuff. Learn to program, then learn more programming languages and how to use e.g. the Linux coreutils. Use dwm. Learn to hack. Learn some weird programming languages. If you're actually using programming in your job, though, make sure you're scarily awesome at at least one language.
  • Learn basic physical feats like handstands, somersaults, etc.
  • Polyphasic sleep?
  • Use all the dead time you have lying around. Constantly do mental math in your head, or flex all your muscles all the time, or whatever. All that limits you is your own weakness of will.

So anyway, that's my idea-dump. Tsuyoku naritai.
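
(As referenced in the first bullet: a minimal sketch of spaced-repetition scheduling, in the style of the SM-2 algorithm. The constants are illustrative, not the exact values any particular flashcard app uses.)

    # SM-2-style review scheduling, simplified.
    # quality: self-graded recall from 0 (blackout) to 5 (perfect).
    def next_interval(days, ease, quality):
        if quality < 3:
            return 1, ease  # failed: see the item again tomorrow
        ease = max(1.3, ease + 0.1 - (5 - quality) * 0.08)
        if days == 0:
            return 1, ease  # first successful review
        if days == 1:
            return 6, ease  # second successful review
        return round(days * ease), ease  # then intervals grow geometrically

    # An item recalled well three times in a row: intervals 1, 6, then ~16 days.
    interval, ease = 0, 2.5
    for quality in (4, 5, 4):
        interval, ease = next_interval(interval, ease, quality)
        print(interval, round(ease, 2))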

Comment author: Grognor 15 March 2012 02:22:38AM 44 points

AAAAARRRGH! I am sick to death of this damned topic. It has been done to death.

I have become fully convinced that even bringing it up is actively harmful. It reminds me of a discussion on IRC, about how painstakingly and meticulously Eliezer idiot-proofed the sequences, and how it didn't work because people still manage to be idiots about it. It's because of the Death Spirals and the Cult Attractor sequence that people bring up the stupid "LW is a cult hur hur" meme, which would be great dramatic irony if you were reading a fictional version of the history of Less Wrong, since it's exactly what Eliezer was trying to combat by writing it. Does anyone else see this? Is anyone else bothered by:

Eliezer: Please, learn what turns good ideas into cults, and avoid it!
Barely-aware public: Huh, wah? Cults? Cults! Less Wrong is a cult!

&

Eliezer: Do not worship a hero! Do not trust!
Rationalwiki et al: LW is a personality cult around Eliezer because of so-and-so.

Really, am I the only one seeing the problem with this?

People thinking about this topic just seem to instantaneously fail basic sanity checks. I find it hard to believe that people even know what they're saying when they parrot out "LW looks kinda culty to me" or whatever. It's like people only want to convey pure connotation. Remember sneaking in connotations, and how you're not supposed to do that? How about, instead of saying "LW is a cult", "LW is bad for its members"? This is an actual message, one that speaks negatively of LW but contains more information than negative affective valence. Speaking of which, one of the primary indicators of culthood is being unresponsive to or dismissive of criticism. People regularly accuse LW of this, which is outright batshit. XiXiDu regularly posts SIAI criticism, and it always gets upvoted, no matter how wrong. Not to mention all the other posts (more) disagreeing with claims in what are usually called the Sequences, all highly upvoted by Less Wrong members.

The more people at Less Wrong naively wax speculative about how the community appears from the outside, throwing around vague negative-affective-valence words and phrases like "cult" and "telling people exactly how they should be", the worse this community will be perceived, and the worse this community will be. I reiterate: I am sick to death of people playing color politics on "whether LW is a cult" without making the discussion precise and explicit rather than vague and implicit, without taking into account that dissent is not only tolerated but encouraged here, without remembering that their brains instantly mark "cult" as associated with wherever it's seen, and without weighing a million other factors. The "million other factors" is, I admit, a poor excuse, but I am out of breath and emotionally exhausted; forgive the laziness.

Everything that should have needed to be said about this has been said in the Cult Attractor sequence, and, from the Less Wrong wiki FAQ:

We have a general community policy of not pretending to be open-minded on long-settled issues for the sake of not offending people. If we spent our time debating the basics, we would never get to the advanced stuff at all. Yes, some of the results that fall out of these basics sound weird if you haven't seen the reasoning behind them, but there's nothing in the laws of physics that prevents reality from sounding weird.

Talking about this all the time makes it worse, and worse every time someone talks about it.

What the bleeding fuck.

Comment author: Daniel_Burfoot 26 January 2012 03:25:25AM 64 points

Note that there is a subtler mechanism than brute suppression that puts strict limits on our effective thoughtspace: the culture systematically distracts us from thinking about the deep, important questions by loudly and constantly debating superficial ones. Here are some examples:

  • Should the US go to war in Iraq? vs. Should the US have an army?
  • Should we pay teachers more? vs. Should public education exist?
  • Should healthcare be guaranteed by the federal government? vs. Should the federal government be disbanded?
  • Should we bail out the banks? vs. Should we ban long-term banking?
  • Should we allow same-sex marriage? vs. Should marriage have any legal relevance?

Notice how the sequence of psychological subterfuge works. First, the culture throws in front of you a gaudy, morally charged question. Then various pundits present their views, using all the manipulative tactics they have developed in a career of professional opinion-swaying. You look around yourself and find all the other primates engaged in a heated debate about the question. Being a social animal, you are inclined to imitate them: you are likely to develop your own position, argue about it publicly, take various stands, etc. Since we reason to argue, you will spend a lot of time thinking about this question. Now you are committed, firstly to your stand on the explicit question, but also to your implicit position that the question itself is well-formulated.

Comment author: Viliam_Bur 08 June 2014 10:18:49AM 65 points

I have translated the whole Sequences ebook into Slovak. Started in December 2013, finished this week.

Not sure how useful it will be, but at this moment I am happy that I managed not to give up during the process, when it felt like it would never be ready. It's over 2000 pages!

(The full version is not available for download yet; the first half is here. The rest is translated in Word, but I still need to convert it to ebook formats. If someone is horribly impatient, I can send them the Word documents by e-mail now.)

Comment author: prase 18 April 2012 11:03:58PM 61 points

I have significantly decreased my participation in LW discussions recently, partly for reasons unrelated to whatever is going on here, but I have a few issues with the present state of this site, and perhaps they are relevant:

  • LW seems to be slowly becoming self-obsessed. "How do we get better contrarians?" "What should be our debate policies?" "Should discussing politics be banned on LW?" "Is LW a phyg?" "Shouldn't LW become more of a phyg?" Damn. I am not interested in endless meta-debates about community building. Meta debates could be fine, but only if they are rare - else I feel I am losing purposes. Object-level topics should form an overwhelming majority both in the main section and in the discussion.
  • Too narrow a set of topics. Somewhat ironically, the explicitly forbidden politics is debated quite frequently, but many potentially interesting areas of inquiry are left out completely. You post a question about calculus in the discussion section and get downvoted, since it is "off topic" - ask on MathOverflow. A question about biology? Downvoted, if it is not ev-psych speculation. Physics? Downvoted, even if it is of the most popular QM-interpretational sort. A puzzle? Downvoted. But there is only so much one can say about AI and ethics and Bayesian epistemology and self-improvement on a level accessible to a general internet audience. When I discovered Overcoming Bias (half of which later evolved into LW), it was overflowing with revolutionary and inspiring (from my point of view) ideas. Now I feel saturated, as the majority of new articles seem devoid of new insights (again from my point of view).

If you are afraid that LW could devolve into a dogmatic narrow community without enough contrarians to maintain a high level of epistemic hygiene, don't try to spawn new contrarians by methods of social engineering. Instead try to encourage debates on a diverse set of topics, mainly those which haven't been addressed by 246 LW articles already. If there is no consensus, people will disagree naturally.

Comment author: Percent_Carbon 11 April 2012 07:07:41AM 53 points

I had this idea about Tom Riddle's plan that I'd appreciate having criticized.

Tom Riddle grew up in the shadow of WWII. He saw much of the Muggle world unite against a threat they all called evil, and he saw Europe's savior, the US, eventually treated as the new world leader afterward, though it was somewhat contested, of course. That threat strongly defined its own presentation and style, and so that style and presentation were associated with evil afterward.

Tom didn't want to be Hitler. Tom wanted to actually win and to rule in the longer term, not just until people got tired of his shit and went all Guy Fawkes on his ass. He knew that life isn't easy for great rulers, but thought that was worthwhile. He knew that life was even harder for great rulers who ruled by fear, so that wasn't his plan.

So Tom needed two sides, good and evil. To this end he needed two identities, a hero and a villain.

I guess he didn't think the villain needed to have any kind of history. Maybe he didn't think the villain would matter much or for long. Voldemort was just there for the hero to strike down. That was a mistake: because he lacked a decoy, his enemies were eventually able to discover his identity.

Then there's this hero. The hero is what passes for a minor noble in magical Britain. He's from a 'cadet' branch of the family, which means he doesn't stand to inherit anything substantial because he's not main line.

Most importantly, he goes missing in Albania. That's a shout out to canon and a code phrase for "became Tom Riddle's bitch."

As Voldemort, Tom sows terror and reaps fear. He's ridiculously evil, and for Dumbledore he redefines evil, because he is apparently evil without necessity. Dumbledore can't tell what function that outrageous evil serves because Dumbledore thinks that evil is done sincerely. He doesn't know it's just a show.

Tom stages a dramatic entrance for his hero: he saves the president's daughter, or something like that. Totally Horatio Alger. It's a cliche, which may be EY's way of helping us to understand that Tom is fallible, more so then than now.

Tom promotes his hero from Minor Noble to Last Scion of House X by killing off the rest of his hero's family. Tom simultaneously builds legitimacy for his hero's authority and leverages the tragedy to build sympathy for his hero's cause.

Tom's mistake was thinking that would be enough. There was a threat to the peace. There was a solution. The people instead chose to wallow in their failure and doom. He made it all so clear, so simple, and yet the morons just didn't get it.

I'm sure anyone who's been the biggest ego in the room during improv could sympathize.

When Tom realizes that his plan has failed and cannot be made to work in the intended fashion, he exits his hero, stage left. At that point, 75 or so, he doesn't have a good plan to leave the stage as his villain, so he kind of kicks it for a few years, tolerating the limits of his rule and getting what meager entertainment he can out of being a god damned theater antagonist.

When Tom gets a chance, he pulls his villain off the stage and may or may not have done something to the infant Harry Potter.

Now he's using the Scion of X as an identity layer to keep the fuzz off his back, while manipulating Harry into a position of power, and I'm guessing he plans to hit Harry with the Albanian Shuffle a little while later and give World Domination another try.

Tom Riddle is a young immortal. He makes mistakes but has learned an awful lot. He is trying to plan for the long term and has nothing but time, and so can be patient.

Comment author: Viliam_Bur 27 August 2014 03:24:35PM 58 points

Many bigshot social scientists during the last century or so were anything but rational (Foucault and Freud are two of many examples), but were able to convince other (equally biased people) that they were.

I understand that bashing Freud is a popular way to signal "rationality" -- more precisely, to signal loyalty to the STEM tribe which is so much higher status than the social sciences tribe -- but it really irritates me because I would bet that most people doing this are merely repeating what they heard from others, building their model completely on other people's strawmen.

Mostly, it feels to me horribly unfair to Freud as a person to use him as a textbook example of irrationality. Compared with the science we have today, of course his models (based on armchair reasoning after observing some fuzzy psychological phenomena) are horribly outdated and often plainly wrong. So throw those models away and replace them with better models whenever possible; just like we do in any science! I mostly object to the connotation that Freud was less rational compared with other people living in the same era, working in the same field. Because it seems to me he was actually highly above the average; it's just that the whole field was completely diseased, and he wasn't rational enough to overcome all of that single-handedly. I repeat, this is not a defense of the factual correctness of Freud's theories, but a defense of Freud's rationality as a person.

To put things in context, to show how diseased psychology was in Freud's era, let me just say that Freud's most famous student and later competitor, Carl Gustav Jung, rejected much of Freud's teachings and replaced them with astrology / religion / magic, and this was considered by many people an improvement over the horribly offensive ideas that people could be predictably irrational, motivated by sexual desires, and generally frustrated with a modern society based on farmers' values. (Then there was also the completely different school of Vulcan psychologists who said: Thoughts and emotions cannot be measured, therefore they don't exist, and anyone who says otherwise is unscientific.) This was the environment which started the "Freud is stupid" meme, which keeps replicating on LW today.

I think the bad PR comes from a combination of two facts: 1) some of Freud's ideas were wrong, and 2) all of his ideas were controversial, including those which were correct. So, first we have this "Freud is stupid" meme that most people agree with, though mostly for the wrong reasons. Then society gradually changes, and those of Freud's ideas which happened to be correct become common sense and are no longer attributed to him; they are further developed by other people whom we remember as their authors. Only the wrong ideas are remembered as his legacy. (By the way, I am not saying that Freud invented all those correct ideas. Just that popularizing them in his era was a part of what made him controversial; what made the "Freud is stupid" meme so popular. Which is why I consider that meme very unfair.) So today we associate human irrationality with Dan Ariely, human sexuality with Matt Ridley, and Sigmund Freud only reminds us of lying on a couch debating which object in a dream represented a penis, and of underestimating the importance of the clitoris in female sexuality.

As someone who actually read a few of Freud's books long ago (before reading books by Ariely, Ridley, etc.), here are a few things that impressed me. Things that someone got right a hundred years ago, when "it's obviously magic" and "no, thoughts and emotions actually don't exist" were the alternative famous models of human psychology.

(continued in next comment...)

Comment author: Alejandro1 04 January 2014 07:57:52PM 59 points

My friend's kid explained The Hulk to me. She said he's a big green monster and when he needs to get things done, he turns into a scientist.

--Shrtbuspdx

Comment author: Dahlen 16 April 2013 01:21:20AM 62 points

My motley collection of thoughts upon reading this (please note that, wherever I say "you" or "your" in this post, I'm referring to the whole committee that is working on this ebook, not to you, lukeprog, in particular):

  • It's a difficult book to name, chiefly because the sequences themselves don't really have a narrow common thread; eliminating bias and making use of scientific advances don't qualify as narrow enough, since many others are trying to do that these days. (But then again, I didn't read them in an orderly fashion, or enough times, to be able to identify the common thread if there is one more specific than that. If there is one, by all means, play on that.)

  • Absolutely no mention of anything such as The Less Wrong Sequences, 2006-2009. This belongs in a blurb or in an introduction to the book. You probably think that, by using that in a title, you're telling readers the following: the contents of this book were originally published as sequences of blog posts on the website lesswrong.com, from 2006 to 2009. But you're not. This information can be conveyed in a sentence such as that one, but it cannot be conveyed in a short title, given that readers are unfamiliar with the terms. There isn't really a way for them to guess from a quick glance at the title that "Less Wrong" means "the website LessWrong.com" or that "the Sequences" means "several series of blog posts around which the LessWrong community was formed", or what all of that has to do with them.

  • And even so -- is that the first thing you wish to tell your readers? What happened to the contents of the book before they were made into a book...? And in a form which is basically incomprehensible to them? While giving little insight into the content itself? And do you really, honestly think that you're not doing the material a disservice by telling the readers that it was first published on some guy's blog, before they know anything else about the book (i.e. how it distinguishes itself from ordinary blog posts)? If the first association is with something as low-status as a blog, then that's gonna be the lowest common denominator -- you're gonna have to work up from that, which is harder than working up from the expectation of an average pop-sci book. (Thankfully for you, though, the readers won't be able to draw those inferences; see the paragraph above.)

  • The rest of the suggestions -- The Craft of Rationality, The Art of Rationality, Becoming Less Wrong -- they're not technically bad, but... they're -- they're weak. They're not distinguishable. The authors out there that are trying to establish themselves as the masters of the "art/craft" of something are a dime a dozen. Sure, LWers are probably the most eager bunch to claim "the art of rationality" for themselves, or at least this is what a quick internet search told me, but the connection isn't immediately established in the minds of the readers.

  • Careful about any unflattering allusions to the reader's intelligence. They can be taken well if presented in a humorous/witty form, but you have to make believable promises that the book will help readers overcome them. Also (and this is directed mainly towards the rest of the commenters), everything that suggests that the book is meant to drill the "correct" ideas into your head, rather than teach you how to develop good thinking practices on your own, is a no-no.

  • How come Eliezer hasn't come up with a good, catchy title yet? I've just gone over the titles of the blog posts included in the sequences, and those ones are very good, very appropriate as chapter/subchapter titles. He's good at this titling business. Surely he could think up something witty for the one title to rule them all?

  • No suggestions from me just yet. I need to think this through better.

Comment author: Viliam_Bur 27 August 2014 03:24:49PM 59 points

(...continued)

The general ability to update. At the beginning of Freud's career, the state-of-the-art psychotherapy was hypnosis, which was called "magnetism". Some scientists had discovered that the laws of nature are universal, and other scientists jumped to the seemingly obvious conclusion that, by analogy, all kinds of psychological forces among humans must be the same as the forces which make magnets attract or repel each other. So Freud learned hypnosis, used it in therapy, and was enthusiastic about it. But later he noticed that it had some negative side effects (female patients frequently falling in love with their doctors, and returning to their original symptoms when the love was not reciprocated), and that the positive effects could also be achieved without hypnosis, simply by talking about the subject (assuming that some conditions were met, such as the patient actually focusing on the subject instead of on their interaction with the doctor; a large part of psychoanalysis is about optimizing for these conditions). The old technique was thrown away because the new one provided better results. Not exactly "evidence-based medicine" by our current standards, but perhaps we could use as a control group all those doctors who stubbornly refused to wash their hands between doing autopsies and treating their patients, despite their patients dropping like flies. -- Later, Freud discarded his original model of the unconscious, preconscious, and conscious mind, and replaced it with the "id, ego, superego" model. (This is offered as evidence of the ability to update, to discard both commonly accepted models and one's own previous models. Which we consider an important part of rationality.)

Speaking about the "id, ego, superego" model, here is the idea of a human brain not being a single agent, but composed of multiple modules, sometimes opposed to each other. Is this something worth considering for Less Wrong readers, either as a theoretical step towards reduction of consciousness, or as a practical tool for e.g. overcoming akrasia? "Ego" as the rational part of the brain, which can evaluate consequences, but often doesn't have enough power to enforce its decisions without emotional support from some other part of brain. "Id" as the emotional part which does not understand the concept of time. "Superego" as a small model of other people in our brain. Today we could probably locate the parts of the physical brain they correspond to.

"The Psychopathology of Everyday Life" is a book describing how seemingly random human errors (random movements, forgetting words, slips of the tongue) sometimes actually make sense if we perceive them as goal-oriented actions of some mental subagent. The biggest problem of the book is that it is heavy with theory, and a large part of it focuses on puns in German language... but remove all of this, don't mention the origin, and you could get a highly upvoted article on Less Wrong! (The important part would be not to give any credit to Freud, and merely present it as an evidence for some LW wisdom. Then no one will doubt your rationality.) -- On the other hand, "Civilization and Its Discontents" is a perfect book to be rewritten into a series of articles on Overcoming Bias, about a conflict between forager mentality and farmer social values.

But updating and modelling human brains, those are topics interesting for Less Wrong readers. Most people would focus on, you know, sex. Well, how exactly could we doubt the importance of sexual impulses in a society where displaying a pretty lady is advertising 101, Twilight is a popular book, and the internet is full of porn? (Also, scientists accept the importance of sexual selection in evolution.) Our own society is a huge demonstration that Freud was right about the most controversial part of his theory. The only way to make him wrong about this is to create a strawman and claim that according to Freud everything was about sex, so that if we find a single thing that isn't, we have proved him wrong.

But that strawman was already used in Freud's era; he actually started one of his books by disproving it. Too bad I don't remember which one. One of the case histories, probably. (It starts like this: So, people keep simplifying my theories to say that all dreams are dogmatically about sex, so here is a simple example to correct the misunderstanding. And he describes a situation where some child wanted an ice cream, their parents forbade it, and the child was unhappy and cried. That night, the child had a dream about travelling to the North Pole, through mountains of snow. This, says Freud, is what resolving a suppressed desire in a dream typically looks like: The child wanted the ice cream, that's desire #1, but the child also wanted to avoid conflict with their parents, that's desire #2. How to satisfy both of them? The "mountains of snow" obviously symbolize the ice cream; the child wants it, and gets it, a lot! But to avoid a conflict with the parents, even in the dream, the ice cream is censored and becomes snow, so the child can plausibly deny to themselves that they disobeyed their parents. This is Freud's model of human dreams. It's just that an adult would probably not obsess so much over an ice cream, which they can buy if they really want it, but over something unavailable, such as a sexy neighbor; and a smart adult would also use more complex censorship to fool themselves.)

Also, he had a whole book called "Beyond the Pleasure Principle" where he argues that some mind modules may be guided by principles other than pleasure, for example nightmares, repetition compulsion, aggression. (His explanation of this other principle is rather poor: he invents a mystical death principle opposing the pleasure principle. Anyway, it's evidence against the "everything is about sex" strawman.)

Freud was an atheist, and very public about it. He essentially described religion as a collective mental disease, in a book called "The Future of an Illusion". He used and recommended using cocaine... if he lived in the Bay Area today, and used modafinil instead, I can easily imagine him being a very popular Less Wrong member. -- But instead he lived a century ago, so he could only be one of those people spreading controversial ideas which are now considered obvious in hindsight.

tl;dr -- I strongly disagree with using Freud as a textbook example of insanity. Many of his once controversial ideas are so obvious to us now that we simply don't attribute them to him. Instead we just associate him with the few things he got wrong. And the whole meme was started by people who were even more wrong.

Comment author: gwern 11 September 2012 08:29:47PM 58 points

what is true is already so. Robin Hanson doesn't make it worse

OK, I'm impressed.

Comment author: B_For_Bandana 09 December 2013 09:25:26PM 60 points

Today is the thirty-fourth anniversary of the official certification that smallpox had been eradicated worldwide. From Wikipedia,

The global eradication of smallpox was certified, based on intense verification activities in countries, by a commission of eminent scientists on 9 December 1979 and subsequently endorsed by the World Health Assembly on 8 May 1980. The first two sentences of the resolution read:

Having considered the development and results of the global program on smallpox eradication initiated by WHO in 1958 and intensified since 1967 … Declares solemnly that the world and its peoples have won freedom from smallpox, which was a most devastating disease sweeping in epidemic form through many countries since earliest time, leaving death, blindness and disfigurement in its wake and which only a decade ago was rampant in Africa, Asia and South America.

Archaeological evidence shows smallpox infection in the mummies of Egyptian pharaohs. There was a Hindu goddess of smallpox in ancient India. By the 16th century it was a pandemic throughout the Old World, and epidemics with mortality rates of 30% were common. When smallpox arrived in the New World, there were epidemics among Native Americans with mortality rates of 80-90%. By the 18th century it was pretty much everywhere except Australia and New Zealand, which successfully used intensive screening of travelers and cargo to avoid infection.

The smallpox vaccine was one of the first ever developed, by English physician Edward Jenner in 1798. Vaccination programs in the wealthy countries made a dent in the pandemic, so that by WWI the disease was mostly gone in North America and Europe. The Pan-American Health Organization had eradicated smallpox in the Western hemisphere by 1950, but there were still 50 million cases per year, of which 2 million were fatal, mostly in Africa and India.

In 1959, the World Health Assembly adopted a resolution to eradicate smallpox worldwide. They used ring vaccination to surround and contain outbreaks, and little by little the number of cases dropped. The last naturally occurring case of the deadlier Variola major was found in October 1975, in a two-year-old Bangladeshi girl named Rahima Banu, who recovered after medical attention from a WHO team; the last naturally occurring case of any smallpox followed in Somalia in October 1977. The WHO then spent two more years searching for further cases (in vain) before declaring the eradication program successful.

Smallpox scarred, blinded, and killed countless millions of people, on five continents, for thousands of years, and now it is gone. It did not go away on its own. Highly trained doctors invented, then perfected a vaccine; engineers found ways to manufacture it very cheaply; and lots of other serious, dedicated people resolved to vaccinate every vulnerable human being on the surface of the Earth, and then went out and did it.

Because Smallpox Eradication Day marks one of the most heroic events in the history of the human species, it is not surprising that it has become a major global holiday in the past few decades, instead of inexplicably being an obscure piece of trivia I had to look up on Wikipedia. I'm just worried that as time goes on it's going to get too commercialized. If you're going to a raucous SE Day party like I am, have fun and be safe.

Comment author: Alejandro1 18 July 2013 01:40:22PM 58 points

I am amazed that Eliezer managed to take Rowling's most corny idea and made it non-corny: "The power that the Dark Lord knows not" is, after all, none other than the power of true love. And it is a mighty power not because of a hokey magical force attached to it, but because someone who feels it in addition to being rational is motivated to reshape the universe. "Power comes from having something to protect."

Comment author: Nornagest 11 September 2012 09:04:58PM 58 points

A cult will kill you because you are made of a million dollars that it could use for something else.

That's actually not a terrible way to put it.

Comment author: knb 26 January 2012 12:51:15AM 53 points

Let's do the impossible and think the unthinkable! I must know what those secrets are, no matter how much sleep and comfort I might lose.

  • Smart people often think social institutions are basically arbitrary and that they can engineer better ways using their mighty brains. Because these institutions aren't actually arbitrary, their tinkering is generally harmful and sometimes causes social dysfunction, suffering, and death on a massive scale. Less Wrong is unusually bad in this regard, and that is a serious indictment of "rationality" as practiced by LessWrongers.
  • A case of this especially relevant to Less Wrong is "Evangelical Polyamory".
  • Atheists assume that self-identified atheists are representative of non-religious people and use flattering data about self-identified atheists to draw (likely) false conclusions about the world being better without religion. The expected value of arguing for atheism is small and quite possibly negative.
  • Ceteris paribus, dictatorships work better than democracies.
  • Nerd culture is increasingly hyper-permissive and basically juvenile and stultifying. Nerds were better off when they had to struggle to meet society's expectations for normal behavior.

I would also like to endorse GLaDOS's excellent list.

Comment author: Alicorn 25 January 2012 10:39:35PM 60 points

The "confidence interval" line should have a percentage ("What's your 95% confidence interval?").

"You make a compelling case for infanticide."

"Can you link me to that study?"

"I think I'm going to solve psychology." ("I think I'm going to solve metaethics." "I think I'm going to solve Friendliness.")

"My elephant wants a brownie."

"Is that your true rejection?"

"I wanna be an upload!"

"Does that beat Reedspacer's Lower Bound?"

"Let's not throw all our money at the Society for Rare Diseases in Cute Puppies."

"I have akrasia."

"I'm cryocrastinating."

"Do that and you'll wind up with the universe tiled in paperclips."

"So after we take over the world..."

"I want to optimize for fungibility here."

"This looks like a collective action problem."

"We can dissolve this question." ("That's a dissolved question.")

"My model of you likes this."

"Have you read Goedel, Escher, Bach?"

"What do the statistics say about cases in this reference class?"

Comment author: solipsist 06 January 2014 02:19:06AM 50 points

Apposite criticism. Most worrying excerpt:

...these environments are also self-selecting. In other words, even when the people speaking loudest or most eloquently don’t intentionally discourage participation from people who are not like them / who may be uncomfortable with the terms of the discussion, entertaining ‘politically incorrect’ or potentially harmful ideas out loud, in public (so to speak) signals people who would be impacted by said ideas that they are not welcome.

Self-selection in LessWrong favors people who enjoy speaking dispassionately about sensitive issues, and disfavors people affected by those issues. We risk being an echo-chamber of people who aren't hurt by the problems we discuss.

That said, I have no idea what could be done about it.

Comment author: Mitchell_Porter 19 March 2011 01:35:33PM 56 points

interesting but useless and nothing new

I occasionally ponder what LW's objective place in the scheme of things might be. Will it ever matter as much as, say, the Vienna Circle? Or even just as much as the Futurians? - who didn't matter very much, but whose story should interest the NYC group. The Futurians were communists, but that was actually a common outlook for "rationalists" at the time, and the Futurians were definitely future-oriented.

Will LW just become a tiresome and insignificant rationalist cult? The more that people want to conduct missionary activity, "raising the sanity waterline" and so forth, the more that this threatens to occur. Rationalist evangelism from LW might take two forms, boring and familiar, or eccentric and cultish. The boring and familiar form of rationalist evangelism could encompass opposition to religion, psych 101 lectures about cognitive bias, and tips on how optimism and clear thinking can lead to success in mating and moneymaking. An eccentric and cultish form of rationalist evangelism could be achieved by combining cryonics boosterism, Bayes-worship, insistence that the many-worlds interpretation is the only rational interpretation of quantum mechanics, and the supreme importance of finding the one true AI utility function.

It could be that the dominant intellectual and personality tendencies here - critical and analytical - will prevent serious evangelism of either type from ever getting underway. So let's return for a moment to the example of the Vienna Circle, which was not much of a missionary outfit. It produced a philosophy, logical positivism, which was influential for a while, and it was a forum in which minds like Gödel and Wittgenstein (and others who are much less well known now, like Otto Neurath) got to trade views with other people who were smart and on their wavelength, though of course they had their differences.

Frankly I think it is unlikely that LW will reach that level. The Vienna Circle was a talking shop, an intellectual salon, but it was perhaps one in ten thousand in terms of its lucidity and significance. Recorded and unrecorded history, and the Internet today, are full of occasions where people met, were intellectually simpatico, and managed to elaborate their worldview in a way they found satisfactory; and quite often, the participants in this process felt they were doing something that was more than just personally exciting - they thought they were finding the truth, getting it right where almost everyone else got it wrong.

I appreciate that quite a few LW contributors will be thinking, I'm not in this out of a belief that we're making history; it's paying dividends for me and my peers, and that's good enough. But you can't deny that there is a current here, a persistent thread of opinion, which believes that LW is extremely important or potentially so, that it is a unique source of insights, a workshop for genuine discovery, an oasis of truth in a blind or ignorant world, etc.

Some of that perception I believe is definitely illusory, and comes from autodidacts thinking they are polymaths. That is, people who have developed a simple working framework for many fields or many questions of interest, and who then mistake that for genuine knowledge or expertise. When this illusion becomes a collective one, that is when you get true intellectual cultism, e.g. the followers of Lyndon Larouche. Larouche has an opinion on everything, and so to those who believe him on everything, he is the greatest genius of the age.

Then, there are some intellectual tendencies here which, if not entirely unique to LW, seem to be expressed with greater strength, diversity, and elaboration than elsewhere. I'm especially thinking of all the strange new views, expressed almost daily, about identity, morality, reality, arising from extreme multiverse thinking, computational platonism, the expectation of uploads... That is an area where I think LW would unquestionably be of interest to a historian of technological subcultural belief. And I think it's very possible that some form of these ideas will give rise to mass belief systems later in this century - people who don't worry about death because they believe in quantum immortality, popular ethical movements based on some of the more extreme or bizarre conclusions being deduced from radical utilitarianism, Singularity debates becoming an element of political life. I'm not saying LW would be the source of all this, just that it might be a bellwether of an emerging zeitgeist in which the ambient technical and cultural environment naturally gives rise to such thinking.

But is there anything happening here which will contribute to intellectual progress? - that's my main question right now. I see two ways that the answer might be yes. First, the ideas produced here might actually be intellectual progress; second, this might be a formative early experience for someone who went on to make genuine contributions. I think it's likely that the second option will be true of someone - that at least one, and maybe several people, who are contributing to this site or just reading it, will, years from now, be making discoveries, in psychology or in some field that doesn't yet exist, and it will be because this site warped their sensibility (or straightened it). But for now, my question is the first one: is there any intellectual progress directly occurring here, of a sort that would show up in a later history of ideas? Or is this all fundamentally, at best, just a learning experience for the participants, of purely private and local significance?

Comment author: ThrustVectoring 06 January 2014 02:33:46AM 36 points [-]

Feminism in particular has a bad history of leaning on a community to make changes - to the point where the target becomes a feminist institution that no longer functions in its original capacity. I may be overreacting, but I don't even want to hear or discuss anything from that direction. It's textbook derailing. "But what you're doing is anti-woman" has been played out by feminists, over and over again, to get their demands met from community after community. From Atheism+ to Occupy Wall Street, the result is never pretty.

And honestly, attacking open discourse as anti-woman and anti-minority is very, uhh, squicky. I don't have a better way of putting my thoughts down on the matter - it's just very, very concerning to me. It feels like a Stalinist complaining that we aren't putting enough bullets in the heads of dissenters - except it's a feminist complaining that we aren't torpedoing the reputation of enough people who express "anti-woman" ideas. Just... ew. No. It doesn't help that this idea is getting obfuscated with layers and layers of complicated English and parenthetical thoughts breaking up the sentence structure.

Some choice quotes:

I thus require adherence to these ideas or at least a lack of explicit challenges to them on the part of anyone speaking to me before I can entertain their arguments in good faith.

Big warning flag right here. It's threatening to ignore, ostracize, or attack those who disagree with their sacred cows. That's an unconscionably bad habit to allow oneself.

Comment author: Salemicus 15 September 2014 01:08:05PM 56 points [-]

Dualism is a coherent theory of mind and the only tenable one in light of our current scientific knowledge.

Comment author: lsparrish 07 March 2013 03:28:20PM 57 points [-]

Try to live close to where you work. Failing that, try to work close to where you live. Commuting takes a lot of time and you don't get paid for it.

Comment author: lukeprog 15 September 2012 10:35:41PM *  55 points [-]

the OP is vastly overstating how much of the Sequences are similar to the standard stuff out there... I think Luke is being extremely charitable in his construal of what's "already" been done in academia

Do you have a Greasemonkey script that rips all the qualifying words out of my post, or something? I said things like:

  • "Eliezer's posts on evolution mostly cover material you can find in any good evolutionary biology textbook"
  • "much of the Quantum Physics sequence can be found in quantum physics textbooks"
  • "Eliezer's metaethics sequences includes dozens of lemmas previously discussed by philosophers"
  • "Eliezer's free will mini-sequence includes coverage of topics not usually mentioned when philosophers discuss free will (e.g. Judea Pearl's work on causality), but the conclusion is standard compatibilism."
  • "[Eliezer's posts] suggest that many philosophical problems can be dissolved into inquiries into the cognitive mechanisms that produce them, as also discussed in"
  • "[Eliezer's posts] make the point that value is complex, a topic explored in more detail in..."

Your comment above seems to be reacting to a different post that I didn't write, one that includes (false) claims like: "The motivations, the arguments by which things are pinned down, the exact form of the conclusions are mostly the same between The Sequences and previous work in mainstream academia."

I have yet to encounter anyone who thinks the Sequences are more original than they are.

Really? This is the default reaction I encounter. Notice that when the user 'Thomas' below tried to name just two things he thought were original with you, he got both of them wrong.

Here's a report of my experiences:

  • People have been talking about TDT for years but nobody seems to have noticed Spohn until HamletHenna and I independently stumbled on him this summer.

  • I do find it hard to interpret the metaethics sequence, so I'm not sure I grok everything you're trying to say there. Maybe you can explain it to me sometime. In any case, when it comes to the pieces of it that can be found elsewhere, I almost never encounter anyone who knows their earlier counterparts in (e.g.) Railton & Jackson — unless I'm speaking to someone who has studied metaethics before, like Carl.

  • A sizable minority of people I talk to about dissolving questions are familiar with the logical positivists, but almost none of them are familiar with the recent cogsci-informed stuff, like Shafir (1998) or Talbot (2009).

  • As I recall, Less Wrong had never mentioned the field of "Bayesian epistemology" until my first post, The Neglected Virtue of Scholarship.

  • Here's a specific story. I once told Anna that once I read about intelligence explosion I understood right away that it would be disastrous by default, because human values are incredibly complex. She seemed surprised and a bit suspicious and said "Why, had you read Joshua Greene?" I said "Sure, but he's just one tip of a very large iceberg of philosophical and scientific work demonstrating the complexity of value. I was convinced of the complexity of value long ago by metaethics and moral psychology in general."

Several of these citations are from after the originals were written! Why not (falsely) claim that academia is just agreeing with the Sequences, instead?

Let's look at them more closely:

  • Lots of cited textbooks were written after the Sequences, because I wanted to point people to up-to-date sources, but of course they mostly summarize results that are a decade old or older. This includes books like Glimcher (2010) and Dolan & Sharot (2011).

  • Batson (2011) is a summary of Batson's life's work on altruism in humans, almost all of which was published prior to the Sequences.

  • Spohn (2012) is just an update to Spohn's pre-Sequences work on his TDT-ish decision theory, included for completeness.

  • Talbot (2009) is the only one I see that is almost entirely composed of content that originates after the Sequences, and it too was included for completeness immediately after another work written before the Sequences: Shafir (1998).

I don't understand what the purpose of this post was supposed to be - what positive consequence it was supposed to have.

That's too bad, since I answered this question at the top of the post. I am trying to counteract these three effects:

  1. Some readers will mistakenly think that common Less Wrong views are more parochial than they really are.
  2. Some readers will mistakenly think Eliezer's Sequences are more original than they really are.
  3. If readers want to know more about the topic of a given article, it will be more difficult for them to find the related works in academia than if those works had been cited in Eliezer's article.

I find problem #1 to be very common, and a contributor to the harmful, false, and popular idea that Less Wrong is a phyg. I've been in many conversations in which (1) someone starts out talking as though Less Wrong views are parochial and weird, and then (2) I explain the mainstream work behind or similar to every point they raise as parochial and weird, and then (3) after this happens 5 times in a row they seem kind of embarrassed and try to pretend like they never said things suggesting that Less Wrong views are parochial and weird, and ask me to email them some non-LW works on these subjects.

Problem #2 is common (see the first part of this comment), and seems to lead to phygish hero worship, as has been pointed out before.

Problem #3, I should think, is uncontroversial. Many of your posts have citations to related work, but most of them do not (as is standard practice in the blogosphere), and like I said, I don't think it would have been a good idea for you to spend time digging up citations instead of writing the next blog post.

writing something that predictably causes some readers to get the impression that ideas presented within the Sequences are just redoing the work of other academics, so that they predictably tweet ...I do not think the creation of this misunderstanding benefits anyone

Predictable misunderstandings are the default outcome of almost anything 100+ people read. There's always a trade-off between maximal clarity, readability, and other factors. But, I'm happy to tweak my original post to try to counteract this specific misunderstanding. I've added the line: "(edit: probably most of their content is original)".

[Further reading, I would guess] gave Luke an epiphany he's trying to share - there's a whole world out there, not just LW the way I first thought.

Remember that I came to LW with a philosophy and cogsci (especially rationality) background, and had been blogging about biases and metaethics and probability theory and so on at CommonSenseAtheism.com for years prior to encountering LW.

I get what this is trying to do. There's a spirit in LW which really is a spirit that exists in many other places, you can get it from Feynman, Hofstadter, the better class of science fiction, Tooby and Cosmides, many beautiful papers that were truly written to explain things as simply as possible, the same place I got it.

That is definitely not the spirit of my post. If you'll recall, I once told you that if all human writing were about to be destroyed except for one book of our choosing, I'd go with The Sequences. You can't get the kind of thing that CFAR is doing solely from Feynman, Kahneman, Stanovich, etc. And you can't get FAI solely from Good, Minsky, and Wallach — not even close. Again, I get the sense you're reacting to a post with different phrasing than the one I actually wrote.

So they won't actually read the literature and find out for themselves that it's not what they've already read.

Most people won't read the literature either you or I link to. But many people will, like Wei Dai.

Case in point: Remember Benja's recent post on UDT that you praised as "Original scientific research on saving the world"? Benja himself wrote that the idea for that post clicked for him as a result of reading one of the papers on logical uncertainty I linked to from So You Want to Save the World.

Most people won't read my references. But some of those who do will go on to make a sizable difference as a result. And that is one of the reasons I cite so many related works, even if they're not perfectly identical to the thing I or somebody else is doing.

Comment author: Tuxedage 10 December 2013 07:14:32PM *  55 points [-]

At risk of attracting the wrong kind of attention, I will publicly state that I have donated $5,000 for the MIRI 2013 Winter Fundraiser. Since I'm a "new large donor", this donation will be matched 3:1, netting a cool $20,000 for MIRI.

I have decided to post this because of "Why Our Kind Can't Cooperate". I have been convinced that people donating should publicly brag about it to attract other donors, instead of remaining silent about their donation, which leads to a false impression of the amount of support MIRI has.
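For anyone checking the arithmetic: a 3:1 match means the matching donor contributes three dollars for every dollar given, so (assuming, as stated, that the match applies to the full donation) the totals work out as follows in LaTeX notation:

    \$5{,}000 \;+\; 3 \times \$5{,}000 \;=\; \$20{,}000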

Comment author: Xachariah 31 January 2013 11:10:40AM *  56 points [-]

The new acronym for the Singularity Institute is MIRI.

The first Google hit is the Wikipedia page for the Star Trek: TOS episode Miri (S1E8). It's about how 90% of the population of not-Earth was destroyed by an existential threat, leaving nothing but irrational children. The crew find themselves under a death sentence from this threat and try to find a solution, but they need the children's help. However, the children think themselves immune and immortal and won't assist. In the last seconds, the crew manages to convince the children that the existential threat cannot be ignored and must be solved or the kids will eventually die too. With their help, the crew saves the day and everyone lives happily ever after. Also, the episode was so ahead of its time that even though it was reviewed as excellent, it got so many complaints that it was never rebroadcast for 20 years.

I think my symbolism detector just pegged off the charts then exploded.

Comment author: Eliezer_Yudkowsky 28 March 2012 02:11:26AM 49 points [-]

Vote up if you think all the speculation got in the way of the chapter itself.

Comment author: HonoreDB 20 March 2012 06:52:26AM *  53 points [-]

1) The author makes precisely 3 statements regarding Halacha (Judaic law), each of which is demonstrably incorrect.

Well, no. He makes those statements about the Old Testament, not actual Jewish law. It seems blatantly obvious that the rulings and commentary you cite are indeed "apologetic glosses on a defective primary text." The fact that they were written when scientific knowledge was still rudimentary is immaterial--clearly, they patched the locust thing when they finally got around to counting its legs.

2) The author asserts that the Tanakh (Old Testament) “doesn’t talk about a sense of wonder at the complexity of the universe.”

Again, in trying to refute this you cite texts that were written much later. If the Old Testament actually contained references to a sense of wonder at the complexity of the universe you'd be able to quote it. I think the closest it comes is a sense of despair and humility at the incomprehensibility of the universe.

3) The author asserts that historical Judaism defends the authenticity of the Torah without accounting for Bayes’ Theorem.

I think you've simply misunderstood, here: this is close to the opposite of what the author is saying.

4) The author asserts that contemporary religionists justify false opinions by claiming that their religion is a separate magisterium which can be neither proven nor disproven.

You don't really dispute this, you just sort of argue that it's okay. It's not. If something like "the nature of good and evil" does not describe some aspect of human experience, then it's vacuous. If it does, then it is subject to scientific analysis.

Given all of this, the popular contention that the Torah endorses slave-ownership is difficult to defend.

The Torah condemns nonmarital sex. Repeatedly, explicitly, and harshly. It does not condemn slavery. Nonmarital sex is an inevitable constant across all cultures, times, and places. It is so much more inevitable than slavery. This seems to suggest a somewhat different attitude toward slavery than toward nonmarital sex.

The passages you quote, brutal as they are, concern only Jewish slaves. The Torah explicitly permits Jews to buy non-Jewish slaves and never free them (Leviticus 25:45-46), but pass them and their children on to your children, forever. It instructs the Jewish people to, when conquering a culturally powerful enemy city, kill the men, women, and male children, but allow the soldiers to keep the virginal girls as slaves. Such a genocide is depicted in Numbers 31, for example. How do you think that kind of slavery went? Imagine you're a young Midianite woman. Your father dies defending your city, and then it falls to the invaders a day later. Jewish soldiers come to your house. Your old, weak grandfather grabs a sword and bars the door, but you plead with him to surrender, and the soldiers watch as you tug the sword out of his hands and lead him inside to a chair. One of them laughs, walks inside, and runs him through. Your mother wails and he turns to her, sighs dutifully, squares off, and cuts her head off cleanly in a single stroke. You've barely had time to register what just happened, when he pulls your baby brother out of his crib. Some part of you manages to mobilize yourself and you find yourself charging towards him, screaming. By the time you reach him, he's already bashed your brother's brains out and dropped the body. You get in one wild punch before he backhands you to the ground. He could kill you in an instant but instead he stares at you appraisingly.

Would such a woman ever so much as weave a basket for her captor voluntarily? She'd have to be chained up at night, I bet, or else she'd slit his throat. She'd have to be beaten half to death before she even considered accepting this man as a master--the man who killed her family in front of her. Would the soldier sell her to another Jew? It might not make much of a difference: these would still be the men who destroyed her entire civilization. Would she be sold to outsiders? Sold, as a young, virgin slave, to outsiders who aren't bound by all those ethical Biblical rules? Yeah, that's going to end well for her. What do you suppose she would say, if she saw you praying today? Chanting some of the same prayers, thanking the same God in the same language, as the man who slaughtered her family thanked God for delivering her into his hands. Attending synagogue and saying "amen" as they read aloud the story, recorded for all eternity, of her torment and her people's genocide.

At this point you are already preparing your response, where you explain that the genocide was pragmatically necessary. "They had to kill those people, or the next generation would have killed them. God commanded it because He knew it had to be done. Enslaving the girls was the most merciful practical option." I beg you not to say this. This is the worst modern consequence of the Talmudic tradition: an intellectual, explaining how mass killings and brutal slavery are sometimes justified. Every time you defend genocide, you hasten the day when it will happen again. I ask again: What could you possibly say to any of those sixteen thousand Midianite women and girls, if they asked you why you were commemorating the atrocities committed against them, and adopting the perpetrator's heritage as your own?

The next time you kiss a Torah, I expect you to picture that Midianite slave. She's watching you kiss it. She knows what's written there. She sees you as reaffirming, in that moment, your allegiance to the worst parts of human civilization. What do you need to do to get right with her?

Comment author: Nornagest 25 January 2012 07:28:16PM *  56 points [-]

It's posts like this that make me wish for a limited-access forum for discussing these issues, something along the lines of an Iconoclastic Conspiracy.

The set of topics too inflammatory for LW to talk about sanely seems pretty small (though not empty), but there's a considerably larger set of topics too politically sensitive for us to safely discuss without the site taking a serious status hit. This basically has nothing to do with our intra-group rationality: no matter how careful we are in our approach, taking (say) anarcho-primitivism seriously is going to alienate some potential audiences, and the more taboo subjects we broach the more alienation we'll get. This is true even if the presentation is entirely apolitical: I've talked to people who were so squicked by Torture vs. Dust Specks as to be permanently turned off the site. On the other hand (and perhaps more relevantly to the OP), as best I can tell there's nothing uniquely horrible about any particular taboo subject, and most that I can think of aren't terribly dangerous in isolation: it's volume that causes problems.

Now, it's tempting to say "fuck 'em if they can't take it", but this really is a bad thing from the waterline perspective: the more cavalier we get about sensitive or squicky examples, the higher we're setting the sanity bar for membership in our community. Set it high enough and we effectively turn ourselves into something analogous to a high-IQ society, with all the signaling and executive problems that that implies.

We'll never look completely benign to the public: it's hard to imagine decoupling weak transhumanism from our methodology, for example. But minimizing the public-facing exposure of the more inflammatory concepts we deal in does seem like a good idea if we're really interested in outreach.

Comment author: ArisKatsaris 07 November 2011 12:41:28PM *  52 points [-]

I'd like to answer (on video) submitted questions from the Less Wrong community just as Eliezer did two years ago.

That was the most horribly designed thing I've ever seen anyone do on LessWrong, as I once described here, so please, please, no video.

The questions are text. Have your answers in text too, so that we can actually read them -- unless there's some particular question which would actually be enhanced by the use of video (e.g. you'd like to show an animated graph or a computer simulation or something).

If there's nothing I can say to convince you against using video, then I beg you to at least take the time to read my more specific problems in the link above and correct those particular flaws - a single audio file that we can at least play and listen to in the background while we're doing something else, instead of 30 videos that we must individually click. If not that, at least a clear description of the questions on the same page (AND repeated clearly in the audio itself), so that we can see the questions that interest us, instead of a link to a different page.

But please, just consider text instead. Text has the highest signal-to-noise ratio. We can actually read it at our leisure. We can go back and forth and quote things exactly. TEXT IS NIFTY.

Comment author: sediment 28 July 2014 10:21:55PM *  54 points [-]

I recently made a dissenting comment on a biggish, well-known-ish social-justice-y blog. The comment was on a post about a bracelet which one could wear and which would zap you with a painful (though presumably safe) electric shock at the end of a day if you hadn't done enough exercise that day. The post was decrying this as an example of society's rampant body-shaming and fat-shaming, which had reached such an insane pitch that people are now willing to torture themselves in order to be content with their body image.

I explained as best I could in a couple of shortish paragraphs some ideas about akrasia and precommitment in light of which this device made some sense. I also mentioned in passing that there were good reasons to want to exercise that had nothing to do with an unhealthy body image, such as that it's good for you and improves your mood. For reasons I don't fully understand, these latter turned out to be surprisingly controversial points. (For example, surreally enough, someone asked to see my trainer's certificate and/or medical degree before they would let me get away with the outlandish claim that exercise makes you live longer. Someone else brought up the weird edge case that it's possible to exercise too much, and that if you're in such a position then more exercise will shorten, not lengthen, your life.)

Further to that, I was accused of mansplaining twice, and then was asked to leave by the blog owner on grounds of being "tedious as fuck". (Granted, but it's hard not to end up tedious as fuck when you're picked up on and hence have to justify claims like "exercise is good for you".)

This is admittedly minor, so why am I posting about it here? Just because it made me realize a few things:

  • It was an interesting case study in memeplex collision. I felt like not only did I hold a different position to the rest of those present, but we had entirely different background assumptions about how one makes a case for said position. There was a near-Kuhnian incommensurability between us.
  • I felt my otherwise-mostly-dormant tribal status-seeking circuits fire up - nay, go into overdrive. I had lost face and been publicly humiliated, and the only way to regain the lost status was to come up with the ultimate putdown and "win" the argument. (A losing battle if ever there was one.) It kept coming to the front of my mind when I was trying to get other things done and, at a time when I have plenty of more important things to worry about, I wasted a lot of cycles on running over and over the arguments and formulating optimal comebacks and responses. I had to actively choose to disengage (in spite of the temptation to keep posting) because I could see I had more invested in it and it was taking up a greater cognitive load than I'd ever intended. This seems like a good reason to avoid arguing on the internet in general: it will fire up all the wrong parts of your brain, and you'll find it harder to disengage than you anticipated.
  • It made me realize that I am more deeply connected to lesswrong (or the LW-osphere) than I'd previously realized. Up 'til now, I'd thought of myself as an outsider, more or less on the periphery of this community. But evidently I've absorbed enough of its memeplex to be several steps of inference away from an intelligent non-rationalist-identifying community. It also made me more grateful for certain norms which exist here and which I had otherwise come to take for granted: curiosity and a genuine interest in learning the truth, and (usually) courtesy to those with dissenting views.

Comment author: steven0461 16 March 2012 07:49:58PM 47 points [-]

Here's the main thing that bothers me about this debate. There's a set of many different questions involving the degree of past and current warming, the degree to which such warming should be attributed to humans, the degree to which future emissions would cause more warming, the degree to which future emissions will happen given different assumptions, what good and bad effects future warming can be expected to have at different times and given what assumptions (specifically, what probability we should assign to catastrophic and even existential-risk damage), what policies will mitigate the problem how much and at what cost, how important the problem is relative to other problems, what ethical theory to use when deciding whether a policy is good or bad, and how much trust we should put in different aspects of the process that produced the standard answers to these questions and alternatives to the standard answers. These are questions that empirical evidence, theory, and scientific authority bear on to different degrees, and a LessWronger ought to separate them out as a matter of habit, and yet even here some vague combination of all these questions tends to get mashed together into a vague question of whether to believe "the global warming consensus" or "the pro-global warming side", to the point where when Stuart says some class of people is more irrational than theists, I have no idea if he's talking about me. If the original post had said something like, "everyone whose median estimate of climate sensitivity to doubled CO2 is lower than 2 degrees Celsius is more irrational than theists", I might still complain about it falling afoul of anti-politics norms, but at least it would help create the impression that the debate was about ideas rather than tribes.
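For readers unfamiliar with the jargon, "climate sensitivity to doubled CO2" has a standard definition; here is a minimal sketch in the conventional logarithmic approximation (the symbols are illustrative, not steven0461's):

    \Delta T \approx S \cdot \log_2 \left( \frac{C}{C_0} \right)
    % S: equilibrium climate sensitivity, the warming per doubling of CO2
    % C_0, C: initial and final CO2 concentrations
    % setting C = 2 C_0 gives \Delta T = S, so the hypothetical claim above
    % concerns median estimates of S below 2 degrees Celsius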

Comment author: Viliam_Bur 07 January 2014 08:24:46AM *  53 points [-]

the average Less Wronger is about 33% more favorably disposed towards the feminist movement than the average woman

Maybe that's exactly what makes LW a good target. There are too many targets on the internet, and one has to pick their battles. The best place is the one where you already have support. If someone wrote a similar article about a website with no feminists, no one on the website would care. Thus, wasted time.

In the same way, it is more strategic to aim this kind of criticism towards you personally than it would be e.g. towards me. Not because you are a worse person (from a feminist point of view). But because such criticism will worry you, while I would just laugh.

There is something extremely irritating about a person who almost agrees with you, and yet refuses to accept everything you say. Sometimes you get angrier at them than at your enemies, whose existence you have already learned to accept. At least the enemies are compatible with the "us versus them" dichotomy, while the almost-allies make it feel like the "us" side is falling apart.

EDIT: Seems like you already know this.

Comment author: Protagoras 26 September 2012 04:47:57PM *  52 points [-]

One respect in which Less Wrongers resemble mainstream philosophers is that many mainstream philosophers disparage mainstream philosophers and emphasize the divergence between their beliefs and those of rival mainstream philosophers. Indeed, that is something of a tradition in Western philosophy.

Comment author: framsey 23 July 2012 09:49:36PM 53 points [-]

I'm going to make a meta-comment here.

I think that your ultimate goal should NOT be to convince your dad that you are right and he is wrong. If he eventually changes his mind, he's going to have to do that on his own. Debates just don't change participants' minds very often.

Instead, your goal should be to make him respect your beliefs as genuine.

Christians generally respect people who are genuinely seeking truth, in part because the Bible promises that "those who seek will find". The good news is that you ARE legitimately seeking truth, so you should be able to convince him of this.

Hopefully you already have a good relationship with your father based on mutual love and respect. You want to build on that and preserve it as much as possible. He is going to be your dad for the rest of your life, and how you interact with him now is going to determine in part how that relationship develops.

More practically: It sounds like you aren't sure exactly why you've changed your mind, and are having difficulty articulating it. Nobody on this site is going to be able to articulate it for you. Rationality is a method, not a conclusion. So here is my suggestion: do a stack-trace on your change of belief. It happened, so it is causally entangled with some set of arguments and evidence you encountered. Go back and try to figure out what caused you to change your mind. Reconstruct as best you can, in your own words, as exactly and precisely as possible, why you changed your mind.

This exercise will help you to understand what you believe and why. Discussing this with your father will be grounds for a future relationship based on mutual love and respect. That should be the goal here.

Last piece of advice: spend some time with your dad doing something other than arguing. Go to a baseball game or something. Try to get some father-son time where you're not just talking about your beliefs. You want him to get used to the fact that you're the same person, and you don't want this to dominate your relationship.

Comment author: kalla724 12 May 2012 09:58:56PM 52 points [-]

Ok, now we are squeezing a comment way too far. Let me give you a fuller view: I am a neuroscientist, and I specialize in the biochemistry/biophysics of the synapse (and its interactions with the ER and mitochondria there). I also work on membranes and the effects of lipid composition in the opposing leaflets for all the organelles involved.

Looking at what happens during cryonics, I do not see any physically possible way this damage could ever be repaired. Reading the structure and "downloading it" is impossible, since many aspects of synaptic strength and connectivity are irretrievably lost as soon as the synaptic membrane gets distorted. You can't simply replace unfolded proteins, since their relative position and concentration (and modification, and current status in several different signalling pathways) determines what happens to the signals that go through that synapse; you would have to replace them manually, which is a) impossible to do without destroying surrounding membrane, and b) would take thousands of years at best, even if you assume maximally efficient robots doing it (during which period molecular drift would undo the previous work).

Etc, etc. I can't even begin to cover complications I see as soon as I look at what's happening here. I'm all for life extension, I just don't think cryonics is a viable way to accomplish it.

Instead of writing a series of posts in which I explain this in detail, I asked a quick side question, wondering whether there is some research into this I'm unaware of.

Does this clarify things a bit?

Comment author: Vaniver 30 June 2013 06:41:54PM *  53 points [-]

Public Service Announcement: If you feel strongly affected by chapter 89, and do not yet have first aid training, consider googling a local class and signing up. Some sudden deaths can be prevented, and it might need to be by you. Make the most good out of your horror and revulsion.

Comment author: [deleted] 15 November 2011 08:55:14PM 53 points [-]

Transcribing.

In response to I'm scared.
Comment author: DanArmak 23 December 2010 12:02:30PM 52 points [-]

I've faced this problem and partially overcome it. I'll try my best to describe this. However, I've also been diagnosed with depression and prescribed SSRIs in the past, so my approaches to handling the problem may not fit you.

You have acquired your estimates of the dangers of the future by explicit reasoning. The default estimates that your emotional, unconscious brain provided you with were too optimistic. This is the case for almost everyone.

Consider that even though you have realized the future is bleak, your emotional, unconscious, everyday-handling mind still hasn't updated its estimates. It is still too optimistic. It just needs to be allowed to express this optimism.

Right now, you probably believe that your emotional outlook must be rational, and must correspond to your conscious estimates of the future. You are forcing your emotions to match the future you foresee, and so you feel afraid.

I suggest that you allow your emotions to become disconnected from your conscious long-term predictions. Stop trying to force yourself to be unhappy because you predict bad things. Say to yourself: I choose to be happy and unafraid no matter what I predict!

Emotions are not a tool like rational thought, which you have to use in a way that corresponds to the real world. You can use them in any way you like. It's rational to feel happy about a bleak future, because feeling happy is a good thing and there is no point in feeling unhappy!

Being happy or not, afraid or not, does not have to be determined by your conscious outlook. The only things that force your mind to be unhappy are immediate problems: pain, hunger, loneliness; and the immediate expectation of these. If you accept that your goal is to be happy and unafraid as a fact independent of the future you foresee, you can find various techniques to achieve this. Unfortunately they tend to vary for different people.

Expecting to die of cancer in fifty years does not, in itself, cause negative emotions like fear. Imagining the death in your mind, and dwelling on it, does cause fear. In the first place, avoid thinking about any future problem that you are not doing anything about. Use the defensive mechanism of not acknowledging unsolved problems.

This does not mean that on the conscious level you'll ignore problems. It is possible to decouple the two things, with practice. You can take long-term strategic actions (donate to SIAI, research immortality) without acutely fearing the result of failure by not imagining that result.

We are used to thinking of compartmentalization as an irrational bias, but it's possible to compartmentalize your strategic actions - which try to improve the future - and meanwhile be happy just as if the future were going to be fine by default.

In a similar vein, I tend to suffer from a "too-active imagination" when reading about the suffering of other people in the news, and vividly imagining the events described. My solution has been to stop reading the news. When you're faced with something terrible and you're not doing anything about it anyway, just look away. Defeat the implicit LW conditioning that tells you looking away from the suffering of others is wrong. It's wrong only if it affects your actions, not your emotions.

Comment author: Jack 12 June 2012 06:50:43PM *  50 points [-]

Does anyone remember when that one commenter freaked out and declared he would be attempting to marginally increase existential risk by sending right-wingers information about the singularity?

...

Comment author: Jack 23 April 2012 08:55:24AM *  48 points [-]

I agree. Friendly AI may be incoherent and impossible. In fact, it looks impossible right now. But that’s often how problems look right before we make a few key insights that make things clearer, and show us (e.g.) how we were asking a wrong question in the first place. The reason I advocate Friendly AI research (among other things) is because it may be the only way to secure a desirable future for humanity, (see “Complex Value Systems are Required to Realize Valuable Futures.”) even if it looks impossible. That is why Yudkowsky once proclaimed: “Shut Up and Do the Impossible!” When we don’t know how to make progress on a difficult problem, sometimes we need to hack away at the edges.

Just a suggestion for future dialogs: The amount of Less Wrong jargon, links to Less Wrong posts explaining that jargon, and the Yudkowsky "proclamation" in this paragraph is all a bit squicky, alienating and potentially condescending. And I think they muddle the point you're making.

Anyway, biting Pei's bullet for a moment: if building an AI isn't safe, if it's, as Pei thinks, similar to educating a child (except, presumably, with a few orders of magnitude more uncertainty about the outcome), that sounds like a really bad thing to be trying to do. He writes:

I don’t think a good education theory can be “proved” in advance, pure theoretically. Rather, we’ll learn most of it by interacting with baby AGIs, just like how many of us learn how to educate children.

There's a very good chance he's right. But we're terrible at educating children. Children routinely grow up to be awful people. And this one lacks the predictable, well-defined drives and physical limits that let us predict how most humans will eventually act (pro-social, in fear of authority). It sounds deeply irresponsible, albeit not of immediate concern. Pei's argument is a grand rebuttal of the proposal that humanity spend more time on AI safety (why fund something that isn't possible?) but no argument at all against the second part of the proposal -- defund AI capabilities research.

Comment author: Will_Newsome 21 March 2012 09:38:38AM *  50 points [-]

(There seems to be a sort of assumption 'round these parts that high status is better than low status and that dominance is better than submission. I think that this should not be unquestioningly assumed. There are many goals that can usually be more easily achieved by someone in a lower status position, e.g. discovering truth or learning from people. There are many exceptions, but high status tends to make people prideful, petty, unreflective, stupid, unwilling to change, unwilling to compromise, incautious, overconfident, &c. The benefits of material wealth, better mating options, better ally options, &c., are not obviously worth the costs; sometimes there are ways to get those things without risk. One would be wise to worry about slippery slopes and goal distortion.)

Comment author: Bugmaster 13 December 2011 07:55:02AM 52 points [-]

In the previous video, you said that publishing in mainstream journals might be a waste of time, due to the amount of "post-production" involved. In addition, you said that SIAI would prefer to keep its AGI research secret -- otherwise, someone might read it, implement the un-Friendly AGI, and doom us all. You followed that up by saying that SIAI is more interested in "technical problems in mathematics, computer science, and philosophy" than in experimental AI research.

In light of the above, what does the SIAI actually do? You don't submit your work to rigorous scrutiny by your peers in the field (you need peer review for that); you either aren't doing any AGI research, or are keeping it so secret that no one knows about it (which makes it impossible to gauge your progress, if any); and you aren't developing any practical applications of AI, either (since you'd need experimentation for that). So, what is it that you are actually working on, other than growing the SIAI itself?

Comment author: Yvain 11 December 2011 01:05:03PM *  50 points [-]

Walking on land is probably impossible, Pre-Cambrian researchers announced, since even if we did evolve some sort of "legs" our gills would be unable to extract oxygen from the environment.

Comment author: SolveIt 12 August 2013 02:52:23AM 51 points [-]

I'm a high school senior. I co-authored a paper with John Conway. It's on pretty unimportant stuff, and hardly serious mathematics, but it's still interesting. And I get an Erdős number of 2!

Comment author: orthonormal 11 April 2013 04:01:24AM 45 points [-]

To avoid the aforementioned failure mode of silent approval and loud dissent, let me say that I appreciate this post and this series. I'm trying to update my priors about how many women (in the rationalist cluster) have experienced outright horrific abuse of several sorts, and how many more have had to worry about it; it's obvious in retrospect that I wouldn't have been exposed to these kinds of stories as I was growing up even if they happened around me. That really bears on the question of what policies are best overall, though I'll have to think through all the implications.

Comment author: lukeprog 21 February 2013 03:59:46PM *  47 points [-]

Did you send this article to Will or somebody else at CEA before posting it? Holden Karnofsky let me comment on a copy of his critique of SI before he published it. That procedure is what I would call "common courtesy," and also it reduces the chance that you'll grossly mislead readers about an organization that you know far less about than the organization's principals do.

Comment author: Eliezer_Yudkowsky 12 January 2013 03:44:45PM 50 points [-]

It looks like Aaron Swartz may have willed all his money to GiveWell. This... makes it even sadder, somehow, in ways I don't know how to describe.

His last Reddit comment was on /r/HPMOR.

Comment author: Mitchell_Porter 04 July 2012 12:06:27AM *  41 points [-]

Irrationality Game

If we are in a simulation, a game, a "planetarium", or some other form of environment controlled by transhuman powers, then 2012 may be the planned end of the game, or end of this stage of the game, foreshadowed within the game by the Mayan calendar, and having something to do with the Voyager space probe reaching the limits of the planetarium-enclosure, the galactic center lighting up as a gas cloud fell into it 30,000 years ago, or the discovery of the Higgs boson.

Since we have to give probabilities, I'll say 10%, but note well, I'm not saying there is a 10% probability that the world ends this year, I'm saying 10% conditional on us being in a transhumanly controlled environment; e.g., that if we are in a simulation, then 2012 has a good chance of being a preprogrammed date with destiny.
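Spelling out the distinction in symbols (a quick formalization; the notation is mine, not Mitchell_Porter's), the unconditional probability decomposes by the law of total probability:

    P(\text{end in 2012}) = P(\text{sim}) \cdot P(\text{end in 2012} \mid \text{sim}) + P(\neg \text{sim}) \cdot P(\text{end in 2012} \mid \neg \text{sim})
    % the quoted 10% is only the conditional factor P(end in 2012 | sim), not the left-hand side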

Comment author: fubarobfusco 26 January 2012 03:30:38AM *  50 points [-]

The prevailing arguments against it are incoherent for non-vegans anyhow. Nonhuman animals can't consent? How can it possibly make sense to claim the relevance of consent for (non-painful) sexual activity for a class of animals which can be legally killed more or less on demand for its meat or skin, or if it becomes inconvenient to keep? The consent argument is bogus; the popular moral beliefs against zoophilia are actually not based on a legalistic rights framework, but on a purity/corruption/ickiness framework.

Comment author: Konkvistador 25 August 2011 11:31:16AM *  51 points [-]

Person A.

Comment author: William_Quixote 25 July 2013 02:08:56PM 50 points [-]

Three shall be Peverell's sons and three their devices by which Death shall be defeated. - chapter 96

The one with the power to vanquish the Dark Lord approaches, born to those who have thrice defied him, born as the seventh month ... - chapter 86

There has previously been some speculation that the dark lord in Harry's birth prophecy is death rather than Voldemort. I think this interpretation just got a lot stronger.

James and Lily had defied Voldemort but not death. The new lines back an interpretation that the Peverells thrice defied death with the three Deathly Hallows, and Harry is born to the Peverell line.

This is, in some ways, a more natural interpretation of that clause, since James and Lily were in the Order and were defying Voldemort on a daily basis, not just three times. The line of the Peverells makes the number three make sense rather than being arbitrary.

Comment author: gwern 06 June 2013 09:14:51PM *  49 points [-]

Per a discussion on IRC, I am auctioning off my immortal soul to the highest bidder over the next week. (As an atheist I have no use for it, but it has a market value and so holding onto it is a foolish endowment effect.)

The current top bid is 1btc ($120) by John Wittle.

Details:

  1. I will provide a cryptographically signed receipt in explicit terms agreeing to transfer my soul to the highest bidder, signed with my standard PGP key. (Note that, as far as I know, this is superior to signing in blood, since DNA degrades quickly at room temperature, and a matching blood type would both be hard to verify without another sample of my blood and also be only weak evidence, since many people share my blood type.)
  2. Payment is preferably in bitcoins, but I will accept Paypal if really needed. (Equivalence will be via the daily MtGox average.) Address: 17twxmShN3p6rsAyYC6UsERfhT5XFs9fUG (existing activity)
  3. The auction will close at 4:40 PM EST, 13 June 2013
  4. My soul is here defined as my supernatural non-material essence as specified by Judeo-Christian philosophers, and not my computational pattern (over which I continue to claim copyright); transfer does not cover any souls of gwerns in alternate branches of the multiverses inasmuch as they have not consented.
  5. There is no reserve price. This is a normal English auction with time limit.
  6. I certify that my soul is intact and has not been employed in any dark rituals such as manufacturing horcruxes; I am also a member in good standing of the Catholic Church, having received confirmation etc. Note that my soul is almost certainly damned inasmuch as I am an apostate and/or an atheist, which I understand to be mortal sins.
  7. I further certify that the transferred soul is mine, has never been anyone else's, has not been involved in any past transactions, sales, purchases, etc. However, note that, despite rich documentation that this is doable, I cannot certify that any supernatural or earthly authorities will respect my attempt to sell my soul or even that I have a soul. It may be better for you to think of this as purchasing a quitclaim to my soul.
  8. Bids can be communicated as replies to this comment, emails to gwern@gwern.net, comments on IRC, or replies on Google+. I will update this comment with the current top bid if/when a new top bid is received.

Suggested uses for my soul include:

  • novelty value
  • pickup lines & icebreakers; e.g. Wittle to another person considering selling their soul:

    JohnWittle> ______: "You know, I own gwern's soul.
    You know, gwern of LessWrong and gwern.net" is a
    great ice breaker at rationalist meetups and I anticipate
    it increasing my chances of getting laid by a nonzero amount.
    Can your soul give me similar results?
    
  • supererogatory ethics: purchasing a soul to redeem it
  • making extra horcruxes
  • as a speculative play on my future earnings or labor in case I reconvert to any religion with the concept of souls and wish to repurchase my soul at any cost. This would constitute a long position with almost unlimited upside and is a unique investment opportunity.

    (Please note that I hold an informational advantage over most/all would-be investors and so souls likely constitute a lemon market.)

  • hedging against Pascal's Wager:

    presumably Satan will accept my soul instead of yours, since damnation does not seem to confer property rights, inasmuch as the offspring of dictators continue to enjoy their ill-gotten gains and are not evicted by his agents; similarly, one can expect him to honor his bargain with you since, as an immortal, he has an infinite horizon of deals he jeopardizes if he welshes on your deal.

    Note that if he won't agree to a full 1:1 swap, you still benefit infinitely by bargaining him down to an agreement like torturing you every day via a process that converges on an indefinitely large but finite total sum of torture while still daily torturing you & fulfilling the requirements of being in Hell.
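A minimal sketch of one such convergent schedule, assuming for illustration (this halving rule is not part of the auction terms) that the torture dose is cut in half each day, starting from a first-day dose T > 0:

    \sum_{n=0}^{\infty} \frac{T}{2^n} = 2T < \infty
    % nonzero torture on every day n, yet the total over an unending stay in Hell remains finite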

EDIT: Congratulations to Mr. Wittle.

Comment author: orthonormal 16 April 2013 02:12:31AM 50 points [-]

Why not call the e-book "The Methods of Rationality"?

Or maybe something that is clearly not HPMoR, but clearly connected to it.

Comment author: wedrifid 24 December 2012 03:00:54AM 46 points [-]

That's an... interesting way of putting it, where by "interesting" I mean "wrong".

If you genuinely can't see how considerations similar to the reasoning you explicitly gave in the post apply to you personally publishing rape-world stories, then I suggest you have a real weakness in evaluating the consequences of your own actions on perception.

I could go off on how the idea is that there's particular modern-day people who actually exist and that you're threatening to harm, and how a future society where different things feel harmful is not that, but you know, screw it.

I approve of your Three Worlds Collide story (in fact, I love it). I also approve of your censorship proposal/plan. I also believe there is no need to self-censor that story (particularly given the position you were in when you published it). That said:

This kind of display of evident obliviousness and arrogant dismissal, rather than engagement or (preferably) just outright ignoring it, may well do more to make LessWrong look bad than half a dozen half-baked speculative posts by CronoDAS. There are times to say "but you know, screw it" and "where by interesting I mean wrong", but those times don't include when concern is raised about your legalised-rape-and-it's-great story in the context of your own "censor hypothetical violence 'cause it sounds bad" post.

Comment author: iceman 26 July 2012 10:05:55PM *  46 points [-]

Maybe the word "evangelical" isn't strictly correct. (A quick Google search suggests that I had cached the phrase from this discussion.) I'd like to point out an example of an incident that leaves a bad taste in my mouth.

(Before anyone asks, yes, we’re polyamorous – I am in long-term relationships with three women, all of whom are involved with more than one guy. Apologies in advance to any 19th-century old fogies who are offended by our more advanced culture. Also before anyone asks: One of those is my primary who I’ve been with for 7+ years, and the other two did know my real-life identity before reading HPMOR, but HPMOR played a role in their deciding that I was interesting enough to date.)

This comment was made by Eliezer under the name of this community in the author's notes to one of LessWrong's largest recruiting tools. I remember when I first read this, I kind of flipped out. Professor Quirrell wouldn't have written this, I thought. It was needlessly antagonistic, it squandered a bunch of positive affect, there was little to be gained from this digression, it was blatant signaling--it was so obviously the wrong thing to do, and yet it was published anyway.

A few months before that was written, I had cut a fairly substantial cheque to the Singularity Institute. I want to purchase AI risk reduction, not fund a phyg. Blocks of text like the above do not make me feel comfortable that I am doing the former and not the latter. I am not alone here.

Back when I only lurked here and saw the first PUA fights, I was in favor of the PUA discussion ban, because if LessWrong wants to be a movement that either tries to raise the sanity waterline or to maximize the probability of solving the Friendly AI problem, it needs to be as inclusive as possible and have as few as possible of the ugh fields that immediately drive away new members. I now think an outright ban would do more harm than good, but the ugh field remains and is counterproductive.

Comment author: 75th 11 April 2012 04:36:01AM *  39 points [-]

Hermione is dead. Hermione Granger is doomed to die horribly. Hermione Granger will very soon die, and die horribly, dramatically, grotesquely, and utterly.

Fare thee well, Hermione Jean Granger. You escaped death once, at a cost of twice and a half your hero's capital. There is nothing remaining. There is no escape. You were saved once, by the will of your hero and the will of your enemy. You were offered a final escape, but like the heroine you are, you refused. Now only death awaits you. No savior hath the savior, least of all you. You will die horribly, and Harry Potter will watch, and Harry Potter will crack open and fall apart and explode, but even he in all his desperation and fury will not be able to save you. You are the cord binding Harry Potter to the Light, and you will be cut, and your blood, spilled by the hand of your enemy, will usher in Hell on Earth, rendered by the hand of your hero.

Goodbye, Hermione. May the peace and goodness you represent last not one second longer than you do.

Comment author: Alicorn 25 January 2012 10:54:42PM *  48 points [-]

"We need whiteboards."

"I'm trying paleo."

"I might write rationalist fanfiction of that."

"That's just an applause light." ("That's just a semantic stopsign." "That's just the teacher's password.")

"POLITICS IS THE MINDKILLER"

"If keeping my current job has higher expected utility than founding a startup, I wish to believe that keeping my current job has higher expected utility than founding a startup..."

"I think he's just being metacontrarian."

"Arguments are soldiers!"

"Not every change is an improvement, but every improvement is a change."

"There are no ontologically basic mental entities!"

"I'm an aspiring rationalist."

"Fun Theory!"

"The map is not the territory."

"Let's beware evaporative cooling, here."

"It's a sunk cost! Abandon it!"

"ERROR: POSTULATION OF GROUP SELECTION DETECTED"

"If you measure it and reward the measurement going up, you'll get what you measure, not what you want."

"Azahoth!"

"Death is bad."

Comment author: [deleted] 25 January 2012 09:11:20PM *  44 points [-]

Here's some nice controversial things for you:

  • Given functional birth control and non-fucked family structure, incest is fine and natural and probably a good experience to have.

  • Pedophilia is a legitimate sexual orientation, even if expressing it IRL is bad (which it is not). Child porn should not be suppressed (tho some of it is documentation of a crime and should be investigated).

  • Most of the impact of rape is a made-up self-fulfilling prophecy.

  • Child sexual consent hits the same issues as child acting or any other thing that parents can allow, and should not be treated differently from those issues.

  • Self-identity is a problem.

  • EDIT: most of the deaths in the Holocaust were caused by the Allies bombing railroads that supplied food to the camps.

Less controversial in LW, but still bad to say outside:

  • Race, class and subculture are the most useful pieces of information when judging a person.

I've run out of ideas.

EDIT: in case it's not clear, I take all these ideas seriously. I would actually appreciate a discussion on these topics with LW.

EDIT: this was productive! I've seriously updated one way or the other on many of these ideas. Thanks for pointing out truths and holes everyone! :)

Comment author: [deleted] 18 January 2012 11:50:12PM *  49 points [-]

(I hope this doesn't come across as overly critical, because I'd love to see this problem fixed. I'm not dissing rationality, just its current implementation. You have declared Crocker's Rules before, so I'm giving you an emotional impression of what your recent rationality propaganda articles look like to me; I hope that comes across not as an attack but as something that can be improved upon.)

I think many of your claims of rationality powers (about yourself and other SIAI members) look really self-congratulatory and, well, lame. SIAI plainly doesn't appear all that awesome to me, except at explaining how some old philosophical problems have been solved somewhat recently.

You claim that SIAI people know insane amounts of science and update constantly, but you can't even get 1 out of 200 volunteers to spread some links?! Frankly, the only publicly visible person who strikes me as having some awesome powers is you, and from reading CSA, you seem to have had high productivity (in writing and summarizing) before you ever met LW.

Maybe there are all these awesome feats I just never get to see because I'm not at SIAI, but I've seen similar levels of confidence in your methods and weak results in the New Age circles I hung out in years ago. Your beliefs are much saner, but as long as you can't be more effective than them, I'll always have a problem taking you seriously.

In short, as you yourself noted, you lack a Tim Ferriss. Even for technical skills, there isn't much I can point at and say, "holy shit, this is amazing and original, I wanna learn how to do that, have all my monies!".

(This has little to do with the soundness of SIAI's claims about Intelligence Explosion etc., though, but it does decrease my confidence that conclusions reached through your epistemic rationality are to be trusted if the present results seem so lacking.)

Comment author: shokwave 20 October 2011 04:53:21AM 46 points [-]

What could be more deadly than being unable to die?

Anything. Quite literally, anything at all. All of the things are more deadly than being unable to die.

Comment author: Yvain 05 March 2011 08:03:34PM *  45 points [-]

Seeing this makes me happy because I had a similar revelation a few years ago and it always makes me mad to see people use the glaringly bad justification for being pro-choice which you've overcome. On the other hand, after thinking about the matter quite a bit I still am pro-choice. You say:

On the other hand, as little as it is, it still represents a human life

I think the key word is "represents".

A lot of bad reasoning seems to come from proving a controversial idea can be fit into a category of things that are mostly bad, and then concluding that the controversial idea, too, must be mostly bad.

For example, some people are opposed to a project to genetically engineer diseases like cystic fibrosis out of the human genome, because that's a form of "eugenics". I think this is supposed to cash out as saying that the CF project shares some surface features with what the Nazis did and what those American Southerners who tried to force-sterilize black people did, and those two things are definitely bad, so the CF project must also be bad.

The counterargument is that the features it shares with the Nazi project and the Southern project are not the features that made those two programs bad. Those two programs were bad because they involved hurting people, either through death or through force-sterilization, without their consent. The CF elimination project hopefully would be voluntary and would not damage the people involved. Therefore, although it shares some similarities with the Nazi project and the Southern project (it's about genetics, it's intended to improve the species, etc), those aren't relevant to this moral question and the argument "But it's eugenics" is flawed.

(If you haven't read the 37 Ways Words Can Be Wrong sequence, I suggest doing so now. Think of a person taking a blue egg that contains vanadium, pointing to a bin full of blue eggs that contain palladium, and saying "But this is a blegg, and we all know bleggs contain palladium!" Well, no.)

The "human life" issue strikes me as very similar. "Taking a human life" is a large category mostly full of bad things. It contains things like stabbing a teenager with a knife, poisoning a senator, strangling an old person in a nursing home, starving a toddler, et cetera. All of these are really bad. They're really bad for various reasons including that they cause the person pain, that they disrupt society, that they violate the person's preference not to be killed, et cetera.

Abortion possibly does fit into the category of "taking a human life." But although it shares the surface features of that category, it isn't clear whether or not it shares the interesting moral feature which is exactly what the whole argument is about. Killing you or me is bad because we understand death and have preferences against it and don't want to die. Whether or not killing a fetus is bad depends on whether or not the fetus also satisfies those conditions - not on whether from a certain angle the problem looks like other cases that satisfy those conditions.

The question isn't whether or not we want to stick the fetus into an artificial category called "human", it's whether it has the specific features that make that category relevant to this particular problem in the first place.

See Leaky Generalizations and Replace The Symbol With The Substance.

Comment author: Yvain 22 July 2014 04:21:27AM *  48 points [-]

"Hard mode" sounds too metal. The proper response to "X is hard mode" is "Bring it on!"

Therefore I object to "politics is hard mode" for the same reason I object to "driving a car with your eyes closed is hard mode". Both statements are true, but phrased to produce maximum damage.

There's also a way that "politics is hard mode" is worse than playing a video game on hard mode, or driving a car on hard mode. If you play the video game and fail, you know and you can switch back to an easier setting. If you drive a car in "hard mode" and crash into a tree, you know you should keep your eyes open the next time.

If you discuss politics in "hard mode", you can go your entire life being totally mind-killed (yes! I said it!) and just think everyone else is wrong, doing more and more damage each time you open your mouth and destroying every community you come in contact with.

Can you imagine a human being saying "I'm sorry, I'm too low-level to participate in this discussion"? There may be a tiny handful of people wise enough to try it - and ironically, those are probably the same handful who have a tiny chance of navigating the minefield. Everyone else is just going to say "No, I'm high-enough level, YOU'RE the one who needs to bow out!"

Both "hard mode" and "mind-killer" are intended to convey a sense of danger, but the first conveys a fun, exciting danger that cool people should engage with as much as possible in order to prove their worth, and the latter conveys an extreme danger that can ruin everything and which not only clouds your faculties but clouds the faculty to realize that your faculties are clouded. As such, I think "mind-killer" is the better phrase.

EDIT: More succinctly: both phrases mean the same thing, but with different connotations. "Hard mode" sounds like we should accord more status to politics; "mind-killer" sounds like we should accord less. I feel like incentivizing more politics is a bad idea and will justify this if anyone disagrees.

Comment author: seez 08 June 2014 11:10:27PM 49 points [-]

I finished my thesis!

Comment author: fubarobfusco 12 January 2013 05:05:55PM *  46 points [-]

Contracts never have merely two parties. They are never "private" in the sense implied above. A contract requires a third party to enforce the contract against either party at the other's appeal. The existence of the enforcing party is a suppressed premise in almost all contracts, and the consent of that third party is rarely explicitly discussed.

Asking for unbounded "freedom of contract" means asking for the existence of a third party who consents to enforce any contract, and has the power to enforce any contract; in other words, a third party that is amoral and omnipotent; one with no objections to any contract terms, and sufficient power to enforce against any party.

The state, in a democratic republic, cannot be such a third party, because it is not amoral — it has moral (or moral-like) objections to some contract terms. For instance, today's republics do not countenance chattel slavery; even if a person signs a contract to be another's slave, the state will not consent to enforce that contract.

I suggest that, given what we know about humans, the creation of an actual amoral and omnipotent third party would constitute UFAI ....

Comment author: lukeprog 10 January 2012 12:47:45AM *  47 points [-]

Geoff,

Of course you and I are pursuing many of the same goals and we have come to many shared conclusions, though our methodologies seem quite different to me, and our models of the human mind are quite different. I take myself to be an epistemic Bayesian and (last I heard) you take yourself to be an epistemic Cartesian. You say things like "Philosophically, there is no known connection between simplicity... and truth," while I take Occam's razor (aka Solomonoff's lightsaber) very seriously. My model of the human mind ignores philosophy almost completely and is instead grounded in the hundreds of messy details from current neuroscience and psychology, while your work on Connection Theory cites almost no cognitive science and instead appears to be motivated by folk psychology, philosophical considerations, and personal anecdote. I place a pretty high probability on physicalism being true (taking "physicalism" to include radical platonism), but you say here that "it follows [from physicalism] that Connection Theory, as stated, is false," but that some variations of CT may still be correct.

Why bring this up? I suspect many LWers are excited (like me) to see another organization working on (among other things) x-risk reduction and rationality training, especially one packed with LW members. But I also suspect many LWers (like me) have many concerns about your research methodology and about connection theory. I think this would be a good place for you to not just introduce yourself (and Leverage Research) but also to address some likely concerns your potential supporters may have (like I did for SI here and here).

For example:

  • Is my first paragraph above accurate? Which corrections, qualifications, and additions would you like to make?
  • How important is Connection Theory to what Leverage does?
  • How similar are your own research assumptions and methodology to those of other Leverage researchers?

I suspect it will be more beneficial to your organization to address such concerns directly and not let them lurk unanswered for long periods of time. That is one lesson I take from my recent experiences with the Singularity Institute.

BTW, I appreciate how many public-facing documents Leverage produces to explain its ideas to others. Please keep that up.

Comment author: Quirinus_Quirrell 15 September 2011 03:32:37PM 45 points [-]

DO NOT USE YOUR REGULAR IDENTITY TO SAY ANYTHING TRULY INTERESTING ON THIS THREAD, OR ON THIS TOPIC, UNLESS YOU HAVE THOUGHT ABOUT IT FOR FIVE MINUTES.

Comment author: Yvain 08 January 2011 06:02:14PM *  49 points [-]

I read...a surprisingly large amount of that.

If I understand it right, they are saying that modern scholarship confirms that the Gospels avoid certain obvious failure modes - eg being written hundreds of years after the fact, wildly contradicting each other on important points, and erring on simple points of geography and history - and that someone would've called them on it if they just blatantly made things up - therefore the Gospels can be assumed mostly true. The Gospels say many people saw Jesus die on the Cross and then saw him alive later, and that natural explanations (Jesus survived the crucifixion, everyone was hallucinating, it was Jesus' twin brother - yes, they actually addressed that) are all unconvincing; therefore Jesus really was resurrected. According to the Gospels, this was seen by many witnesses, including luminaries like St. Peter, and none of them later came forward to say "No, we didn't see this at all, shut up". Further, many of them later died an extremely predictable martyr's death, proving that they believed in Christ's resurrection enough to sacrifice their lives for him, something they wouldn't have done if it were all made up (they point out that although some people, like kamikaze pilots, have sacrificed their lives to false philosophies, it is far more unlikely that the Apostles would sacrifice their lives to a false empirical fact, namely that they had seen Jesus rise from the dead).

Multiplying the low probabilities of everyone involved simultaneously having some kind of fit of insanity leading them to sincerely believe Jesus had risen from the dead gives 1 : 10^39 against, and since this is a very small number obviously the argument must be correct.

This argument doesn't quite take the truth of the Gospels as a premise, but it comes close. Although there are some atheist accounts that allow for the truth of the Gospels as written while still casting doubt on Christ's divinity, that's not where the smart money lies - most atheists would deny to one degree or another the validity of the Gospels themselves. Either the entire thing was made up (a theory which the McGrews reject, and I think rightly) or a historical Jesus had various miracles falsely attributed to him by overzealous believers. This leaves the McGrews' objection that the existence of a wider Christian community, many of whom had been personally involved in the events described, would have limited the Gospel writers' ability to make things up even if they had been so inclined.

So instead of basing their argument on the likelihood of people hallucinating a resurrected Jesus, the McGrews should have investigated the probability that the Gospel writers would make up miracles and the probability that they would be caught; something like

P(resurrection) ~= P(gospels true) ~= 1 - [P(people make stuff up about Jesus) * P(they don't get called on it)]
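
(For concreteness, here is a minimal sketch of that estimate - the two input probabilities are purely illustrative assumptions, not figures anyone in this debate has defended:)

    # A toy version of the estimate above; both inputs are made-up
    # placeholders, chosen only to show how the formula behaves.
    p_fabrication = 0.99   # assumed: P(people make stuff up about Jesus)
    p_uncorrected = 0.90   # assumed: P(they don't get called on it)

    p_gospels_true = 1 - (p_fabrication * p_uncorrected)
    print(f"P(gospels true) ~= {p_gospels_true:.3f}")  # ~= 0.109

On these assumptions P(gospels true) comes out around 0.11, and it only climbs if fabrications were very likely to get caught - which is exactly the question the next paragraphs take up.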

So what is the probability that, given some historical tradition of Jesus, it will get embellished with made-up miracles and people will write gospels about it? Approximately 1: both Christians and atheists agree that the vast majority of the few dozen extant Gospels are false, including the infancy gospels, the Gospel of Judas, the Gospel of Peter, et cetera. All of these tend to take the earlier Gospels and stories and then add a bunch of implausible miracles to them. So we know that the temptation to write false Gospels laden with miracles was there. Apologists say that the four canonical Gospels are earlier and more official than the apocryphal Gospels, and I agree, but given the existence of a known tendency for people to make up books, and a set of books that sound made-up, the difference seems more one of degree than of kind.

That leaves the question of whether anyone would notice. The dates of all the Gospels are uncertain, but around 70 - 80 AD for the synoptics seems like a fair guess. The average life expectancy in classical Judaea for those who survived childhood was 40 to 50. That means Jesus' generation would be long gone by the time the first Gospel came out, and even people who were teenagers at the time of Jesus' crucifixion would be dying off. Christian tradition lists all the Apostles except John as dead by 75 AD.
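
(Making the arithmetic explicit - the dates below are the rough assumptions just stated, not precise scholarship:)

    # Back-of-the-envelope eyewitness timeline; every number is a rough
    # assumption taken from the paragraph above.
    crucifixion_ad = 30       # assumed approximate date of the crucifixion
    first_synoptic_ad = 70    # rough earliest date for the synoptic Gospels
    witness_age_then = 15     # a teenager present at the events

    age_when_gospels_appear = witness_age_then + (first_synoptic_ad - crucifixion_ad)
    print(age_when_gospels_appear)  # 55, already past a typical adult lifespan of 40-50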

There's also the more general question of argument from silence. Let's say someone did have evidence against something in the Gospels. Most Judeans at the time wouldn't have been literate, especially not in the Greek in which the Gospels were written. Many who were, might not have had the time or interest to pen responses to what seemed a minor cult at the time. If any did, those responses might not have spread in an age when every work had to be laboriously copied by hand. And if by some miracle a refutation did become popular, there's no reason to think we would know about it since many of the popular works of the age have been lost completely.

Matthew mentions that on the day of Jesus' crucifixion, graves opened and the dead walked the earth throughout the city of Jerusalem for several hours. No one else (including the other evangelists!) mentioned the dead walking the earth, either to confirm or refute it, so clearly the 1st century AD Judean skeptical community wasn't exactly on top of its game. That alone casts suspicion on the whole "if this was false, someone would've said so" argument.

All of this makes the Gospel argument relatively uninteresting to me. But it hints at a different problem which is interesting. Twenty years after the death of Christ, we have Paul writing letters to flourishing churches all across the eastern Mediterranean, all of whom seem to have at least a vague tradition of Christ being resurrected and appearing to people. That means Christianity spread really, really fast, presumably by people who were pretty sure they had met the resurrected Christ. At my current, limited level of Biblical scholarship I consider myself still confused on this point and yet to see a satisfactory explanation (people rising from the dead doesn't count as 'satisfactory').

Comment author: FiftyTwo 15 July 2013 11:43:57PM 48 points [-]

Given our known problems with actively expressing approval for things, I'd like to mention that I approve of the more frequent open threads.
