
Eliezer Yudkowsky Facts

126 steven0461 22 March 2009 08:17PM
  • Eliezer Yudkowsky was once attacked by a Moebius strip. He beat it to death with the other side, non-violently.
  • Inside Eliezer Yudkowsky's pineal gland is not an immortal soul, but another brain.
  • Eliezer Yudkowsky's favorite food is printouts of Rice's theorem.
  • Eliezer Yudkowsky's favorite fighting technique is a roundhouse dustspeck to the face.
  • Eliezer Yudkowsky once brought peace to the Middle East from inside a freight container, through a straw.
  • Eliezer Yudkowsky once held up a sheet of paper and said, "A blank map does not correspond to a blank territory". It was thus that the universe was created.
  • If you dial Chaitin's Omega, you get Eliezer Yudkowsky on the phone.
  • Unless otherwise specified, Eliezer Yudkowsky knows everything that he isn't telling you.
  • Somewhere deep in the microtubules inside an out-of-the-way neuron somewhere in the basal ganglia of Eliezer Yudkowsky's brain, there is a little XML tag that says awesome.
  • Eliezer Yudkowsky is the Muhammad Ali of one-boxing.
  • Eliezer Yudkowsky is a 1400 year old avatar of the Aztec god Aixitl.
  • The game of "Go" was abbreviated from "Go Home, For You Cannot Defeat Eliezer Yudkowsky".
  • When Eliezer Yudkowsky gets bored, he pinches his mouth shut at the 1/3 and 2/3 points and pretends to be a General Systems Vehicle holding a conversation among itselves. On several occasions he has managed to fool bystanders.
  • Eliezer Yudkowsky has a swiss army knife that has folded into it a corkscrew, a pair of scissors, an instance of AIXI which Eliezer once beat at tic tac toe, an identical swiss army knife, and Douglas Hofstadter.
  • If I am ignorant about a phenomenon, that is not a fact about the phenomenon; it just means I am not Eliezer Yudkowsky.
  • Eliezer Yudkowsky has no need for induction or deduction. He has perfected the undiluted master art of duction.
  • There was no ice age. Eliezer Yudkowsky just persuaded the planet to sign up for cryonics.
  • There is no spacetime symmetry. Eliezer Yudkowsky just sometimes holds the territory upside down, and he doesn't care.
  • Eliezer Yudkowsky has no need for doctors. He has implemented a Universal Curing Machine in a system made out of five marbles, three pieces of plastic, and some of MacGyver's fingernail clippings.
  • Before Bruce Schneier goes to sleep, he scans his computer for uploaded copies of Eliezer Yudkowsky.

If you know more Eliezer Yudkowsky facts, post them in the comments.

She has joined the Conspiracy

9 Eliezer_Yudkowsky 13 January 2009 07:48PM


I have no idea whether I had anything to do with this.

Thanksgiving Prayer

20 Eliezer_Yudkowsky 28 November 2008 04:17AM

At tonight's Thanksgiving, Erin remarked on how this was her first real Thanksgiving dinner away from her family, and that it was an odd feeling to just sit down and eat without any prayer beforehand.  (Yes, she's a solid atheist in no danger whatsoever, thank you for asking.)

And as she said this, it reminded me of how wrong it is to give gratitude to God for blessings that actually come from our fellow human beings putting in a great deal of work.

So I at once put my hands together and said,

"Dear Global Economy, we thank thee for thy economies of scale, thy professional specialization, and thy international networks of trade under Ricardo's Law of Comparative Advantage, without which we would all starve to death while trying to assemble the ingredients for such a dinner as this.  Amen."

Mundane Magic

103 Eliezer_Yudkowsky 31 October 2008 04:00PM

Followup to: Joy in the Merely Real, Joy in Discovery, If You Demand Magic, Magic Won't Help

As you may recall from some months earlier, I think that part of the rationalist ethos is binding yourself emotionally to an absolutely lawful reductionistic universe—a universe containing no ontologically basic mental things such as souls or magic—and pouring all your hope and all your care into that merely real universe and its possibilities, without disappointment.

There's an old trick for combating dukkha where you make a list of things you're grateful for, like a roof over your head.

So why not make a list of abilities you have that would be amazingly cool if they were magic, or if only a few chosen individuals had them?

For example, suppose that instead of one eye, you possessed a magical second eye embedded in your forehead.  And this second eye enabled you to see into the third dimension—so that you could somehow tell how far away things were—where an ordinary eye would see only a two-dimensional shadow of the true world.  Only the possessors of this ability can accurately aim the legendary distance-weapons that kill at ranges far beyond a sword, or use to their fullest potential the shells of ultrafast machinery called "cars".

"Binocular vision" would be too light a term for this ability.  We'll only appreciate it once it has a properly impressive name, like Mystic Eyes of Depth Perception.

So here's a list of some of my favorite magical powers:

continue reading »

Awww, a Zebra

23 Eliezer_Yudkowsky 01 October 2008 01:28AM

This image recently showed up on Flickr (original is nicer):


With the caption:

"Alas for those who turn their eyes from zebras and dream of dragons!  If we cannot learn to take joy in the merely real, our lives shall be empty indeed." —Eliezer S. Yudkowsky.

"Awww!", I said, and called over my girlfriend over to look.

"Awww!", she said, and then looked at me, and said,  "I think you need to take your own advice!"

Me:  "But I'm looking at the zebra!"
Her:  "On a computer!"
Me:  (Turns away, hides face.)
Her:  "Have you ever even seen a zebra in real life?"
Me:  "Yes!  Yes, I have!  My parents took me to Lincoln Park Zoo!  ...man, I hated that place."


Part of the Joy in the Merely Real subsequence of Reductionism

Next post: "Hand vs. Fingers"

Previous post: "Initiation Ceremony"

Points of Departure

14 Eliezer_Yudkowsky 09 September 2008 09:18PM

Followup to: Anthropomorphic Optimism

If you've watched Hollywood sci-fi involving supposed robots, androids, or AIs, then you've seen AIs that are depicted as "emotionless".  In the olden days this was done by having the AI speak in a monotone pitch - while perfectly stressing the syllables, of course.  (I could similarly go on about how AIs that disastrously misinterpret their mission instructions, never seem to need help parsing spoken English.)  You can also show that an AI is "emotionless" by having it notice an emotion with a blatant somatic effect, like tears or laughter, and ask what it means (though of course the AI never asks about sweat or coughing).

If you watch enough Hollywood sci-fi, you'll run into all of the following situations occurring with supposedly "emotionless" AIs:

  1. An AI that malfunctions or otherwise turns evil instantly acquires all of the negative human emotions - it hates, it wants revenge, and it feels the need to make self-justifying speeches.
  2. Conversely, an AI that turns to the Light Side, gradually acquires a full complement of human emotions.
  3. An "emotionless" AI suddenly exhibits human emotion when under exceptional stress; e.g. an AI that displays no reaction to thousands of deaths, suddenly showing remorse upon killing its creator.
  4. An AI begins to exhibit signs of human emotion, and refuses to admit it.

Now, why might a Hollywood scriptwriter make those particular mistakes?

continue reading »

Harder Choices Matter Less

31 Eliezer_Yudkowsky 29 August 2008 02:02AM

...or they should, logically speaking.

Suppose you're torn in an agonizing conflict between two choices.

Well... if you can't decide between them, they must be around equally appealing, right?  Equally balanced pros and cons?  So the choice must matter very little - you may as well flip a coin.  The alternative is that the pros and cons aren't equally balanced, in which case the decision should be simple.

This is a bit of a tongue-in-cheek suggestion, obviously - more appropriate for choosing from a restaurant menu than choosing a major in college.

But consider the case of choosing from a restaurant menu.  The obvious choices, like Pepsi over Coke, will take very little time.  Conversely, the choices that take the most time probably make the least difference.  If you can't decide between the hamburger and the hot dog, you're either close to indifferent between them, or in your current state of ignorance you're close to indifferent between their expected utilities.
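The argument above can be sketched numerically. This is a toy illustration, not from the original post; `expected_regret` is a hypothetical helper measuring the average utility lost by flipping a coin between two options instead of picking the better one.

```python
import random

def expected_regret(u_a, u_b, trials=100_000):
    """Average utility lost by flipping a coin between two options
    with utilities u_a and u_b, versus always taking the better one."""
    best = max(u_a, u_b)
    loss = 0.0
    for _ in range(trials):
        # Coin flip: take either option with probability 1/2.
        pick = u_a if random.random() < 0.5 else u_b
        loss += best - pick
    return loss / trials

# An easy choice (utilities far apart) is costly to get wrong;
# a hard choice (utilities nearly equal) barely matters:
print(expected_regret(10.0, 2.0))   # roughly 4.0
print(expected_regret(10.0, 9.9))   # roughly 0.05
```

In expectation the coin flip loses half the utility gap, so the closer the two options, the cheaper it is to stop agonizing and just flip.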

continue reading »

I'd take it

4 Eliezer_Yudkowsky 02 July 2008 07:57AM

Out-of-context quote of the day:

"...although even $10 trillion isn't a huge amount of money..."

From Simon Johnson, Director of the IMF's Research Department, on "The Rise of Sovereign Wealth Funds".

So if you had $10 trillion, what would you do with it?

A Broken Koan

6 Eliezer_Yudkowsky 24 May 2008 07:04PM

At Baycon today and tomorrow.  Physics series resumes tomorrow.

Meanwhile, here's a link to a page of Broken Koans and other Zen debris I ran across, which should amuse fans of ancient Eastern wisdom; and a koan of my own:

Two monks were arguing about a flag. One said, "The flag is moving."

The other said, "The wind is moving."

Julian Barbour happened to be passing by.  He told them, "Not the wind, not the flag."

The first monk said, "Is the mind moving?"

Barbour replied, "Not even mind is moving."

The second monk said, "Is time moving?"

Barbour said, "There is no time.  You could say that it is mu-ving."

"Then why do we think that flags flap, and wind blows, and minds change, and time moves?" inquired the first monk.

Barbour thought, and said, "Because you remember."

If Many-Worlds Had Come First

44 Eliezer_Yudkowsky 10 May 2008 07:43AM

Followup to: Collapse Postulates, Decoherence is Simple, Falsifiable and Testable

Not that I'm claiming I could have done better, if I'd been born into that time, instead of this one...

Macroscopic decoherence—the idea that the known quantum laws that govern microscopic events, might simply govern at all levels without alteration—also known as "many-worlds"—was first proposed in a 1957 paper by Hugh Everett III.  The paper was ignored.  John Wheeler told Everett to see Niels Bohr.  Bohr didn't take him seriously.

Crushed, Everett left academic physics, invented the general use of Lagrange multipliers in optimization problems, and became a multimillionaire.

It wasn't until 1970, when Bryce DeWitt (who coined the term "many-worlds") wrote an article for Physics Today, that the general field was first informed of Everett's ideas.  Macroscopic decoherence has been gaining advocates ever since, and may now be the majority viewpoint (or not).

But suppose that decoherence and macroscopic decoherence had been realized immediately following the discovery of entanglement, in the 1920s.  And suppose that no one had proposed collapse theories until 1957.  Would decoherence now be steadily declining in popularity, while collapse theories were slowly gaining steam?

Imagine an alternate Earth, where the very first physicist to discover entanglement and superposition, said, "Holy flaming monkeys, there's a zillion other Earths out there!"

In the years since, many hypotheses have been proposed to explain the mysterious Born probabilities.  But no one has yet suggested a collapse postulate.  That possibility simply has not occurred to anyone.

One day, Huve Erett walks into the office of Biels Nohr...

continue reading »
