Eliezer Yudkowsky Facts

124 steven0461 22 March 2009 08:17PM
  • Eliezer Yudkowsky was once attacked by a Moebius strip. He beat it to death with the other side, non-violently.
  • Inside Eliezer Yudkowsky's pineal gland is not an immortal soul, but another brain.
  • Eliezer Yudkowsky's favorite food is printouts of Rice's theorem.
  • Eliezer Yudkowsky's favorite fighting technique is a roundhouse dustspeck to the face.
  • Eliezer Yudkowsky once brought peace to the Middle East from inside a freight container, through a straw.
  • Eliezer Yudkowsky once held up a sheet of paper and said, "A blank map does not correspond to a blank territory". It was thus that the universe was created.
  • If you dial Chaitin's Omega, you get Eliezer Yudkowsky on the phone.
  • Unless otherwise specified, Eliezer Yudkowsky knows everything that he isn't telling you.
  • Somewhere deep in the microtubules inside an out-of-the-way neuron somewhere in the basal ganglia of Eliezer Yudkowsky's brain, there is a little XML tag that says awesome.
  • Eliezer Yudkowsky is the Muhammad Ali of one-boxing.
  • Eliezer Yudkowsky is a 1400-year-old avatar of the Aztec god Aixitl.
  • The game of "Go" was abbreviated from "Go Home, For You Cannot Defeat Eliezer Yudkowsky".
  • When Eliezer Yudkowsky gets bored, he pinches his mouth shut at the 1/3 and 2/3 points and pretends to be a General Systems Vehicle holding a conversation among itselves. On several occasions he has managed to fool bystanders.
  • Eliezer Yudkowsky has a Swiss Army knife that has folded into it a corkscrew, a pair of scissors, an instance of AIXI which Eliezer once beat at tic-tac-toe, an identical Swiss Army knife, and Douglas Hofstadter.
  • If I am ignorant about a phenomenon, that is not a fact about the phenomenon; it just means I am not Eliezer Yudkowsky.
  • Eliezer Yudkowsky has no need for induction or deduction. He has perfected the undiluted master art of duction.
  • There was no ice age. Eliezer Yudkowsky just persuaded the planet to sign up for cryonics.
  • There is no spacetime symmetry. Eliezer Yudkowsky just sometimes holds the territory upside down, and he doesn't care.
  • Eliezer Yudkowsky has no need for doctors. He has implemented a Universal Curing Machine in a system made out of five marbles, three pieces of plastic, and some of MacGyver's fingernail clippings.
  • Before Bruce Schneier goes to sleep, he scans his computer for uploaded copies of Eliezer Yudkowsky.

If you know more Eliezer Yudkowsky facts, post them in the comments.

[Link] Max Tegmark and Nick Bostrom Speak About AI Risk at UN International Security Event

10 Gram_Stone 13 October 2015 11:25PM

[Link] Rationality and Mental Illness in the Huffington Post

6 Gleb_Tsipursky 18 October 2015 01:24AM

Just published an article in The Huffington Post about using rationality-informed strategies to manage my mental illness. Hope this helps people think more rationally about this topic.

Rationality Cardinality

20 jimrandomh 03 October 2015 03:54PM

Rationality Cardinality is a card game which takes memes and concepts from the rationality/Less Wrong sphere and mixes them with jokes. After nearly two years of card-creation, playtesting, and development, today I'm taking the "beta" label off the web-based version of Rationality Cardinality. Go to the website, and if at least two other people visit at the same time, you can play against them.

I've put a lot of thought and a lot of work into the cards, and they're not just about humor; I also went systematically through blog posts and glossaries, collecting terms and concepts that I think people should know about and be reminded of, and wrote concise explanations for them. The game gives everyone a fun, easy way to quickly pick up the jargon that's floating around, and it provides spaced repetition for concepts that might not otherwise have sunk in.

Rationality Cardinality will also soon have a print version. The catch is that in order to mass-produce it, I need to be sure there's enough demand. So, here's the deal: once enough people have played the online version, I'll launch a Kickstarter to sell print copies. You can speed this up by inviting people who might not otherwise see it to play.

Rationality Cardinality is somewhat inspired by Cards Against Rationality. Software for the web-based implementation is based on Cards for Humanity, with modifications.

Cards Against Rationality

21 fubarobfusco 16 June 2012 02:23AM

(This post won't make much sense if you don't know about the game Cards Against Humanity. Fortunately it has a web site. If you know the game Apples to Apples, well, CAH's gameplay is almost identical to Apples to Apples ... but the cards range from snarky to perverted to shockingly un-PC.)

After the LW meetup in Mountain View yesterday, the idea came up of a Less Wrong expansion set for Cards Against Humanity ... with a roughly Shit Rationalists Say theme, plus a little help from Eliezer Yudkowsky Facts. Regardless of whether this ever happens, we felt the need to share the pain with the rest of the community.

These are meant to be mixed with the standard deck. Hence, the completed phrase "That which can be destroyed by being a motherfucking sorcerer should be" is a clearly winning combination, as is "Why am I sticky? Grass-fed butter."

Black cards:

  • That which can be destroyed by _____ should be.
  • _____ is the mind-killer.
  • The thirteenth virtue of rationality is _____.
  • _____ is truly part of you.
  • "Let me not become attached to _____ I may not want."
  • _____ is vulnerable to counterfactual mugging.
  • What is true is already so. _____ doesn't make it worse.
  • _____ is not the territory.
  • _____ will kill you because you are made of _____ that it could use for something else.
  • "I'm an aspiring _____."
  • In the new version of Newcomb's problem, you have to choose between a box containing _____ and a box containing _____.
  • Instrumental rationality is the art of winning at _____.
  • Less Wrong is not a cult so long as our meetups don't include _____.
  • In an Iterated Prisoners' Dilemma, _____ beats _____.
  • The latest hot fanfic: _____ and the Methods of _____.
  • _____ is highly correlated with _____.
  • Absence of _____ is evidence of _____.
  • The coherent extrapolated volition of humanity includes a term for _____.
  • We have encountered aliens who communicate through _____.
  • In the future, Eliezer Yudkowsky will be remembered for _____.
  • I'm signed up with Alcor, so _____ will be frozen when I die.
  • "I am running on corrupted _____."
  • An improperly-programmed AI might tile the universe with _____.
  • You know what they say: one person's _____ is another person's _____.
  • "I want to want _____."
  • _____ is what _____ feels like from the inside.
  • _____ is the unit of caring.
  • If you're not getting _____, you're spending too many resources on _____.
  • Every _____ wants to be _____.
  • Inside Eliezer Yudkowsky's pineal gland is not an immortal soul, but _____.
  • Before Bruce Schneier goes to sleep, he scans his computer for uploaded copies of _____.
  • Eliezer Yudkowsky updates _____ to fit his priors.
  • Eliezer Yudkowsky doesn't have a chin; under his beard is _____.
  • Never go in against _____ when _____ is on the line.
  • Reversed _____ is not _____.
  • You have no idea how big _____ is.
  • Why haven't I signed up for cryonics?
  • What am I optimizing for?
  • The Quantified Self people have finally figured out how to measure _____.
  • You can't fit a sheep into a _____.
  • Make beliefs pay rent in _____.
  • Why did my comment get downvoted?
  • "You make a compelling argument for _____."
  • "My model of you likes _____."
  • "I can handle _____, because I am already enduring it."

White cards:

  • Eliezer Yudkowsky
  • Friendly AI
  • Unfriendly AI
  • Lukeprog's love life
  • The New York meetup group
  • Updating
  • Ugh fields
  • Ben Goertzel
  • Guessing the teacher's password
  • Confidence intervals
  • Signaling
  • Polyamory
  • The paleo diet
  • Asperger's syndrome
  • Ephemerisle
  • Burning Man
  • Grass-fed butter
  • Dropping acid
  • Timeless Decision Theory
  • Pascal's mugging
  • The Sequences
  • Deathism
  • Alcor
  • The Singularity Institute for Artificial Intelligence
  • Quirrellmort
  • Dark Arts
  • Tenorman's family chili
  • Affective death spirals
  • Rejection therapy
  • The cult attractor
  • Akrasia
  • The Bayesian Conspiracy
  • Paperclips
  • The Copenhagen interpretation
  • Clippy
  • Shit Rationalists Say
  • Babyeaters
  • Superhappies
  • Aubrey de Grey's beard
  • Robin Hanson
  • The blind idiot god, Evolution
  • Getting downvoted on Less Wrong
  • Two-boxing
  • The obvious Schelling point
  • Negging
  • Peacocking
  • P-Zombies
  • Tit-for-Tat
  • Applause lights
  • Rare diseases in cute puppies
  • Rationalist fanfiction
  • Sunk costs
  • Vibram Fivefingers
  • RationalWiki
  • The Chaos Legion Marching Song
  • Poor epistemic hygiene
  • A sheep-counting machine
  • A horcrux
  • Getting timelessly physical
  • The Stanford Prison Experiment
  • A ridiculously complicated Zendo rule
  • Utils
  • Wireheading
  • My karma score
  • Wiggins
  • Ontologically basic mental entities
  • The invisible dragon in my garage
  • Meta-contrarianism
  • Mormon transhumanists
  • Nootropics
  • Quantum immortality
  • Quantum immorality
  • The least convenient possible world
  • Cards Against Rationality
  • Moldbuggery
  • The #1 reviewed Harry Potter / The Fountainhead crossover fanfic, "Howard Roark and the Prisoner of Altruism"
  • Low-hanging fruit
  • The set of all possible fetishes
  • Rationalist clopfic
  • The Library of Babel's porn collection
  • Counterfactual hugging
  • Acausal sex
Post your own!
EDIT, 2012-08-29: Several additions from the thread and elsewhere.
EDIT, 2012-12-25: This is licensed under the Creative Commons CC BY-NC-SA 2.0 license, because Cards Against Humanity is.

Alcor vs. Cryonics Institute

27 prespectiveCryonaut 09 April 2012 01:49AM

I searched but did not find any discussion comparing the merits of the two major cryonics providers in the US, so I figured it might be productive to start such a discussion myself by posing the question to the community: all things being equal, which provider would you choose, Alcor or the Cryonics Institute?

From my research, Alcor comes across as the flashier, higher-end option, while CI seems more like a mom-and-pop operation, having only two full-time employees. Alcor also costs substantially more, with its neurosuspension option alone running ~$80k, compared with CI's whole-body preservation cost of ~$30k. While Alcor has received far more publicity than CI, much of it has been negative. The Ted Williams fiasco is probably the most prominent example, although the accuser in that case seems anything but trustworthy. However, Alcor remains something of a shadowy organization that many within the cryonics community are suspicious of. Mike Darwin, a former Alcor president, has written at length about both organizations at http://www.chronopause.com, and on the whole, at least based on what I've read, Alcor comes across looking less competent, less trustworthy, and less open than CI.

One issue in particular is funding. Even though Alcor charges much more, it also has many more expenses, and Darwin and others have questioned the long-term financial stability of the organization. Ralph Merkle, an Alcor board member and elder statesman of cryonics (he has also made significant contributions to other fields: he practically invented molecular nanotechnology, and his Merkle's Puzzles were an early step toward public-key cryptography), has essentially admitted(1) that Alcor hasn't managed its money very well:

"Some Alcor members have wondered why rich Alcor members have not donated more money to Alcor. The major reason is that rich Alcor members are rich because they know how to manage money, and they know that Alcor traditionally has managed money poorly. Why give any significant amount of money to an organization that has no fiscal discipline? It will just spend it, and put itself right back into the same financial hole it’s already in.

As a case in point, consider Alcor’s efforts over the years to create an “endowment fund” to stabilize its operating budget. These efforts have always ended with Alcor spending the money on various useful activities. These range from research projects to subsidizing our existing members — raising dues and minimums is a painful thing to do, and the Board is always reluctant to do this even when the financial data is clear. While each such project is individually worthy and has merit, collectively the result has been to thwart the effort to create a lasting endowment and leave Alcor in a financially weak position."


Such an acknowledgement, though appreciated, is frankly disturbing, considering that members depend utterly on these organizations remaining operational and solvent for decades, perhaps even centuries, after they are deanimated.

Meanwhile, CI carries on merrily, well under the radar, seemingly without any drama or intrigue. Ben Best seems to have very good credentials in the cryonics community, and Eliezer, one of the most prominent public advocates of cryonics, is signed up with them. Yet the tiny size of the operation still fills me with unease about its prospects for long-term survivability.

So with all of that said: besides cost, what factors would lead you, or have led you, to pick one organization over the other?

1: http://www.alcor.org/Library/html/CryopreservationFundingAndInflation.html

Bayesian Judo

71 Eliezer_Yudkowsky 31 July 2007 05:53AM

You can have some fun with people whose anticipations get out of sync with what they believe they believe.

I was once at a dinner party, trying to explain to a man what I did for a living, when he said: "I don't believe Artificial Intelligence is possible because only God can make a soul."

At this point I must have been divinely inspired, because I instantly responded: "You mean if I can make an Artificial Intelligence, it proves your religion is false?"

continue reading »

What is Bayesianism?

81 Kaj_Sotala 26 February 2010 07:43AM

This article is an attempt to summarize basic material, and thus probably won't have anything new for the hardcore posting crowd. It'd be interesting to know whether you think there's anything essential I missed, though.

You've probably seen the word 'Bayesian' used a lot on this site, but may be a bit uncertain of what exactly we mean by that. You may have read the intuitive explanation, but that only seems to explain a certain math formula. There's a wiki entry about "Bayesian", but that doesn't help much. And the LW usage seems different from just the "Bayesian and frequentist statistics" thing, too. As far as I can tell, there's no article explicitly defining what's meant by Bayesianism. The core ideas are sprinkled across a large number of posts, and 'Bayesian' has its own tag, but there's not a single post that explicitly comes out to make the connections and say "this is Bayesianism". So let me try to offer my definition, which boils Bayesianism down to three core tenets.

We'll start with a brief example illustrating Bayes' theorem. Suppose you are a doctor, and a patient comes to you complaining about a headache. Further suppose that there are two reasons why people get headaches: they might have a brain tumor, or they might have a cold. A brain tumor always causes a headache, but exceedingly few people have a brain tumor. In contrast, a headache is rarely a symptom of a cold, but most people manage to catch a cold every single year. Given no other information, do you think it more likely that the headache is caused by a tumor, or by a cold?

If you thought a cold was more likely, well, that was the answer I was after. Even if a brain tumor caused a headache every time, and a cold caused a headache only one percent of the time (say), colds are so much more common that they're going to cause a lot more headaches than brain tumors do. Bayes' theorem, basically, says that if cause A might be the reason for symptom X, then we have to take into account both the probability that A caused X (found, roughly, by multiplying the frequency of A by the chance that A causes X) and the probability that anything else caused X. (For a thorough mathematical treatment of Bayes' theorem, see Eliezer's Intuitive Explanation.)
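
To make that concrete, here is a quick worked version of the headache example in code. The numbers are made up for illustration (only the one-percent figure comes from the text above); the point is just that the common cause dominates the posterior:

```python
# Worked version of the doctor example. The priors are illustrative
# assumptions, not real medical statistics.
p_tumor = 0.0001              # prior: fraction of patients with a brain tumor
p_cold = 0.5                  # prior: fraction of patients with a cold
p_headache_given_tumor = 1.0  # a tumor always causes a headache
p_headache_given_cold = 0.01  # a cold causes a headache 1% of the time

# Probability of each cause occurring together with a headache
joint_tumor = p_tumor * p_headache_given_tumor  # 0.0001
joint_cold = p_cold * p_headache_given_cold     # 0.005

# Normalize, assuming for simplicity these are the only two causes
total = joint_tumor + joint_cold
print(f"P(tumor | headache) = {joint_tumor / total:.3f}")  # ~0.020
print(f"P(cold  | headache) = {joint_cold / total:.3f}")   # ~0.980
```

Even with the tumor's perfect likelihood, the cold's vastly larger prior wins; that comparison of joint probabilities is all Bayes' theorem is doing here.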

continue reading »

Crazy Ideas Thread, Aug. 2015

7 polymathwannabe 11 August 2015 01:24PM

This thread is intended to provide a space for 'crazy' ideas: ideas that spontaneously come to mind (and feel great), ideas you have long wanted to share but never found the place and time for, and ideas you think should be obvious and simple - but that nobody ever mentions.

This thread itself is such an idea. Or rather, it is the tangent of such an idea, which I post below as a seed for this thread.


Rules for this thread:

  1. Each crazy idea goes into its own top level comment and may be commented there.
  2. Voting should be based primarily on how original the idea is.
  3. Meta discussion of the thread should go to the top-level comment intended for that purpose.

If this should become a regular thread, I suggest the following:

  • Use "Crazy Ideas Thread" in the title.
  • Copy the rules.
  • Add the tag "crazy_idea".
  • Create a top-level comment saying 'Discussion of this thread goes here; all other top-level comments should be ideas or similar'.
  • Add a second top-level comment with an initial crazy idea to start participation.

Less Wrong EBook Creator

45 ScottL 13 August 2015 09:17PM

I read a lot on my Kindle, and I noticed that some of the sequences aren't available in book form. Also, the ones that are available mostly contain only the posts; I personally want them to also include some of the high-ranking comments and summaries. That is why I wrote this tool to automatically create books from a set of posts. It creates the book based on the information you give it in an Excel file. The Excel file contains:

Post information

  • Book name
  • Sequence name
  • Title
  • Link
  • Summary description

Sequence information

  • Name
  • Summary

Book information

  • Name
  • Summary

The only compulsory component is the link to the post.

I have used the tool to create books for Living Luminously, No-Nonsense Metaethics, Rationality: From AI to Zombies, Benito's Guide, and more. You can see them in the examples folder at this github link. The tool just creates epub books; you can use calibre or a similar tool to convert them to another format.
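
For anyone curious about the shape of such a tool, here is a minimal sketch of the core loop, assuming the pandas, requests, and ebooklib Python libraries. The spreadsheet name and column names below are hypothetical stand-ins for the real input format, and this is an illustration of the approach rather than the actual source (which is in the github repository):

```python
# Minimal sketch: read post metadata from a spreadsheet and emit an epub.
# "posts.xlsx" and its column names are hypothetical stand-ins for the
# actual input format used by the real tool.
import pandas as pd
import requests
from ebooklib import epub

posts = pd.read_excel("posts.xlsx")   # one row per post; "Link" is required

book = epub.EpubBook()
book.set_title(str(posts["Book name"].iloc[0]))

chapters = []
for i, row in posts.iterrows():
    html = requests.get(row["Link"]).text          # fetch the post's HTML
    title = row["Title"] if pd.notna(row.get("Title")) else f"Post {i + 1}"
    ch = epub.EpubHtml(title=str(title), file_name=f"post_{i}.xhtml")
    ch.content = html
    book.add_item(ch)
    chapters.append(ch)

# Table of contents, standard navigation files, and reading order
book.toc = chapters
book.add_item(epub.EpubNcx())
book.add_item(epub.EpubNav())
book.spine = ["nav"] + chapters
epub.write_epub("output.epub", book)
```

Calibre can then convert the resulting output.epub to mobi or any other format, as noted above.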

continue reading »
