Cards Against Rationality

19 fubarobfusco 16 June 2012 02:23AM

(This post won't make much sense if you don't know about the game Cards Against Humanity. Fortunately it has a web site. If you know the game Apples to Apples, well, CAH's gameplay is almost identical to Apples to Apples ... but the cards range from snarky to perverted to shockingly un-PC.)

After the LW meetup in Mountain View yesterday, the idea came up of a Less Wrong expansion set for Cards Against Humanity ... with a roughly Shit Rationalists Say theme and a little help from Eliezer Yudkowsky Facts. Regardless of whether this ever happens, we felt the need to share the pain with the rest of the community.

These are meant to be mixed with the standard deck. Hence, the completed phrase "That which can be destroyed by being a motherfucking sorcerer should be" is a clearly winning combination, as is "Why am I sticky? Grass-fed butter."

Black cards:

  • That which can be destroyed by _____ should be.
  • _____ is the mind-killer.
  • The thirteenth virtue of rationality is _____.
  • _____ is truly part of you.
  • "Let me not become attached to _____ I may not want."
  • _____ is vulnerable to counterfactual mugging.
  • What is true is already so. _____ doesn't make it worse.
  • _____ is not the territory.
  • _____ will kill you because you are made of _____ that it could use for something else.
  • "I'm an aspiring _____."
  • In the new version of Newcomb's problem, you have to choose between a box containing _____ and a box containing _____.
  • Instrumental rationality is the art of winning at _____.
  • Less Wrong is not a cult so long as our meetups don't include _____.
  • In an Iterated Prisoners' Dilemma, _____ beats _____.
  • The latest hot fanfic: _____ and the Methods of _____.
  • _____ is highly correlated with _____.
  • Absence of _____ is evidence of _____.
  • The coherent extrapolated volition of humanity includes a term for _____.
  • We have encountered aliens who communicate through _____.
  • In the future, Eliezer Yudkowsky will be remembered for _____.
  • I'm signed up with Alcor, so _____ will be frozen when I die.
  • "I am running on corrupted _____."
  • An improperly-programmed AI might tile the universe with _____.
  • You know what they say: one person's _____ is another person's _____.
  • "I want to want _____."
  • _____ is what _____ feels like from the inside.
  • _____ is the unit of caring.
  • If you're not getting _____, you're spending too many resources on _____.
  • Every _____ wants to be _____.
  • Inside Eliezer Yudkowsky's pineal gland is not an immortal soul, but _____.
  • Before Bruce Schneier goes to sleep, he scans his computer for uploaded copies of _____.
  • Eliezer Yudkowsky updates _____ to fit his priors.
  • Eliezer Yudkowsky doesn't have a chin; under his beard is _____.
  • Never go in against _____ when _____ is on the line.
  • Reversed _____ is not _____.
  • You have no idea how big _____ is.
  • Why haven't I signed up for cryonics?
  • What am I optimizing for?
  • The Quantified Self people have finally figured out how to measure _____.
  • You can't fit a sheep into a _____.
  • Make beliefs pay rent in _____.
  • Why did my comment get downvoted?
  • "You make a compelling argument for _____."
  • "My model of you likes _____."
  • "I can handle _____, because I am already enduring it."

White cards:

  • Eliezer Yudkowsky
  • Friendly AI
  • Unfriendly AI
  • Lukeprog's love life
  • The New York meetup group
  • Updating
  • Ugh fields
  • Ben Goertzel
  • Guessing the teacher's password
  • Confidence intervals
  • Signaling
  • Polyamory
  • The paleo diet
  • Asperger's syndrome
  • Ephemerisle
  • Burning Man
  • Grass-fed butter
  • Dropping acid
  • Timeless Decision Theory
  • Pascal's mugging
  • The Sequences
  • Deathism
  • Alcor
  • The Singularity Institute for Artificial Intelligence
  • Quirrellmort
  • Dark Arts
  • Tenorman's family chili
  • Affective death spirals
  • Rejection therapy
  • The cult attractor
  • Akrasia
  • The Bayesian Conspiracy
  • Paperclips
  • The Copenhagen interpretation
  • Clippy
  • Shit Rationalists Say
  • Babyeaters
  • Superhappies
  • Aubrey de Grey's beard
  • Robin Hanson
  • The blind idiot god, Evolution
  • Getting downvoted on Less Wrong
  • Two-boxing
  • The obvious Schelling point
  • Negging
  • Peacocking
  • P-Zombies
  • Tit-for-Tat
  • Applause lights
  • Rare diseases in cute puppies
  • Rationalist fanfiction
  • Sunk costs
  • Vibram Fivefingers
  • RationalWiki
  • The Chaos Legion Marching Song
  • Poor epistemic hygiene
  • A sheep-counting machine
  • A horcrux
  • Getting timelessly physical
  • The Stanford Prison Experiment
  • A ridiculously complicated Zendo rule
  • Utils
  • Wireheading
  • My karma score
  • Wiggins
  • Ontologically basic mental entities
  • The invisible dragon in my garage
  • Meta-contrarianism
  • Mormon transhumanists
  • Nootropics
  • Quantum immortality
  • Quantum immorality
  • The least convenient possible world
  • Cards Against Rationality
  • Moldbuggery
  • The #1 reviewed Harry Potter / The Fountainhead crossover fanfic, "Howard Roark and the Prisoner of Altruism"
  • Low-hanging fruits
  • The set of all possible fetishes
  • Rationalist clopfic
  • The Library of Babel's porn collection
  • Counterfactual hugging
  • Acausal sex

Post your own!

EDIT, 2012-08-29: Several additions from the thread and elsewhere.
EDIT, 2012-12-25: This is licensed under the Creative Commons BY-NC-SA 2.0 license, because Cards Against Humanity is.

Self-modification, morality, and drugs

14 fubarobfusco 10 April 2011 12:02AM

No, not psychoactive drugs: allergy drugs.

This is my attempt to come to grips with the idea of self-modification. I'm interested to know of any flaws folks might spot in this analogy or reasoning.

Gandhi wouldn't take a pill that would make him want to kill people. That is to say, a person whose conscious conclusions agree with their moral impulses wouldn't self-modify in such a way that they no longer care about morally significant things. But what about morally insignificant things? Specifically, is willingness to self-modify about X a good guide to whether X is morally significant?

A person with untreated pollen allergies cares about pollen; they have to. In order to have a coherent thought without sneezing in the middle of it, they have to avoid inhaling pollen. They may even perceive pollen as a personal enemy, something that attacks them and makes them feel miserable. But they would gladly take a drug that makes them not care about pollen, by turning off or weakening their immune system's response to it. That's what allergy drugs are for.

But a sane person would not shut off their entire immune system, including responses to pathogens that are actually attacking their body. Even if giving themselves an immune deficiency would stop their allergies, a sane allergy sufferer wouldn't do it; they know that the immune system is there for a reason, to defend against actual attacks, even if their particular immune system is erroneously sensitive to pollen as well as to pathogens.

My job involves maintaining computer systems. Like other folks in this sort of job, my team uses an automated monitoring system that will send us an alert (by pager or SMS), waking us up at night if necessary, if something goes wrong with the systems. We want to receive significant alerts, and not receive false positives. We regularly modify the monitoring system to prevent false positives, because we don't like being woken up at night for no good reason. But we wouldn't want to turn off the monitoring system entirely: we actually want to receive true alerts, and we keep refining the monitoring to make those alerts more accurate and more timely, because we would like our systems to fail less often. We want to win, and false positives or negatives detract from winning.
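
To make that concrete, here is a toy sketch in Python of the difference between tuning a noisy alert and deleting it; the check name, thresholds, and sample values are invented for illustration and have nothing to do with our actual tooling. The point is only that the fix for false positives is to demand a more severe or more sustained failure before paging anyone, not to stop monitoring.

    # Hypothetical example: tuning an alert rule instead of deleting it.
    # Check names, thresholds, and sample values are made up for illustration.
    from dataclasses import dataclass

    @dataclass
    class Check:
        name: str
        threshold: float        # a sample above this value counts as a failure
        required_failures: int  # consecutive failing samples before we page anyone

        def should_page(self, samples: list[float]) -> bool:
            """Page only if the most recent `required_failures` samples all fail."""
            if len(samples) < self.required_failures:
                return False
            return all(s > self.threshold for s in samples[-self.required_failures:])

    # Pages on every momentary blip: lots of 3 a.m. false positives.
    noisy = Check("web_latency_ms", threshold=200, required_failures=1)

    # Tuned: pages only on a sustained, severe failure. Monitoring stays on.
    tuned = Check("web_latency_ms", threshold=500, required_failures=3)

    samples = [180, 620, 190, 210, 230, 240]
    print(noisy.should_page(samples))  # True: a single mild spike wakes someone up
    print(tuned.should_page(samples))  # False: nothing here is worth paging for
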

Similarly, there are times when we conclude that our moral impulses are incorrect: that they are firing off "bad! evil! sinful!" or "good! virtuous! beneficent!" alerts about things that are not actually bad or good, or failing to fire for things which are. Performing the requisite Bayesian update is quite difficult: training yourself to feel that donating to an ineffective charity is not at all praiseworthy, or that it can be morally preferable to work for money and donate it rather than to volunteer; altering the thoughts that come unbidden to mind when you think of eating meat, in accordance with a decision that vegetarianism is or is not morally preferable; and so on.

A sane allergy sufferer wants to update his or her immune system to make it stop having false positives, but doesn't want to turn it off entirely; and may want to upgrade its response sometimes, too. A sane system administrator wants to update his or her monitoring tools to make them stop having false positives, but doesn't want to turn them off entirely; and sometimes will program new alerts to avoid false negatives. There is a fact of the matter of whether a particular particle is innocuous pollen or a dangerous pathogen; there is a fact of the matter of whether a text message alert coincides with a down web server; and this fact of the matter explains exactly why we would or wouldn't want to alter our immune system or our servers' monitoring system.

The same may apply to our moral impulses: to decide that something is morally significant is, if we are consistent, equivalent to deciding both that we would not self-modify to avoid noticing that significance, and that we would self-modify to notice it more reliably.

EDIT: Thanks for the responses. After mulling this over and consulting the Sequences, it seems that the kind of self-modification I'm talking about above is summed up by the training of System 1 by System 2 discussed waaaaay back here. Self-modification for FAI purposes is a level above this. I am only an egg.