The Irrationality Game

38 Will_Newsome 03 October 2010 02:43AM

Please read the post before voting on the comments, as this is a game where voting works differently.

Warning: the comments section of this post will look odd. The most reasonable comments will have lots of negative karma. Do not be alarmed, it's all part of the plan. In order to participate in this game you should disable any viewing threshold for negatively voted comments.

Here's an irrationalist game meant to quickly collect a pool of controversial ideas for people to debate and assess. It kinda relies on people being honest and not being nitpickers, but it might be fun.

Write a comment reply to this post describing a belief you think has a reasonable chance of being true relative to the beliefs of other Less Wrong folk. Jot down a proposition and a rough probability estimate or qualitative description, like 'fairly confident'.

Example (not my true belief): "The U.S. government was directly responsible for financing the September 11th terrorist attacks. Very confident. (~95%)."

If you post a belief, you have to vote on the beliefs of all other comments. Voting works like this: if you basically agree with the comment, vote the comment down. If you basically disagree with the comment, vote the comment up. What 'basically' means here is intuitive; instead of using a precise mathy scoring system, just make a guess. In my view, if their stated probability is 99.9% and your degree of belief is 90%, that merits an upvote: it's a pretty big difference of opinion. If they're at 99.9% and you're at 99.5%, it could go either way. If you're genuinely unsure whether or not you basically agree with them, you can pass on voting (but try not to). Vote up if you think they are either overconfident or underconfident in their belief: any disagreement is valid disagreement.
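The voting rule above can be sketched as a rough heuristic. This is my own illustration, not part of the game: the function name and the 0.05 threshold are arbitrary stand-ins for the intuitive judgment the post asks for.

```python
def vote(their_prob: float, your_prob: float, threshold: float = 0.05) -> str:
    """Sketch of the game's voting rule: upvote disagreement,
    downvote agreement. The threshold is a placeholder for the
    'basically agree' intuition the post describes."""
    gap = abs(their_prob - your_prob)
    if gap > threshold:
        return "upvote"    # you basically disagree
    return "downvote"      # you basically agree

# The post's examples: 99.9% vs. 90% is a big gap (upvote);
# 99.9% vs. 99.5% is a small raw gap and could go either way.
```

One caveat on the sketch: raw probability differences understate disagreement near the extremes (99.9% vs. 99.5% is a fivefold difference in odds against), so a log-odds distance would arguably match intuitions better.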

That's the spirit of the game, but some more qualifications and rules follow.

continue reading »

Mixed strategy Nash equilibrium

40 Meni_Rosenfeld 16 October 2010 04:00PM

Inspired by: Swords and Armor: A Game Theory Thought Experiment

Recently, nick012000 has posted Swords and Armor: A Game Theory Thought Experiment. I was disappointed to see many confused replies to this post, even after a complete solution was given by Steve_Rayhawk. I thought someone really ought to post an explanation about mixed strategy Nash equilibria. Then I figured that that someone may as well be me.

I will assume readers are familiar with the concepts of a game (a setting with several players, each having a choice of strategies to take and a payoff which depends on the strategies taken by all players) and of a Nash equilibrium (an "optimal" assignment of strategies such that, if everyone plays their assigned strategy, no player will have a reason to switch to a different strategy). Some games, like the famous prisoner's dilemma, have a Nash equilibrium in so-called "pure strategies" (as opposed to mixed strategies, to be introduced later). Consider, however, the following variant of the matching pennies game:

Player 1 is a general leading an attacking army, and player 2 is the general of the defending army. The attacker can attack from the east or west, and the defender can concentrate his defenses on the east or west. By the time each side learns the strategy of its enemy, it is too late to switch strategies. Attacking where the defenses aren't concentrated gives a great advantage; additionally, due to unspecified tactical circumstances, attacking from the east gives a slight advantage. The sides have no interest in cooperating, so this is a zero-sum game (what one side wins, the other loses).

This elaborate description can be summarized in the following payoff matrix (these payoffs are for the attacker; the defender's payoffs are their negatives):

            2: East   2: West
  1: East     -1         2
  1: West      1        -2
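The mixed-strategy equilibrium of a matrix like this falls out of the standard indifference argument: each side randomizes so that the other side is indifferent between its two options. A minimal sketch (the function name and parameterization are mine, and it assumes the game has no saddle point, i.e. no pure-strategy equilibrium):

```python
def solve_2x2_zero_sum(a, b, c, d):
    """Mixed-strategy equilibrium of a 2x2 zero-sum game with
    row-player (attacker) payoffs:
                col 1   col 2
        row 1     a       b
        row 2     c       d
    Returns (p, q, value): p = P(row plays row 1),
    q = P(column plays col 1), value = expected row payoff."""
    denom = a - b - c + d
    # Row mixes p so the column player is indifferent between columns.
    p = (d - c) / denom
    # Column mixes q so the row player is indifferent between rows.
    q = (d - b) / denom
    value = a*p*q + b*p*(1-q) + c*(1-p)*q + d*(1-p)*(1-q)
    return p, q, value

# The attack/defense game above: a=-1, b=2, c=1, d=-2.
p, q, v = solve_2x2_zero_sum(-1, 2, 1, -2)
# p = 1/2 (attack east half the time), q = 2/3 (defend east two
# thirds of the time), and the value of the game is 0.
```

Note that the attacker's slight advantage in the east shows up not in his own mix (still 50/50) but in the defender's: the defender must guard the east more often to neutralize it.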

continue reading »

Compartmentalization in epistemic and instrumental rationality

77 AnnaSalamon 17 September 2010 07:02AM

Related to: Humans are not automatically strategic, The mystery of the haunted rationalist, Striving to accept, Taking ideas seriously

I argue that many techniques for epistemic rationality, as taught on LW, amount to techniques for reducing compartmentalization.  I argue further that when these same techniques are extended to a larger portion of the mind, they boost instrumental, as well as epistemic, rationality.

Imagine trying to design an intelligent mind.

One problem you’d face is designing its goal.  

Every time you designed a goal-indicator, the mind would increase action patterns that hit that indicator[1].  Amongst these reinforced actions would be “wireheading patterns” that fooled the indicator but did not hit your intended goal.  For example, if your creature gains reward from internal indicators of status, it will increase those indicators -- including by such methods as surrounding itself with people who agree with it, or convincing itself that it understood important matters others had missed.  It would be hard-wired to act as though “believing makes it so”. 

A second problem you’d face is propagating evidence.  Whenever your creature encounters some new evidence E, you’ll want it to update its model of  “events like E”.  But how do you tell which events are “like E”? The soup of hypotheses, intuition-fragments, and other pieces of world-model is too large, and its processing too limited, to update each belief after each piece of evidence.  Even absent wireheading-driven tendencies to keep rewarding beliefs isolated from threatening evidence, you’ll probably have trouble with accidental compartmentalization (where the creature doesn’t update relevant beliefs simply because your heuristics for what to update were imperfect).

Evolution, AFAICT, faced just these problems.  The result is a familiar set of rationality gaps:

continue reading »

Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality

105 patrissimo 14 September 2010 04:17PM

Introduction

Less Wrong is explicitly intended to help people become more rational.  Eliezer has posted that rationality means epistemic rationality (having & updating a correct model of the world) and instrumental rationality (the art of achieving your goals effectively).  Both are fundamentally tied to the real world and our performance in it - they are about ability in practice, not theoretical knowledge (except inasmuch as that knowledge helps ability in practice).  Unfortunately, I think Less Wrong is a failure at instilling abilities-in-practice, and designed in a way that detracts from people's real-world performance.

It will take some time, and it may be unpleasant to hear, but I'm going to try to explain what LW is, why that's bad, and sketch what a tool to actually help people become more rational would look like.

(This post was motivated by Anna Salamon's Humans are not automatically strategic and the response; more detailed background in footnote [1].)

Update / Clarification in response to some comments: This post is based on the assumption that a) the creators of Less Wrong wish Less Wrong to result in people becoming better at achieving their goals (instrumental rationality, aka "efficient productivity"), and b) some (perhaps many) readers read it towards that goal.  It is this that I think is self-deception.  I do not dispute that LW can be used in a positive way (read during fun time instead of the NYT or funny pictures on Digg), or that it has positive effects (exposing people to important ideas they might not see elsewhere).  I merely dispute that reading fun things on the internet can help people become more instrumentally rational.  Additionally, I think instrumental rationality is really important and could be a huge benefit to people's lives (in fact, is by definition!), and so a community value that "deliberate practice towards self-improvement" is more valuable and more important than "reading entertaining ideas on the internet" would be of immense value to LW as a community - while greatly decreasing the importance of LW as a website.

Why Less Wrong is not an effective route to increasing rationality.

Definition:

Work: time spent acting in an instrumentally rational manner, ie forcing your attention towards the tasks you have consciously determined will be the most effective at achieving your consciously chosen goals, rather than allowing your mind to drift to what is shiny and fun.

By definition, Work is what (instrumental) rationalists wish to do more of.  A corollary is that Work is also what is required in order to increase one's capacity to Work.  This must be true by the definition of instrumental rationality - if it's the most efficient way to achieve one's goals, and if one's goal is to increase one's instrumental rationality, doing so is most efficiently done by being instrumentally rational about it. [2]

That was almost circular, so to add meat, you'll notice in the definition an embedded assumption that the "hard" part of Work is directing attention - forcing yourself to do what you know you ought to instead of what is fun & easy.  (And to a lesser degree, determining your goals and the most effective tasks to achieve them).  This assumption may not hold true for everyone, but with the amount of discussion of "Akrasia" on LW, the general drift of writing by smart people about productivity (Paul Graham: Addiction, Distraction, Merlin Mann: Time & Attention), and the common themes in the numerous productivity/self-help books I've read, I think it's fair to say that identifying the goals and tasks that matter and getting yourself to do them is what most humans fundamentally struggle with when it comes to instrumental rationality.

Figuring out goals is fairly personal, often subjective, and can be difficult.  I definitely think the deep philosophical elements of Less Wrong and its contributions to epistemic rationality [3] are useful for this, but (like psychedelics) the benefit comes from small occasional doses of the good stuff.  Goals should be re-examined regularly but infrequently (roughly yearly, and at major life forks).  An annual retreat with a mix of close friends and distant-but-respected acquaintances (Burning Man, perhaps) will do the trick - reading a regularly updated blog is way overkill.

And figuring out tasks, once you turn your attention to it, is pretty easy.  Once you have explicit goals, just consciously and continuously examining whether your actions have been effective at achieving those goals will get you way above the average smart human at correctly choosing the most effective tasks.  The big deal here for many (most?) of us, is the conscious direction of our attention.

What is the enemy of consciously directed attention?  It is shiny distraction.  And what is Less Wrong?  It is a blog, a succession of short fun posts with comments, most likely read when people wish to distract or entertain themselves, and tuned for producing shiny ideas which successfully distract and entertain people.  As Merlin Mann says: "Joining a Facebook group about creative productivity is like buying a chair about jogging".  Well, reading a blog to overcome akrasia IS joining a Facebook group about creative productivity.  It's the opposite of this classic piece of advice.

continue reading »

Seven Shiny Stories

104 Alicorn 01 June 2010 12:43AM

It has come to my attention that the contents of the luminosity sequence were too abstract, to the point where explicitly fictional stories illustrating the use of the concepts would be helpful.  Accordingly, there follow some such stories.

1. Words (an idea from Let There Be Light, in which I advise harvesting priors about yourself from outside feedback)

Maria likes compliments.  She loves compliments.  And when she doesn't get enough of them to suit her, she starts fishing, asking plaintive questions, making doe eyes to draw them out.  It's starting to annoy people.  Lately, instead of compliments, she's getting barbs and criticism and snappish remarks.  It hurts - and it seems to hurt her more than it hurts others when they hear similar things.  Maria wants to know what it is about her that would explain all of this.  So she starts taking personality tests and looking for different styles of maintaining and thinking about relationships, looking for something that describes her.  Eventually, she runs into a concept called "love languages" and realizes at once that she's a "words" person.  Her friends aren't trying to hurt her - they don't realize how much she thrives on compliments, or how deeply insults can cut when they're dealing with someone who transmits affection verbally.  Armed with this concept, she has a lens through which to interpret patterns of her own behavior; she also has a way to explain herself to her loved ones and get the wordy boosts she needs.

2. Widgets (an idea from The ABC's of Luminosity, in which I explain the value of correlating affect, behavior, and circumstance)

Tony's performance at work is suffering.  Not every day, but most days, he's too drained and distracted to perform the tasks that go into making widgets.  He's in serious danger of falling behind his widget quota and needs to figure out why.  Having just read a fascinating and brilliantly written post on Less Wrong about luminosity, he decides to keep track of where he is and what he's doing when he does and doesn't feel the drainedness.  After a week, he's got a fairly robust correlation: he feels worst on days when he doesn't eat breakfast, which reliably occurs when he's stayed up too late, hit the snooze button four times, and had to dash out the door.  Awkwardly enough, having been distracted all day tends to make him work more slowly at making widgets, which makes him less physically exhausted by the time he gets home and enables him to stay up later.  To deal with that, he starts going for long runs on days when his work hasn't been very tiring, and pops melatonin; he easily drops off to sleep when his head hits the pillow at a reasonable hour, gets sounder sleep, scarfs down a bowl of Cheerios, and arrives at the widget factory energized and focused.

continue reading »

Forager Anthropology

11 WrongBot 28 July 2010 05:48AM

(This is the second post in a short sequence discussing evidence and arguments presented by Christopher Ryan and Cacilda Jethá's Sex at Dawn, inspired by the spirit of Kaj_Sotala's recent discussion of What Intelligence Tests Miss. It covers Part II: Lust in Paradise and Part III: The Way We Weren't.)

Forager anthropology is a discipline that is easy to abuse. It relies on unreliable first-hand observations of easily misunderstood cultures that are frequently influenced by the presence of modern observers. These cultures are often exterminated or assimilated within decades of their discovery, making it difficult to confirm controversial claims and discoveries. But modern-day foraging societies are the most direct source of evidence we have about our pre-agricultural ancestors; in many ways, they are agriculture's control group, living in conditions substantially similar to the ones under which our species evolved. The standard narrative of human sexual evolution ignores or manipulates the findings of forager anthropology to support its claims, and this is no doubt responsible for much of its confused support.

Steven Pinker is one of the most prominent and well-respected advocates of the standard narrative, both on Less Wrong and elsewhere. Eliezer has referenced him as an authority on evolutionary psychology. One commenter on the first post in this series claimed that Pinker is "the only mainstream academic I'm aware of who visibly demonstrates the full suite of traditional rationalist virtues in essentially all of his writing." Another cited Pinker's claim that 20-60% of hunter-gatherer males were victims of lethal human violence ("murdered") as justification for a Malthusian view of human nature. 

That 20-60% number comes from a claim about war casualties in a 2007 TED talk Pinker gave on "the myth of violence", for which he drew upon several important findings in forager anthropology. (The talk is based on an argument presented in the third chapter of The Blank Slate; there is a text version of the talk available, but it omits the material on forager anthropology that Ryan and Jethá critique.)

At 2:45 in the video Pinker displays a slide which reads

Until 10,000 years ago, humans lived as hunter-gatherers, without permanent settlements or government.

He also points out that modern hunter-gatherers are our best evidence for drawing conclusions about those prehistoric hunter-gatherers; in both these statements he is in accordance with nearly universal historical, anthropological, and archaeological opinion. Pinker's next slide is a chart from The Blank Slate, originally based on the research of Lawrence Keeley. Sort of. It is labeled as "the percentage of male deaths due to warfare," with bars for eight hunter-gatherer societies that range from approximately 15-60%. The problem is that of these eight cultures, zero are migratory hunter-gatherers.

continue reading »

Against the standard narrative of human sexual evolution

7 WrongBot 23 July 2010 05:28AM

(This post is the beginning of a short sequence discussing evidence and arguments presented by Christopher Ryan and Cacilda Jethá's Sex at Dawn, inspired by the spirit of Kaj_Sotala's recent discussion of What Intelligence Tests Miss. It covers Part I: On the Origin of the Specious.)

Sex at Dawn: The Prehistoric Origins of Modern Sexuality was first brought to my attention by a rhapsodic mention in Dan Savage's advice column, and while it seemed quite relevant to my interests I am generally very skeptical of claims based on evolutionary psychology. I did eventually decide to pick up the book, primarily so that I could raid its bibliography for material for an upcoming post on jealousy management, and secondarily to test my vulnerability to confirmation bias. I succeeded in the first and failed in the second: Sex at Dawn is by leaps and bounds the best evolutionary psychology book I've read, largely because it provides copious evidence for its claims. [1] I mention the strength of my opinion as a disclaimer of sorts, so that careful readers may take the appropriate precautions.


The book's first section focuses on the current generally accepted explanation for human sexual evolution, which the authors call "the standard narrative." It's an explanation that should be quite familiar to regular LessWrong readers: men are attracted to fertile-appearing women and try to prevent them from having sex with other men so as to confirm the paternity of their offspring; women are attracted to men who seem like they will be good providers for their children and try to prevent them from forming intimate bonds with other women so as to maintain access to their resources.

continue reading »

'oy, girls on lw, want to get together some time?'

31 MBlume 02 October 2009 10:50AM

2:45:24 PM Katja Grace: The main thing that puts me off in online dating profiles is lack of ambition to save the world
2:45:35 PM Katja Grace: Or do anything much
2:48:03 PM Michael Blume: *nods*
2:48:07 PM Michael Blume: this is indeed a problem
2:57:55 PM Katja Grace: Maybe there is a dating site for smart ambitious nerds somewhere
2:58:25 PM Katja Grace: Need to set up lw extension perhaps
2:59:02 PM Michael Blume: haha, yes ^^
3:00:40 PM Katja Grace: Plenty of discussion on why few girls, how to get girls, nobody ever says 'oy, girls on lw, want to get together some time?'
3:01:14 PM Michael Blume: somebody really should say that
3:01:34 PM Michael Blume: hell, I'm tempted to just copy that IM into a top-level post and click 'submit'
3:01:48 PM Katja Grace: Haha dare you to

Room for rent in North Berkeley house

6 Kevin 13 July 2010 08:28PM

Hi Less Wrong. I am moving into a 5 bedroom house in North Berkeley with Mike Blume and Emil Gilliam. We have an extra bedroom available.

It's located in the Gourmet Ghetto neighborhood (because we can afford to eat at Chez Panisse when we aren't busy saving the world, right? I didn't think so) and is about 1/2 mile from the Downtown Berkeley and North Berkeley BART stations. From Downtown Berkeley to Downtown SF via BART, it is a painless 25-minute commute. The bedroom is unfurnished and available right now. Someone willing to commit to living there for one year is preferred, but we're willing to consider six-month or month-to-month leases.

I'm open to living with a wide range of people and tend to be extremely tolerant of people's quirks. I am not tolerant of drama, so I am open to living with anyone who will not bring any sort of unneeded conflict to my living space.

~$750/month+utilities. Easy street parking available.

Feel free to ask questions via email (kfischer @# gmail %# com) or in the comments here.

And before any of you pedants downvote me because "Less Wrong is not Craigslist", this is kind of like a year long Less Wrong meetup.

Bad reasons for a rationalist to lose

30 matt 18 May 2009 10:57PM

Reply to: Practical Advice Backed By Deep Theories

Inspired by what looks like a very damaging reticence to embrace and share brain hacks that might only work for some of us, but are not backed by Deep Theories. In support of tinkering with brain hacks and self experimentation where deep science and large trials are not available.

Eliezer has suggested that, before he will try a new anti-akrasia brain hack:

[…] the advice I need is from someone who reads up on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms - thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up.  And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas.

This doesn't look to me like an expected utility calculation, and I think it should. It looks like an attempt to justify why he can't be expected to win yet. It just may be deeply wrongheaded.

I submit that we don't "need" (emphasis in original) this stuff, it'd just be super cool if we could get it. We don't need to know that the next brain hack we try will work, and we don't need to know that it's general enough that it'll work for anyone who tries it; we just need the expected utility of a trial to be higher than that of the other things we could be spending that time on.

So… this isn't other-optimizing, it's a discussion of how to make decisions under uncertainty. What do all of us need to make a rational decision about which brain hacks to try?

  • We need a goal: Eliezer has suggested "I want to hear how I can overcome akrasia - how I can have more willpower, or get more done with less mental pain". I'd push cost in with something like "to reduce the personal costs of akrasia by more than the investment in trying and implementing brain hacks against it plus the expected profit on other activities I could undertake with that time".
  • We need some likelihood estimates:
    • Chance of a random brain hack working on first trial: ?, second trial: ?, third: ?
    • Chance of a random brain hack working on subsequent trials (after the third - the noise of mood, wakefulness, etc. is large, so subsequent trials surely have non-zero chance of working, but that chance will probably diminish): →0
    • Chance of a popular brain hack working on first (second, third) trial: ? (GTD is lauded by many, many people; your brother-in-law's homebrew brain hack is less well tried)
    • Chance that a brain hack that would work in the first three trials would seem deeply compelling on first being exposed to it: ?
      (can these books be judged by their covers? how does this chance vary with the type of exposure? what would you need to do to understand enough about a hack that would work to increase its chance of seeming deeply compelling on first exposure?)
    • Chance that a brain hack that would not work in the first three trials would seem deeply compelling on first being exposed to it: ? (false positives)
    • Chance of a brain hack recommended by someone in your circle working on first (second, third) trial: ?
    • Chance that someone else will read up "on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms - thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up.  And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas", all soon: ? (pretty small?)
    • What else do we need to know?
  • We need some time/cost estimates (these will vary greatly by proposed brain hack):
    • Time required to stage a personal experiment on the hack: ?
    • Time to review and understand the hack in sufficient detail to estimate the time required to stage a personal experiment?
    • What else do we need?

… and, what don't we need?

  • A way to reject the placebo effect - if it wins, use it. If it wins for you but wouldn't win for someone else, then they have a problem. We may choose to spend some effort helping others benefit from this hack, but that seems to be a different task - it's irrelevant to our goal.


How should we decide how much time to spend gathering data and generating estimates on matters such as this? How much is Eliezer setting himself up to lose, and how much am I missing the point?
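The core argument can be made concrete with a toy expected-utility calculation. All of the numbers below are hypothetical placeholders of my own, not estimates from the post; the point is only that the decision to trial a hack turns on this comparison, not on whether the hack is backed by a Deep Theory.

```python
def trial_eu(p_works, benefit, trial_cost, opportunity_cost):
    """Toy expected utility of trialing one brain hack, in arbitrary
    utility units. All inputs are placeholder guesses."""
    return p_works * benefit - trial_cost - opportunity_cost

# Hypothetical numbers: a 10% chance the hack sticks, worth 100 units
# if it does; the trial eats 2 units of time plus 3 units of foregone
# alternatives. Expected utility comes out positive (about 5), so the
# trial is worth running even with no theory of why the hack works.
eu = trial_eu(p_works=0.1, benefit=100, trial_cost=2, opportunity_cost=3)
```

On this framing, waiting for the well-grounded general account Eliezer describes is itself a choice with an expected cost: every cheap, positive-EU trial foregone in the meantime.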
