Less Wrong NYC: Case Study of a Successful Rationalist Chapter

137 Cosmos 17 March 2011 08:12PM

It is perhaps the best-kept secret on Less Wrong that the New York City community has been meeting regularly for almost two years. For nearly a year we've been meeting weekly or more.  The rest of this post is going to be a practical guide to the benefits of group rationality, but first I will do something that is still too rare on this blog: make it clear how strongly I feel about this. Before this community took off, I did not believe that life could be this much fun or that I could possibly achieve such a sustained level of happiness.

Being rational in an irrational world is incredibly lonely. Every interaction reveals that our thought processes differ widely from those around us, and I had accepted that such a divide would always exist. For the first time in my life I have dozens of people with whom I can act freely and revel in the joy of rationality without any social concern - hell, it's actively rewarded! Until the NYC Less Wrong community formed, I didn't realize that I was a forager lost without a tribe...

Rationalists are still human, and we still have basic human needs. lukeprog summarizes the literature on subjective well-being, and the only factors that correlate to any meaningful degree are genetics, health, work satisfaction and social life - which effectively gets listed three separate times, as social activity, relationship satisfaction and religiosity. Rationalists tend to be less socially adept on average, and this can make it difficult to obtain the full rewards of social interaction. However, once rationalists learn to socialize with each other, they become more social towards everyone in general. This improves your life. A lot.

We are a group of friends to enjoy life alongside, while we try miracle fruit, dance ecstatically until sunrise, actively embarrass ourselves at karaoke, get lost in the woods, and jump off waterfalls.  Poker, paintball, parties, go-karts, concerts, camping... I have a community where I can live in truth and be accepted as I am, where I can give and receive feedback and get help becoming stronger. I am immensely grateful to have all of these people in my life, and I look forward to every moment I spend with them. To love and be loved is an unparalleled experience in this world, once you actually try it.

So, you ask, how did all of this get started...?

continue reading »

The danger of living a story - Singularity Tropes

23 patrissimo 14 November 2010 10:39PM

The following should sound familiar:

A thoughtful and observant young protagonist dedicates their life to fighting a great world-threatening evil unrecognized by almost all of their short-sighted elders (except perhaps for one encouraging mentor), gathering a rag-tag band of colorful misfits along the way and forging them into a team by accepting their idiosyncrasies and making the most of their unique abilities, winning over previously neutral allies, ignoring those who just don't get it, obtaining or creating artifacts of great power, growing and changing along the way to become more powerful, fulfilling the potential seen by their mentors/supporters/early adopters, while becoming more human (greater empathy, connection, humility) as they collect resources to prepare for their climactic battle against the inhuman enemy.

Hmm, sounds a bit like SIAI!  (And while I'm throwing stones, let me make it clear that I live in a glass house, since the same story could just as easily be adapted to TSI, my organization, as well as many others.)

This story is related to Robin Hanson's Abstract/Distant Future Bias:

Regarding distant futures, however, we’ll be too confident, focus too much on unlikely global events, rely too much on trends, theories, and loose abstractions, while neglecting details and variation.  We’ll assume the main events take place far away (e.g., space), and uniformly across large regions.  We’ll focus on untrustworthy consistently-behaving globally-organized social-others.  And we’ll neglect feasibility, taking chances to achieve core grand symbolic values, rather than ordinary muddled values.

More bluntly, we seem primed to confidently see history as an inevitable march toward a theory-predicted global conflict with an alien united them determined to oppose our core symbolic values, making infeasible overly-risky overconfident plans to oppose them.  We seem primed to neglect the value and prospect of trillions of quirky future creatures not fundamentally that different from us, focused on their simple day-to-day pleasures, mostly getting along peacefully in vastly-varied uncoordinated and hard-to-predict local cultures and life-styles. 

Living a story is potentially risky.  Tyler Cowen, for example, warns us to be cautious of stories: there are far fewer stories than there are real scenarios, so stories must oversimplify.  Our view of the future may be colored by a "fiction bias", which leads us to expect outcomes like those we see in movies (climactic battles, generally interesting events following a single plotline).  Thus stories threaten both epistemic rationality (we assume the real world is more like stories than it is) and instrumental rationality (we assume the best actions to effect real-world change are those which story heroes take).

Yet we'll tend to live stories anyway, because it is fun - it inspires supporters, allies, and protagonists.  The marketing for "we are an alliance to fight a great unrecognized evil" can be quite emotionally evocative, including in our own self-narrative, which means we'll be tempted to buy into a story whether or not it is correct.  So while living a fun story is a utility benefit, it also means that story causes are likely to be over-represented among all causes, as they are memetically attractive.  This is especially true for the story that there is a risk of great, world-threatening evil, since those who believe it are inclined to shout it from the rooftops, while those who don't believe it get on with their lives.  (There are, of course, biases in the other direction as well.)

Which is not to say that all aspects of the story are wrong - advancing an original idea to greater prominence (scaling) will naturally lead to some of these tropes - most people disbelieving, a few allies, winning more people over time, eventual recognition as a visionary.  And Michael Vassar suggests that some of the tropes arise as a result of "trying to rise in station beyond the level that their society channels them towards".  For these aspects, the tropes may contain evolved wisdom about how our ancestors negotiated similar situations.

And whether or not a potential protagonist believes in this wisdom, the fact that others do will surely affect marketing decisions.  If Harry wishes to not be seen as Dark, he must care what others see as the signs of a Dark Wizard, whether or not he agrees with them.  If potential collaborators have internalized these stories, skillful protagonists will invoke them in recruiting, converting, and team-building.  Yet the space of story actions is constrained, and the best strategy may sometimes lie far outside them.

Since this is not a story, we are left with no simple answer.  Many aspects of stories are false but resonate with us, and we must guard against them lest they contaminate our rationality.  Others contain wisdom about how those like us have navigated similar situations in the past - we must decide whether the similarities are real or superficial.  The most universal stories are likely to be the most effective at manipulating others, which any protagonist must do to amplify their own efforts in fighting for their cause.  Some of these universal stories are true and generally applicable, like scaling techniques, yet the set of common tropes seems far too detailed to reflect universal truths rather than the arbitrary biases of humanity and our evolutionary history.

May you live happily ever after (vanquishing your inhuman enemy with your team of true friends, bonded through a cause despite superficial dissimilarities).

The End.

Call for Volunteers: Rationalists with Non-Traditional Skills

22 Jasen 28 October 2010 09:38PM

SIAI's Fellows Program is looking for rationalists with skills.  More specifically, we're looking for rationalists with skills outside our usual cluster who are interested in donating their time by teaching those skills and communicating the mindsets that lead to their development.  If possible, we'd like to learn from specialists who "speak our language," or at least are practiced in resolving confusion and disagreement using reason and evidence.  Broadly, we're interested in developing practical intuitions, doing practical things, and developing awareness and culture around detail-intensive technical subskills of emotional self-awareness and social fluency.  More specifically:    

continue reading »

Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality

105 patrissimo 14 September 2010 04:17PM

Introduction

Less Wrong is explicitly intended to help people become more rational.  Eliezer has posted that rationality means epistemic rationality (having & updating a correct model of the world) and instrumental rationality (the art of achieving your goals effectively).  Both are fundamentally tied to the real world and our performance in it - they are about ability in practice, not theoretical knowledge (except inasmuch as that knowledge helps ability in practice).  Unfortunately, I think Less Wrong is a failure at instilling abilities-in-practice, and is designed in a way that detracts from people's real-world performance.

It will take some time, and it may be unpleasant to hear, but I'm going to try to explain what LW is, why that's bad, and sketch what a tool to actually help people become more rational would look like.

(This post was motivated by Anna Salamon's Humans are not automatically strategic and the responses; more detailed background in footnote [1].)

Update / Clarification in response to some comments: This post is based on the assumption that a) the creators of Less Wrong wish Less Wrong to result in people becoming better at achieving their goals (instrumental rationality, aka "efficient productivity"), and b) some (perhaps many) readers read it towards that goal.  It is this that I think is self-deception.  I do not dispute that LW can be used in a positive way (read during fun time instead of the NYT or funny pictures on Digg), or that it has positive effects (exposing people to important ideas they might not see elsewhere).  I merely dispute that reading fun things on the internet can help people become more instrumentally rational.  Additionally, I think instrumental rationality is really important and could be a huge benefit to people's lives (in fact, is by definition!), and so a community value that "deliberate practice towards self-improvement" is more important than "reading entertaining ideas on the internet" would be of immense value to LW as a community - while greatly decreasing the importance of LW as a website.

Why Less Wrong is not an effective route to increasing rationality.

Definition:

Work: time spent acting in an instrumentally rational manner, ie forcing your attention towards the tasks you have consciously determined will be the most effective at achieving your consciously chosen goals, rather than allowing your mind to drift to what is shiny and fun.

By definition, Work is what (instrumental) rationalists wish to do more of.  A corollary is that Work is also what is required in order to increase one's capacity to Work.  This must be true by the definition of instrumental rationality - if it's the most efficient way to achieve one's goals, and if one's goal is to increase one's instrumental rationality, doing so is most efficiently done by being instrumentally rational about it. [2]

That was almost circular, so to add meat, you'll notice in the definition an embedded assumption that the "hard" part of Work is directing attention - forcing yourself to do what you know you ought to instead of what is fun & easy.  (And to a lesser degree, determining your goals and the most effective tasks to achieve them).  This assumption may not hold true for everyone, but with the amount of discussion of "Akrasia" on LW, the general drift of writing by smart people about productivity (Paul Graham: Addiction, Distraction, Merlin Mann: Time & Attention), and the common themes in the numerous productivity/self-help books I've read, I think it's fair to say that identifying the goals and tasks that matter and getting yourself to do them is what most humans fundamentally struggle with when it comes to instrumental rationality.

Figuring out goals is fairly personal, often subjective, and can be difficult.  I definitely think the deep philosophical elements of Less Wrong and its contributions to epistemic rationality [3] are useful to this, but (like psychedelics) the benefit comes from small occasional doses of the good stuff.  Goals should be re-examined regularly but infrequently (roughly yearly, and at major life forks).  An annual retreat with a mix of close friends and distant-but-respected acquaintances (Burning Man, perhaps) will do the trick - reading a regularly updated blog is way overkill.

And figuring out tasks, once you turn your attention to it, is pretty easy.  Once you have explicit goals, just consciously and continuously examining whether your actions have been effective at achieving those goals will get you way above the average smart human at correctly choosing the most effective tasks.  The big deal here for many (most?) of us, is the conscious direction of our attention.

What is the enemy of consciously directed attention?  It is shiny distraction.  And what is Less Wrong?  It is a blog, a succession of short fun posts with comments, most likely read when people wish to distract or entertain themselves, and tuned for producing shiny ideas which successfully distract and entertain people.  As Merlin Mann says: "Joining a Facebook group about creative productivity is like buying a chair about jogging".  Well, reading a blog to overcome akrasia IS joining a Facebook group about creative productivity.  It's the opposite of this classic piece of advice.

continue reading »

Applying Behavioral Psychology on Myself

53 John_Maxwell_IV 20 June 2010 06:25AM

In which I attempt to apply findings from behavioral psychology to my own life.

Behavioral Psychology Finding #1: Habituation

The psychological process of "extinction" or "habituation" occurs when a stimulus is administered repeatedly to an animal, causing the animal's response to gradually diminish.  You can imagine that if you were to eat your favorite food for breakfast every morning, it wouldn't be your favorite food after a while.  Habituation tends to happen the fastest when the following three conditions are met:

  • The stimulus is delivered frequently
  • The stimulus is delivered in small doses
  • The stimulus is delivered at regular intervals

Source is here.

Applied Habituation

I had a project I was working on that was really important to me, but whenever I started working on it I would get demoralized.  So I habituated myself to the project: I alternated 2 minutes of work with 2 minutes of sitting in the yard for about 20 minutes.  This worked.
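The alternation routine above is simple enough to write down. Here is a minimal sketch of a helper that builds such a work/rest schedule; the function name and defaults are my own invention, chosen to match the 2-minute/2-minute, 20-minute routine described:

```python
def habituation_schedule(work_s=120, rest_s=120, total_s=1200):
    """Build an alternating list of ("work"/"rest", seconds) blocks,
    as in the 2-min work / 2-min rest routine, up to total_s seconds."""
    schedule, elapsed, working = [], 0, True
    while elapsed < total_s:
        block = work_s if working else rest_s
        block = min(block, total_s - elapsed)  # trim the final block if needed
        schedule.append(("work" if working else "rest", block))
        elapsed += block
        working = not working
    return schedule

# 20 minutes of alternating 2-minute blocks: 10 blocks, 5 work and 5 rest
blocks = habituation_schedule()
```

Note how the defaults satisfy all three habituation conditions listed earlier: the stimulus (work) is frequent, small, and delivered at regular intervals.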

continue reading »

Anti-Akrasia Technique: Structured Procrastination

51 patrissimo 12 November 2009 07:35PM

This idea has been mentioned in several comments, but it deserves a top-level post.  From an ancient, ancient web article (1995!), Stanford philosophy professor John Perry writes:

I have been intending to write this essay for months. Why am I finally doing it? Because I finally found some uncommitted time? Wrong. I have papers to grade, textbook orders to fill out, an NSF proposal to referee, dissertation drafts to read. I am working on this essay as a way of not doing all of those things. This is the essence of what I call structured procrastination, an amazing strategy I have discovered that converts procrastinators into effective human beings, respected and admired for all that they can accomplish and the good use they make of time. All procrastinators put off things they have to do. Structured procrastination is the art of making this bad trait work for you. The key idea is that procrastinating does not mean doing absolutely nothing. Procrastinators seldom do absolutely nothing; they do marginally useful things, like gardening or sharpening pencils or making a diagram of how they will reorganize their files when they get around to it. Why does the procrastinator do these things? Because they are a way of not doing something more important. If all the procrastinator had left to do was to sharpen some pencils, no force on earth could get him to do it. However, the procrastinator can be motivated to do difficult, timely and important tasks, as long as these tasks are a way of not doing something more important.

The insightful observation that procrastinators fill their time with effort, not staring at the walls, gives rise to this form of akrasia aikido, where the urge to not do something is cleverly redirected into productivity.  If you can "waste time" by doing useful things, while feeling like you are avoiding doing the "real work", then you avoid depleting your limited supply of willpower (which happens when you force yourself to do something).

In other words, structured procrastination (SP) is an efficient use of this limited resource, because doing A in order to avoid doing B is easier than making yourself do A.  If A is something you want to get done, then the less willpower you need to do it, the more you will be able to accomplish.  This only works if A is something that you do want to get done - that's how SP differs from normal procrastination, of course.

continue reading »

Your Most Valuable Skill

28 Alicorn 27 September 2009 05:01PM

Knowledge is great: I suspect we can agree there.  Sadly, though, we can't guarantee ourselves infinite time in which to learn everything eventually, and in the meantime, there are plenty of situations where having irrelevant knowledge instead of more instrumentally useful knowledge can be decidedly suboptimal.  Therefore, there's good reason to work out what facts we'll need to deploy and give special priority to learning those facts.  There's nothing intrinsically more interesting or valuable about the knowledge that the capital of the United States is Washington, D.C. than there is about the knowledge that the capital of Bali is Denpasar, but unless you live or spend a lot of time in Indonesia, the latter knowledge will be less likely to come up.

It seems the same is true of procedural knowledge (with the quirk that it's easier to deliberately put yourself in situations where you use whatever procedural knowledge you have than it is to arrange to need to know the capital of Bali.)  If your procedural knowledge is useful, and also difficult to obtain or unpopular to practice or both, you might even turn it into a career (or save money that you would have spent hiring people who have).

Rationality is sort of the ur-procedure, but after a certain point - the point where you're no longer buying into supernaturalist superstition, begging for a Darwin Award, or falling for cheap scams - its marginal practical value diminishes.  Practicing rationality as an art is fun and there's some chance it'll yield a high return, but evolution (genetic and memetic) didn't do that bad of a job on us: we enter adulthood with an arsenal of heuristics that are mostly good enough.  A little patching of the worst leaks, some bailing of bilge that got in early on, and you have a serviceable brain-yacht.  (Sound of metaphor straining.)

continue reading »

Scope Insensitivity

45 Eliezer_Yudkowsky 14 May 2007 02:53AM

Once upon a time, three groups of subjects were asked how much they would pay to save 2000 / 20000 / 200000 migrating birds from drowning in uncovered oil ponds. The groups respectively answered $80, $78, and $88 [1]. This is scope insensitivity or scope neglect: the number of birds saved - the scope of the altruistic action - had little effect on willingness to pay.

Similar experiments showed that Toronto residents would pay little more to clean up all polluted lakes in Ontario than polluted lakes in a particular region of Ontario [2], or that residents of four western US states would pay only 28% more to protect all 57 wilderness areas in those states than to protect a single area [3].
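The bird-study numbers make the insensitivity vivid when expressed per unit saved; a quick back-of-the-envelope calculation:

```python
# Stated willingness to pay (USD) for each group size of birds saved [1]
wtp = {2_000: 80, 20_000: 78, 200_000: 88}

# Implied value per bird collapses by orders of magnitude as scope grows:
# roughly $0.04, $0.004, and $0.0004 per bird respectively
per_bird = {n: dollars / n for n, dollars in wtp.items()}
```

A scope-sensitive valuation would keep the per-bird figure roughly constant, so total willingness to pay would grow about 100-fold across these groups instead of staying flat.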

continue reading »

The Machine Learning Personality Test

25 PhilGoetz 04 August 2009 11:36PM

You've probably heard of the Myers-Briggs personality test, a classification system of 16 different personality types based on the writings of Carl Jung, a man who believed that his library books sometimes spontaneously exploded.  Its main advantage is that it manages to classify people without insulting them.  (This is accomplished by confounding dimensions:  Instead of measuring one property of personality along one dimension, which leads to some scores being considered better than others, you subtract a measurement along one desirable property of personality from a measurement along another desirable property of personality, and call the result one dimension.)
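The confounding mechanism can be illustrated with invented numbers (these are not scores from any real instrument): when a single axis is defined as the difference of two desirable traits, opposite scores on that axis both come from flattering profiles.

```python
# Two desirable traits, each scored 0-10 (numbers invented for illustration)
alice = {"thinking": 9, "feeling": 3}
bob   = {"thinking": 3, "feeling": 9}

def tf_axis(person):
    # One "dimension" built by subtracting one desirable trait from another,
    # so neither pole of the axis reads as a deficit
    return person["thinking"] - person["feeling"]

# Alice lands at +6 ("a Thinker"), Bob at -6 ("a Feeler");
# both labels describe a strength, and the axis hides who is low on what.
```

Contrast this with a single-trait axis like conscientiousness, where a low score is unambiguously a low score.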

You've probably also heard of the MMPI, a test designed by giving lots of questions to mental patients and seeing which ones were answered differently by people with particular diagnoses.  This is more like personality clustering for fault diagnosis than a personality test.  You may find it useful if you're crazy.  (One of the criticisms of this test is that religious people often test as psychotic: "Do you sometimes think someone else is directing your actions?  Is someone else trying to plan events in your life?"  Is that a bug, or a feature?)

You may have heard of the Personality Assessment Inventory, a test devised by listing things that psychotherapists thought were important, and trying to come up with questions to test them.

The Big 5 personality test is constructed in a well-motivated way, using factor analysis to try to discover from the data what the true dimensions of personality are.

But these all work from the top down, looking at human behavior (answers) and trying to uncover latent factors farther down.  I'm instead going to propose a personality system that starts from the very bottom of your hardware and leaves it to you to work your way up to the variables of interest:  the Machine Learning Personality Test ("MLPT").

continue reading »

She Blinded Me With Science

13 Jonathan_Graehl 04 August 2009 07:10PM

Scrutinize claims of scientific fact in support of opinion journalism.

Even with honest intent, it's difficult to apply science correctly, and it's rare that dishonest uses are punished. Citing a scientific result gives an easy patina of authority, which is rarely scratched by a casual reader. Without actually lying, the arguer may select from dozens of studies only the few with the strongest effect in their favor, when the overall body of evidence may point at no effect or even in the opposite direction. The reader only sees "statistically significant evidence for X". In some fields, the majority of published studies claim unjustified significance in order to gain publication, inciting these abuses.

Here are two recent examples:

Women are often better communicators because their brains are more networked for language. The majority of women are better at "mind-reading," than most men; they can read the emotions written on people's faces more quickly and easily, a talent jump-started by the vast swaths of neural real estate dedicated to processing emotions in the female brain.

- Susan Pinker, a psychologist, in the NYT's "Do Women Make Better Bosses?"

Twin studies and adoptive studies show that the overwhelming determinant of your weight is not your willpower; it's your genes. The heritability of weight is between .75 and .85. The heritability of height is between .9 and .95. And the older you are, the more heritable weight is.

- Megan McArdle, linked from the LW article The Obesity Myth

continue reading »
