
NYC Rationalist Community

17 Cosmos 29 March 2010 04:59PM

For those who don't yet know, there has been a thriving rationalist community in NYC since April 2009.  We've been holding weekly meetups for the past several months, and often have game nights, focused discussions, etc.  For those of you who live in the area and are not yet involved, I highly encourage you to join the following two groups:

This Meetup group is our public face, which draws new members to the meetups.

This Google Group was our original method of coordination, and we still use it for private communication.

I am posting this because several members have expressed interest in sharing an apartment/loft, or even multiple apartments on a floor of a building if there are enough people.  The core interest group is going to be meeting soon to figure out the logistics, so I wanted to extend this opportunity to any aspiring rationalists who either currently live in NYC or would like to.  If you are interested, please join the Google Group and let us know, so that we can include you in the planning process.  Additionally, if anyone has experience living with other rationalists, or more generally in a community setting, please feel free to share your knowledge with us so we can avoid any common pitfalls.

Bad reasons for a rationalist to lose

30 matt 18 May 2009 10:57PM

Reply to: Practical Advice Backed By Deep Theories

Inspired by what looks like a very damaging reticence to embrace and share brain hacks that might work for only some of us and are not backed by Deep Theories. In support of tinkering with brain hacks and self-experimentation where deep science and large trials are not available.

Eliezer has suggested that, before he will try a new anti-akrasia brain hack:

[…] the advice I need is from someone who reads up on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms - thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up.  And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas.

This doesn't look to me like an expected utility calculation, and I think it should. It looks like an attempt to justify why he can't be expected to win yet. It just may be deeply wrongheaded.

I submit that we don't "need" (emphasis in original) this stuff, it'd just be super cool if we could get it. We don't need to know that the next brain hack we try will work, and we don't need to know that it's general enough that it'll work for anyone who tries it; we just need the expected utility of a trial to be higher than that of the other things we could be spending that time on.

So… this isn't other-optimizing, it's a discussion of how to make decisions under uncertainty. What do all of us need to make a rational decision about which brain hacks to try?

  • We need a goal: Eliezer has suggested "I want to hear how I can overcome akrasia - how I can have more willpower, or get more done with less mental pain". I'd fold cost in with something like "to reduce the personal costs of akrasia by more than the investment in trying and implementing brain hacks against it plus the expected profit on other activities I could undertake with that time".
  • We need some likelihood estimates:
    • Chance of a random brain hack working on first trial: ?, second trial: ?, third: ?
    • Chance of a random brain hack working on subsequent trials (after the third - the noise of mood, wakefulness, etc. is large, so subsequent trials surely have non-zero chance of working, but that chance will probably diminish): →0
    • Chance of a popular brain hack working on first (second, third) trial: ? (GTD is lauded by many, many people; your brother-in-law's homebrew brain hack is less well tried)
    • Chance that a brain hack that would work in the first three trials would seem deeply compelling on first being exposed to it: ?
      (can these books be judged by their covers? how does this chance vary with the type of exposure? what would you need to do to understand enough about a hack that would work to increase its chance of seeming deeply compelling on first exposure?)
    • Chance that a brain hack that would not work in the first three trials would seem deeply compelling on first being exposed to it: ? (false positives)
    • Chance of a brain hack recommended by someone in your circle working on first (second, third) trial: ?
    • Chance that someone else will read up "on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms - thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up.  And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas", all soon: ? (pretty small?)
    • What else do we need to know?
  • We need some time/cost estimates (these will vary greatly by proposed brain hack):
    • Time required to stage a personal experiment on the hack: ?
    • Time to review and understand the hack in sufficient detail to estimate the time required to stage a personal experiment?
    • What else do we need?

… and, what don't we need?

  • A way to reject the placebo effect - if it wins, use it. If it wins for you but wouldn't win for someone else, then they have a problem. We may choose to spend some effort helping others benefit from this hack, but that seems to be a different task - it's irrelevant to our goal.
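To make the shape of that expected-utility comparison concrete, here is a minimal sketch in Python. It is not from the original post, and every number in it is a made-up placeholder standing in for the question marks in the lists above; the point is only that a trial is worth running whenever the chance-weighted payoff of a hack that sticks exceeds the value of the time the trial consumes.

```python
# Toy expected-value comparison for trying a brain hack (all inputs are
# hypothetical placeholders, not figures from the post).

def expected_value_of_trial(p_success, value_if_success, trial_cost_hours, alt_value_per_hour):
    """Expected net value (in hours of productive time) of running one trial."""
    expected_gain = p_success * value_if_success               # chance-weighted payoff if the hack sticks
    opportunity_cost = trial_cost_hours * alt_value_per_hour   # value of the time the trial consumes
    return expected_gain - opportunity_cost

# Hypothetical example: a popular hack (say, a GTD-style system) with a 20%
# chance of sticking for you, worth ~50 reclaimed hours if it does, costing
# 5 hours to review and test, versus alternatives worth ~1 hour of value per hour.
if __name__ == "__main__":
    ev = expected_value_of_trial(p_success=0.20,
                                 value_if_success=50.0,
                                 trial_cost_hours=5.0,
                                 alt_value_per_hour=1.0)
    print(f"Expected net value of the trial: {ev:+.1f} hours")  # prints +5.0 hours
```

With these made-up numbers the trial comes out ahead by about five hours, which is the whole argument: you don't need certainty or a Deep Theory, just a positive expected value.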


How should we decide how much time to spend gathering data and generating estimates on matters such as this? How much is Eliezer setting himself up to lose, and how much am I missing the point?

Two-Tier Rationalism

40 Alicorn 17 April 2009 07:44PM

Related to: Bayesians vs. Barbarians

Consequentialism1 is a catchall term for a vast number of specific ethical theories, the common thread of which is that they take goodness (usually of a state of affairs) to be the determining factor of rightness (usually of an action).  One family of consequentialisms that came to mind when it was suggested that I post about my Weird Forms of Utilitarianism class is called "Two-Tier Consequentialism", which I think can be made to connect interestingly to our rationalism goals on Less Wrong.  Here's a summary of two-tier consequentialism2.

(Some form of) consequentialism is correct and yields the right answer about what people ought to do.  But (this form of) consequentialism has many bad features:

  • It is unimplementable (because to use it correctly requires more calculation than anyone has time to do based on more information than anyone has time to gather and use).
  • It is "alienating" (because people trying to obey consequentialistic dictates find them very unlike the sorts of moral motivations they usually have, like "I want to be a nice person" or "so-and-so is my friend")3.
  • It is "integrity-busting" (because it can force you to consider alternatives that are unthinkably horrifying, if there is the possibility that they might lead to the "best" consequences).
  • It is "virtue-busting" (because it too often requires a deviation from a pattern of behavior that we consider to be an expression of good personal qualities that we would naturally hope and expect from good people).
  • It is prone to self-serving abuse (because it's easy, when calculating utilities, to "cook the books" and wind up with the outcome you already wanted being the "best" outcome).
  • It is "cooperation-busting" (because individuals don't tend to have an incentive to avoid free-riding when their own participation in a cooperative activity will neither make nor break the collective good).


To solve these problems, some consequentialist ethicists (my class focused on Railton and Hare) invented "two-tier consequentialism".  The basic idea is that because of all these bad features of (pick your favorite kind of) consequentialism, being a consequentialist has bad consequences, and therefore you shouldn't be one.  Instead, you should layer on top of your consequentialist thinking a second tier of moral principles called your "Practically Ideal Moral Code", which ought to have the following more convenient properties:

continue reading »

How Much Thought

37 jimrandomh 12 April 2009 04:56AM

We have many built-in heuristics, and most of them are trouble. The absurdity heuristic makes us reject reasonable things out of hand, so we should take the time to fully understand things that seem absurd at first. Some of our beliefs are not reasoned, but inherited; we should sniff those out and discard them. We repeat cached thoughts, so we should clear and rethink them. The affect heuristic is a tricky one; to work around it, we have to take the outside view. Everything we see and do primes us, so for really important decisions, we should never leave our rooms. We fail to attribute agency to things which should have it, like opinions, so if less drastic means don't work, we should modify English to make ourselves do so.

All of these articles bear the same message, the same message that can be easily found in the subtext of every book, treatise and example of rationality. Think more. Look for the third alternative. Challenge your deeply held beliefs. Drive through semantic stop signs. Prepare a line of retreat. If you don't understand, you should make an extraordinary effort. When you do find cause to change your beliefs, complete a checklist, run a script and follow a ritual. Recheck your answers, because thinking helps; more thought is always better.

The problem is, there's only a limited amount of time in each day. To spend more time thinking about something, we must spend less time on something else. The more we think about each topic, the fewer topics we have time to think about at all. Rationalism gives us a long list of extra things to think about, and angles to think about them from, without guidance on where or how much to apply them. This can make us overthink some things and disastrously underthink others. Our worst mistakes are not those where our thoughts went astray, but those we failed to think about at all. The time between when we learn rationality techniques and when we learn where to apply them is the valley.

continue reading »

Rationalists should beware rationalism

27 Kaj_Sotala 06 April 2009 02:16PM

Rationalism is most often characterized as an epistemological position. On this view, to be a rationalist requires at least one of the following: (1) a privileging of reason and intuition over sensation and experience, (2) regarding all or most ideas as innate rather than adventitious, (3) an emphasis on certain rather than merely probable knowledge as the goal of enquiry. -- The Stanford Encyclopedia of Philosophy on Continental Rationalism.

By now, there are some things which most Less Wrong readers will agree on. One of them is that beliefs must be fueled by evidence gathered from the environment. A belief must correlate with reality, and an important part of that is whether or not it can be tested - if a belief produces no anticipation of experience, it is nearly worthless. We can never try to confirm a theory, only test it.

And yet, we seem to have no problem coming up with theories that are either untestable or that we have no intention of testing, such as evolutionary psychological explanations for the underdog effect.

I'm being a bit unfair here. Those posts were well thought out and reasonably argued, and Roko's post actually made testable predictions. Yvain even made a good try at solving the puzzle, and when he couldn't, he reasonably concluded that he was stumped and asked for help. That sounds like a proper use of humility to me.

But the way that ev-psych explanations get rapidly manufactured and carelessly flung around on OB and LW has always been a bit of a pet peeve of mine, as that's exactly how bad ev-psych gets done. The best evolutionary psychology takes biological and evolutionary facts, applies them to humans, and then makes testable predictions, which it goes on to verify. It doesn't take existing behaviors and then try to come up with some nice-sounding rationalization for them, blind to whether or not the rationalization can be tested. Not every behavior needs to have an evolutionary explanation - it could have evolved via genetic drift, or be a pure side effect of some actual adaptation. If we set out by trying to find an evolutionary reason for some behavior, we are assuming from the start that there must be one, when it isn't a given that there is. And even a good theory need not explain every observation.

continue reading »

Science vs. art

4 PhilGoetz 16 March 2009 03:48PM

In the comments on Soulless Morality, a few people mentioned contributing to humanity's knowledge as an ultimate value.  I used to place a high value on this myself.

Now, though, I doubt whether making scientific advances would give me satisfaction on my deathbed.  All you can do in science is discover something before someone else discovers it.  (It's a lot like the race to the North Pole, which struck me as stupid when I was a child; yet I never transferred that judgement to scientific races.)  The short-term effects of your discovering something sooner might be good, and might not.  The long-term effects are likely to be to bring about apocalypse a little sooner.

Art is different.  There's not much downside to art.  There are some exceptions - romance novels perpetuate destructive views of love; 20th-century developments in orchestral music killed orchestral music; and Ender's Game has warped the psyches of many intelligent people.  But artists seldom worry that their art might destroy the world.  And if you write a great song, you've really contributed, because no one else would have written that song.

EDIT: The above is instrumental talk.  I find that, as I get older, science fails to satisfy me as much.  I don't assign it the high intrinsic value I used to.  But it's hard for me to tell whether this is really a change in intrinsic valuation, or the result of diminishing faith in its instrumental value.

I think that people who value rationality tend to place an unusually high value on knowledge.  Rationality requires knowledge; but that gives knowledge only instrumental value.  It doesn't (can't, by definition) justify giving knowledge intrinsic value.

What do the rest of you think?  Is there a strong correlation between rationalism, giving knowledge high intrinsic value, and giving art low intrinsic value?  If so, why?  And which would you rather be - a great scientist, or a great artist of some type?  (Pretend that great scientists and great artists are equally well-paid and sexually attractive.)

(I originally wrote this as over-valuing knowledge and under-valuing art, but Roko pointed out that that's incoherent.)

Under a theory that intrinsic and instrumental values are separate things, there's no reason why giving science a high instrumental value should correlate with giving it a high intrinsic value, or vice-versa.  Yet the people here seem to be doing one of those things.

My theory is that we can't keep intrinsic and instrumental values separate from each other.  We attach positive valences to both, and then operate on the positive valences.  Or, we can't distinguish our intrinsic values from our instrumental values by introspection.  (You may have noticed that I started using examples that refer to both intrinsic and instrumental values.  I don't think I can separate them, except retrospectively; and with about as much accuracy as a courtroom witness asked to testify about an event that took place 20 years ago.)

It's tempting to mention friends and family in here too, as another competing fundamental value.  But that would demand solving the relationship between personal values that you yourself take, and the valuations you would want a society or a singleton AI to make.  That's too much to take on here.  I want to talk just about intrinsic value given to science vs. art.

Oh, and saying science is an art is a dodge.  You then have to say whether you value the knowledge, or the artistic endeavor.  Also, ignore the possibility that your scientific work can make a safe Singularity.  That would be science as instrumental value.  I'm asking about science vs. art as intrinsic values.

EDIT:  An obvious explanation:  I was assuming that people here want to be rational as an instrumental value, and that we should find the distribution of intrinsic values to be the same as in the general populace.  But of course some people are drawn here because rationality is an intrinsic value to them, and this heavily biases the distribution of intrinsic values found here.
