Dissenting Views
Occasionally, concerns are expressed from within Less Wrong that the community is too homogeneous. The observation of homogeneity is certainly true to the extent that the community shares common views that are minority views in the general population.
Maintaining a High Signal to Noise Ratio
The Less Wrong community shares an ideology that it calls 'rationality' (despite some attempts to rename it, this is what it is). A burgeoning ideology needs a great deal of faithful support in order to develop on its own terms. By this, I mean that the ideology needs a chance to define itself as it would define itself, without a lot of competing influences watering it down, adding impure elements, or distorting it. In other words, you want to cultivate a high signal-to-noise ratio.
For the most part, Less Wrong is remarkably successful at cultivating this high signal-to-noise ratio. A common ideology attracts people to Less Wrong, and then karma is used to maintain fidelity. It protects Less Wrong from the influence of outsiders who just don't "get it". It is also used to guide and teach people who are reasonably near the ideology but need some training in rationality. Thus, karma is awarded for views that align especially well with the ideology, align reasonably well, or align with one of the directions in which the ideology is reasonably evolving.
Bad reasons for a rationalist to lose
Reply to: Practical Advice Backed By Deep Theories
Inspired by what looks like a very damaging reticence to embrace and share brain hacks that might work for only some of us and are not backed by Deep Theories. This post argues in support of tinkering with brain hacks and self-experimentation where deep science and large trials are not available.
Eliezer has suggested that, before he will try a new anti-akrasia brain hack:
[…] the advice I need is from someone who reads up on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms - thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up. And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas.
This doesn't look to me like an expected utility calculation, and I think it should. It looks like an attempt to justify why he can't be expected to win yet. It just may be deeply wrongheaded.
I submit that we don't "need" (emphasis in original) this stuff, it'd just be super cool if we could get it. We don't need to know that the next brain hack we try will work, and we don't need to know that it's general enough that it'll work for anyone who tries it; we just need the expected utility of a trial to be higher than that of the other things we could be spending that time on.
So… this isn't other-optimizing, it's a discussion of how to make decisions under uncertainty. What do all of us need to make a rational decision about which brain hacks to try?
- We need a goal: Eliezer has suggested "I want to hear how I can overcome akrasia - how I can have more willpower, or get more done with less mental pain". I'd fold cost in with something like "to reduce the personal costs of akrasia by more than the investment in trying and implementing brain hacks against it, plus the expected profit on other activities I could undertake with that time".
- We need some likelihood estimates:
- Chance of a random brain hack working on first trial: ?, second trial: ?, third: ?
- Chance of a random brain hack working on subsequent trials (after the third - the noise of mood, wakefulness, etc. is large, so subsequent trials surely have non-zero chance of working, but that chance will probably diminish): →0
- Chance of a popular brain hack working on first (second, third) trial: ? (GTD is lauded by many, many people; your brother-in-law's homebrew brain hack is less well tried)
- Chance that a brain hack that would work in the first three trials would seem deeply compelling on first being exposed to it: ? (Can these books be judged by their covers? How does this chance vary with the type of exposure? What would you need to do to understand enough about a hack that would work to increase its chance of seeming deeply compelling on first exposure?)
- Chance that a brain hack that would not work in the first three trials would seem deeply compelling on first being exposed to it: ? (false positives)
- Chance of a brain hack recommended by someone in your circle working on first (second, third) trial: ?
- Chance that someone else will read up "on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms - thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up. And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas", all soon: ? (pretty small?)
- What else do we need to know?
- We need some time/cost estimates (these will vary greatly by proposed brain hack):
- Time required to stage a personal experiment on the hack: ?
- Time to review and understand the hack in sufficient detail to estimate the time required to stage a personal experiment: ?
- What else do we need?
… and, what don't we need?
- A way to reject the placebo effect - if it wins, use it. If it wins for you but wouldn't win for someone else, then they have a problem. We may choose to spend some effort helping others benefit from this hack, but that seems to be a different task - it's irrelevant to our goal.
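Once the blanks above are filled in, the decision itself is an ordinary expected-value comparison. Here is a minimal sketch; the function, the hour-equivalent accounting, and every number in it are invented placeholders, there only to show the shape of the calculation:

```python
# Rough expected-value sketch for deciding whether to trial a brain hack.
# All numbers are hypothetical placeholders - the point is the structure,
# not the values.

def ev_of_trial(p_work, hours_saved_per_week, weeks_of_benefit,
                trial_hours, review_hours):
    """Expected value, in hour-equivalents, of trialling one brain hack."""
    benefit_if_works = hours_saved_per_week * weeks_of_benefit
    cost = trial_hours + review_hours  # time to review the hack plus stage the trial
    return p_work * benefit_if_works - cost

# Made-up estimates: a widely-lauded hack (GTD-like) vs. a homebrew one.
popular = ev_of_trial(p_work=0.2, hours_saved_per_week=3,
                      weeks_of_benefit=52, trial_hours=10, review_hours=5)
homebrew = ev_of_trial(p_work=0.05, hours_saved_per_week=3,
                       weeks_of_benefit=52, trial_hours=10, review_hours=2)

print(f"popular:  {popular:+.1f} hour-equivalents")   # +16.2
print(f"homebrew: {homebrew:+.1f} hour-equivalents")  # -4.2
```

Note that under these made-up numbers a trial is worthwhile even though the hack probably won't work - which is the whole point: we don't need confidence that a hack will work, only a positive expected value relative to the competing uses of the same time.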
How should we decide how much time to spend gathering data and generating estimates on matters such as this? How much is Eliezer setting himself up to lose, and how much am I missing the point?
Akrasia and Shangri-La
Continuation of: The Unfinished Mystery of the Shangri-La Diet
My post about the Shangri-La Diet is there to make a point about akrasia. It's not just an excuse: people really are different and what works for one person sometimes doesn't work for another.
You can never be sure in the realm of the mind... but out in material foodland, I know that I was, in fact, drinking extra-light olive oil in the fashion prescribed. There is no reason within Roberts's theory why it shouldn't have worked.
Which just means Roberts's theory is incomplete. In the complicated mess that is the human metabolism there is something else that needs to be considered. (My guess would be "something to do with insulin".)
But if the actions needed to implement the Shangri-La Diet weren't so simple and verifiable... if some of them took place within the mind... if it took, not a metabolic trick, but willpower to get to that amazing state where dieting comes effortlessly and you can lose 30 pounds...
Then when the Shangri-La Diet didn't work, we unfortunate exceptions would get yelled at for doing it wrong and not having enough willpower. Roberts already seems to think that his diet ought to work for everyone; when someone says it's not working, Roberts tells them to drink more extra-light olive oil or try a slightly different variant of the diet, rather than saying, "This doesn't work for some people and I don't know why."
If the failure had occurred somewhere inside the dark recesses of my mind where it could be blamed on me, rather than within my metabolism...
The Unfinished Mystery of the Shangri-La Diet
Followup to: Beware of Other-Optimizing
Once upon a time, Seth Roberts (a professor of psychology at Berkeley, on the editorial board of Nutrition) noticed that he'd started losing weight while on vacation in Europe. For no apparent reason, he'd stopped wanting to eat.
Some time later, The Shangri-La Diet swept... the econoblogosphere, anyway. People including some respectable economists tried it, found that it actually seemed to work, and told their friends.
The Shangri-La Diet is unfortunately named - I would have called it "the set-point diet". And even worse, the actual procedure sounds like the wackiest fad diet imaginable:
Just drink two tablespoons of extra-light olive oil early in the morning... don't eat anything else for at least an hour afterward... and in a few days it will no longer take willpower to eat less; you'll feel so full all the time, you'll have to remind yourself to eat.
Why? I'm tempted to say "No one knows" just to see what kind of comments would show up, but that would be cheating. Roberts does have a theory motivating the diet, an elegant combination of pieces individually backed by previous experiments:
- Your metabolism has a set point, like the setting on a thermostat: when your weight is below the set point, you feel hungry; when your weight is above the set point, you feel full.
- But the set point is not a constant; it is raised and lowered by what you eat.
- This mechanism in turn seems to be regulated by a flavor-calorie association. (Possibly as a famine-storage mechanism that tries to store more resources when dense food sources are available.) If you eat something with flavor X, which is followed by your metabolism detecting a large source of calories, flavor X will (a) seem more appealing and taste better, and (b) will raise your set point whenever you eat items with flavor X.
- Your set point is always naturally dropping, but is raised by eating; usually these forces are in dynamic balance and your weight stays constant.
I'm not going to go into all the existing evidence that backs up each step of this theory, but the theory is very beautiful and elegant. The actual Shangri-La Diet is painfully simple by comparison: consume nearly tasteless extra-light olive oil, being careful not to associate it with any flavors before or after, to raise your body weight a little without raising your set point. Your body weight goes above your set point, and you stop feeling hungry. Then you eat less... and your weight drops... and your set point drops a little less than that... but then next morning it's time for your next dose of extra-light olive oil, which once again puts your (decreased) weight a bit above the set point. The regular dose of almost flavorless calories tilts the dynamic balance downward. That's the theory.
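The dynamic-balance story can be rendered as a toy simulation. To be clear, this is not Roberts's model: the drift rate, appetite response, and every other parameter below are invented, and the sketch only illustrates the claimed mechanism - flavorless calories raise weight without raising the set point, so appetite stays suppressed and the balance tilts downward.

```python
# Toy model of the set-point theory. All parameters are invented; the point
# is the mechanism, not the numbers.

def simulate(days, oil=0.1, drift=0.2, k=0.5):
    """Daily weight/set-point dynamics under a morning dose of flavorless oil (lbs)."""
    weight = set_point = 180.0
    for _ in range(days):
        set_point -= drift                # the set point naturally drifts down
        weight += oil                     # flavorless oil: weight up, set point unchanged
        eaten = k * (set_point - weight)  # appetite closes part of the gap;
        if eaten > 0:                     # it is negative while above the set point
            set_point += 0.5 * eaten      # flavored eating would raise the set point too
        weight += eaten
    return weight, set_point

w, sp = simulate(days=70)  # ten weeks
print(f"weight {w:.1f} lb, set point {sp:.1f} lb")
```

At these made-up settings the morning oil dose keeps weight pinned just above the ever-falling set point, so the hungry branch never fires and the model sheds about 1.4 lb/week - in the same ballpark as the reports below, though nothing about the numbers is meaningful beyond that.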
Many people, including some trustworthy econoblogger types, have reported losing 1-2 pounds/week by implementing the actual actions of the Shangri-La Diet, up to 30 pounds or even more in some cases. Without expending willpower.
I tried it. It didn't work for me.