Comment author: [deleted] 06 February 2011 01:10:44PM *  30 points [-]

Awesome article as always. I really like your recent high-quality posts, Luke.

A few additional notes.

1) I was already more or less aware of this research through the language learning community, mostly the Japanese one. For example, Khatzumoto has been advocating this for some time now; see this article for an explanation, or this trilogy in 9 parts for practical advice on how to fix it. (Because LW isn't really about learning languages, I'll just leave it at this.)

These techniques don't try to fix your own attitude (e.g. lowering your Impulsiveness, changing the Value you assign, or improving your optimism); instead they change the learning strategies so that they work regardless of these problems. So instead of learning how to tackle larger goals, they choose really tiny ones. Khatz, for example, strongly advocates timeboxes of 90 seconds or less, or changing the learning material to intrinsically fun stuff (manga instead of textbooks). This is something the traditional procrastination literature doesn't really address very much. It has helped me a lot, in addition to all the approaches you already described.

2) I strongly agree with this model, but I'm not sure that it covers all of procrastination. I have seen additional (albeit not nearly as common) failure modes where all 4 variables given seem to be just fine, but still nothing got done. For example, I know quite a few experienced meditators who were horrible procrastinators in certain domains (e.g. Shinzen Young; see part 2 of this interview). (This includes myself, too, but I'm not nearly as experienced as I wish I was.) Through strong concentration meditation, you can easily make any task fun by going into Flow at will (or even stronger states than that); through variants of metta meditation, failure becomes no big deal; and someone who can sit for an hour or more paying detailed attention to pain (physical or emotional) doesn't really have a problem with Impulsiveness per se. I'm not sure that these factors are really the main cause here.
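For readers who haven't seen the article, the four variables referenced here come from its motivation equation (temporal motivation theory). A minimal sketch, assuming the standard form of that equation; the numbers are arbitrary illustrations, not data:

```python
def motivation(expectancy, value, impulsiveness, delay):
    """Temporal-motivation-theory equation used in the article:
    Motivation = (Expectancy * Value) / (Impulsiveness * Delay)."""
    return (expectancy * value) / (impulsiveness * delay)

# All four variables can look healthy (high expectancy and value,
# low impulsiveness, short delay) and the model predicts strong motivation --
# the failure modes described above are cases where this prediction fails.
m = motivation(expectancy=0.9, value=0.8, impulsiveness=0.5, delay=2.0)
print(m)  # ~0.72
```

The point of the comment is precisely that this equation can output a high value while the person still does nothing, which suggests a missing factor rather than a miscalibrated one.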

To give a personal example (a fairly common one among advanced vipassana practitioners) of such a failure mode, there's all-consuming nihilism, where you still have high concentration, lots of pleasure and so on, but find every possible action intrinsically empty, so you can't be bothered at all to do anything. In the extreme case, people simply lie around all day, doing nothing. (This is distinct from depression in that pleasure and motivation still exist as sensations, but are rejected, although from the outside it looks very similar.) The fix to this is not to try to arbitrarily assign Value to activities again, as the equation would predict (because activities are already enjoyable, but that doesn't help at all), but instead to turn this nihilism on itself and realize that "wanting meaning" is just as meaningless as everything else. So in that case, more specific insight and the uprooting of beliefs is necessary, not a better technique. (PJ Eby provides plenty of practical examples and great fixes for related situations, imo.)

3) Rewards can backfire horribly if done wrong. I have tried to use operant conditioning for not-so-pleasant but necessary tasks. (Similar to taw's point system and strongly influenced by Don't Shoot The Dog.) The problem is that I came to replace my intrinsic (albeit limited) motivation entirely with an external one. Once I either found a way to game the system (get the same reinforcements in an easier fashion) or skipped the rewards for some reason, all my motivation was gone completely. (Gabe Zichermann, in his talk on gamification, gives another example of this replacement.) So I'd highly advise against using reward systems except maybe for short, one-off goals.

(However, I have successfully exploited this to stop behavior. Don't Shoot The Dog explains this in detail. Essentially, you practice the behavior you don't want to do, put it on a reward system, give it an explicit cue and then you don't give the cue, ever. It's a bit tricky and dark-artsy, but works.)

In response to comment by [deleted] on How to Beat Procrastination
Comment author: MondSemmel 30 December 2013 01:06:27PM 0 points [-]

(It's hardly relevant to the parent comment, but the Shinzen Young interview linked above is behind a paywall nowadays. But it can still be read here.)

In response to Why CFAR?
Comment author: MondSemmel 29 December 2013 04:31:42PM 23 points [-]

Donated 40€. I was going to donate to MIRI or CFAR, and chose CFAR due to this Facebook discussion.

Comment author: lukeprog 25 May 2012 03:18:42AM *  15 points [-]

Here is a skeptical reply to Baumeister on willpower:

This idea has a visceral, compelling feel to it - it does feel like I'm tired and drained after making decisions - and the idea has received a tremendous amount of attention from scientific and lay communities alike, perhaps partially explaining the duo of scientist/journalist on the dust jacket.

There are a number of problems with the model, and in this post, I'll talk about one of them: the model has been falsified a number of times, including by research performed in the lab of the proponents of the model.

Note that one's perception of cognitive resource depletion matters.

Comment author: MondSemmel 13 December 2013 06:05:22PM *  1 point [-]

For those interested, that blog post has two follow-up posts which criticize the part of the theory claiming that the resource depleted during ego-depletion is glucose: Glucose Is Not Willpower Fuel and Should You Consume Sugar to Improve Your Self-Control?.

With all that, ego-depletion theory really looks well beyond shaky. The author of these posts also claims we still have no actual working theory of fatigue.

Comment author: MondSemmel 02 December 2013 05:16:24PM 16 points [-]

Answered the survey, including the bonus questions. Took me 32 min altogether. Comments:

How many people are aware of their IQs? I'm from Germany and have never taken an IQ test. Is knowing about one's IQ common enough in the US that not making that question a bonus question made sense?

There were quite a few questions (e.g. estimate weekly internet consumption, estimate how often you read about ideas for self-improvement) which felt pointless - how could you possibly get accurate estimates from people, given how ambiguous these questions were, and how difficult these estimates are?

The money question: After I failed to come up with a unique passphrase, I chose cooperate and left the rest blank. This kind of stuff tempts my perfectionism, and that's a lose-lose situation for me.

Comment author: MondSemmel 03 October 2013 02:14:34PM *  0 points [-]

Hi there! I'm a German physics student at LMU Munich and might be interested in the meetup. It's just...I'm shy, have never been to any kind of meetup, and have no idea what to expect. So...what can I expect? How many people typically show up? How many showed up last time?

And I read there might be a Facebook group - if there is, how can I join it?

EDIT: Found and joined the Facebook group here. I'll participate.

In response to Fundamental Doubts
Comment author: MondSemmel 20 September 2013 03:17:42PM 0 points [-]

One of the benefits I've drawn from Less Wrong so far - via posts like The Simple Truth - is more solid foundations for my beliefs. Since I study physics, I wasn't particularly worried about philosophical arguments against the scientific method anymore - science seemed to work, after all - but of those doubts that remained, many more still got (apparently) dispelled or dissolved.

That said, I never had doubts that fundamental. Could anybody really live that way? I don't have a coherent mental model for such a situation. Take Mad-Eye Moody in HPMoR with his constant paranoia, as in "Unless, of course, that's what they want you to think." - could such a character really function in the real world? Could they eat food without starving themselves to death out of paranoia? If you are paranoid about both X -> Y and X -> ~Y, how do you decide whether to do Y or ~Y? (It seems more plausible to me that Moody doesn't actually exhibit doubts that fundamental, than that he or anyone else could function properly despite those doubts.)

Comment author: hackerkiba 23 August 2013 02:02:09AM *  1 point [-]

As much as I like reading the sequences, I am skeptical about their utility in increasing rationality; or rather, the rationality increases in the Less Wrong community are not measured or quantified scientifically.

Comment author: MondSemmel 23 August 2013 07:49:49AM 0 points [-]

Thanks for mentioning this. Many posts in the sequences I've read so far, especially those concerning biases, seemed interesting, but not necessarily useful: I don't really see how to apply that knowledge to my own life. And when debiasing techniques are suggested, they often sound prohibitively expensive in terms of willpower. That said, I've also read quite a few posts of whose eventual usefulness I am reasonably confident. Off the top of my head, the sequence Joy in the Merely Real seemed really beneficial to me - if only because it gave me a strong argument to read more textbooks.

Another Anki deck for Less Wrong content

14 MondSemmel 22 August 2013 07:31PM

Anki decks of Less Wrong content have been shared here before. However, they felt a bit huge (one deck was >1500 cards) and/or not helpful to me. As I go through the sequences, I create Anki cards, and I've decided they are at a point where I can share them. Maybe someone else will benefit from them.

Current content: The deck currently consists of 186 Anki cards (82 Q&A, 104 cloze deletion), covering the following Less Wrong sequences: The Map and the Territory, Mysterious Answers to Mysterious Questions, How to Actually Change Your Mind, A Human's Guide to Words, and Reductionism.
All cards contain an extra field for their source, usually 1-2 Less Wrong posts, rarely a link to Wikipedia. Some mathy cards use LaTeX. I don't know what happens if you don't have LaTeX installed. Though if this is a problem, I think I can convert the LaTeX code to images with an Anki plugin.

Important caveats:

  1. My cards tend to have more context than those I've seen in most other decks, to the point that one might consider them overloaded with information. That's partly due to personal preference, and partly because I need as much context as possible so I memorize more than just a teacher's password.
  2. In contrast to previously shared Anki decks of Less Wrong content, I do not aim to make this deck comprehensive. Rather, I create cards for content which I understood, which seems suitable for memorization, and which seemed particularly useful to me. Conversely, I did not create cards when I couldn't think of a way to memorize something, or when I did not understand (the usefulness of) something. (For instance, Original Seeing and Priming and Contamination did not work for me.)
  3. I've tried a few shared decks so far, and everybody seems to create cards differently. So I'm not sure to which extent this deck can be useful to anyone who isn't me.

Open question: I'm still not sure to which extent I'm memorizing internalized and understood knowledge with these cards, and to which extent they are just fake explanations or attempts to guess at passwords.

And a final disclaimer: The content is mostly taken verbatim from Yudkowsky's sequences, though I've often edited the text so it fit better as an Anki card. I checked the cards thoroughly before making the deck public, but any remaining errors are mine.

I'm thankful for suggestions and other feedback.

Comment author: gothgirl420666 11 August 2013 05:06:19AM *  2 points [-]

Honestly, I feel like if Eliezer had left out any mention of the math of Bayes' Theorem from the sequences, I would be no worse off. The seven statements you wrote seem fairly self-evident by themselves. I don't feel like I need to read that P(A|B) > P(A) or whatever to internalize them. (But perhaps certain people are highly mathematical thinkers for whom the formal epistemology really helps?)

Lately I kind of feel like rationality essentially comes down to two things:

  1. Recognizing that as a rule you are better off believing the truth, i.e. abiding by the Litany of Tarski.

  2. Having probabilistic beliefs, i.e. abiding by the Bayesian epistemology and not the Aristotelian or the Anton-Wilsonian as Yvain defined them in his reaction to Chapman, or having a many-color view as opposed to a two-color view or a one-color view as Eliezer defined them in the Fallacy of Gray.

Once you've internalized these two things, you've learned this particular Secret of the Universe. I've noticed that people seem to have their minds blown by the sequences, not really learn all that much more by spending a few years in the rationality scene, and then go back to read the sequences and wonder how they could have ever found them anything but obvious. (Although apparently CFAR workshops are really helpful, so if that's true that's evidence against this model.)

Comment author: MondSemmel 12 August 2013 09:19:15AM 4 points [-]

I've noticed that people seem to have their minds blown by the sequences, not really learn all that much more by spending a few years in the rationality scene, and then go back to read the sequences and wonder how they could have ever found them anything but obvious.

What happens when they reach this post?

Comment author: MondSemmel 02 August 2013 04:55:43PM 5 points [-]

[Meta comment: In the welcome post, the links to the open threads link to two different tags, with different dates. This is confusing. One of them hasn't been updated since 10/2011. If you fix this, you might have to do the same in the template for creating new welcome threads. Also, I think the same issue exists elsewhere on the site, e.g. in the Less Wrong FAQ.]
