Akrasia survey data analysis

13 John_Maxwell_IV 08 December 2012 03:53AM

Followup to: Akrasia hack survey


p(hack akrasia|heard of hack and thought it was worth trying)

What is the probability of succumbing to "hack akrasia" (never trying a hack, or not applying it consistently), given that you'd heard of it and thought it was worth trying?

lukeprog's algorithm for beating procrastination: 83%

The Pomodoro Technique: 68%

Exercise for increased energy: 60%

LeechBlock or similar: 38%

Comments: Hack akrasia seems pretty darn high overall.  LeechBlock is least susceptible.


p(using hack profitably|heard of hack and thought it was worth trying)

The "real success rate".  What percentage of the time does thinking a hack is worth trying translate into adopting it and using it consistently?

lukeprog's algorithm for beating procrastination: 2%

The Pomodoro Technique: 4%

Exercise for increased energy: 25%

LeechBlock or similar: 15%

Comments: Exercise is the clear winner.  If you didn't think exercise was worth trying (5% of survey respondents), you might want to reconsider.


p(hack seems to work|tried hack)

In a world without hack akrasia, what success rates would we be seeing?

lukeprog's algorithm for beating procrastination: 42%

The Pomodoro Technique: 58%

Exercise for increased energy: 84%

LeechBlock or similar: 37%

Comments: Again, exercise is the clear winner.  If you don't exercise, next time you're in an akrasia-killing mood, it seems you'd be well advised to try and set up some sort of regular exercise regimen for yourself.  Setting up a Pomodoro regime for yourself seems like a solid 2nd choice.


p(hack seemed worth trying|heard of hack)

lukeprog's algorithm for beating procrastination: 75%

The Pomodoro Technique: 79%

Exercise for increased energy: 94%

LeechBlock or similar: 60%

Comments: This was for comparison with actual success rates.  Multiple people wrote in that they didn't have the problem LeechBlock tries to solve, which may account for its low rate.  If you do have the problem LeechBlock tries to solve but you didn't think it was worth trying, you may wish to revise your opinion, as its "real success rate" is in 2nd place at 15%.

Yes, LeechBlock may be relatively easy to subvert.  I was turned off by this initially as well.  But now I think that it's not all that important--the main thing is to disrupt your distraction-seeking behavior, not present an impenetrable barrier.  I'd guess that if you could set up LeechBlock as a reminder to engage in some non-variable-reinforcement break activity, that'd be ideal.

By the way, does anyone have an opinion on the best LeechBlock equivalent for Google Chrome?


Graphs from Villiam Bur


More commentary

Initially I'd been thinking of "hack akrasia" as a different type of akrasia than the regular akrasia it tried to defeat.  But recently it occurred to me to question this.

There are probably a variety of akrasia subtypes, some of which have disproportionate impact on executing hacks vs doing other stuff.  Akrasia subtypes I can think of offhand:

  • Ugh field akrasia.
  • Near/far akrasia: Something seems like a good idea, but you never think "oh, I should actually do this now" or make a plan to do it at a specific time or in a specific situation.
  • Forgetting akrasia: You do make a plan/resolution but you forget about it.
  • Slippery slope akrasia: "Just this once" and other ways to gradually rationalize your way around good intentions.  (My recent policy design post offers some suggestions for this.)
  • Low morale, low energy, low motivation, depression.

Anyway, given the high rate of hack akrasia, it may make sense to concentrate on developing hacks that are themselves substantially less susceptible to akrasia.  For example, if I told you to watch funny videos on the internet when your morale is low (psychologists have speculated that laughter helps with ego depletion--works OK for me), it seems unlikely you'd fall prey to any akrasia subtype except forgetting akrasia.  (Optimal Breaks in general seems like it might fall in this category, especially if the breaks involve 0 setup cost.)

To fight inconsistent application of hacks that you know work when you use them, Beeminder might be useful.

Regarding write-ins: They were under 5% for every category and I threw them out when computing conditional probabilities.

Conditional probability calculator here: https://gist.github.com/4238473  Hopefully there aren't any bugs.
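For the curious, the core computation is just dividing response counts.  Here's a minimal sketch of the kind of calculation the linked gist performs; the counts below are made up for illustration, not the actual survey data:

```python
# Hedged sketch: estimate P(A | B) from survey response counts.
# The numbers are hypothetical, not the actual survey data.

def conditional_probability(joint_count, condition_count):
    """Estimate P(A | B) as count(A and B) / count(B)."""
    if condition_count == 0:
        raise ValueError("condition never observed; P(A | B) undefined")
    return joint_count / condition_count

# Suppose 100 respondents heard of a hack and thought it worth trying,
# and 68 of them never tried it or applied it inconsistently:
p_hack_akrasia = conditional_probability(68, 100)
print(p_hack_akrasia)  # 0.68
```

Write-ins would simply be excluded from both counts before dividing, as described above.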

Akrasia hack survey

10 John_Maxwell_IV 30 November 2012 01:09AM

Survey here.  Analysis here.

Thoughts on designing policies for oneself

74 John_Maxwell_IV 28 November 2012 01:27AM

Note: This was originally written in relation to this rather scary comment of lukeprog's on value drift.  I'm now less certain that operant conditioning is a significant cause of value drift (leaning towards near/far type explanations), but I decided to share my thoughts on the topic of policy design anyway.


Several years ago, I had a reddit problem.  I'd check reddit instead of working on important stuff.  The more I browsed the site, the shorter my attention span got.  The shorter my attention span got, the harder it was for me to find things that were enjoyable to read.  Instead of being rejuvenating, I found reddit to be addictive, unsatisfying, and frustrating.  Every time I thought to myself that I really should stop, there was always just one more thing to click on.

So I installed LeechBlock and blocked reddit at all hours.  That worked really well... for a while.

Occasionally I wanted to dig up something I remembered seeing on reddit.  (This wasn't always bad--in some cases I was looking up something related to stuff I was working on.)  I tried a few different policies for dealing with this.  All of them basically amounted to inconveniencing myself in some way or another whenever I wanted to dig something up.

After a few weeks, I no longer felt the urge to check reddit compulsively.  And after a few months, I hardly even remembered what it was like to be an addict.

However, my inconvenience barriers were still present, and they were, well, inconvenient.  It really was pretty annoying to make an entry in my notebook describing what I was visiting for and start up a different browser just to check something.  I figured I could always turn LeechBlock on again if necessary, so I removed my self-imposed barriers.  And I slid back into addiction.

After a while, I got sick of being addicted again and decided to do something about it (again).  Interestingly, I forgot my earlier thought that I could just turn LeechBlock on again easily.  Instead, thinking about LeechBlock made me feel hopeless because it seemed like it ultimately hadn't worked.  But I did try it again, and the entire cycle then finished repeating itself: I got un-addicted, I removed LeechBlock, I got re-addicted.

This may seem like a surprising lack of self-awareness.  All I can say is: Every second my brain gathers tons of sensory data and discards the vast majority of it.  Narratives like the one you're reading right now don't get constructed on the fly automatically.  Maybe if I had been following orthonormal's advice of keeping and monitoring a record of life changes attempted, I would've thought to try something different.

continue reading »

Room for more funding at the Future of Humanity Institute

18 John_Maxwell_IV 16 November 2012 08:45PM

In case you didn't already know: The Future of Humanity Institute, one of the three organizations co-sponsoring LW, is a group within the University of Oxford's philosophy department that tackles important, large-scale problems for humanity like how to go about reducing existential risk.

I've been casually corresponding with the FHI in an effort to learn more about the different options available for purchasing existential risk reduction. Here's a summary of what I've learned from research fellow Stuart Armstrong and academic project manager Sean O'Heigeartaigh:

  • Sean reports that since this SIAI/FHI achievements comparison, FHI's full-time research team has expanded to 7, the biggest it's ever been.  Sean writes: "Our output has improved dramatically by all tangible metrics (academic papers, outreach, policy impact, etc) to match this."
  • Despite this, Sean writes, "we’re not nearly at the capacity we’d like to reach. There are a number of research areas in which we would very like to expand (more machine intelligence work, synthetic biology risks, surveillance/information society work) and in which we feel that we could make a major impact. There are also quite a number of talented researchers over the past year who we haven’t been able to employ but would dearly like to."
  • They'd also like to do more public outreach, but standard academic funding routes aren't likely to cover this.  So without funding from individuals, it's much less likely to happen.
  • Sean is currently working overtime to cover a missing administrative staff member, but he plans to release a new achievement report (see sidebar on this page for past achievement reports) sometime in the next few months.
  • Although the FHI has traditionally pursued standard academic funding channels, donations from individuals (small and large) are more than welcome.  (Stuart says this can't be emphasized enough.)
  • Stuart reports current academic funding opportunities are "a bit iffy, with some possible hopes".
  • Sean is more optimistic than Stuart regarding near-term funding prospects, although he does mention that both Stuart and Anders Sandberg are currently being covered by FHI's "non-assigned" funding until grants for them can be secured.

Although neither Stuart nor Sean mentions this, I assume that one reason individual donations can be especially valuable is if they free FHI researchers up from writing grant proposals so they can spend more time doing actual research.

Interesting comment by lukeprog describing the comparative advantages of SIAI and FHI.

Empirical claims, preference claims, and attitude claims

5 John_Maxwell_IV 15 November 2012 07:41PM

What do the following statements have in common?

  • "Atlas Shrugged is the best book ever written."
  • "You break it, you buy it."
  • "Earth is the most interesting planet in the solar system."

My answer: None of them are falsifiable claims about the nature of reality.  They're all closer to what one might call "opinions".  But what is an "opinion", exactly?

There's already been some discussion on Less Wrong about what exactly it means for a claim to be meaningful.  This post focuses on the negative definition of meaning: what sort of statements do people make where the primary content of the statement is non-empirical?  The idea here is similar to the idea behind anti-virus software: Even if you can't rigorously describe what programs are safe to run on your computer, there still may be utility in keeping a database of programs that are known to be unsafe.

Why is it useful to be able to flag non-empirical claims?  Well, for one thing, you can believe whatever you want about them!  And it seems likely that this pattern-matching approach works better for flagging them than a more constructive definition.

continue reading »

Economy gossip open thread

22 John_Maxwell_IV 28 October 2012 04:10AM

Diego Caleiro writes:

It amazes me that there is no "Sequence on how to make money in intelligent ways" or something of the sort in LessWrong.

I don't think a sequence is quite the right approach to this.  The economy is huge, complicated, and heterogeneous, and it's changing constantly.  (Think of how complicated people are, then imagine how complicated their economy is.)  It's hard for a single person to have a thorough understanding of everything, and even if someone did, their understanding would start becoming obsolete immediately.

And making money intelligently is very closely related to knowing what's going on in the economy.  If you're training for a job, you want to develop skills that are in high demand.  If you're starting a business, you want to sell stuff for a profit.  You can only go so far identifying these opportunities by thinking things through from first principles.

I don't think most useful knowledge Less Wrong can share about what's going on in the economy is going to approach the certainty of carefully derived math or settled science.  Instead, it'll be more like gossip.  Since no one understands the entire economy, we won't get much firsthand knowledge.  Lots of information will consist of hearsay and speculation that's subject to change at any time.  (Incidentally, gathering such info and trying lots of stuff out may be a good preparatory activity for starting a business.)

Anyway, here's some gossip from me; feel free to post yours in the comments.


Employment

The US Bureau of Labor Statistics maintains an Occupational Outlook Handbook with information on salary, education requirements, and job growth for a wide variety of jobs.  Here are a few random interesting ones:

  • It may be possible to become an actuary (~$90K/yr, faster than average job growth, involves math) without a college degree if you pass a few actuarial exams
  • Diagnostic medical sonographer: requires only a 2-year degree, ~$65K/yr, much faster than average job growth
  • Michael Vassar was telling everyone to become a police officer a while ago.  The BLS page looks only OK, maybe you can get a much higher salary in the right municipality?

Payscale.com looks pretty nice for salary info: college education return on investment data, top paying majors.  See also: 5 ways to be misled by salary rankings, GlassDoor.com

UC Berkeley alumni surveys: What can I do with a major in...

This guy claims you can make $45-60 an hour playing poker after a year's practice and $5000 lost.  Some blackjack card counting guru I emailed years ago claimed you could learn to count cards in 100 hours and make 6 figures for years if you were diligent (cover play, travelling for new opportunities, etc.)  I'm not sure how workable these ideas are if you don't already have a large bankroll.  (See Kelly criterion.)
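For anyone weighing the bankroll question, the Kelly criterion mentioned above has a simple closed form.  Here's a sketch; the win probability and payout below are made-up numbers, not claims about actual poker or blackjack edges:

```python
# Kelly criterion: the bankroll fraction f* that maximizes long-run
# log growth for a bet won with probability p, paying b-to-1 on a win:
#   f* = (b*p - (1 - p)) / b

def kelly_fraction(p, b):
    """Return the Kelly-optimal fraction of bankroll to wager."""
    return (b * p - (1 - p)) / b

# Hypothetical edge: 52% chance to win an even-money (1-to-1) bet.
f = kelly_fraction(0.52, 1.0)
print(round(f, 2))  # 0.04 -> wager 4% of bankroll per bet
```

A small edge implies small bets, which is why a large bankroll matters: wagering much more than f* risks ruin even with a positive expected value.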

If you're interested in getting paid to practice rejection therapy and you're in the SF Bay Area, PM me and I can put you in touch with someone in California's ballot signature collection industry.  The job consists of standing somewhere where lots of people walk by and asking them to sign your petitions.  You get paid per signature, and if you find a good spot (and keep it secret from other signature gatherers), it's possible to make a lot of money.  A friend averaged $300 a day; I wasn't sufficiently dedicated/psychologically resilient to get anything like those results.  The business is seasonal; if I recall correctly, February is an especially good month.

It looks as though high-end, college-educated call girls can make $300/hour.

Tutoring websites: UniversityTutor, TutorSpree, Care.com, Craigslist, Wyzant.  One thing to keep in mind with tutoring: Since you generally work so few hours per gig, transportation-time overhead per hour worked is higher.

80,000 Hours offers one-on-one career advising, if you're into effective altruism.


Salary Negotiation

"In a study conducted at Carnegie Mellon University's business school, Professor Linda Babcock discovered that [those MBA students who negotiated their starting salary instead of just accepting their initial offer] received an average of $4,053 more than those who did not."  (Bargaining for Advantage, p. 16.)  Women seem much more reluctant to ask for more money, which may go a ways towards explaining gender pay gaps.  The book recommends thorough preparation prior to any negotiation.


Business Gossip

Why rely on just one guru when you can draw inferences from the experiences of many?  But beware sampling effects: you'll likely hear less from people who didn't end up accomplishing anything worth writing about.

As business gurus go, Paul Graham is highly rational, has an unmatchable resume, and all his stuff is free.


Previously on Less Wrong


Passive income for dummies

17 John_Maxwell_IV 27 October 2012 07:25AM

I read one too many breathless blog posts on the virtues of passive income and decided to write a rebuttal.  Much of this should be obvious to folks with a solid economics background; in fact, please correct me if I got anything wrong.


People seem to think in an odd way about passive income.  One Helium.com author writes

I think the first time there was any money [in my Helium.com writer's account] was when it showed about 3 cents. That actually thrilled me.

...

Les called me and said "Quick, check your earnings page, tell me what you see for your credit card article (I thought, "here we go again with the 'Reddit' business"). I checked and lo and behold I had made around $4.00 in just a couple of hours for that one article he had submitted. Wow! Seriously. As the day progressed, it reached $8.00, then $10.00 and finally a little over $12.00.

Sub-minimum wages really are exciting, aren't they?  (At the end of the story, the guy has won the proverbial Helium.com lottery, and ends up making $1,246 submitting an article on how he hacked a credit card company's balance transfer offer to reddit.  If I recall correctly, this inspired a spate of Helium.com writers spamming reddit with their posts.  Probably why I ended up reading his article in the first place.)

Now I just have to think of a way to employ people normally and trick them into thinking that the income they're making is "passive"...

But even smart people are into passive income.  Here's Brian Armstrong, who looks pretty sharp: economics degree, Airbnb software engineer, and now Y Combinator startup founder.  His blog is largely about passive income, and he even wrote a book about it a few years ago.  So, given his expertise, we'd expect him to be making lots, right?

  • In this post, he gives his breakdown for December 2008: $250.00 from real estate, $1,260.40 from his blog/book about how to make passive income, $129.20 from his new tutor matching site UniversityTutor.com (best tutor matching site I've found, BTW)
  • In this September 2010 post, he reports UniversityTutor is making over $2,000 a month and set to keep growing.

So first, it looks like selling people the dream of passive income is a very profitable business to be in, and actually having passive income is not a requirement for entering it.

But is that even true?  Brian reports that his high December earnings are due to book promotional efforts.  At some point, he decided to make his book available free.  Would he have done that if he was still collecting substantial revenue from it?  Doesn't seem that likely.

Does your blog really count as a source of "passive" income if people gradually stop visiting when you stop making new posts?

And second, yeah, maybe if you're a good software engineer with a good idea you can build a passive income business.

In fact, working for "active" income vs building a "passive income" business is a bit of a false dichotomy.  You can convert the cash you make through a regular job into "passive income" by investing it and collecting interest.  And you can convert your "passive income" business into a regular old chunk of cash by selling it.

If you're interested in making more money in your off hours, starting a business is one option.  Doing freelance work is another.  Starting a business is higher variance--you could make it big, or you could waste a lot of time and effort.  You'll most likely have to do something no one else is doing, or at least do it better than everyone else is doing it, so any kind of highly specific guide or formula for making passive income is probably out.  (Ask yourself why the person selling you the guide doesn't just hire people to complete all the steps in the guide and collect the profits for themselves.)

Interestingly, however, it seems as though internet businesses may be underpriced as an investment class.  This guy writes:

I think a fair price to sell websites for is around 12 months income, depending entirely on how much time you put into the site for that income to continue. If the site is running on complete autopilot then you may be able to get 2 years revenue or even more.

...

For example, if I find a site on Flippa that is ranking well for a certain keyphrase in Google and it’s making a lot of money, I might not always be able to purchase it for what I consider is a fair price — some bidders are happy to wait over 3 years for a return on their investment.

If we assume the market price of a completely hands-off internet business is 3 years' profits, then you'll have doubled your initial investment in 3 years, since you've still got the business to sell afterwards.  Abusing the rule of 72, I estimate that this is equivalent to getting a 24% annual return on your investment, which is obviously absurdly high.  (And this doesn't even take into account the fact that you could reinvest your internet business' earnings.)
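As a sanity check on that estimate (my arithmetic here, not the linked author's):

```python
# Back-of-the-envelope check: buying at 3 years' profits and keeping
# the resale value means your money doubles in 3 years.  Compare the
# rule-of-72 approximation with the exact compounded doubling rate.
years = 3
rule_of_72 = 72 / years                # approximate annual return, in %
exact = (2 ** (1 / years) - 1) * 100   # exact annual return, in %
print(round(rule_of_72), round(exact))  # 24 26
```

So the rule of 72 actually understates the return slightly here; either way, the figure is far above typical market returns, which is the point.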

Of course, there are reasons to be wary about getting into internet businesses as well.  Ease of buying and selling isn't the greatest and you might get scammed.  But probably the biggest thing is the Internet's short attention span.  Google changes its ranking algorithm periodically, and beyond that, it's easy to be eclipsed by something newer and better (see MySpace).  So if I were wealthier, I'd want to have a portion of my net worth in internet businesses, but not all of it.

BTW, http://www.rolfnelson.com/ is good for rational analysis of stuff related to entrepreneurship.

Morale management for entrepreneurs

9 John_Maxwell_IV 30 September 2012 05:35AM

One of the odd things about the procrastination equation is that part of it resembles an expected value calculation: value * expectancy.  Why does the equation's numerator present a problem at all then, if it's just the expected value of what you're trying to do?  Shouldn't that be the main factor in your motivation anyway?

One answer: In lukeprog's post, he conflates the "value" that a task presents intrinsically (how much you enjoy doing it) with possible extrinsic motivators (some reward you hope to achieve after the task is completed).  So part of the reason your motivation system is miscalibrated is that not all valuable tasks are proportionately enjoyable.

But today I thought of another answer: Your subconscious expected value calculation may be falling prey to biases that aren't affecting your conscious expected value calculation.  Thus you correctly assign the task a high value consciously, but subconsciously, a particular bias may be bringing your estimate off.

Paul Graham writes:

Morale is tremendously important to a startup—so important that morale alone is almost enough to determine success. Startups are often described as emotional roller-coasters. One minute you're going to take over the world, and the next you're doomed. The problem with feeling you're doomed is not just that it makes you unhappy, but that it makes you stop working.

Let's pretend that we were running a betting market for your startup's chance of success.  If you and your cofounders are the only people in the market, you could picture the value of a contract in this market fluctuating up and down wildly.  But if you let others play in the market, there's an obvious money-making strategy: take the average of recent fluctuations.  Whenever the price fluctuates below that average, buy.  Whenever it fluctuates above that average, sell.  You and your cofounders can expect to lose a lot of money playing this market, at least early on in your startup's life.
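The strategy described above can be sketched in a few lines; the "mood prices" here are invented for illustration:

```python
# Toy version of the mean-reversion strategy described above: buy the
# success contract when its price dips below the recent average, sell
# when it rises above it.

def signal(prices, window=3):
    """Return 'buy', 'sell', or 'hold' for the latest price."""
    recent = prices[-window:]
    average = sum(recent) / len(recent)
    last = prices[-1]
    if last < average:
        return "buy"
    if last > average:
        return "sell"
    return "hold"

mood = [50, 80, 20]   # a founder's wild swings, in contract price
print(signal(mood))   # 20 is below the recent average of 50 -> "buy"
```

Outsiders running this rule would systematically profit from the founders' swings, which is the sense in which the emotional roller coaster is a mispricing.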

The point I'm trying to make here is that this "emotional roller coaster" represents a kind of irrationality on the part of entrepreneurs.  And fixing this irrationality, especially in a way that hooks into your motivation system and changes the numerator of your internal procrastination equation, could be very valuable for them.

One idea for a bias that contributes to this effect is the availability heuristic.  This suggests that your subconscious rates very recent, "available" events related to your startup higher than earlier less "available" events.  To fix this, you might be able to try to bring to mind older, less "available" data that suggests your startup will be successful and make it more salient.

Another possible bias is simple overconfidence.  It's really very difficult to know in advance whether your startup should succeed, so if you're either very bullish or very bearish, you're probably overconfident.  A common path to startup success seems to be discovering some fact about the market you're in that lets you re-make your business as something much better.  Since it's hard to predict the discovery of such facts in advance, it's hard to say much about how you will do.

Could evolution have selected for moral realism?

2 John_Maxwell_IV 27 September 2012 04:25AM

I was surprised to see the high number of moral realists on Less Wrong, so I thought I would bring up a (probably unoriginal) point that occurred to me a while ago.

Let's say that all your thoughts either seem factual or fictional.  Memories seem factual, stories seem fictional.  Dreams seem factual, daydreams seem fictional (though they might seem factual if you're a compulsive fantasizer).  Although the things that seem factual match up reasonably well to the things that actually are factual, this isn't the case axiomatically.  If deviating from this pattern is adaptive, evolution will select for it.  This could result in situations like: the rule that pieces move diagonally in checkers seems fictional, while the rule that you can't kill people seems factual, even though they're both just conventions.  (Yes, the rule that you can't kill people is a very good convention, and it makes sense to have heavy default punishments for breaking it.  But I don't think it's different in kind from the rule that you must move diagonally in checkers.)

I'm not an expert, but it definitely seems as though this could actually be the case.  Humans are fairly conformist social animals, and it seems plausible that evolution would've selected for taking the rules seriously, even if it meant using the fact-processing system for things that were really just conventions.

Another spin on this: We could see philosophy as the discipline of measuring, collating, and making internally consistent our intuitions on various philosophical issues.  Katja Grace has suggested that the measurement of philosophical intuitions may be corrupted by the desire to signal on the part of the philosophy enthusiasts.  Could evolutionary pressure be an additional source of corruption?  Taking this idea even further, what do our intuitions amount to at all aside from a composite of evolved and encultured notions?  If we're talking about a question of fact, one can overcome evolution/enculturation by improving one's model of the world, performing experiments, etc.  (I was encultured to believe in God by my parents.  God didn't drop proverbial bowling balls from the sky when I prayed for them, so I eventually noticed the contradiction in my model and deconverted.  It wasn't trivial--there was a high degree of enculturation to overcome.)  But if the question has no basis in fact, like the question of whether morals are "real", then genes and enculturation will wholly determine your answer to it.  Right?

Yes, you can think about your moral intuitions, weigh them against each other, and make them internally consistent.  But this is kind of like trying to add resolution back into an extremely pixelated photo--just because it's no longer obviously "wrong" doesn't guarantee that it's "right".  And there's the possibility of path-dependence--the parts of the photo you try to improve initially could have a very significant effect on the final product.  Even if you think you're willing to discard your initial philosophical conclusions, there's still the possibility of accidentally destroying your initial intuitional data or enculturing yourself with your early results.

To avoid this possibility of path-dependence, you could carefully document your initial intuitions, pursue lots of different paths to making them consistent in parallel, and maybe even choose a "best match".  But it's not obvious to me that your initial mix of evolved and encultured values even deserves this preferential treatment.

Currently, I disagree with what seems to be the prevailing view on Less Wrong that achieving a Really Good Consistent Match for our morality is Really Darn Important.  I'm not sure that randomness from evolution and enculturation should be treated differently from random factors in the intuition-squaring process.  It's randomness all the way through either way, right?  The main reason "bad" consistent matches are considered so "bad", I suspect, is that they engender cognitive dissonance (e.g. maybe my current ethics says I should hack Osama Bin Laden to death in his sleep with a knife if I get the chance, but this is an extremely bad match for my evolved/encultured intuitions, so I'd experience a ton of cognitive dissonance actually doing this).  But cognitive dissonance seems to me like just another aversive experience to factor into my utility calculations.

Now that you've read this, maybe your intuition has changed and you're a moral anti-realist.  But in what sense has your intuition "improved" or become more accurate?

I really have zero expertise on any of this, so if you have relevant links please share them.  But also, who's to say that matters?  In what sense could philosophers have "better" philosophical intuition?  The only way I can think of for theirs to be "better" is if they've seen a larger part of the landscape of philosophical questions, and are therefore better equipped to build consistent philosophical models (example).

Personal information management

14 John_Maxwell_IV 11 September 2012 11:40AM

Several weeks ago, I began using personal wiki software Zim Wiki (free and cross-platform for Linux & Windows; I recommend nvALT on Mac OS X) to record all of my notes-to-self.  I've found it to be a nice software tool for implementing some of the effectiveness advice I've read on Less Wrong.  This post is a fairly personal overview of my usage.

I looked at a lot of personal information managers before choosing Zim.  Here are the features that caused me to choose it over the other software I looked at:

  • Probably the most important feature: Jump-to-note capability with autocomplete.  Pressing Control-J gives a text box.  Start typing in the text box and it autocompletes with the names of any of the notes in my notebook (or allows me to create a new note).  This is the proverbial 10% of the feature set that provides 90% of the benefit over scattered text files.  Opening a specific note to add another thought or idea to it is a very common operation for me and this feature makes it very quick.  Only a few tools I've found seem to have comparable functionality: WikidPad (with Control-O), and the Notational Velocity family of information managers kind of have it.  (For Notational Velocity/nvALT, I recommend coming up with some kind of namespacing scheme so note names collide with note text less frequently in your searches.  For example, I prepend reminders for future situations with "f.", journal notes with "j.", policy notes with "p.", Less Wrong post drafts with "l.", etc.  Then command-L works as a pretty good "jump to note" shortcut.)

  • Pressing Control-D, then pressing return inserts a timestamp at the position of my cursor.  This has been useful for a variety of logging-type applications.  (I replicated the same thing with nvALT on OS X with aText.)

  • Zim is a desktop application.  This has a couple advantages:

    • I configured a keyboard shortcut to open it, or bring it to the front if it was already open, using a modified version of the Linux shell script in this forum thread.  (Alfred is nice for this on OS X.)

    • All my notes are stored as plain text files on my hard drive.  I keep them under version control, which lets me do things like answer the question "what new ideas for becoming more effective have I had over the past week?"  (I didn't use the built-in version control plugin because I found its UI glitchy.)

  • There's inter-note linking capability, also with an autocompletion dialogue.  (Press Control-L to create a link.)

  • Moving through note browsing history can be done with Alt-Left and Alt-Right.

  • It starts fast.
  • Notes are saved automatically, hierarchical note organization is possible, backlinks are tracked, incremental keyword search within a note is possible, and there appear to be a variety of other features I haven't yet had a chance to abuse.

Using Zim has meant a really low level of friction for writing new stuff and retrieving/reading/adding to stuff I wrote.  I've been using it about a month and I've got ~46K words in it, which seems to be around the length of a short novel. RescueTime says I use it 4-8 hours per week.  Some stuff I'm using it for:

  • Strategizing.  There's something kind of calming about writing my thoughts out when I'm choosing between several options or trying to figure out what to do.  I suspect that as soon as the amount of information related to a decision exceeds the capacity of my working memory, I benefit from writing stuff down.

  • Logging stuff.

  • Writing therapy.

  • Recording business ideas, self-experimentation ideas, essay ideas, etc.

  • Making plans and filing away notes related to future circumstances.

  • Taking notes related to software I'm developing.

It's hard to measure how much benefit I'm getting out of all this, though it feels pretty useful.  I'm inclined to agree with Paul Graham:

...actually there is something druglike about [the notebook and pen], in the sense that their main purpose is to make me feel better. I hardly ever go back and read stuff I write down in notebooks. It's just that if I can't write things down, worrying about remembering one idea gets in the way of having the next. Pen and paper wick ideas.
