People have been encouraging me to share my anti-akrasia tricks, but it feels inappropriate to dedicate a top-level post solely to unproven techniques that work for some person and may not work for others, so:
Go ahead and share your anti-akrasia tricks!
Let's make it an open thread where we just share what works and what doesn't, without worrying (yet) about explaining tricks with deep theories or designing proper experiments to verify them. However, if you happen to have a theory or a proposed experiment in mind, please share.
Bragging is fine, but please share the failures of your techniques as well – they are just as valuable, if not more.
Note to readers – before you read the comments and try the tricks, keep in mind that the techniques below are not yet proven, supported, or explained by proper experiments, and are not yet backed by theory. They may work for their authors, but are not guaranteed to work for you, so try them at your own risk. It would be even better to read the following posts before rushing to try the tricks:
That does not really mean anything.
"Exponential" refers to how one quantity relates to another. For example, we would say that (until environmental limits are encountered) a population's size is exponential with respect to time, and mean that there is an initial population size P0 at a time t0, and a doubling time T, such that the population at a given time is P(t) = P0 * 2^((t - t0)/T). In computer science, we might say that the time or memory requirement of an algorithm is exponential with respect to the size of a list, or the number of nodes or edges in a graph, which could be represented by a similar equation, assigning different meanings to the variables. (Often, we really mean the equation to be an approximation, or an upper or lower bound on the actual quantity.)
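The doubling-time formulation above can be sketched in a few lines of Python. The specific numbers here (an initial population of 1000 at t0 = 0, doubling every 20 time units) are hypothetical, chosen only to illustrate the equation:

```python
def population(t, p0=1000.0, t0=0.0, T=20.0):
    """Exponential growth: P(t) = P0 * 2**((t - t0) / T)."""
    return p0 * 2 ** ((t - t0) / T)

print(population(0))    # at t0: 1000.0
print(population(20))   # one doubling time later: 2000.0
print(population(60))   # three doublings later: 8000.0
```

Note that the defining feature is the fixed doubling time T: equal steps in t multiply the quantity by a constant factor, rather than adding a constant amount as in linear growth.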
But if you say that designing and programming an AI is exponentially hard, you have not identified a variable of the problem that is analogous to the time in population growth. "Exponential" is not a vague superlative; it has a precise meaning. If all you mean to say is that AI is much harder than conventional programming, then just say that. Yes, that is vague, but that is better than having your communication be more precise than your understanding.
The targeted commenter doesn't really deserve to be hit that hard, but voted up anyway.
The thing I really despise is when people use "exponential" as a superlative to describe fast-growing quantifiable processes that are not known to be exponential.