Well, I didn't exactly state any particular experiments in the above post, but I did get some results.
First, the system of measuring my time worked just fine. RescueTime and similar software products do this as well, and I encourage anyone considering experimenting on themselves to get one, or arrange a system like I did, and then just start measuring. You'll get a nice baseline to compare against. It's surprisingly difficult to notice a significant difference, and without a quantitative approach and historical data it might be impossible to say whether some experiment made any difference. You might think that improving your productivity with some method will feel somehow different, but it won't. The only way to say for sure is to have some kind of measuring system.
The measurement system, and the subsequent realization that I wasn't nearly as productive as I'd like to be, didn't make much of a difference on its own. I could clearly see how I spent my time and what kinds of events hindered my productivity, but this alone didn't improve my overall efficiency.
The experiment I did on myself was to start using the Pomodoro Technique. On average, I got roughly 20-25% more real work done per workday. (Say the baseline was 4 hours, which improved to approximately 5 hours a day.) It sounds somewhat pathetic, but I could sustain it over the long term. (Since then I've switched jobs, I have a different kind of desktop setup, and I no longer have a similar measurement.) I didn't become a productivity monster overnight, and I still have difficulty motivating myself some days. Pomodoro doesn't help when I just don't have the motivation. But now I know that I can improve my efficiency when I'm in the groove. I think the difference is that the normal way of chunking the workday drains some mental resource faster, which sometimes results in an inability to refocus after a longer pause.
So, all in all, I recommend setting up a system for measuring what you really do during your computer time. That won't, in and of itself, make a difference, but it will provide a platform that enables you to experiment on yourself.
I updated my beliefs based on the criticisms of the studies, and I now feel confident in my expectations about parental influence.
I'm curious as to what your updated beliefs on parental influence are. Can you summarize them in a couple of paragraphs?
(I think the original description matches how I view the issue, but I feel the topic doesn't have enough importance for me to spend a lot of time trying to update my beliefs.)
I'm interested to hear what your thoughts are on the findings from Bjork's lab. As I understand it, they do recognize the spacing effect, but try to develop theories beyond it.
A related note is that the neurophysiological effect of an epiphany wears off really quickly. I haven't studied exactly which neurotransmitters produce the original good feeling, but I remember reading (apologies for not having a source here) that the effect is pretty strong the first time, but fails to produce much of any neurological effect after just a few repeats. By repeats, I mean thinking about the concept or idea and perhaps writing about it.
In other words, say you get a strong epiphany and a subsequent strong feeling that some technique, for example the Pomodoro Technique, will make you more efficient. After mulling over the idea for a while, the epiphany and the related feeling fade. You might still think the technique would help you, but you lose the feeling. Without the feeling, it is unlikely you will do anything in practice. After losing the feeling, you might even start to doubt that the technique could help you at all, when in fact all that has changed is the neurological feedback, because you have repeatedly been processing the idea.
I think this is particularly relevant to instrumental rationality, because I have not found this effect to matter much for how I understand things in general. For a behavioral change, though, I think a certain amount of such "neurotransmitter-based motivation" is required for it to have any chance of being implemented. I have fairly successfully implemented a couple of behavioral changes which at the time produced a strong epiphanic feeling, but which nowadays evoke hardly any feeling at all. I implemented them almost instantaneously (because they were easy to implement) and had them running before the feeling wore off.
One minor exception to this is that you get a new dose of epiphany if you happen to make a new, novel connection related to the technique you're mulling over. This way you can keep the feeling alive longer, but there are only so many such new connections, and they too wear out eventually.
This is why I think that not only do you have to do something about the epiphany in practice, but you have to do it pretty quickly.
I would argue, based on my own experience, that it is very difficult to maintain this type of attention when practicing any complex skill. I think the typical pattern of rapid learning at the beginner stage followed by a complete halt in improvement is the result of the mind resisting continuous, persistent attention. The beginner's state of mind is not a pleasant one to be in, and we want to start feeling comfortable quickly. The easiest way to do this is to stop paying such close attention. I don't think this is an explicit decision; it's just our tendency to not want to be in the beginner's state of mind.
I think the best performers in almost any field continue to feel like beginners even as their skills keep improving. Of course, a skilled performer knows that he's better than the vast majority of others, but this doesn't make him comfortable. A skilled performer concentrates on the aspects he's bad at, and he compares himself to better performers, and not to his past performance but to what he feels he could be.
It's easy to agree with the original article without actually implementing what it suggests. As a personal anecdote, I often play video games with my kids, and I recently noticed that at some point I had simply stopped paying attention to what was going on around me. Sure, I usually don't take the game too seriously, but it's more fun when I do, and the game itself is pretty challenging. When I realized that I wasn't giving it my full attention, I decided to concentrate on seeing the game more clearly. Suddenly I noticed a lot more, and my driving improved significantly. The difference in what I saw was pretty dramatic at first. It felt like I had been half-blind before deciding to pay more attention.
The decline in our ability and willingness to pay close attention is, in my own experience, inevitable. There's no magic insight or system to keep us from falling back to auto-piloting. You just have to rediscover the attention over and over again.
I think any article proposing a solution to procrastination would do well to relate itself to pjeby's Improving The Akrasia Hypothesis. I'm not saying that the hypothesis there is necessarily the right one, but what seems to be lacking in these types of systems is exactly what pjeby's article describes: namely, how the system is going to help resolve particular conflicts. I don't think this algorithm proposes any novel approach to conflict resolution. (Note that I'm not saying the article itself isn't useful.)
Of course, you could claim that the hypothesis is not useful. But if so, it might be worth mentioning explicitly.
I do accept that the equation is a pretty accurate description of akrasia and has empirical support, but personally I've found that the type of strategy the OP proposes is not effective for me.
First, the crucial steps of the algorithm require the exact same mental resources that are missing during my worst bouts of procrastination. When it's clear that I'm procrastinating because I haven't divided the task into smaller subtasks, doing that division feels just as difficult as trying to start the task itself.
Second, the attacking part of the algorithm seems to provoke a far/abstract thinking mode, which makes me more prone to procrastination. Any algorithm or strategy that does not consist of ridiculously concrete steps has failed me sooner or later. Anything that lures me into thinking about, say, the long-term benefits of using the strategy has made it much more likely that I just won't use the strategy.
In general, I think it's useful to establish some baseline measurement of one's productivity. At the time of worst procrastination, it seems obvious that a successful strategy will cure whatever it is one is suffering from at the moment. But if you adopt a long-term strategy, the effect is probably going to be much smaller than you initially thought, and difficult to distinguish without that baseline.
I personally measure the time I spend in workspaces I've designated for different types of tasks ("zoning out" (random web surfing), meta-work (email, instant messaging with colleagues, etc.), and real work). I had to use the system for quite a while before I could begin experimenting with different strategies. Now I can see whether a strategy makes a difference and whether I can maintain it over the long term.
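The accounting described above can be sketched in a few lines. The three category names come from this comment; the `TimeLog` class and its `switch`/`report` methods are hypothetical, and a real setup would hook into window-focus or workspace-change events rather than manual calls:

```python
from collections import defaultdict
import time

class TimeLog:
    """Accumulate seconds spent per task category (hypothetical sketch)."""

    def __init__(self):
        self.totals = defaultdict(float)  # seconds per category
        self.current = None               # (category, start_timestamp)

    def switch(self, category, now=None):
        """Close the currently open block and start timing `category`."""
        now = time.time() if now is None else now
        if self.current is not None:
            cat, start = self.current
            self.totals[cat] += now - start
        self.current = (category, now)

    def report(self):
        return dict(self.totals)

# Simulated day fragment, using explicit timestamps (seconds):
log = TimeLog()
log.switch("real work", now=0)
log.switch("meta-work", now=3600)    # closes 1 h of real work
log.switch("zoning out", now=5400)   # closes 30 min of meta-work
log.switch("real work", now=6000)    # closes 10 min of zoning out
print(log.report())
```

Summed over weeks, the "real work" total gives exactly the kind of baseline the comment describes, against which a new strategy can be compared.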
For what it's worth, the book has been published and should answer anyone's questions on the subject. I have it, but I've only just begun to read it. The book might be somewhat disappointing to some people in the sense that not everything falls into place once you hear the theory. The theory is rather blunt, but it sounds convincing so far.
They have a summary of the theory in the introduction:
"Our brains are engaged full time in real-time (risky) heuristic search, generating presumptions about what will be experienced next in every domain. This time-pressured, unsupervised generation process has necessary lenient standards and introduces content --not all of which can be properly checked for truth-- into our mental spaces. If left unexamined, the inevitable errors in these vestibules of consciousness would ultimately continue on to contaminate our world knowledge store. So there has to be a policy of double-checking these candidate beliefs and surmisings, and the discovery and resolution of these at breakneck speed is maintained by a powerful reward system --the feeling of humor; mirth-- that must support this activity in competition with all the other things you could be thinking about."
In fact, they argue that such a facility might be necessary for a truly intelligent computational agent:
"... We propose to tackle this prejudice head on, arguing that a truly intelligent computational agent could not be engineered without humor and some other emotions."
You may want to warn people that "a large amount of hands" means on the order of a hundred thousand hands or more.
And to be more exact, variance only goes down relative to the expected winnings. The standard deviation of a sample increases as the square root of the number of hands, whereas the expected winnings increase linearly. In Limit Hold'em, a 1.5 BB/100 hands expected winrate just barely covers two standard deviations from the mean over 100,000 hands. An experienced player can perhaps play 4-6 tables simultaneously, which means he can accumulate approximately 500 hands per hour. So 100,000 hands would take around 200 hours of play.
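The square-root scaling can be sketched numerically. The 1.5 BB/100 winrate is the figure from the comment; the 16 BB per 100 hands standard deviation is an assumed illustrative value (real figures vary by game and style), and `ev_and_sd` is just a helper for this sketch:

```python
import math

# Illustrative figures: winrate from the comment, standard deviation assumed.
WINRATE_BB_PER_100 = 1.5
SD_BB_PER_100 = 16.0

def ev_and_sd(hands):
    """Expected winnings and one standard deviation over `hands` hands."""
    blocks = hands / 100
    ev = WINRATE_BB_PER_100 * blocks        # grows linearly in hands
    sd = SD_BB_PER_100 * math.sqrt(blocks)  # grows as the square root
    return ev, sd

for hands in (10_000, 100_000, 1_000_000):
    ev, sd = ev_and_sd(hands)
    print(f"{hands:>9} hands: EV = {ev:8.0f} BB, 1 SD = {sd:6.0f} BB")
```

With these assumed numbers, at 10,000 hands one standard deviation (160 BB) still exceeds the expectation (150 BB); it takes on the order of 100,000 hands before the expectation clearly pulls ahead of the noise, which is the point of the warning above.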
The real challenge of poker is dealing with the inherent variance of the game. The immense variance is the reason poker is so profitable, but even the most experienced players struggle to cope with the most extreme swings of negative luck. The brain constantly tries to pattern-match the immediate results, and however much you reason that it's just bad luck (when it really is bad luck!), it will make you sick psychologically.
Note that we assumed we know the expected winrate of a given player. However, conditions change, the profitability of the games fluctuates, and so on, so it's practically impossible to quantify any given player's current profitability. This makes it vastly more difficult to know whether bad past results are due to variance or to sub-optimal play.
I think intelligence and productiveness are inversely correlated.