You wrote "causing Y in order to achieve X", but I believe you meant "causing Y to prevent X".
I've often seen this with hooking up computers, TVs, and/or audio equipment. Many people seem to treat it as incomprehensible, even though with computers (particularly) it's just matching cable to connector, no real thinking needed. For A/V equipment the signal just "flows" out-to-in.
Specialization is fantastic, but there is real value in cross-training in other disciplines. It's hard to predict what insights from other fields might assist with your primary one. Also, even if you use a specialist, it's impossible to evaluate them if you draw a complete blank in the area. For...
The original reason for the 15-minute sampling was due to how we do billing, but I've never tried to "game" it, and if I'm distracted enough to be anticipating the next ping, there is something seriously wrong with me, since I'm clearly not focused at all. :) If I work on two projects during an interval and am not sure (roughly) how I split my time, I'll split it evenly. It's worked out pretty well.
I'll take a look at tagtime at some point next week. I'd guess there's a way to tune lambda based on the minimum feature size you're trying to capture, right? It's been a while since I've dealt much with Poisson distributions, and I've never had to generate them.
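If I'm remembering right, the gaps between pings in a Poisson process are exponentially distributed, so generating them should just be inverse-CDF sampling of an exponential. A rough sketch of the idea (Perl, since that's what these tools are written in; the 45-minute mean gap is just a number I picked for illustration):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Sketch of Poisson-process pings: gaps between pings are
    # exponentially distributed with mean 1/lambda.
    my $mean_gap = 45 * 60;        # illustrative mean gap in seconds
    my $lambda   = 1 / $mean_gap;  # rate parameter

    sub next_gap {
        # Exponential variate via inverse CDF: -ln(U)/lambda.
        # Using (1 - rand()) keeps the argument to log strictly positive.
        return -log(1 - rand()) / $lambda;
    }

    my $t = time();
    for (1 .. 5) {
        $t += next_gap();
        printf "ping at %s\n", scalar localtime($t);
    }

And that's presumably where lambda meets minimum feature size: anything much shorter than 1/lambda will usually fall between pings entirely, so you'd want the mean gap on the order of the shortest activity you care to resolve.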
Interestingly, I wrote something very similar to tagtime a number of years ago, and I'm still using it. I don't do random sampling (didn't think of it at the time), but sample at fixed 15-minute intervals. I've got shortcuts and defaults to remember the last thing I was working on, and automatic (and manual) time division for when I've worked on multiple projects in the interval. Over the last year, I've gotten it to the point where it automatically fills in timesheets for me. Mine too is Perl.
Of course, this sort of thing only works as long as you're honest about what you're ...
I spent some time learning about this when I was dabbling with the Netflix contest. There was a fair bit of discussion on their forum regarding SVD and related algorithms. The winner used a "blended" approach, which weighted the results from numerous different algorithms. An algorithm's success was measured by the RMSE of its predictions against a held-out test set of data.
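For anyone who hasn't dug into it: RMSE is just the square root of the mean squared error between predictions and actual ratings, and a blend is just a weighted sum of each model's predictions. A toy sketch with made-up numbers (not Netflix data, and the weights are invented; the real teams fit them against a held-out probe set):

    use strict;
    use warnings;
    use List::Util qw(sum);

    # Made-up held-out ratings and two models' predictions for them.
    my @actual = (4, 3, 5, 2, 4);
    my @pred_a = (3.8, 3.2, 4.5, 2.5, 4.1);   # e.g. an SVD-style model
    my @pred_b = (4.2, 2.7, 4.9, 1.8, 3.6);   # e.g. a neighborhood model
    my ($w_a, $w_b) = (0.6, 0.4);             # illustrative blend weights

    # Blend: weighted sum of the two models' predictions.
    my @blend = map { $w_a * $pred_a[$_] + $w_b * $pred_b[$_] } 0 .. $#actual;

    # RMSE: sqrt of the mean squared error against the held-out ratings.
    sub rmse {
        my ($pred, $act) = @_;
        my $sse = sum map { ($pred->[$_] - $act->[$_]) ** 2 } 0 .. $#$act;
        return sqrt($sse / @$act);
    }

    printf "model A RMSE: %.4f\n", rmse(\@pred_a, \@actual);
    printf "model B RMSE: %.4f\n", rmse(\@pred_b, \@actual);
    printf "blend   RMSE: %.4f\n", rmse(\@blend,  \@actual);

The point of blending is that the individual models' errors partially cancel, so the blend's RMSE typically comes out a bit lower than either model's on its own.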