This is an important consideration. I just can't figure out how to test it.
Furthermore, it is possible to let the head-land simulations run and remain emotionally abstracted from the results.
This is wise. Getting the necessary distance would indeed work, as would improving head-land accuracy, though I'm dubious about the extent to which it can be improved. In any case, I'm not quite to either goal myself yet. And if your own head-land is making accurate predictions, that's a good thing; I just can't get those kinds of results out of mine. Yet.
I second this request.
I'd like to make it that, but we'll see what I can do.
Nah; it was supposed to read "in which I construct." I just fumbled the editing.
Thank y'kindly. I upvote any and all comments that correct mistakes that would've made me look like a sub-lingual doof otherwise.
Glad to hear it. I aim to please.
Thanks; duly noted. I plan to write a few posts on the "road testing" of Less Wrong and Less Wrong-y theories about rationality and the defeat of akrasia, so these are helpful pointers.
Thanks. I expect most of my posts here will be more Useful Practice than True Theory, but only just; my hope is that the Less Wrong community won't spare the downvotes if I stray too far from rationality and too close to self-help territory.
Sounds like the concept of "agility" could be generalized richly indeed.