Richard Hollerith pointed out that the glucose drinks only last for about an hour, which is true. It is less certain that they are followed by a dip in self-control, or that their effect works by modulating neurotransmitters. Even if it does, it is a start for some serious neuropharmacological hacking. The serotonin system is already of interest in this respect: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?tmpl=NoSidebarfile&db=PubMed&cmd=Retrieve&list_uids=17360806&dopt=Abstract

A real willpower enhancement is likely to be something much more complex. Ideally we would like to strengthen our second-order desires (at least some of them, some of the time). That might involve finding some pretty dispersed neural networks, or more likely having a close symbiosis with software acting as an artificial superego. Design and implementation are left as an exercise :-)

It seems that one could trap these kinds of desires too with games that fulfil them. If I have creative desires, spending a lot of time creating in Second Life would meet them. Maybe one could create "games" that help the third world in some way, trapping people with altruistic motivations (my favorite example is the spiders in the comic http://www.e-sheep.com/spiders/ ). The worrying thing is when the game shapes or constrains the activity into something less useful than it would otherwise have been. First-order desire games trap us by providing simple stimulation; second-order games trap us by providing complex stimulation and meaning. But just as the stimulation in Tetris is fairly scripted, the meaning and interaction in WoW is limited. Perhaps the healthiest aspect of online games is how many players deliberately set out to expand their scope and circumvent their limitations. Maybe that is the solution to game addiction: try to ensure that games are expandable and could in principle become arbitrarily complex. But that will still not help the people content with fulfilling first-order desires.

Another approach might of course be to try to boost self-control. Self-control correlates with the ability to delay gratification, with long-term planning, and with career success. Enhancing it would be socially and individually beneficial, probably far more so than most other cognition enhancers.

There are experiments demonstrating that glucose drinks can actually restore depleted self-control: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=Abstract&list_uids=17279852 and presumably there are deeper methods of enhancing it. So one strategy for handling the rise in temptations is to make us better at handling them.

An interesting and rather worrying paper is this one, http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?itool=abstractplus&db=pubmed&cmd=Retrieve&dopt=abstractplus&list_uids=14756934 which argues that it is the flow experience of games that hooks us. Normally flow is hard to achieve through external stimuli, which at most just generate happiness, but the more profound flow state seems to be triggerable by the right games. This goes beyond mere habit formation to obtain a pleasurable stimulus, since the highly motivated flow state involves using all mental resources to prolong and expand the state. So we might not just have to learn how to overcome simple pleasure temptations but also complex flow temptations.

Overall, enhancements of strategic individual thinking would be extremely useful. But there are no guarantees they will be developed at the same pace as the games.

Bruce Schneier discusses "CYA Security" in his latest Crypto-gram: http://www.schneier.com/crypto-gram-0703.html#1 Much of the security reaction that occurs is less aimed at achieving safety and more at ensuring that the agency cannot be criticised for not having done its job, even when the reactions are irrational and counterproductive. I guess this is part of the "poor incentives" term of Robin's equation.

Security is perhaps one of the most clear-cut forms of paternalism, where certain groups are expected to act to protect everyone. It also seems to be more vulnerable to overreactions like the above than other forms of paternalism. Perhaps this is because of the larger power distance between the security people and the protected. The former have been given monopolies on coercive power, which means they are scrutinized more heavily, both internally and externally. There is also a psychological effect of power bias and separation from the "civilians" that makes them less likely to accept disconfirming external information. Finally, security problems often involve malign agency, which is something we humans understand in a very different way than other risks.

The health care paternalist who fails to detect and stop a health problem until some deaths occur can usually get away with it by imposing after-the-fact regulations.

I guess this line of reasoning would imply that we should expect paternalism in areas where the "outrage" aspect of risk is higher to be biased towards overreaction.

We seem to have a disproportionate number of sayings and heuristics aimed at making us less impulsive and lengthening our time horizons. These might have developed as a way of sustaining the slow discounting we humans have in comparison to other animals; http://www.wjh.harvard.edu/~mnkylab/publications/animalcommunication/constraints.pdf has a nice diagram (figure 3) showing the difference between humans (slowest), rats and pigeons (fastest discounting). Slow discounting might be linked to our foraging lifestyle (see the same paper), but since human societies have developed quickly in recent times, the payoffs of discounting have shifted faster than evolution could adapt (or impulsive individuals have a fitness advantage by having children earlier).

So maybe we have a culturally transmitted bias towards slow discounting and persistence that normally counteracts our too-fast discounting. But in some individuals it becomes maladaptive, perhaps because they are already naturally stubborn.
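The human/pigeon contrast in figure 3 is easy to play with using the hyperbolic discounting model common in this literature, where a reward A delayed by D is valued at A/(1+kD). A minimal sketch (the discount rates k below are purely illustrative, not fitted values from the paper):

```python
# Hyperbolic discounting: V = A / (1 + k * D).
# The k values here are made up for illustration; real fitted
# rates vary widely across species and studies.

def discounted_value(amount, delay, k):
    """Present value of `amount` received after `delay` time units."""
    return amount / (1 + k * delay)

# A slow discounter ("human-like", small k) and a fast discounter
# ("pigeon-like", large k) each choose between a smaller, sooner
# reward and a larger, later one.
for k, label in [(0.05, "slow discounter"), (1.0, "fast discounter")]:
    small_soon = discounted_value(50, 1, k)    # 50 after 1 time unit
    large_late = discounted_value(100, 10, k)  # 100 after 10 time units
    choice = "waits" if large_late > small_soon else "takes the smaller reward"
    print(f"{label} (k={k}): {choice}")
```

With these illustrative numbers the slow discounter waits for the larger reward while the fast discounter grabs the smaller one, which is exactly the impulsivity gap the sayings and heuristics above would be compensating for.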