Are Cognitive Load and Willpower drawn from the same pool?
I was recently reading a blog post here that referenced a 1999 paper by Baba Shiv and Alex Fedorikhin (Heart and Mind in Conflict: The Interplay of Affect and Cognition in Consumer Decision Making). In it, volunteers were asked to memorise either short or long numbers and were then offered a choice of snack as a reward: fruit or cake. The actual paper goes into a lot of detail that is irrelevant to the blog post, but nothing in it seems to contradict what the blog post says. The result was that those under the higher cognitive load were far more likely to choose the cake than those under the lower load.
I was wondering if anyone has read further into this line of research. The actual experiment seems to imply that the connection between cognitive load and willpower may be an acute effect - possibly not lasting very long. The choice of snack is made seconds after memorising a number, while the volunteer is actively trying to keep the number in memory for short-term recall a few minutes later. There doesn't seem to be anything about the effect on willpower minutes or hours later.
Does anyone know if the effect lasts longer than a few seconds? If so, I would be interested in whether this effect has been incorporated into any dieting strategies.
We prosecute CEOs for failing to do due diligence. But with people, we call it 'faith'
I wrote the following on my blog last night. I thought that I'd run it past an intelligent audience. Note that what I have referred to as an idea is what we here at LessWrong would call a 'belief'. I changed the name to remove any strange foggy baggage that might appear in the heads of potential readers who are not familiar with belief vs belief-in-belief and other concepts like that.
What are your thoughts?
I recently got into a discussion on Facebook that started with an assertion that free-thought/atheism/humanism/etc was no different than the certainties of fundamentalism. But that discussion moved into many topics, one of which is why it should not be controversial to assert that one idea can be more 'right' than another.
I asserted that the view that the universe was created 13.72 billion years ago was more 'right' than the view that the universe was sneezed out of a giant space cow. My interlocutor felt that the giant space cow could be 'right' for one person, even if it is not 'right' for others.
It is at this point that we ran into a problem, as it became apparent that her view of the meaning of 'right' and my own were different. As best I can tell, she felt that 'right' meant that it feels right or brings comfort. I, of course, use the word 'right' to mean that an idea contains explanatory and predictive power. The idea that the universe started in a big bang explains a lot about what we see in the cosmos and predicts what we will see as we keep looking - with a high degree of accuracy. That makes it 'right'. And it's more 'right' now than it was two decades ago, because we have found places where it is 'wrong' (the increase in the rate of expansion of the universe) and revised the idea to explain them (dark energy), making it more 'right'.
So we had two versions of 'right'. It wasn't a given that she would accept my version, so I had to come up with a good reason why my version of 'right' was, well, 'right'.
What I came up with was to point out an ethical imperative to be as 'right' as possible - using my definition. Consider two ideas, one of which is 'right' enough to predict certain unintended consequences of an action that the other idea fails to predict. If you consciously choose the less 'right' of the two (perhaps because it is 'more right for you'), you have consciously chosen to risk harming others in ways that choosing the more 'right' idea could have prevented.
Perhaps it will be clearer with an example: Sally is worried about vaccinating herself before travelling to another country. She knows that the doctor says that it is necessary and safer than not being vaccinated. But she's also heard some bad stories about the side-effects of vaccinations. She decides that not vaccinating is 'right for her'. After all, if she's wrong, what's the harm? She might get sick, but that's a fate she brought on herself. What she fails to realise is that the more 'right' idea (that vaccines are safer than not having them) also predicts that if she fails to vaccinate, she can bring those diseases back to Australia and infect others.
But this isn't just a problem on the left wing: Consider the case of Josephine, who concedes that there is little evidence for the existence of an afterlife. But she chooses to believe anyway because it is 'right for her'. Why not? If she's wrong, she'll never know it, because she'll have ceased to exist. But here come those unintended consequences again. This time they come in the form of predictions made by the less 'right' of the two ideas - that death is not the end of all existence, but a transition to a greater existence. As it happens, Josephine is an Australian senator who is about to vote on an authorisation for the ADF to bomb a village in Afghanistan. She briefly worries about the fate of any innocent bystanders but is comforted by the thought that if the ADF's aim is off, any innocents will go to heaven. But she's so busy that she fails to remember that her assumption about heaven was an arbitrary one, made for her own comfort, and shouldn't be used outside the confines of her own skull.
Now consider this case: The CEO of a company sees credible evidence that the government is about to change, leading to a major change in policies directly affecting the company's business environment. She should probably hedge her bets to prepare for the likely change. But what if she really liked the current government? What if the prospective change to the opposition caused her distress? Might she choose to believe that the government will almost certainly win the next election because that idea feels 'right for her'? I would suggest that the stock holders would feel that her due diligence required her to hedge the company's bets, whatever her feelings.
But change this CEO to a mother making a choice on matters of vaccines or faith healing, and now she hasn't made any kind of ethical lapse - she has just exercised her faith. We owe it to ourselves and to those who are affected by our actions (which is everyone, really) to try to be as 'right' as possible as often as possible. Never choose an idea because it is 'right for you'. And always be on the lookout for ideas that are even more 'right' than the ones you already cling to.
Mailing List for Digitized Belief Network Discussion
Hi all,
This is a follow-up to a previous post of mine - 'A digitized belief network?'.
I have now created a discussion group for anyone who wants to discuss the problems involved in creating a digital representation of a human's beliefs. Anyone who is interested in joining us can sign up here.
See you all around the list,
Avi
A digitized belief network?
Hello to all,
Like the rest of you, I'm an aspiring rationalist. I'm also a software engineer, so designing software solutions is almost automatic for me - it's the first place my mind goes when thinking about a problem.
Today's problem is the fact that our beliefs all rest on beliefs that themselves rest on beliefs. Each has a less-than-100% probability of being correct, so each belief built on top of it has an even smaller chance of being correct.
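To make the compounding concrete, here is a minimal numerical sketch. It assumes, purely for illustration, that each belief in the chain depends only on the one before it and that the links are independent:

```python
def chained_confidence(probabilities):
    """Probability that an entire chain of beliefs holds, assuming
    each link is independent - a simplifying assumption."""
    result = 1.0
    for p in probabilities:
        result *= p
    return result

# Three stacked beliefs, each individually 90% likely:
print(chained_confidence([0.9, 0.9, 0.9]))  # ~0.729
```

Even three fairly confident links leave the conclusion with only about 73% confidence, and the erosion gets worse the deeper the stack goes.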
When we discover a belief is false (or, less dramatically, revise its probability of being true), the change propagates to all other beliefs that are wholly or partially based on it. This is an imperfect process and can take a long time (less so in rationalists, but still limited by our speed of thought and inefficiency of recall).
I think that software can help with this. If a dedicated rationalist spent a large amount of time committing each of their beliefs to a database (including a rational assessment of its overall probability, and of its probability given that all the beliefs it rests on are true), along with which other beliefs each belief rests on, you would eventually have a picture of your belief network. The software could then alert you to contradictions between your estimate of a belief's probability of being true and its estimate derived from the truth estimates of the beliefs it rests on. It could also find cyclical beliefs and other inconsistencies. Plus, when you update a belief based on new evidence, it could spit out a list of beliefs that should be reconsidered.
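As a rough sketch of what such a tool's core might look like - all the names here (`BeliefNetwork`, `derived`, `to_reconsider`) are hypothetical, not an existing library, and the probability model naively assumes the beliefs a belief rests on are independent of one another:

```python
class BeliefNetwork:
    def __init__(self):
        self.prob = {}     # belief -> your stated P(belief is true)
        self.cond = {}     # belief -> P(true | everything it rests on is true)
        self.parents = {}  # belief -> list of beliefs it rests on

    def add(self, name, prob, cond=1.0, parents=()):
        self.prob[name] = prob
        self.cond[name] = cond
        self.parents[name] = list(parents)

    def derived(self, name):
        """P(true) implied by the estimates of the beliefs it rests on."""
        p = self.cond[name]
        for parent in self.parents[name]:
            p *= self.prob[parent]
        return p

    def contradictions(self, tolerance=0.1):
        """Beliefs whose stated probability disagrees with the derived one."""
        return [b for b in self.prob
                if abs(self.prob[b] - self.derived(b)) > tolerance]

    def has_cycle(self):
        """Detect circular reasoning with a depth-first search."""
        WHITE, GREY, BLACK = 0, 1, 2
        colour = {b: WHITE for b in self.prob}

        def visit(b):
            colour[b] = GREY
            for parent in self.parents[b]:
                if colour[parent] == GREY:
                    return True          # back edge: circular belief
                if colour[parent] == WHITE and visit(parent):
                    return True
            colour[b] = BLACK
            return False

        return any(visit(b) for b in self.prob if colour[b] == WHITE)

    def to_reconsider(self, name):
        """Beliefs that should be revisited after `name` is updated."""
        hits, frontier = set(), [name]
        while frontier:
            current = frontier.pop()
            for b, deps in self.parents.items():
                if current in deps and b not in hits:
                    hits.add(b)
                    frontier.append(b)
        return hits
```

For example, if you state 99% confidence in a belief whose single supporting belief you only hold at 95%, and which is only 90% likely even given that support, `contradictions()` would flag it, since the derived estimate is roughly 0.86.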
Obviously, this would only work if you are brutally honest about what you believe and fairly accurate about your assessments of truth probabilities. But I think this would be an awesome tool.
Does anyone know of an effort to build such a tool? If not, would anyone be interested in helping me design and build such a tool? I've only been reading LessWrong for a little while now, so there's probably a bunch of stuff that I haven't considered in the design of such a tool.
Yours rationally,
Avi