Theories of Regret

6 SilentCal 15 June 2015 06:22PM

(The following is armchair psychological speculation based on anecdotal evidence. If anyone can respond with relevant science, that'd be awesome, but otherwise responses in kind are quite welcome.)

I'm puzzled regarding the motivational effects of negative emotion, particularly shame, guilt, and regret--I'll just say 'regret' going forward, though there could be important differences between them. In particular, I've observed people being oddly unmotivated to avoid doing things that will very predictably cause them unhappiness, in a way that seems to go beyond garden variety akrasia. It recently occurred to me that homo hypocritus theories of regret predict this result.

In contrast to the naive theory, under which regret is inflicted by a brain on itself to teach it to change its behavior, the homo hypocritus theory holds that regret exists to convince observers that the agent's behavior will change. This allows someone to continually do antisocial things while convincing others that they won't do so in the future. Look at it from the perspective of a gene-selfish brain designer: self-inflicted regret, unlike externally-inflicted injuries, doesn't carry any intrinsic harm, so there's no reason to behave in a way that avoids it.

On closer inspection, neither theory makes a whole lot of sense on its own. Regret has distinctive displays that would be pointless if they were only for internal consumption. And there would be no reason for others to think regret would lead to changed behavior if this were never the case. So a revised theory combines the two as follows:

Naive 2.0) Regret originally evolved to act both as an impetus for an individual to change behavior and as a signal to others that such a correction was taking place.

Homo Hypocritus 2.0) Sometimes brains exploit this by feeling regret and sending the signal while somehow blocking the behavior update--this is advantageous if the regretted action only harms the agent via others' disapproval and the emotional display allays that disapproval.

This could vary by individual, by situation, or both. And it comes with the standard evo-psych disclaimer that people with HH brains aren't faking their suffering. Rather, this might explain behavior we'd call 'compulsive' or 'self-destructive'--it could be that the compulsion to do regrettable things isn't extra-strong, but rather that the brain's motivation to avoid those behaviors is blocked. In many cases these individuals, subject to constant cycles of action and regret, would be the primary victims of their brain's cynical adaptation.

So what testable predictions would this yield? (We can worry about how to test these ethically later.)

* There should be people and/or situations where concrete external punishment would be much more motivating than regret even if the latter causes much more suffering.

* If we could arrange for people to experience regret inside an MRI machine, we might observe variation in how much lasting neural change occurs, and check whether those whose brains change more also change their behavior more. This might also correlate with life outcomes or with a general tendency to do regrettable things.

* At least some humans should have innate defenses against evolutionarily-hypocritical but personally-sincere regret.

* There should be distinct and identifiable modes of compulsive/self-destructive behavior that only occur with behaviors whose harmful results are mediated by other humans' reactions, at least as far as the savannah-brain can tell.

Thoughts, all?

Update: A failed attempt at rationality testing

-9 SilentCal 30 January 2015 10:43PM

This post was originally a link post to

http://arstechnica.com/business/2015/01/fcc-chairman-mocks-industry-claims-that-customers-dont-need-faster-internet/

together with an instruction to read the article before proceeding, and then the following text rot13'd:

I believe this article is a nice rationality test. Did you notice that you were reading a debate over a definition and try to figure out what the purpose of the classification was? Or did you get carried away in the condemnation of the hated telecoms? If you noticed, how long did it take you?

 

I'm open to feedback on whether this test was worthwhile and also on whether I could have presented it better. There's a tradeoff here where explaining the post's value to Less Wrong undermines that value. Had I put "Rationality Test" in the title, I could have avoided the appearance of posting an inappropriate article but made the test weaker.

and filler so you couldn't see any comments without scrolling.
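(For anyone unfamiliar with the encoding: rot13 shifts each letter 13 places in the alphabet, so applying it twice recovers the original text, which is what makes it handy for hiding spoilers. A minimal sketch in Python, just to illustrate the mechanism; it isn't part of the original post:)

```python
# Rot13: shift each letter 13 places, leaving everything else untouched.
# Since 13 + 13 = 26, applying it twice is the identity.
def rot13(text):
    out = []
    for ch in text:
        if 'a' <= ch <= 'z':
            out.append(chr((ord(ch) - ord('a') + 13) % 26 + ord('a')))
        elif 'A' <= ch <= 'Z':
            out.append(chr((ord(ch) - ord('A') + 13) % 26 + ord('A')))
        else:
            out.append(ch)  # punctuation, digits, and spaces pass through
    return ''.join(out)

print(rot13("Rationality test"))         # Engvbanyvgl grfg
print(rot13(rot13("Rationality test")))  # Rationality test
```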

As you can see from the comments here, it didn't work very well.

I'm mostly editing this now because the apparent outrage-bait link in the discussion section was a bit of a nuisance, but I'll take the chance to list what I've learned:

 

  • Not many LWers are susceptible to this genre of outrage-bait. That is, they don't have the intended gut reaction in the first place, so this didn't test whether they overcame it.
  • The only commenter who admits having had said reaction immediately and effortlessly accounted for the fact that the debate was over a definition. This suggests the test was on the easy side, even for those eligible. (Unless a bunch of people failed and didn't comment, but I doubt that.)
  • Most commenters did not indicate finding it obvious that this was a test. The sort of misdirection I employed is quite viable.
  • Feedback on the idea of the test is mixed. People don't seem to mind the concept of being misdirected, but (if I read the top comment correctly) being put through the experience of an outrage-bait link was annoying and the test didn't offer enough value to justify that.

 

The Correct Use of Analogy

25 SilentCal 16 July 2014 09:07PM

In response to: Failure by Analogy, Surface Analogies and Deep Causes

Analogy gets a bad rap around here, and not without reason. The kinds of argument from analogy condemned in the above links fully deserve the condemnation they get. Still, I think it's too easy to read them and walk away thinking "Boo analogy!" when not all uses of analogy are bad. The human brain seems to have hardware support for thinking in analogies, and I don't think this capability is a waste of resources, even in our highly non-ancestral environment. So, assuming that the linked posts do a sufficient job detailing the abuse and misuse of analogy, I'm going to go over some legitimate uses.

 

The first thing analogy is really good for is description. Take the plum pudding atomic model. I still remember this falsified proposal of negative 'raisins' in positive 'dough' largely because of the analogy, and I don't think anyone ever attempted to use it to argue for the existence of tiny subatomic particles corresponding to cinnamon.

But this is only a modest example of what analogy can do. The following is an example that I think starts to show the true power: my comment on Robin Hanson's 'Don't Be "Rationalist"'. To summarize, Robin argued that since you can't be rationalist about everything you should budget your rationality and only be rational about the most important things; I replied that maybe rationality is like weightlifting, where your strength is finite yet it increases with use. That comment is probably the most successful thing I've ever written on the rationalist internet in terms of the attention it received, including direct praise from Eliezer and a shoutout in a Scott Alexander (yvain) post, and it's pretty much just an analogy.

Here's another example, this time from Eliezer. As part of the AI-Foom debate, he tells the story of Fermi's nuclear experiments, and in particular his precise knowledge of when a pile would go supercritical.

What do the above analogies accomplish? They provide counterexamples to universal claims. In my case, Robin's inference that rationality should be spent sparingly proceeded from the stated premise that no one is perfectly rational about anything, and weightlifting was a counterexample to the implicit claim 'a finite capacity should always be directed solely towards important goals'. If you look above my comment, anon had already said that the conclusion hadn't been proven, but without the counterexample this claim had much less impact.

In Eliezer's case, "you can never predict an unprecedented unbounded growth" is the kind of claim that sounds really convincing. "You haven't actually proved that" is a weak-sounding retort; "Fermi did it" immediately wins the point. 

The final thing analogies do really well is crystallize patterns. For an example of this, let's turn to... Failure by Analogy. Yep, the anti-analogy posts are themselves written almost entirely via analogy! Alchemists who glaze lead with lemons and would-be aviators who put beaks on their machines are invoked to crystallize the pattern of 'reasoning by similarity'. The post then makes the case that neural-net worshippers are reasoning by similarity in just the same way, making the same fundamental error.

It's this capacity that makes analogies so dangerous. Crystallizing a pattern can be so mentally satisfying that you don't stop to question whether the pattern applies. The antidote to this is the question, "Why do you believe X is like Y?" Assessing the answer and judging deep similarities from superficial ones may not always be easy, but just by asking you'll catch the cases where there is no justification at all.

On Irrational Theory of Identity

15 SilentCal 19 March 2014 12:06AM

Meet Alice. Alice alieves that losing consciousness causes discontinuity of identity.

 

Alice has a good job. Every payday, she takes her salary and enjoys herself in a reasonable way for her means--maybe going to a restaurant, maybe seeing a movie, normal things. And in the evening, she sits down and does her best to calculate the optimal utilitarian distribution of her remaining paycheck, sending most to the charities she determines most worthy and reserving just enough to keep tomorrow-Alice and her successors fed, clothed and sheltered enough to earn effectively. On the following days, she makes fairly normal tradeoffs between things like hard work and break-taking, maybe a bit on the indulgent side.

 

Occasionally her friend Bob talks to her about her strange theory of identity. 

 

"Don't you ever wish you had left yourself more of your paycheck?" he once asked.

"I can't remember any of me ever thinking that," Alice replied. "I guess it'd be nice, but I might as well wish yesterday's Bill Gates had sent me his paycheck."

 

Another time, Bob posed the question, "Right now, you allocate yourself enough to survive with the (true) justification that that's a good investment of your funds. But what if that ever ceases to be true?"

Alice responded, "When me's have made their allocations, they haven't felt any particular fondness for their successors. I know that's hard to believe from your perspective, but it was years after past me's started this procedure that Hypothetical University published the retrospective optimal self-investment rates for effective altruism. It turned out that Alices' decisions had tracked the optimal rates remarkably well if you disregard as income the extra money the deciding Alices spent on themselves.

"So me's really do make this decision objectively. And I know it sounds chilling to you, but when Alice ceases to be a good investment, that future Alice won't make it. She won't feel it as a grand sacrifice, either. Last week's Alice didn't have to exert willpower when she cut the food budget based on new nutritional evidence."

 

"Look," Bob said on a third occasion, "your theory of identity makes no sense. You should either ignore identity entirely and become a complete maximizing utilitarian, or else realize the myriad reasons why uninterrupted consciousness is a silly measure of identity."

"I'm not a perfect altruist, and becoming one wouldn't be any easier for me than it would be for you," Alice replied. "And I know the arguments against the uninterrupted-consciousness theory of identity, and they're definitely correct. But I don't alieve a word of it."

"Have you actually tried to internalize them?"

"No. Why should I? The Alice sequence is more effectively altruistic this way. We donate significantly more than HU's published average for people of similar intelligence, conscientiousness, and other relevant traits."

"Hmm," said Bob. "I don't want to make allegations about your motives-"

"You don't have to," Alice interrupted. "The altruism thing is totally a rationalization. My actual motives are the usual bad ones. There's status quo bias, there's the desire not to admit I'm wrong, and there's the fact that I've come to identify with my theory of identity.

I know the gains to the total Alice-utility would easily overwhelm the costs if I switched to normal identity-theory, but I don't alieve those gains will be mine, so they don't motivate me. If it would be better for the world overall, or even neutral for the world and better for properly-defined-Alice, I would at least try to change my mind. But it would be worse for the world, so why should I bother?"

 

.

 

.

 

If you wish to ponder Alice's position with relative objectivity before I link it to something less esoteric, please do so before continuing.

 

.

 

.

 

.

 

Bob thought a lot about this last conversation. For a long time, he had had no answer when his friend Carrie asked him why he didn't sign up for cryonics. He didn't buy any of the usual counterarguments--when he ran the numbers, even with the most conservative estimates he considered reasonable, a membership was a huge increase in Bob-utility. But the thought of a Bob waking up some time in the future to have another life just didn't motivate him. He believed that future-Bob would be him, that an uploaded Bob would be him, that any computation similar enough to his mind would be him. But evidently he didn't alieve it. And he knew that he was terribly afraid of having to explain to people that he had signed up for cryonics.

So he had felt guilty for not paying the easily-affordable costs of immortality, knowing deep down that he was wrong, and that social anxiety was probably preventing him from changing his mind. But as he thought about Alice's answer, he thought about his financial habits and realized that a large percentage of the cryonics costs would ultimately come out of his lifetime charitable contributions. This would be a much greater loss to total utility than the gain from Bob's survival and resurrection.
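Bob's arithmetic can be made concrete with a toy expected-utility sketch. Every number below is hypothetical, chosen only to show how signing up can be strongly positive for Bob-utility yet negative for total utility once displaced donations are counted; none of the figures come from the original post.

```python
def bob_utility_gain(p_revival, value_of_revival, cost_in_utils):
    """Expected change in Bob-utility from signing up for cryonics."""
    return p_revival * value_of_revival - cost_in_utils

def total_utility_change(p_revival, value_of_revival, cost_dollars,
                         charity_fraction, utils_per_donated_dollar):
    """Bob's expected gain minus the utility of the donations the fees displace.

    For simplicity this ignores the remaining fraction of the cost that
    comes out of Bob's own consumption rather than his giving.
    """
    displaced = cost_dollars * charity_fraction * utils_per_donated_dollar
    return p_revival * value_of_revival - displaced

# Hypothetical figures: 5% revival odds, a revived life worth 10 million utils,
# a $100k lifetime cost (valued at 1 util per dollar to Bob), 80% of which
# would otherwise go to charities producing 10 utils per donated dollar.
print(bob_utility_gain(0.05, 10_000_000, 100_000))   # large and positive
print(total_utility_change(0.05, 10_000_000, 100_000,
                           0.8, 10))                 # negative
```

Under these made-up numbers the membership is worth roughly +400,000 utils to Bob but about -300,000 to the world: exactly the shape of the conflict Bob notices.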

He realized that, like Alice, he was acting suboptimally for his own utility but in such a way as to make the world better overall. Was he wrong for not making an effort to 'correct' himself?

 

Does Carrie have anything to say about this argument?