
Comment author: Metus 16 February 2012 02:28:42AM 0 points

Seeing as insurers have a commercial interest in their data being correct, the data they use should be of very high quality. Thus it should be useful to see which insights are really robust, such as that wearing a seatbelt reduces the rate of death.

Comment author: NickiH 16 February 2012 09:40:26AM 6 points

I am an (almost qualified) actuary, working for a life insurance company.

I would love it if I had data of a very high quality. However, most insurance companies can't use population statistics directly because of differences in underwriting standards (we don't cover the very bad risks), target markets (we advertise in the Daily Slum, so we only cover low socioeconomic classes, for example), and claim definitions (what counts as a disease in the population might not count as a claim for the insurance company). So we use our own experience to modify the population stats. Very large companies might use entirely their own data.

Generally, there is not enough of our own data to be sure that it's totally credible, especially when it comes to fine differences such as how much you smoke or drink. And that's ignoring problems like non-disclosure. Age and sex are easier, but there's not much you can do about changing those, so they don't help with the question at hand.
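To make "not enough of it to be totally credible" concrete, here is a rough Python sketch of the classical "limited fluctuation" credibility weighting that actuarial textbooks describe: blend the company's own claim rate with the population rate, trusting your own data more as your claim count grows. This is a generic textbook illustration, not any particular insurer's method, and all the numbers in it are made up.

    # Minimal sketch of limited-fluctuation credibility weighting.
    # The 1082-claim full-credibility standard (enough claims to be within
    # 5% of the true rate with 90% confidence) is a common textbook
    # convention, not a universal rule.
    def credibility_weighted_rate(own_claims, own_exposure, population_rate,
                                  full_credibility_claims=1082):
        own_rate = own_claims / own_exposure
        # Credibility factor Z: square-root rule, capped at 1.
        z = min(1.0, (own_claims / full_credibility_claims) ** 0.5)
        return z * own_rate + (1 - z) * population_rate

    # A small portfolio leans mostly on the population statistics...
    print(credibility_weighted_rate(50, 10000, population_rate=0.004))
    # ...while a very large one relies entirely on its own experience.
    print(credibility_weighted_rate(2000, 500000, population_rate=0.004))

The same structure shows why fine distinctions like smoking or drinking stay close to the population assumption: the claim count in each sub-cell is small, so Z stays near zero.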

Of course, for some types of insurance, such as compulsory car insurance, there is more data to work with - I've never worked in general insurance, so I can't comment on that.

Comment author: NickiH 13 February 2012 10:30:24PM 0 points

This is interesting. But I'm not sure I followed it properly. Is there a post about Type 1/Type 2 mental processes? It might be good to link to it for those of us who need a refresher.

Comment author: NickiH 07 February 2012 08:53:34PM 2 points

This explains why so many textbooks are so badly written. The authors were aiming too high.

Comment author: gwern 20 January 2012 09:13:38PM 4 points

Then Mark leaves the following message on June's answering machine

A recent version of this for email: http://faculty.chicagobooth.edu/nicholas.epley/Krugeretal05.pdf

Without the benefit of paralinguistic cues such as gesture, emphasis, and intonation, it can be difficult to convey emotion and tone over electronic mail (e-mail). Five experiments suggest that this limitation is often underappreciated, such that people tend to believe that they can communicate over e-mail more effectively than they actually can. Studies 4 and 5 further suggest that this overconfidence is born of egocentrism, the inherent difficulty of detaching oneself from one’s own perspective when evaluating the perspective of someone else. Because e-mail communicators “hear” a statement differently depending on whether they intend to be, say, sarcastic or funny, it can be difficult to appreciate that their electronic audience may not.

Comment author: NickiH 02 February 2012 06:52:49PM 4 points

This is an interesting read. In particular, their work suggests a potentially very useful way of reducing miscommunication.

One of the experiments the authors ran tried to reduce the overconfidence they saw in predictions of whether people would be understood. They asked people to write sarcastic sentences and then read them back, out loud, in a tone of voice that made them sound completely serious. They also had serious sentences read in a sarcastic way. They found that people were then less confident that the email would be understood in the way it was intended, because they had changed the way they "heard" it in their heads.

I propose that this could be useful in the following way: if you write an email, read it aloud to yourself in the opposite tone of voice. If you are still confident that it will be taken the way you originally intended, it's probably safe to send. But if you can now see how it might be misunderstood, redraft it. Repeat until you feel ready to send.
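Spelled out as a loop, purely for fun (the helpers are stand-ins for human actions, so the "implementation" below is a toy that just asks you the questions; nothing in it is a real tone-detection API):

    # Toy sketch of the redrafting loop.
    def misreadable_in_opposite_tone(draft):
        # Read the draft aloud in the tone OPPOSITE to the one intended
        # (deadpan if you meant sarcasm, sarcastic if you meant it
        # straight), then answer honestly.
        answer = input("Read this aloud in the opposite tone:\n%s\n"
                       "Could it be misread? [y/n] " % draft)
        return answer.strip().lower() == "y"

    def redraft(draft):
        return input("Rewrite the draft:\n")

    def vet_email(draft):
        while misreadable_in_opposite_tone(draft):
            draft = redraft(draft)
        return draft  # probably safe to send

    # Example (runs interactively): vet_email("Sure, great idea.")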

There are times in my past when this advice would have been very useful to me.

Comment author: [deleted] 19 January 2012 03:26:17AM 43 points

I recently had a frightening first-hand brush with socially induced irrationality. My parents are devout Catholics who are not too pleased with my "aversion" to their religion. They send me to a Jesuit school, and naturally it works to my advantage with them to appear as if I'm engaged in deep "reflection" on the question of whether the loving, Christian God of the Bible exists (obviously I am not). One of the implicit social expectations at my school is to attend a retreat called "Kairos" as a senior. It's a four-day affair with plenty of prayer and new-age garbage; typically something that'd be no match for my powers of rationality. I signed up to ease my situation at home, expecting no harm to come from the retreat.

At first I thought Kairos would entail your typical retreaty nonsense. It turned the "search for god" into a social activity, not-so-subtly building links from normal friendship to Jesus Christ Lord And Savior Of His Anointed Flock. This wouldn't have been a problem for me under normal circumstances, but Kairos was not your typical retreat.

We were deprived of sleep, didn't have a single (waking) moment alone, weren't allowed to know what time it was, and were forced to pray and "reflect" in a circle for hours at a time, multiple times per day. This wasn't just indoctrination; this was brainwashing. I wasn't gullible enough to accept one iota of the spiritual garbage, but to understand my failure of mental hygiene it's important to know that Kairos is a very secretive retreat. Its rituals, itinerary, and operations are supposed to be unknown to all but retreat alumni, to ensure that future attendees get to experience all the great "surprises" and such.

One of my friends (who didn't attend) has been involved with Less Wrong far longer than I have, and upon my return he asked me what specifically happened on the retreat. He asked with the intent to publish the information, exposing this blatant brainwashing for what it is. Then I did the (to me) unthinkable: I refused to disclose, solely to preserve Kairos' secrets. I was so caught up in the social bonds I had formed and the general emotional hokum that I was convinced to actually defend such a terrible institution.

Now, just a week later, I am ashamed. I utterly failed my art. Perhaps if not for the intervention of my friend I'd still be protecting the secrets of Kairos. I was so easily put in a position where I would knowingly allow minds to fall victim to brainwashing, and I gave my tacit sanction to the ritualistic breaking of my peers' psyches for the sake of a retreat whose singular goal is to convert them to Catholicism. All of this because I got lost in the sociality of the retreat. I've since resolved that I must never permit my mental integrity to be compromised. Not for the sake of a group, not for the sake of a better social life, and not for the sake of my emotions. I think it's definitely warranted to be incredibly selective in who you associate with and how; the effects they have on your mind could be devastating under the right conditions.

Comment author: NickiH 31 January 2012 06:34:16PM 7 points

I utterly failed my art.

You did not fail. It took you only one week, and a simple question from your friend, to break out of a mindset that some people never escape. What's more, you learnt a lesson from it. I would count that as a win.

Comment author: Desrtopa 18 January 2012 01:22:07AM 10 points

So I think that when you notice that feeling, you should stand up for the sanctity of your mind. Even listening to that stuff puts gunk in your gears. You should have called the guy out (politely) for depriving people of the ability to help each other reach a better understanding of things.

I expect that he would have responded that if people are afraid their contributions will be criticized, they'll be less likely to share them, depriving the group of their potentially valuable contributions and risking creating a hostile environment. And he'd have a point, since fear of criticism is normal, and anything which makes people less comfortable with putting themselves forward is likely to filter people out.

If you're not discriminating with respect to beliefs or viewpoints, then you'll see yourself as standing to lose much more by discouraging sharing than by discouraging criticism. If you're too undiscriminating, you risk believing stupid things, while if you're too discriminating, you risk filtering out potentially valuable input (which is why we rarely tell newcomers here straight out to "read the sequences" these days; asking that much is too strong a filter).

To convince him that he ought to allow criticism of ideas in the discussion, you'd probably have to convince him that he's not being intellectually discriminating enough. It's not a simple, one-sided proposition; it carries a lot of inferential distance.

Comment author: NickiH 30 January 2012 06:20:19PM 2 points

if people are afraid their contributions will be criticized, they'll be less likely to share them

And if people think that their opposing contributions will be taken as criticism, they'll be less likely to share them, as demonstrated by the OP.

In response to comment by thomblake on Existential Risk
Comment author: Randolf 17 November 2011 03:03:57PM 1 point

Strange enough. After all, while I am a transhumanist to some degree and also enjoy scifi, I am far from being a genius. Still, the message of the pictures was immediately obvious. This would support what you said: they may be appealing to general people, while not necessarily as appealing to those already very familiar with scifi and transhumanism.

In response to comment by Randolf on Existential Risk
Comment author: NickiH 22 November 2011 04:39:26PM 0 points

I would count myself among "general people". I didn't get it at all. In fact, having read the comments, I'm still not sure I get it. It's a pretty picture and all, but why is it there?

Comment author: [deleted] 05 April 2011 07:18:41PM 68 points

If you think that humans are nothing but Turing machines, why is it morally wrong to kill a person but not morally wrong to turn off a computer?

Your question has the form:

If A is nothing but B, then why is it X to do Y to A but not to do Y to C which is also nothing but B?

This following question also has this form:

If apple pie is nothing but atoms, why is it safe to eat apple pie but not to eat napalm which is also nothing but atoms?

And here's the general answer to that question: the molecules which make up apple pie are safe to eat, and the molecules which make up napalm are unsafe to eat. This is possible because these are not the same molecules.

Now let's turn to your own question and give a general answer to it: it is morally wrong to shut off the program which makes up a human, but not morally wrong to shut off the programs which are found in an actual computer today. This is possible because these are not the same programs.

At this point I'm sure you will want to ask: what is so special about the program which makes up a human, that it would be morally wrong to shut it off? And I have no answer for that. Similarly, I couldn't answer you if you asked me why the molecules of apple pie are safe to eat and those of napalm are not.

As it happens, chemistry and biology have probably advanced to the point at which the question about apple pie can be answered. However, the study of mind/brain is still in its infancy, and as far as I know, we have not advanced to the equivalent point. But this doesn't mean that there isn't an answer.

In response to comment by [deleted] on Rationality Quotes: April 2011
Comment author: NickiH 05 April 2011 08:10:20PM 16 points

what is so special about the program which makes up a human, that it would be morally wrong to shut off the program?

We haven't figured out how to turn it back on again. Once we do, maybe it will become morally ok to turn people off.

Comment author: TheOtherDave 04 April 2011 10:00:31PM 2 points

No. Why would it?

Justification for an act is not something that emerges full-blown out of nothing. My act cannot be justified by my faith in X if that faith is itself unjustified.

And if I have faith in X within certain constraints and with certain reservations (as I do with governments, for example), that doesn't somehow make that faith less justified than if I "really believe in" X without constraints or reservations.

And all of that is true whether X is my government, my god, or my grandmother.

Comment author: NickiH 05 April 2011 08:01:58PM 2 points

From the point of view of the bomber, faith in God is not itself unjustified. It is in fact a vital part of his psychology.

The original point was the difference in the psychologies of bombers and soldiers. They are both doing it because they were told to, but their confidence in the judgement of the one giving the order differs. So the one with the higher confidence feels more "justified". That's what I thought you meant, anyway. If it's not, could you please clarify?

Perhaps I should have said "the bomber thinks he has more justification than the soldier".

Comment author: TheOtherDave 03 April 2011 04:55:33PM 2 points

Well, one salient difference might have to do with comparing the available mechanisms for calibrating my confidence in the judgment of a government with those for calibrating my confidence in the judgment of a god.

Comment author: NickiH 04 April 2011 08:17:16PM 3 points

Given that people who believe in god tend to really believe in god, and people who trust governments do so usually with a number of reservations, does that mean that the bomber has more justification than the soldier?
