[LINK] The power of fiction for moral instruction

11 David_Gerard 24 March 2013 09:19PM

From Medical Daily: Psychologists Discover How People Subconsciously Become Their Favorite Fictional Characters

Psychologists have discovered that while reading a book or story, people are prone to subconsciously adopt the behavior, thoughts, beliefs and internal responses of fictional characters as if they were their own.

Experts have dubbed this subconscious phenomenon ‘experience-taking,’ where people actually change their own behaviors and thoughts to match those of a fictional character that they can identify with.

Researchers from Ohio State University conducted a series of six experiments on about 500 participants. Reporting in the Journal of Personality and Social Psychology, they found that in the right situations, 'experience-taking' may lead to temporary real-world changes in the lives of readers.

They found that stories written in the first-person can temporarily transform the way readers view the world, themselves and other social groups. 

I always wondered at how Christopher Hitchens (who, when he wasn't being a columnist, was a professor of English literature) went on and on about the power of fiction for revealing moral truths. This gives me a better idea of how people could imprint on well-written fiction. More so than, say, logically-reasoned philosophical tracts.

This article is, of course, a popularisation. Anyone have links to the original paper?

Edit: Gwern delivers (PDF): Kaufman, G. F., & Libby, L. K. (2012, March 26). "Changing Beliefs and Behavior Through Experience-Taking." Journal of Personality and Social Psychology. Advance online publication. doi: 10.1037/a0027525

You only need faith in two things

22 Eliezer_Yudkowsky 10 March 2013 11:45PM

You only need faith in two things:  That "induction works" has a non-super-exponentially-tiny prior probability, and that some single large ordinal is well-ordered.  Anything else worth believing in is a deductive consequence of one or both.

(Because being exposed to ordered sensory data will rapidly promote the hypothesis that induction works, even if you started by assigning it very tiny prior probability, so long as that prior probability is not super-exponentially tiny.  Then induction on sensory data gives you all empirical facts worth believing in.  Believing that a mathematical system has a model usually corresponds to believing that a certain computable ordinal is well-ordered (the proof-theoretic ordinal of that system), and large ordinals imply the well-orderedness of all smaller ordinals.  So if you assign non-tiny prior probability to the idea that induction might work, and you believe in the well-orderedness of a single sufficiently large computable ordinal, all of empirical science, and all of the math you will actually believe in, will follow without any further need for faith.)

(The reason why you need faith for the first case is that although the fact that induction works can be readily observed, there is also some anti-inductive prior which says, 'Well, but since induction has worked all those previous times, it'll probably fail next time!' and 'Anti-induction is bound to work next time, since it's never worked before!'  Since anti-induction objectively gets a far lower Bayes-score on any ordered sequence and is then demoted by the logical operation of Bayesian updating, to favor induction over anti-induction it is not necessary to start out believing that induction works better than anti-induction, it is only necessary *not* to start out by being *perfectly* confident that induction won't work.)
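The updating argument above can be made concrete with a toy sketch (mine, not from the post; the 0.9/0.1 likelihoods and all numbers are illustrative assumptions). An inductive hypothesis assigns high probability to an ordered sequence continuing; an anti-inductive one assigns low probability; each correct continuation multiplies the odds in induction's favor, so even a minuscule (but not super-exponentially tiny) prior gets promoted quickly.

```python
# Toy model: two hypotheses watch an ordered sequence keep continuing
# its pattern. "Induction" says the pattern continues with probability
# 0.9 each step; "anti-induction" says 0.1. Bayesian updating multiplies
# the odds by the likelihood ratio (0.9/0.1 = 9) per observation.

def posterior_on_induction(prior, n_observations,
                           p_induction=0.9, p_anti=0.1):
    """Posterior P(induction) after n_observations correct
    continuations, starting from P(induction) = prior."""
    odds = prior / (1 - prior)
    for _ in range(n_observations):
        odds *= p_induction / p_anti  # likelihood ratio per observation
    return odds / (1 + odds)

# Even a 1e-20 prior is overwhelmed after 50 ordered observations.
print(posterior_on_induction(prior=1e-20, n_observations=50))  # ≈ 1.0
```

A super-exponentially tiny prior (say, 9 to the power of minus the number of observations you will ever make) is the one thing this update cannot rescue, which is the point of the post.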

(The reason why you need faith for the second case is that although more powerful proof systems - those with larger proof-theoretic ordinals - can prove the consistency of weaker proof systems, or equivalently prove the well-ordering of smaller ordinals, there's no known perfect system for telling which mathematical systems are consistent just as (equivalently!) there's no way of solving the halting problem.  So when you reach the strongest math system you can be convinced of and further assumptions seem dangerously fragile, there's some large ordinal that represents all the math you believe in.  If this doesn't seem to you like faith, try looking up a Buchholz hydra and then believing that it can always be killed.)

(Work is ongoing on eliminating the requirement for faith in these two remaining propositions.  For example, we might be able to describe our increasing confidence in ZFC in terms of logical uncertainty and an inductive prior which is updated as ZFC passes various tests that it would have a substantial subjective probability of failing, even given all other tests it has passed so far, if ZFC were inconsistent.)

(No, this is *not* the "tu quoque!" moral equivalent of starting out by assigning probability 1 that Christ died for your sins.)

Rationalist fiction brainstorming funtimes

7 Manfred 09 March 2013 01:53PM

The title should make things clear enough, so let's start with my description of the target, rationalist fiction: fiction that tries to teach the audience rationalist cognitive skills by having characters model those skills for the reader.

So for example, Luminosity is to a large extent about the questions "What do I want? What do I have? How can I best use the latter to get the former?"  Oh, and about using empiricism on magic.

Another example is Harry Potter and the Methods of Rationality, which goes more in-depth about the laundry list of human biases. In fact, many of the more iconic moments (measured by what I remember and what other people like to copy) are about biases to avoid, rather than about modeling good behavior.

 

This thread is about ideas, from general to specific, for rationalist fiction. I'll give some obvious examples.

General idea: having a rational character encountering magic or amazing technology is a great chance to showcase the power of empiricism.  (Has anyone gotten on this one yet? :3 )

Story idea: Okay, so we take the Dresden Files universe, and our rational protagonist is some smart kid who just started a summer job as an assistant radio technician or something. It turns out he's got a one-in-a-hundred magical talent, enough to cut off his budding career. He manages to find the magic community, figures out just enough, and embarks on a heroic quest to run a magitech radio station. (Okay, this last bit isn't obvious - for one, more character development would probably have him wanting something else.  For another, the obvious thing is to take over the world, if Luminosity and HPMOR are anything to go by.)

Specific idea: A character could model the skill of testing stuff by testing stuff.  When characters are performing a big search, have someone actually stop to think about false positives, or more generally "how could things be going wrong, and how can I prevent that?", and have it actually be a false positive once.

 

But really, there's an explosion of possibilities out there to explore, and I feel like we have "Rationalist meets magic. Rationalist does science to magic.  Rationalist kicks butt with magic" fairly well-covered.  We have all these different biases categorized, with corresponding right ways to do things, and there are plenty of good behaviors we can try to teach an audience without the empiricism-fodder and high stakes that a fantasy setting provides. Or even if you do a Dresden Files fic, you could ignore the empiricism stuff and just, like, pick a habit from Anna's checklist and write a short story :D. Here's an idea I quite fancy; I'll save everything else for comments:

General idea: Giving people the benefit of the doubt and managing to lose arguments when they need to be lost is the closest thing to a rationalist superpower I have. Can I work that into a story somehow?

Story ideas: A James Herriot sort of thing, where the protagonist has their daily life (maybe veterinarian, or materials scientist, or line cook, or model rocket hobbyist) and relatably goes about it, occasionally giving people the benefit of the doubt and losing arguments, sometimes using other rationalist skills, and usually ending up on the right side of things in the end. At this point it might be too subtle to actually teach the audience; one solution would be a designated person in-story who periodically notices how awesome the protagonist is.

"Why doesn't that cool thing happen when I *try* to do it?"

0 Rukifellth 06 March 2013 08:21PM

For the past year I've been noticing an interesting phenomenon, the "Why can't I do that on purpose?"-effect. This usually happens when I'm just walking by my computer desk or some other piece of furniture and throw whatever object I'm holding onto it - in this case, a balled-up bit of tin foil from a piece of chocolate. The ball bounces off an emptied chicken drumstick instead of landing directly on the glass desk.

Fascinated, I try and hit the chicken drumstick again with the balled tin-foil, without success.

"How the hell did I do that by accident?"

There are actually a number of different things on my desk that the balled-up bit of tin foil could have hit to elicit that same reaction from me: the plastic candy wrappers around the chicken drumstick, the fork next to it, anything. However, if I try to hit the chicken drumstick, my reaction to the balled-up tin foil hitting the candy wrapper instead will be "Why didn't I hit the drumstick?"

In other words, suppose there's a 50% chance of eliciting reaction A, because there are 5 objects, each with a 10% chance of being hit. If I hit one of them and elicit reaction A, I decrease the probability of re-eliciting reaction A to 10%, because hits on the other 4 objects will now be disregarded.
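The arithmetic above can be checked with a quick simulation (the object names are made up; only the 10%-per-object, five-object setup comes from the post):

```python
import random

# Five objects, each hit with probability 0.1; otherwise a miss.
# Before aiming, ANY hit is a surprise (total chance 0.5); once you
# aim at one particular object, only that object counts (0.1).

random.seed(0)
OBJECTS = ["drumstick", "wrapper", "fork", "mug", "cable"]

def throw():
    """Return the object hit, or None on a miss (each object 10%)."""
    r = random.random()
    return OBJECTS[int(r * 10)] if r < 0.5 else None

trials = 100_000
any_hit = sum(throw() is not None for _ in range(trials)) / trials
target_hit = sum(throw() == "drumstick" for _ in range(trials)) / trials
print(any_hit, target_hit)  # ≈ 0.5 and ≈ 0.1
```

The surprise on the first throw is scored against the union of five outcomes; the deliberate retry is scored against just one, hence the five-fold drop.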

Caelum est Conterrens: I frankly don't see how this is a horror story

26 chaosmage 06 March 2013 10:31AM

So Eliezer said in his March 1st HPMOR progress report:

I recommend the recursive fanfic “Friendship is Optimal: Caelum est Conterrens” (Heaven Is Terrifying).  This is the first and only effective horror novel I have ever read, since unlike Lovecraft, it contains things I actually find scary.

So I read that and it was certainly very much worth reading - thanks for the recommendation! Obviously, the following contains spoilers.

I'm confused about how the story is supposed to be "terrifying". I rarely find any fiction scary, but I suspect that this is about something else: I didn't think Failed Utopia #4-2 was "failed" either and in Three Worlds Collide, I thought the choice of the "Normal" ending made a lot more sense than choosing the "True" ending. The Optimalverse seems to me a fantastically fortunate universe, pretty much the best universe mammals could ever hope to end up in, and I honestly don't see how it is a horror novel, at all.

So, apparently there's something I'm not getting. Something that makes an individual's hard-to-define "free choice" more valuable than her much-easier-to-define happiness. Something like a paranoid schizophrenic's right not to be treated.

So I'd like the dumb version please. What's terrifying about the Optimalverse?

Subjective Realities

2 TheatreAddict 20 September 2011 07:10PM

So I have a friend who I sit next to in class, and we talk about philosophy. Well, today he brought up that when people leave your presence and you can't observe them any longer, you no longer have proof that they exist.

Well, I pointed out that this would violate the law of conservation of mass, right?

So then, with a bit more prodding, I figured out that by "no longer exist", he means they still exist in their world, but they no longer exist in mine. So basically you can't prove that anyone exists unless they're directly in front of you.

I'm really not certain how to go about answering this question. I mean, he challenged me to prove that my mother existed, without seeing her. Obviously I couldn't.

Is he right? Or is there some flaw in his argument, some fallacy that I'm missing?

I went through a few of the Sequences, and the closest article I could find was about not believing in the invisible. But in this case, he doesn't literally (I think) believe they just vanish; he believes they enter alternate universes that are selected when I come into contact with them again.

My mind is boggled. I also apologize if this is a dumb question and it's common knowledge or has already been answered; to my credit, I did make an attempt to figure out the answer before bothering you all. Thanks.

Imitation is the Sincerest Form of Argument

74 palladias 18 February 2013 05:05PM

I recently gave a talk at Chicago Ideas Week on adapting Turing Tests to have better, less mindkill-y arguments, and this is the précis for folks who would prefer not to sit through the video (which is available here).

Conventional Turing Tests check whether a programmer can build a convincing facsimile of a human conversationalist.   The test has turned out to reveal less about machine intelligence than human intelligence.  (Anger is really easy to fake, since fights can end up a little more Markov chain-y, where you only need to reply to the most recent rejoinder and can ignore what came before).  Since normal Turing Tests made us think more about our model of human conversation, economist Bryan Caplan came up with a way to use them to make us think more usefully about our models of our enemies.

After Paul Krugman disparaged Caplan's brand of libertarian economics, Caplan challenged him to an ideological Turing Test, where both players would be human, but would be trying to accurately imitate each other.  Caplan and Krugman would each answer questions about their true beliefs honestly, and then would fill out the questionnaire again in persona inimici - trying to guess the answers given by the other side.  Caplan was willing to bet that he understood Krugman's position well enough to mimic it, but that Krugman would be easily spotted as a fake!Caplan.
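The pass criterion can be sketched concretely (all names and numbers here are hypothetical, not from Caplan's proposal): judges label each answer set "genuine" or "impostor", and an impostor passes if judges take them for genuine at least as often as chance.

```python
# Hypothetical scoring sketch for an ideological Turing Test: for one
# impostor entry, count the fraction of judges who guessed "genuine".
# At or above 0.5, the impostor is indistinguishable from the real thing.

def pass_rate(judgements):
    """judgements: list of (true_label, guessed_label) pairs
    for a single entry. Returns the fraction judged genuine."""
    return sum(guess == "genuine" for _, guess in judgements) / len(judgements)

# Made-up data: 10 judges rating one impostor's answers.
votes = [("impostor", "genuine")] * 6 + [("impostor", "impostor")] * 4
print(pass_rate(votes))  # 0.6 - fooled a majority, a clear pass
```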

Krugman didn't take him up on the offer, but I've run a couple iterations of the test for my religion/philosophy blog.  The first year, some of the most interesting results were the proxy variables people were using, which weren't as strong indicators as the judges thought.  (One Catholic coasted through to victory as a faux atheist, since many of the atheist judges thought there was no way a Christian would appreciate the webcomic SMBC).

The trouble was, the Christians did a lot better, since it turned out I had written boring, easy to guess questions for the true and faux atheists.  The second year, I wrote weirder questions, and the answers were a lot more diverse and surprising (and a number of the atheist participants called out each other as fakes or just plain wrong, since we'd gotten past the shallow questions from year one, and there's a lot of philosophical diversity within atheism).

The exercise made people get curious about what it was their opponents actually thought and why.  It helped people spot incorrect stereotypes of an opposing side and faultlines they'd been ignoring within their own.  Personally, (and according to other participants) it helped me have an argument less antagonistically.  Instead of just trying to find enough of a weak point to discomfit my opponent, I was trying to build up a model of how they thought, and I needed their help to do it.  

Taking a calm, inquisitive look at an opponent's position might teach me that my position is wrong, or has a gap I need to investigate.  But even if my opponent is just as wrong as ze seemed, there's still a benefit to me.  Having a really detailed, accurate model of zer position may help me show them why it's wrong, since now I can see exactly where it rasps against reality.  And even if my conversation isn't helpful to them, it's interesting for me to see what they were missing.  I may be correct in this particular argument, but the odds are good that I share the rationalist weak-point that is keeping them from noticing the error.  I'd like to be able to see it more clearly so I can try and spot it in my own thought.  (Think of this as the shift from "How the hell can you be so dumb?!" to "How the hell can you be so dumb?")

When I get angry, I'm satisfied when I beat my interlocutor.  When I get curious, I'm only satisfied when I learn something new.

Giving in to small vices

7 [deleted] 03 March 2013 08:46PM

When I was in Seoul three years ago to visit a friend, I was not impressed by the city. The people there were always in a hurry, and struck me as generally unfriendly. When you apologise for accidentally bumping into someone, your apology will usually be coldly ignored. There are also very strict social rules in place. E.g., on the trains, there are seats specially reserved for small children, elderly people, and the physically disabled. If you do not fall into any of these three categories, you are not allowed to take any of those seats, even when you are travelling during the off-peak hours and there are few other passengers. Of course, there are no laws in place to forbid you to do so, but you will be met with (silent) disapproval from the South Koreans. Or so my Korean friend warned me.

Another thing that struck me was the fact that the streets were strewn with litter everywhere. It was very unpleasant. How can one go about resolving this issue? After all, the lack of civic-mindedness is something that takes time to address, but you want clean streets now. Maybe you are thinking of making the act of littering legally punishable. That will certainly teach those litterbugs to be more considerate. So you pass a law saying that those who are caught littering will have to pay a fine.

This sounds like a good idea: fining people for littering is a quick and easy way of filling the government's coffers, so you instruct policemen to station themselves strategically in busy areas. But using police manpower to ensure public cleanliness seems like a colossal waste of all the specialised training these policemen have received in preparation for their jobs.

What to do then? Maybe during the first two weeks after the ratification of the law, you delegate the assignment of catching litterbugs to a few policemen, to send the message that you mean business. After the fear of getting caught has been sufficiently instilled in the people, you surreptitiously transfer the policemen to resume their former responsibilities. You trust that there will be little littering now, because what matters is not the actual presence of the policemen, but the belief that those policemen are present, even when they are not.

But this might not work in the long run. People are not oblivious to their surroundings -- soon they'd realise that no one is actually enforcing the no-littering law, and they'd return to their old ways of leaving trash on the streets. So, periodically, you will have to make sure that there are policemen stationed in busy areas to deter littering. But over the long run, it would still lead to a huge waste of the police forces' resources and manpower. Besides, the policemen might not be so happy about having their other responsibilities interrupted just because they have to catch litterbugs. It is more important for them to catch thieves instead of fining people who throw away used napkins on the streets.

So what can you do? After all, you would really like to punish those litterbugs for their lack of civic-mindedness.

I would say that this is the wrong way of thinking about things. No doubt littering shows a lack of civic-mindedness. But littering is also, generally, a small vice. My suggestion is perhaps rather unorthodox: instead of punishing it, I suggest that we go out of our way to accommodate it. Accommodation does not necessarily mean that we have to continue living with the unpleasant effects of these small vices. Simply place so many rubbish bins on the streets of Seoul that there would be absolutely no reason to litter. Of course, there will probably be a few people who litter out of malice, but I believe most people litter simply because they are too lazy to carry their trash with them when there are no rubbish bins nearby.

Accommodate their laziness. By placing lots of rubbish bins along the streets, you are telling them, "I know that you are lazy, but instead of punishing you for it, guess what? I am going to make things easier for you!" Sure, you will have to spend quite a lot of money on the purchase of so many rubbish bins, but over the long run the benefits far outweigh the costs -- those who live in Seoul will get to enjoy a much cleaner living environment, and instead of hiring cleaners to work long shifts cleaning dirty streets, you can just hire them to empty and replace the trash bags, which takes a much shorter time.

Passing a law to punish litterbugs would also detract from people's ability to enjoy going out -- e.g., they would wonder whether to buy food from roadside stalls, because they would fear having nowhere to properly dispose of their trash, and at the same time they have no wish to carry the trash with them for long periods of time. Placing lots of rubbish bins along the streets, on the other hand, makes it a lot easier for them to enjoy going out.

Our first reaction to vices is usually the desire to criticise or to penalise. Certain vices no doubt deserve such hostility. But it is important to pick your battles -- focus on combating certain vices, and give in to the rest. In fact, sometimes, going out of your way to accommodate a small vice would surprisingly end up making things better for everyone involved. This principle is useful in all aspects of life, and it is useful in both interpersonal relations and policy-making.

Decide for yourself what you can forgive and what you cannot. For big vices, it is important to ask, "What is right?" For small vices, it is perhaps more important to ask, "What works?"

Meetup : Berkeley: Dungeons & Discourse

3 Nisan 03 March 2013 06:13AM

Discussion article for the meetup : Berkeley: Dungeons & Discourse

WHEN: 06 March 2013 07:00:00PM (-0800)

WHERE: Berkeley, CA

This week's meetup is about Scott's philosophy RPG Dungeons and Discourse. Here is the comic strip that inspired Dungeons & Discourse:

http://dresdencodak.com/2009/01/27/advanced-dungeons-and-discourse/

Scott's rulebook is available here:

http://slatestarcodex.com/2013/02/22/dungeons-and-discourse-third-edition-the-dialectic-continues/

It includes an epic narration of the first campaign/musical, The King Under The Mountain. There's an html version of that here, complete with music: http://lesswrong.com/lw/8kn/king_under_the_mountain_adventure_log_soundtrack/ I'm going to print out a couple copies of the rulebook. The purpose of the meetup is to do some combination of the following:

  • Look at the rules.
  • Kibitz about the rules.
  • Ask each other to explain all the references.
  • Ask each other how role-playing games work.
  • Make characters (just for fun).
  • Listen to Less Wrong filk.
  • Decide whether or not to join a group to play Scott's upcoming campaign Fermat's Last Stand.

We will not be starting a campaign at this meetup this week. You can have fun at this meetup even if you don't intend to play or DM the campaign! If you do want to play the campaign, I encourage you to post to the coordination thread on Scott's blog:

http://slatestarcodex.com/2013/02/26/fermats-last-stand-coordination-thread/

There are people interested in playing in Berkeley and in the South Bay.

The meetup will begin on Wednesday at 7:30pm. For directions to Zendo, see the mailing list:

http://groups.google.com/group/bayarealesswrong

or call me at:

http://i.imgur.com/Vcafy.png

[Link] Colonisation of Venus

10 [deleted] 24 February 2013 10:16PM

I was wondering what people thought of this paper by Geoffrey Landis on colonising Venus. In it he suggests that cloud-top Venus is one of the most benign environments in the Solar System. Temperature and gravity are similar to Earth, there's some radiation shielding and useful resources, and aerostats filled only with breathable air would float at that height. I'm no expert so can't speak to how accurate it is, but it's certainly very thought-provoking for such a short paper.