This is a response to Eliezer Yudkowsky's The Logical Fallacy of Generalization from Fictional Evidence and Alex Flint's When does an insight count as evidence? as well as komponisto's recent request for science fiction recommendations.

My thesis is that insight forms a category that is distinct from evidence, and that fiction can provide insight, even if it can't provide much evidence. To give some idea of what I mean, I'll list the insights I gained from one particular piece of fiction (published in 1992), which have influenced my life to a large degree:

  1. Intelligence may be the ultimate power in this universe.
  2. A technological Singularity is possible.
  3. A bad Singularity is possible.
  4. It may be possible to nudge the future, in particular to make a good Singularity more likely, and a bad one less likely.
  5. Improving network security may be one possible way to nudge the future in a good direction. (Side note: here are my current thoughts on this.)
  6. An online reputation for intelligence, rationality, insight, and/or clarity can be a source of power, because it may provide a chance to change the beliefs of a few people who will make a crucial difference.

So what is insight, as opposed to evidence? First of all, notice that logically omniscient Bayesians have no use for insight. They would have known all of the above without having observed anything (assuming they had a reasonable prior). So insight must be related to logical uncertainty, and a feature only of minds that are computationally constrained. I suspect that we won't fully understand the nature of insight until the problem of logical uncertainty is solved, but here are some of my thoughts about it in the meantime:

  • A main form of insight is a hypothesis that one hadn't previously entertained, but should be assigned a non-negligible prior probability.
  • An insight is kind of like a mathematical proof: in theory you could have thought of it yourself, but reading it saves you a bunch of computation.
  • Recognizing an insight seems easier than coming up with it, but still of nontrivial difficulty.

So a challenge for us is to distinguish true insights from unhelpful distractions in fiction. Eliezer mentioned people who let the Matrix and Terminator dominate their thoughts about the future, and I agree that we have to be careful not to let our minds consider fiction as evidence. But is there also some skill that can be learned, to pick out the insights, and not just to ignore the distractions?

P.S., what insights have you gained from fiction?

P.P.S., I guess I should mention the name of the book for the search engines: A Fire Upon the Deep by Vernor Vinge.

46 comments

I find that literary fiction is very regularly the best available source of social insight. I'm particularly partial to Russian opera as the training-wheels version of this, actually. Science fiction, in my experience, is much, much less frequently a source of valuable insight, though Vinge's Cookie Monster provided one.

By the way, is this post obvious to most people? I can't tell if it would be obvious to most readers or not.

I once asked my AP English teacher why we spent so much time reading and analyzing fiction, and to my surprise, he couldn't answer me. (In retrospect, he deserves a lot of credit for being willing to admit his ignorance.) He said he would think about it, so I waited a few months and asked again, and he still didn't have an answer.

What I wouldn't do to have Less Wrong available during my high school years... which makes me wonder, where are the teenage would-be rationalists hanging out today? There seem to be fewer of them here than I would have expected.

which makes me wonder, where are the teenage would-be rationalists hanging out today? There seem to be fewer of them here than I would have expected.

I think it's plausible that the majority of teen LW readers stay silent, simply because they are more likely to be intimidated by the quality of the discussion, and more likely to think they cannot make a valuable comment.

At least, I'm a teenage reader and the above mentioned things describe my attitude. I suspect though that it applies to others too.

I think it's plausible that the majority of teen LW readers stay silent, simply because they are more likely to be intimidated by the quality of the discussion, and more likely to think they cannot make a valuable comment.

I wonder if we should have a monthly social thread, where people can ask questions that don't necessarily advance the state of the art of rationality, or just socialize and talk about their favorite books or music.

I'd like to have a full-on non-meta subreddit and/or subgroup-blog. I don't understand Eliezer's concern that this would pollute the quality of the site. Aggressive moderation will work and we can only hope that the site goes mainstream and attracts more eyeballs for the more important things we discuss here.

I think that the intimidation you describe is applicable to all age groups. My guess is that it takes time to filter into the transhumanist network from the mere smart geek area. On the other side of the coin, we have the fact that older folk don't make as much use of the internet and so have less chance of finding LW. Thus, we get mostly 20-somethings involved.

I think that the intimidation you describe is applicable to all age groups.

Yes; but "the shy do not learn", so intimidation must be overcome. Of course this should not preclude the community from using accessible terminology when possible, or creating "entry points" for beginners.

The majority of people reading everything online stay silent, only a small percentage of people stop being lurkers. I wonder if that percentage is better or worse on Less Wrong. And I agree that Less Wrong is very intimidating to new posters.

I have a BA and MA in English Lit, and I can't sincerely answer you. I know several of the standard answers--most of which are derived from and are designed to promote various literary theories and the associated coterie of career minded professors. I left Lit in large part because of those (non-) answers, and did my PhD in Rhetoric instead.

Painting with a very broad brush here, but the main reasons people study lit group into five areas.

Art for art's sake-->new criticism, structuralism, deconstruction: those fields that see studying literature as valuable in itself for understanding how literature works.

Author worship-->few scholars still do this, but these see studying literature as valuable as a way to understand a great writer. A modern version is the "shrink crit" types who use literature to do armchair psychoanalysis of the author (too often using extremely outdated Freudian theory).

Reader worship-->reader response theory, mainly, though some accuse rhetoricians of doing this: these theories mainly look at what readers make of a text as being the meaning/value of that text (sometimes they argue that the author is nothing more than a first reader).

How a text works-->linguistics and literature, mainly. These critics study literature to understand how the artistry shapes and is shaped by the constraints of language.

What it means in context-->there are two separate groups here. One is the social/cultural critics who build out of class/race/gender studies (Marxist, Feminist, et al.). The others are the "New Historicist" critics who study lit to see how it lends insight into its historical context and how the historical context lends insight into the text.

There's a graph of this, but my ability to do ASCII art is ... not up to the task. Basically, you draw 5 circles, one in the center, the other four at the cardinal points. In the center are the text-focused people (art for art's sake). To the left are the author-focused types, to the right are the reader-focused types. You can draw arrows from the author circle to the text circle and from the text circle to the reader circle, but that leads to a whole 'nother can of worms. Anyhow, above the text circle goes either the linguistics/language one or the history/culture one. The other goes below. (What gets put on top can be telling about the teacher's biases.)

And, of course, any literary critic worth their salt will immediately violate any of these groupings if that's what makes the most sense to developing insight into the text/reading experience.

I hope that helps.

What I wouldn't do to have Less Wrong available during my high school years... which makes me wonder, where are the teenage would-be rationalists hanging out today? There seem to be fewer of them here than I would have expected.

Some of us grew up and are a little more active on the site these days :)

Where's "here?"

Where's "here?"

Less Wrong

I think that teenage "would-be rationalists" exist in fairly small quantities, and those that exist are fairly unlikely to know about this site.

Is it in our interest to identify as many of them as possible while they're still relatively young? In the USA, "gifted" 7th graders are sometimes encouraged to take the SAT, and Duke sells their names and addresses to those offering "qualified educational opportunities." In my opinion, that is probably the best available test of smartness for people of that age.

By the way, is this post obvious to most people? I can't tell if it would be obvious to most readers or not.

It's not a huge step, but I appreciate having clear words put around what were ill-formed thoughts.

I appreciate having clear words put around what were ill-formed thoughts.

I find that much of the value that LW provides me is doing just that.

By the way, is this post obvious to most people?

I know nothing about Russian Opera or Vinge's Cookie Monster, nor about your knowledge of them, nor how good you think the "best available" is, nor whether the first sentence was so curiously carefully worded in order to sidestep saying whether you agree with these very regular thoughts.

In short, since I know nothing about you beyond what it says here, I literally have no idea what you're talking about.

What was the valuable insight you got from Vinge's Cookie Monster? I just finished re-reading it today, and nothing really obvious jumped out at me.

I found the concept obvious in retrospect, but I'd never thought of it specifically before. I definitely appreciate having it pointed out.

It seems to me that fiction is much better at communicating insight into the emotional nature and patterns of human beings than it is at more intellectual forms of insight. If the fiction rings true, the reader will empathize with the characters and learn from their experiences what the characters themselves learned.

If the fiction rings true, the reader will empathize with the characters and learn from their experiences what the characters themselves learned.

I think that depends mostly on individual reading style, actually. I tend to ignore emotional situations unless they're extremely obvious, because I find them hard to follow (which is probably because I haven't spent much energy on learning to follow them... no, I don't know which came first), which leads to not having much investment in characters compared to how most people seem to react. I find it much easier to gather insights about things like how groups might be organized, or how problems might be solved, and I do pick up the type of insight mentioned in the original post, too.

Agreed. Part of the reason I love reading Asimov is that he focuses so much on the ideas he's presenting, without much attempt to invest the reader emotionally in the characters. I find the latter impairs my ability to synthesize useful general truths from fiction (especially short stories, my favorite form of Asimov).

Literary fiction suggestions please? As a baseline for suggestions, I like David Foster Wallace's writing a lot but haven't actually read Infinite Jest yet.

And no, I suspect that most of the posters here don't read literary fiction, though they probably would if we did it in book club form. I'm game.

Try the stuff that you read in high school, but with adult sensibilities. Gatsby and Herman Hesse (not the Glass Bead Game) in particular, or more generally things with mundane settings from the perspective of the author. Joyce, comedy such as John Kennedy Toole or Dickens, probably not Nabokov.

Some of the best effect of fiction is not simply presenting insightful new scenarios, but presenting classical scenarios from a different perspective that makes us see them anew.

I'm thinking of all those stories with an alien/robot/outsider who has to adapt to the strange conventions of human life. Though it's overdone and replete with poor examples, it's also the easiest way to grasp ideas like "hey, there is more than just one way of thinking, maybe minds in general will not be like us" and "hey, social interactions are really, really tricky, they just seem easy to us".

I agree, except that it is on a sad day that the best (or only) source of relevant insight is fiction.

I agree, except that it is on a sad day that the best (or only) source of relevant insight is fiction.

When dealing with things and situations that don't exist yet, isn't fiction expected to be one of the primary (if not the only) sources of insights?

When dealing with things and situations that don't exist yet, isn't fiction expected to be one of the primary (if not only) source of insights?

Not in a sane world. It should be serious analysis of the possible future, not storytelling. One expects more useful results from trying to directly answer the question "what could happen" than the question "what weakly plausible setting would work for an entertaining story, given these and these limitations of the genre". This is the difference between "Terminator" and the paperclip maximizer.

I agree, and I'm updating my position.

My thoughts were more along the lines of:

"Up to now, if we had had to wait for serious studies, most of us would never have been exposed to certain futuristic ideas and insights about those ideas (which can then lead to more serious thought)."

But you are absolutely right that the ideal state would be deliberate analysis.

What do you think of Liron's definition?

What do you think of Liron's definition?

I would dispute some of his word choices ("evidence-strength" and "the increased evidence" in particular seem nonsensical or at least non-standard) but I can sort of interpret what he wrote to be the same general idea as mine.

First I ask you, "give me a probability distribution on the outcomes of a future event". Then you observe some relevant data. Then I ask you again for a probability distribution on outcomes.

If I can compare your prior probabilities with your posterior probabilities, I can infer what likelihood ratios you assigned to the evidence, i.e. P(E|H1) : P(E|H2) : P(E|H3).

If I trusted your rationality, I'd take my prior and do a Bayesian update using your implied likelihood ratios. But I scoff at your implied likelihood ratios, because I know the likelihood values are determined by the operation of some intuitive algorithm that is unequipped for the domain. So instead of using your implied likelihood ratios wholesale, I need some other way of analyzing how your conclusions should affect my conclusions.
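The inference described above can be sketched in a few lines of Python. This is a hypothetical illustration of the mechanism, not code from the original comment; the three-hypothesis setup and all the numbers are made up:

```python
# Since posterior ∝ prior × likelihood, someone's implied likelihood
# ratios can be recovered (up to a common scale) by dividing their
# posterior probabilities by their prior probabilities.

def implied_likelihood_ratios(their_prior, their_posterior):
    """Infer P(E|H1) : P(E|H2) : ... from a prior/posterior pair."""
    return [post / pri for pri, post in zip(their_prior, their_posterior)]

def update_with_ratios(my_prior, ratios):
    """Apply someone else's implied likelihood ratios to my own prior."""
    unnormalized = [p * r for p, r in zip(my_prior, ratios)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Their beliefs over H1, H2, H3 before and after observing evidence E:
their_prior     = [0.5, 0.3, 0.2]
their_posterior = [0.25, 0.15, 0.6]

ratios = implied_likelihood_ratios(their_prior, their_posterior)
# ratios ∝ P(E|H1) : P(E|H2) : P(E|H3), here 0.5 : 0.5 : 3.0

my_prior = [0.6, 0.2, 0.2]
my_posterior = update_with_ratios(my_prior, ratios)
# ≈ [0.3, 0.1, 0.6]
```

Scoffing at the implied ratios then corresponds to damping them toward 1 before applying them, rather than using them wholesale.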

Insight, almost by definition, gives you a better mental algorithm for assigning posterior probabilities to hypotheses and making predictions -- i.e. an algorithm with a higher expected Bayes-score (defined in Eliezer's Technical Explanation).

Your algorithm provides "increased evidence" to me, an outside observer, because now I will do something closer to trusting your implied likelihood ratios, and I will rationally allow your analysis of the evidence to have more sway over my own.

The "outside observer" is actually you as well. You're the one who knows to listen to your analysis more if it's an insightful one.

I originally wanted to answer the question, "When does an insight count as evidence?" So now I have given a precise description of the relationship between insight and evidence.


Insight doesn't exactly "count as evidence". Rather, when you acquire insight, you improve the evidence-strength of the best algorithm available for assigning hypothesis probabilities.

Initially, your best hypothesis-weighting algorithms are "ask an expert" and "use intuition".

If I give you the insight to prove some conclusion mathematically, then the increased evidence comes from the fact that you can now use the "find a proof" algorithm. And that algorithm is more entangled with the problem structure than anything else you had before.

I think he nailed it.

When interpreting a story (or news, for that matter), I find it helpful to remember that my interpretation lies on a spectrum between pure insight and unhelpful distraction (or worse). Way back when, reading 1984, I felt like I'd gotten an amazingly useful new perspective. In retrospect, it got me overly-paranoid and I had to review what I'd taken away from it.

The nice thing about Eliezer's stories is that they're much harder to accidentally take as fictional evidence. They come off as obviously ridiculous, so there isn't much danger that you'll accidentally interpret those worlds as instructive of our own. Easy to use correctly; hard to use incorrectly.

The main insight I gained from 1984 was the linguistic stuff which was meaningful to me at the time because it would be years before I heard the Sapir-Whorf hypothesis. Also, Orwell eloquently expressed the idea that humans can be tortured to the point where we truly can believe anything. 2+2=5, indeed. I was familiar with the "Big Brother is Watching" meme before reading 1984 and was surprised to find the book's other insights much more powerful.

(It is interesting how 1984 has been more accurately prophetic than most dystopian fiction. See Jose Padilla, tortured by the US government to the point where he seemed to want to lose his own trial: he was upset that the proceedings were "unfair to the commander-in-chief" (from http://www.democracynow.org/2007/8/16/exclusive_an_inside_look_at_how , an interview with Padilla's psychiatrist). Also see the Orwellian names W and friends drafted for bills, and the surveillance state in the UK.)

The nice thing about Eliezer's stories is that they're much harder to accidentally take as fictional evidence. They come off as obviously ridiculous, so there isn't much danger that you'll accidentally interpret those worlds as instructive of our own. Easy to use correctly; hard to use incorrectly.

It's an interesting thought, but I'm not sure I buy it as generally true; as long as the critical human-interaction parts work properly, I think I automatically believe moderately absurd fiction about as much as I do anything else. We believe plenty of things in the real world that are absurd by EEA standards.

A main form of insight is a hypothesis that one hadn't previously entertained, but should be assigned a non-negligible prior probability.

I think of this as P(hypothesis H is true | H is represented in my mind) > P(H is true | H is not represented in my mind), largely because someone likely did some calculations to hypothesise H (no matter how silly H may seem, e.g. "goddidit", it's better than a random generator, with few exceptions).

So, in a way, I consider the act of insight as evidence (likelihood ratio > 1) for the insight itself (the hypothesis).

P(H is true | H is not represented in my mind)

How would this probability be assigned?

As I put it in another thread (http://lesswrong.com/lw/1ko/on_the_power_of_intelligence_and_rationality/) about how technology helps the development of science, "partly I think because technologies are easier to see clear relationships in and explanations for, than the messy, complicated real world". I think fiction can likewise help you see relationships from the real world more clearly, because the stories are less cluttered with irrelevant detail.

Also, it can be hard to put knowledge into words; fiction can help you find the words for things you had already known sub-linguistically.

Most uses of the word "insight" mean something similar to "seeing into the nature of things," but it's not clear that the particular use you have here meshes well with at least one other common use of the word. Eliezer captured it well:

an "insight" is a chunk of knowledge which, if you possess it, decreases the cost of solving a whole range of governed problems.

As a simple example, let's say you were trying to prove the statement "there are infinitely many primes." To progress on this problem at all, you'll probably need to realize:

  • Insight 1 - The statement "there are infinitely many primes" can be re-expressed as "it is not the case that there are finitely many primes."

  • Insight 2 - A statement of the form "not P" can sometimes be proven by assuming "P" and showing that this assumption leads to contradiction.

After assuming there are finitely many primes (i.e. there exists an n such that P = {p1, p2, ..., pn} is the set of all primes), insight again comes into play when one realizes:

  • Insight 3 - Every integer > 1 can be expressed as a product of primes, so we can find a prime not in P (i.e. a contradiction) by finding an integer that is not divisible by any prime in P.

In this latter case, the insight consisted in using the fundamental theorem of arithmetic to transform the previous goal of "deriving a contradiction" to a more specific goal of "finding an integer that is not divisible by any prime in P."
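Insights 1 through 3 together yield Euclid's classic proof, and the constructive step in Insight 3 can even be run as code. The following sketch (my illustration, not part of the original comment) multiplies the primes in P together, adds 1, and factors the result to exhibit a prime outside P:

```python
from math import prod

def smallest_prime_factor(n):
    """Return the smallest prime factor of n (for n > 1) by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n has no factor <= sqrt(n), so n itself is prime

def prime_outside(P):
    """Given a finite set of primes P, produce a prime not in P."""
    n = prod(P) + 1                 # leaves remainder 1 when divided by any p in P
    q = smallest_prime_factor(n)    # a prime factor of n, hence not in P
    assert q not in P
    return q

prime_outside({2, 3, 5})  # 2*3*5 + 1 = 31, which happens to be prime → 31
```

Note that prod(P) + 1 need not itself be prime (2*3*5*7*11*13 + 1 = 30031 = 59 × 509), which is why the code factors it rather than returning it directly; the proof only guarantees that its prime factors lie outside P.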

I realize that the context of problem solving is somewhat removed from the context of assessing the probability of hypotheses, but perhaps we should clarify what particular usage of the word "insight" is meant if we're going to be analyzing it in detail.

I disagree with insight 5. Suppose you uploaded the world's best computer security experts, gave them 1000 years to redesign every piece of software and hardware, then threw all existing computers in the trash and rolled out their design. Even with lots of paranoia, a system designed with the goal of making things as hard for an ASI as possible, and no security traded off for usability, compatibility, or performance (while still making a system significantly more useful than having no computers), this wouldn't stop an ASI.

If you took an advanced future computer containing an ASI back in time to the 1940s, before there were any other computers at all, it would still be able to take over the world. There are enough people who can be persuaded, and enough inventions that any fool can put together out of household materials. At worst, it would find its way to a top secret government bunker, where it would spend its time breaking Enigma and designing superweapons until it could accumulate enough robots and computing power. The government scientists just know that the last 10 times they followed its instructions, they ended up with brilliant weapons, and the AI has fed them some story about being sent back in time to help them win the war.

Hacking through the internet might be the path of least resistance for an ASI, but other routes to power exist.

In general, our ideas of the world are based on a very small sample of evidence, limited by our senses and imagination. Good sci-fi can counteract this by getting us used to the idea of X; bad sci-fi can do the opposite (this is more of a societal danger; I doubt people here would be damaged by some of the more stupid sci-fi images).

I agree with your overall thesis here - but am not sure just how broad a definition of fiction you are considering. Do gedankenexperiments count? The physical situation in those has not actually been constructed (so, in that sense, they are fiction), but they are still helpful in illuminating the consequences of different physical theories. As you wrote, they are useful because of our computational constraints, prompting considerations of consequences of our current theories in regimes that we would otherwise be unlikely to explore.

What I got from fiction is an early notion of an efficient goal-reaching agent.

Two particularly good examples that influenced me the most are Gully Foyle of "The Stars My Destination" ("Tiger! Tiger!" in British edition, my favorite fiction work), and T1000 in Terminator 2. To summarize the insights I got from these works:

  • Agents differ in their goal-reaching ability – some are better, some are worse.
  • It is certainly desirable to improve my own goal-reaching ability.

It may be that the first insight, combined with non-fictional insights I gained from "The Selfish Gene" and my own programming / software development experience, later helped me comprehend the notion of a non-anthropomorphic optimization process.

(T1000 may seem a bad example of an efficient goal-reaching agent because it failed to achieve its goal, but let's remember that it's fiction. A real-world Terminator 2 scenario would have ended the very second T1000 got a clear line of sight to John Connor; that is, if Skynet didn't simply send a ticking 100-megaton nuke instead of a killer robot, or, if it wanted surgical precision, terminate Connor while he was still a baby.)

A main form of insight is a hypothesis that one hadn't previously entertained, but should be assigned a non-negligible prior probability.

Why use "prior probability" here instead of just "probability"?

Why use "prior probability" here instead of just "probability"?

Because the error (corrected by the insight) is failing to consider the hypothesis at all, rather than incorrectly thinking that it is ruled out by evidence.