See also: Hack Away at the Edges.

The streetlight effect

You've heard the joke before:

Late at night, a police officer finds a drunk man crawling around on his hands and knees under a streetlight. The drunk man tells the officer he’s looking for his wallet. When the officer asks if he’s sure this is where he dropped the wallet, the man replies that he thinks he more likely dropped it across the street. “Then why are you looking over here?” the befuddled officer asks. “Because the light’s better here,” explains the drunk man.

The joke illustrates the streetlight effect: we "tend to look for answers where the looking is good, rather than where the answers are likely to be hiding."

Freedman (2010) documents at length some harms caused by the streetlight effect. For example:

A bolt of excitement ran through the field of cardiology in the early 1980s when anti-arrhythmia drugs burst onto the scene. Researchers knew that heart-attack victims with steady heartbeats had the best odds of survival, so a medication that could tamp down irregularities seemed like a no-brainer. The drugs became the standard of care for heart-attack patients and were soon smoothing out heartbeats in intensive care wards across the United States.

But in the early 1990s, cardiologists realized that the drugs were also doing something else: killing about 56,000 heart-attack patients a year. Yes, hearts were beating more regularly on the drugs than off, but their owners were, on average, one-third as likely to pull through. Cardiologists had been so focused on immediately measurable arrhythmias that they had overlooked the longer-term but far more important variable of death.


Start under the streetlight

Of course, there are good reasons to search under the streetlight:

It is often extremely difficult or even impossible to cleanly measure what is really important, so scientists instead cleanly measure what they can, hoping it turns out to be relevant.

In retrospect, we might wish cardiologists had done a decade-long longitudinal study measuring the long-term effects of the new anti-arrhythmia drugs of the 1980s. But it's easy to understand why they didn't. Decades-long longitudinal studies are expensive, and resources are limited. It was more efficient to rely on an easily-measurable proxy variable like arrhythmias.

We must remember, however, that the analogy to the streetlight joke isn't exact. Searching under the streetlight gives the drunkard virtually no information about where his wallet might be. But in science and other disciplines, searching under the streetlight can reveal helpful clues about the puzzle you're investigating. Given limited resources, it's often best to start searching under the streetlight and then, initial clues in hand, push into the shadows.1

The problem with streetlight science isn't that it relies on easily-measurable proxy variables. If you want to figure out how some psychological trait works, start with a small study and use free undergraduates at your home university — that's a good way to test hypotheses cheaply. The problem comes in when researchers don't appropriately flag the fact that their subjects were WEIRD (Western, educated, industrialized, rich, and democratic) and that a larger study needs to be done on a more representative population before we start drawing conclusions. (Another problem is that despite some researchers' cautions against overgeneralizing from a study of WEIRD subjects, the media will write splashy, universalizing headlines anyway.)

But money and time aren't the only resources that might be limited. Another is human reasoning ability. Human brains were built for hunting and gathering on the savannah, not for unlocking the mysteries of fundamental physics or intelligence or consciousness. So even if time and money aren't limiting factors, it's often best to break a complex problem into pieces and think through the simplest pieces, or the pieces for which our data are most robust, before trying to answer the questions we most want to solve.

As Pólya advises in his hugely popular How to Solve It, "If you cannot solve the proposed problem, try to solve first some related [but easier] problem." In physics, this related but easier problem is often called a toy model. In other fields, it is sometimes called a toy problem. Animal models are often used as toy models in biology and medicine.

Or, as Scott Aaronson put it:

...I don’t spend my life thinking about P versus NP [because] there are vastly easier prerequisite questions that we already don’t know how to answer. In a field like [theoretical computer science], you very quickly get used to being able to state a problem with perfect clarity, knowing exactly what would constitute a solution, and still not having any clue how to solve it... And at least in my experience, being pounded with this situation again and again slowly reorients your worldview... Faced with a [very difficult question,] you learn to respond: “What’s another question that’s easier to answer, and that probably has to be answered anyway before we have any chance on the original one?”

I'll close with two examples: GiveWell on effective altruism and MIRI on stability under self-modification.


GiveWell on effective altruism

GiveWell's mission is "to find outstanding giving opportunities and publish the full details of our analysis to help donors decide where to give."

But finding and verifying outstanding giving opportunities is hard. Consider the case of one straightforward-seeming intervention: deworming.

Nearly 2 billion people (mostly in poor countries) are infected by parasitic worms that hinder their cognitive development and overall health, creating barriers to economic development in the regions where the worms are common. Luckily, deworming pills are cheap, and early studies indicated that they improved educational outcomes. The DCP2 (Disease Control Priorities in Developing Countries, 2nd edition), produced by over 300 contributors in collaboration with the World Health Organization, estimated that a particular deworming treatment was one of the most cost-effective treatments in global health, at just $3.41 per DALY (disability-adjusted life year).

Unfortunately, things are not so simple. A careful review of the evidence in 2008 by The Cochrane Collaboration concluded that, due to weaknesses in some studies' designs and other factors, "No effect [of deworming drugs] on cognition or school performance has been demonstrated." And in 2011, GiveWell found that a spreadsheet used to produce the DCP2's estimates contained 5 separate errors that, when corrected, increased the cost estimate for deworming by roughly a factor of 100. In 2012, another Cochrane review was even more damning for the effectiveness of deworming, concluding that "Routine deworming drugs given to school children... has not shown benefit on weight in most studies... For haemoglobin and cognition, community deworming seems to have little or no effect, and the evidence in relation to school attendance, and school performance is generally poor, with no obvious or consistent effect."
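As an aside, it's easy to see how just five errors can shift an estimate by two orders of magnitude, since corrections to a cost-per-DALY calculation compound multiplicatively. Here is a minimal sketch in Python; the individual correction factors are hypothetical, chosen only so that their product is roughly the factor of 100 GiveWell reported:

    # Minimal sketch: a few multiplicative spreadsheet corrections compound
    # quickly. The per-error factors below are hypothetical illustrations;
    # only the rough ~100x overall product matches what GiveWell reported.
    import math

    original_cost_per_daly = 3.41                    # DCP2's published estimate (USD)
    correction_factors = [2.0, 3.0, 2.5, 1.5, 4.5]   # hypothetical, one per error

    combined = math.prod(correction_factors)         # ~101x
    corrected = original_cost_per_daly * combined    # ~$345 per DALY

    print(f"combined correction: ~{combined:.0f}x")
    print(f"corrected estimate: ~${corrected:.0f} per DALY")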

On the other hand, Innovations for Poverty Action critiqued the 2012 Cochrane review, and GiveWell said the review did not fully undermine the case for its #3 recommended charity, which focuses on deworming.

What are we to make of this? Thousands of hours of data collection and synthesis went into producing the initial case for deworming as a cost-effective intervention, and thousands of additional hours were required to discover flaws in those initial analyses. In the end, GiveWell recommends one deworming charity, the Schistosomiasis Control Initiative, but their page on SCI is littered with qualifications and concerns and "We don't know"s.

GiveWell had to wrestle with these complications despite the fact that it chose to search under the streetlight. Global health interventions are among the easiest interventions to analyze, and have often been subjected to multiple randomized controlled trials and dozens of experimental studies. Such high-quality evidence usually isn't available when trying to estimate the cost-effectiveness of, say, certain forms of political activism.

GiveWell co-founder Holden Karnofsky suspects that the best giving opportunities are not in the domain of global health, but GiveWell began its search in global health, under the streetlight, in part because the evidence was clearer there.2

It's difficult to do counterfactual history, but I suspect GiveWell made the right choice. While investigating global health, GiveWell has learned many important lessons about effective altruism, lessons that would have been harder to learn with the same clarity had it begun with even-more-challenging domains like meta-research and political activism. But now that it has learned those lessons, it is beginning to push into the shadows, where the evidence is less clear, via GiveWell Labs.


MIRI on stability under self-modification

MIRI's mission is "to ensure that the creation of smarter-than-human intelligence has a positive impact."

Many different interventions have been proposed as methods for increasing the odds that smarter-than-human intelligence has a positive impact, but for several reasons MIRI decided to focus its efforts on "Friendly AI research" during 2013.

The FAI research program decomposes into a wide variety of technical research questions. One of those questions is the question of stability under self-modification:

How can we ensure that an AI will serve its intended purpose even after repeated self-modification?

This is a challenging and ill-defined question. How might we make progress on such a puzzle?

For puzzles such as this one, Scott Aaronson recommends a strategy he calls "bait and switch":

[Philosophical] progress has almost always involved a [kind of] “bait-and-switch.” In other words: one replaces an unanswerable philosophical riddle Q by a “merely” scientific or mathematical question Q′, which captures part of what people have wanted to know when they’ve asked Q. Then, with luck, one solves Q′... this process of “breaking off” answerable parts of unanswerable riddles, then trying to answer those parts, is the closest thing to philosophical progress that there is.

Successful examples of this breaking-off process fill intellectual history. The use of calculus to treat infinite series, the link between mental activity and nerve impulses, natural selection, set theory and first-order logic, special relativity, Gödel’s theorem, game theory, information theory, computability and complexity theory, the Bell inequality, the theory of common knowledge, Bayesian causal networks — each of these advances addressed questions that could rightly have been called “philosophical” before the advance was made.

The recent MIRI report on Tiling Agents performs one such "bait and switch." It replaces the philosophical puzzle of "How can we ensure that an AI will serve its intended purpose even after repeated self-modification?" (Q) with a better-specified formal puzzle on which it is possible to make measurable progress: "How can an agent perform perfectly tiling self-modifications despite Löb's Theorem?" (Q')
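For readers who haven't encountered it, Löb's Theorem admits a compact statement. The following is the standard formulation from provability logic, not notation drawn from the MIRI report itself:

    % Löb's Theorem, for a theory T extending Peano Arithmetic,
    % where \Box\phi abbreviates "T proves \phi":
    \[
      \text{if } T \vdash \Box\phi \rightarrow \phi, \text{ then } T \vdash \phi.
    \]

Roughly, the difficulty for tiling is this: a parent agent that trusted its successor's proofs outright would need □φ → φ for arbitrary φ, and by Löb's Theorem the shared theory T would then prove every such φ, i.e., it would be inconsistent. Hence the search for weaker forms of self-trust that a consistent agent can actually license.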

This allows us to state at least three crisp technical problems: Löb and coherent quantified belief (sec. 3 of 'Tiling Agents'), nonmonotonicity of probabilistic reasoning (secs. 5.2 & 7), and maximizing/satisficing not being satisfactory for bounded agents (sec. 8). It also allows us to identify progress: formal results that mankind had not previously uncovered (sec. 4).

Of course, even if Q' is eventually solved, we'll need to check whether there are other pieces of Q we need to solve. Or perhaps Q will have been dissolved by our efforts to solve Q', similar to how the question "What force distinguishes living matter from non-living matter?" was dissolved by 20th century biology.



Notes

1 Karnofsky (2011) suggests that it may often be best to start under the streetlight and stay there, at least in the context of effective altruism. Karnofsky asks, "What does it look like when we build knowledge only where we’re best at building knowledge, rather than building knowledge on the 'most important problems?'" His reply is: "Researching topics we’re good at researching can have a lot of benefits, some unexpected, some pertaining to problems we never expected such research to address. Researching topics we’re bad at researching doesn’t seem like a good idea no matter how important the topics are. Of course I’m in favor of thinking about how to develop new research methods to make research good at what it was formerly bad at, but I’m against applying current problematic research methods to current projects just because they’re the best methods available."

Here's one example: "what has done more for political engagement in the U.S.: studying how to improve political engagement, or studying the technology that led to the development of the Internet, the World Wide Web, and ultimately to sites like Change.org...?"

I am sympathetic with Karnofsky's view in many cases, but I will give two points of reply with respect to my post above. First, in the above post I wanted to focus on the question of how to tackle difficult questions, not the question of whether difficult questions should be tackled in the first place. And conditional on one's choice to tackle a difficult question, I recommend one start under the streetlight and push into the shadows. Second, my guess is that I'm talking about a broader notion of the streetlight effect than Karnofsky is. For example, I doubt Karnofsky would object to the process of tackling a problem in theoretical computer science or math by trying to solve easier, related problems first.

2 In GiveWell's January 24th, 2013 board meeting (starting at 6:35 in the MP3 recording), GiveWell co-founder Holden Karnofsky said that interventions outside global health are "where we would bet today that we'll find... the best giving opportunities... that best fulfill GiveWell's mission as originally [outlined] in the mission statement." This doesn't appear to be a recently acquired view of things, either. Starting at 22:47 in the same recording, Karnofsky says "There were reasons that we focused on [robustly evidence-backed] interventions for GiveWell initially, but... the [vision] I've been pointing to [of finding giving opportunities outside global health, where less evidence is available]... has [to me] been the vision all along." In personal communication with me, Karnofsky wrote that "We sought to start 'under the streetlight,' as you say, and so focused on finding opportunities to fund things with strong documented evidence of being 'proven, cost-effective and scalable.' Initially we looked at both U.S. and global interventions, and within developing-world interventions we looked at health but also economic empowerment. We ended up focusing on global health because it performed best by these criteria."

Comments

This post actually makes it a lot clearer to me how people can decide to make their life's work solving a problem that they have no idea how to solve, and then not go crazy doing it. This has always been a mystery to me; my brain tends to immediately reject goals if I can't visualize a direct and plausible path to accomplishing them. Thank you for an illuminating idea!

I think the main problem with ill-defined questions is that they don't sufficiently constrain the answer space: you end up with people arguing over multiple proposed answers and no clear way to determine which is right. Replacing them with crisp technical questions can be useful but the ill-defined questions tend not to fully constrain how you're supposed to translate them into crisp technical questions either.

An approach that I've found helpful is to gather many ill-defined questions in the same field and try to find some single insight (or a small set of related insights) that answers all of them simultaneously. While each question on its own may not narrow down the answer space down to a single point, the whole collection has a much better chance of doing so.

To illustrate this, compare philosophers who tried to solve individual seemingly crisp toy problems within anthropic reasoning, such as the Sleeping Beauty Problem, with those who worked on anthropic reasoning in general, and with my attempt to solve many ill-defined problems simultaneously with UDT.

This isn't to deny that there are lots of other examples where "hacking around the edges" or "pushing into the shadows" did work, but one should be careful not to elevate the heuristic to some sort of dogma, even to the extent of having someone be in charge of reminding others to not "bite too much".

Don't you think that others working on Sleeping Beauty, absent-minded driver, Parfit's hitchhiker etc. helped pave the way for UDT by providing a list of questions for UDT to answer?

I mean, I'm not saying that one shouldn't try to do better than the people who worked on all these problems (I might even be tempted to agree with what Jeffreyssai might say on the subject, though I expect you wouldn't go that far) but it seems like even in a reasonably efficient way to approach the problem, "searching under the streetlight" by playing with some crisply formulated problems may help pave the way to deeper answers.

(In particular, I think that MIRI's current research strategy is not-completely-crazy in starting under the streetlight in various respects; e.g., I'd be rather surprised if the modal agent formulation turned out to be useful as is for FAI, but I do think there's a reasonable chance that it will help pave the way to deeper insights; and I agree with you that it may well turn out that probability is the wrong tool for handling logical uncertainty, but I feel that trying to use probability and seeing what results we can get is an obviously useful thing to do; and I think that it's sufficiently likely that diagonalization problems will bite a wide range of attempts to handle logical uncertainty that I think working on workarounds to diagonalization makes sense to do in parallel with work on logical uncertainty, rather than trying to solve logical uncertainty first.)

Keep in mind that generally I advocate "Explore multiple approaches simultaneously" and "Trust your intuitions, but don't waste too much time arguing for them". Sometimes I do feel obligated to explain why I'm not as excited about some research direction as might be expected given my interests (and in the case of "probabilistic reflection" there's the additional issue that I'm having trouble making intuitive sense of what the formalism is saying), but I don't mean to discourage other people from exploring their ideas if they still think it's worthwhile after hearing what I have to say.

Don't you think that others working on Sleeping Beauty, absent-minded driver, Parfit's hitchhiker etc. helped pave the way for UDT by providing a list of questions for UDT to answer?

I'm certainly not disputing that having those questions available was helpful, but just want to point out that there seems to be a danger where people focus on these relatively "crisp" problems too much, think they have solutions, and then argue over them endlessly, where they might have made better progress by zooming out and looking at the bigger picture. If you consider the dozens of academic papers published on the Sleeping Beauty Problem, I don't think the majority of them (i.e., beyond the first few) can be said to have helped pave the way for UDT.

Keep in mind that generally I advocate "Explore multiple approaches simultaneously" and "Trust your intuitions, but don't waste too much time arguing for them".

Fair enough!

(Re your last paragraph, it sounds like we're in pretty perfect agreement about the usefulness of previous research. I suppose that upthread, you were saying "these people were following a streetlight/shadow strategy and it didn't actually work" and I was saying "retrospectively, it looks like the correct strategy would have been to first explore some anthropic problems and then try to find a common answer to all of them, which sounds like it can be described as starting under the streetlight, then moving into the shadows". So it sounds like we agree about the actual subject matter and any apparent disagreement is either due to talking about different things or due to disagreement about how to best apply the metaphor to the example, so it looks like there's nothing that would actually be useful to debate. Cool! :-))

How much of an outlier is UDT in this regard, do you think? What other examples can you think of?

I wish I was better acquainted with the history of ideas. Certainly there are insights that in retrospect are so broadly useful that they must have resolved many seemingly separate confusions when they were first developed, for example logic, Bayesian updating, expected utility maximization, computation as a mathematical abstraction, information theory. But I'm not sure how their inventors came up with them. Were they deliberately seeking to solve multiple problems with a single insight, or did they at least have the multiple relevant problems in the back of their minds? Maybe somebody more familiar with the history can help with the answer?

Shmi:

It's true, sometimes it is possible to have an insight which solves multiple weakly related problems at once, but it's very rare and tends to require a paradigm change, like Einstein's willingness to discard the fixed background space on which everything else happens. But this is basically the difference between art and craft. If you want systematic progress, you hack around the edges. It is certainly a good thing to occasionally try to bite through the problem as a whole if you have a flash of inspiration. What is not OK is to get stuck with a mouthful, unable to chew through and unwilling to spit it out. Gah, metaphors. I am not qualified to judge whether your UDT solves every problem you say it does, as I have a strong aversion to anthropics due to their poor testability, but it does not seem like a paradigm shift to me.

Taking it meta: if you want to solve a hard Q and keep breaking off easier Q', Q'', Q''', and so on, how would you ensure that your values remain stable, so that you'll still be trying to solve Q? Especially with resource limitations. Or would you even want to? Example: a researcher starts out trying to solve P vs. NP. She figures out that she has to solve some problems in information theory first. She likes information theory so much that she forgets about P vs. NP and moves on to information theory, because she can solve more problems there and hence gain more reputation points, and so on.

If we call the above problem R, what will be an easier R'? Is R entirely isomorphic to the problem of stability under self-modification?

I think I remember hearing Holden Karnofsky give a different version of the quote in the second footnote. Something like: It's like there's not just one set of dropped keys, but there are keys everywhere. Some are in the dark, some are under the light. And you don't know where your keys are, but you have no chance of finding them in the dark, so you might as well look at the ones in the light where you have a chance of finding them. (Meaning: look at charities/interventions you have data on, because while there are probably better ones out there without data supporting them, you'll never be able to identify which ones they are.)

Are you thinking of this 80k hours post?

No, but it does seem to be in a similar vein.

The problem with streetlight science isn't that it relies on easily-measurable proxy variables.

See Douglas Hubbard's book "How to Measure Anything" (and related blog) for a discussion. Basically, he argues that in the presence of uncertainty it's better to measure something, however imperfectly, than nothing at all, and then discusses how to do that.

The joke illustrates the streetlight effect: we "tend to look for answers where the looking is good, rather than where the answers are likely to be hiding."

In the Edge Annual Question for 2013, information scientist Bart Kosko discusses this in the context of which probability models we tend to use (tl;dr):

30) Bart Kosko fears that, like the drunk looking for his keys in the lamplight because that’s where he can see, we restrict ourselves to just five probabilistic models because they are easier to teach and calculate. The result is that we’re not modeling the world as well as we could be, and the negative effects may especially hamper the Bayesian revolution in probabilistic computing.

Full answer here.

Shmi:

Does MIRI have a person in charge of breaking off small solvable chunks of a large problem? Someone to tell the resident geniuses "you bit too much, let's look for a smaller piece"?

No dedicated person; many people do this.

One of the most disturbing problems I see in academia is our tendency to treat the world as a collection of pools of light. We start out thinking our own little street light is the only one there is. Then we think our light is the best light and everyone else is foolish for searching in vain beneath inferior lamps. Those of us who start to get interdisciplinary find ways to search under multiple lamps at the same time. We call up our friends to ask if the thing we're looking for is under their light, or maybe if they've got the other half of the broken thing we found. We use things discovered under other lamps to help us search under our home lamp. If we're really clever and successful, we make our lamps burn more brightly.

Luke's right: a pool of light is a good place to start. You've got to start somewhere. But many of our disciplines have been searching through the same little pools for centuries. We need to remember, and to impress upon our students, that the vast majority of what exists is out there in the darkness.

I don't think increasing the brightness of lamps so we can push outward a little at a time is enough. There's just too much darkness out there. I have always had exactly one goal as an academic, and I consider it the best hope universities have to make a dramatic positive impact on the future of humanity: We must build and distribute flashlights.

[anonymous]:

We must build and distribute flashlights.

I...okay, that was too much metaphor for me. Could you tell me what this means?

That is an excellent question. I would love to and I probably can, but it will take a fair amount of thinking to articulate in something unlike this overly cute way, and therefore time. In the meantime, if anyone has thoughts about what I probably ought to mean by "build and distribute flashlights", please do share.

Flashlights could be a bunch of portable methods and heuristics that can help on a wide range of problems, not just under one streetlight. Pólya's book is an example, as are some of the methods of statistical learning and Feynman's "visualize a hairy green sphere" trick.

What's the hairy green sphere? My search engine gives this page as first result.

Really? When I google feynman hairy green sphere, I get as the second hit a quote from Surely You're Joking, Mr. Feynman! which runs:

Richard P. Feynman ... Finally they state the theorem, which is some dumb thing about the ball which isn't true for my hairy green ball thing, so I say, “False!

Clicking through reveals the whole story, of course. And the third hit is a blog post which excerpts the key summary:

I had a scheme, which I still use today when somebody is explaining something that I'm trying to understand: I keep making up examples.

For instance, the mathematicians would come in with a terrific theorem, and they're all excited. As they're telling me the conditions of the theorem, I construct something which fits all the conditions. You know, you have a set (one ball)-- disjoint (two balls). Then the balls turn colors, grow hairs, or whatever, in my head as they put more conditions on.

Finally they state the theorem, which is some dumb thing about the ball which isn't true for my hairy green ball thing, so I say "False!" [and] point out my counterexample.

Shmi:

It's the "hairy green ball" from "Surely you are joking...":

For instance, the mathematicians would come in with a terrific theorem, and they're all excited. As they're telling me the conditions of the theorem, I construct something which fits all the conditions. You know, you have a set (one ball) disjoint (two balls). Then the balls turn colors, grow hairs, or whatever, in my head as they put more conditions on. Finally they state the theorem, which is some dumb thing about the ball which isn't true for my hairy green ball thing, so I say, "False!" If it's true, they get all excited, and I let them go on for a while. Then I point out my counterexample.

Science itself would be a major "flashlight", I guess?

The process of coalescing, separating off, or starting new disciplines or (sub-)fields. The necessity and immediacy of this can vary.

Examples:

Vectors/linear algebra/etc.—Necessary because these are minimally and sufficiently complex formalisations/frameworks for intuitive ideas. Immediate because these areas were developed for immediate use on solving linear equations/kinematics/theoretical physics.

Cell biology—Necessary, not particularly immediate: Once the existence of cells was known, it was an obvious next step to analyse them into components, and cells are complicated enough that this necessitates a new sub-field, but the cell model was not a formalisation/framework for an existing intuition; it was an unexpected discovery, and so was of course not pursued to solve a problem at hand (indeed, since it was not known in advance, it was not pursued at all), and possibly (not an expert) did not yield significant use for some time.

Mathematics—Infamous for spawning seemingly-useless-but-decades-later-turn-out-to-be-the-key-to-everything sub-fields. So often not immediate. Necessity is difficult to tell: Addition could plausibly be a necessary concept for sufficiently advanced intelligences, but, say, quaternions are very probably not.

Newtonian mechanics—Possibly necessary, immediate: It's possible that Newtonian mechanics is necessary for most intelligent species on the way to sufficiently advanced physics. Immediate because IIRC Newton's initial speculations were more towards the theoretical/'idle' natural philosophy side, but that they were quickly commissioned by Halley for immediate use.

Freudian psychoanalysis—Unnecessary, immediate: If the LW consensus is correct, then this is both an asspull and useless. It is immediately used to try to treat people.

FAI—Possibly necessary, immediate: For species that are sufficiently 'goal-driven', recursive self-improvement of the species or its constructed successor(s) seems necessary. In the latter case, FAI is intended to solve the problem of solving problems, so is immediate.

~~~~

Each discipline is a way of lossily zooming in on a particular part of the territory. New disciplines are created by new ways to lossily zoom in. Sometimes disciplines split off as similar but still significantly different ways of lossily zooming in. Or if you like, each discipline is a language game that is (hopefully) useful to understand some things; sometimes new language games pop up; sometimes language games spin off others.

Philosophy, being the therapy concerned with the logical clarification of thought, is the incubator for and gives away a disproportionate variety of new fields. Examples: Logic, metamathematics, causality, theoretical physics, biology, chemistry, penology.

CCC:

I'd think that calculus would be a perfect example: a mathematical analysis technique that's broadly applicable to a wide variety of fields.

Minor typo:

anti-arrhthmia > anti-arrhythmia

Fixed.

I think that this is a really good approach. In terms of philosophy, I'd suggest starting with the philosophy of mathematics as sometimes being exposed to a mathematical proof will make you realise that what you previously believed was completely wrong.

This post's title seems a little strange. "Start Under the Streetlight" is fairly-often bad advice. The correct policy is sometimes to start under the streetlight and sometimes start elsewhere - depending on the circumstances.