This is part 2 of a sequence on problem solving.  Here's part 1, which introduces the vocabulary of "problems" versus "tasks".  This post's title is a reference[1] worth 15 geek points if you get it without Googling, and 20 if you can also get it without reading the rest of the post.

You have to be careful what you wish for.  You can't just look at a problem, say "That's not okay," and set about changing the world to contain something, anything, other than that.  The easiest way to change things is usually to make them worse.  If I owe the library fifty cents that I don't have lying around, I can't go, "That's not okay!  I don't want to owe the library fifty cents!" and consider my problem solved when I set the tardy book on fire and now owe them, not money, but a new copy of the book.  Or you could make things, not worse in the specific domain of your original problem, but bad in some tangentially related department: I could solve my library fine problem by stealing fifty cents from my roommate and giving it to the library.  I'd no longer be indebted to the library.  But then I'd be a thief, and my roommate might find out and be mad at me.  Calling that a solution to the library fine problem would be, if not an outright abuse of the word "solution", at least a bit misleading.

So what kind of solutions are we looking for?  How do we answer the Shadow Question?  It's hard to turn a complex problem into doable tasks without some idea of what you want the world to look like when you've completed those tasks.  You could just say that you want to optimize according to your utility function, but that's a little like saying that your goal is to achieve your goals: no duh, but now what?  You probably don't even know what your utility function is; it's not a luminous feature of your mind.

For little problems, the answer to the Shadow Question may not be complete.  For instance, I have never before thought to mentally specify, when making a peanut butter sandwich, that I'd prefer that my act of sandwich-making not lead to the destruction of the Everglades.  But it's complete enough.  The Everglades aren't close enough to my sandwich for me to think they're worth explicitly acting to protect, even now that Everglades-destruction has occurred to me as an undesirable potential side effect.  But for big problems, well - we may have a problem...

Here are a few broad approaches you could take in trying to answer the Shadow Question.  Somebody please medicate me for my addiction to cutesy reference-y titles for things:

  • First, Do No Harm: Your top priority is to avoid making anything worse than the present status quo.  This is the strategy to apply if the status quo is more-or-less acceptable but precarious, or if you're in a particularly hazardous location relative to your problem (i.e. you can very easily make something go very pear-shaped if you don't tread carefully).  For instance, you don't move somebody who's just been flung at high speed from a Prius and landed on the shoulder of the highway and isn't moving, if you aren't a paramedic.  However she's doing, you're most likely to make it worse if you try to drag her somewhere.
  • Cherry on Top: Your top priority is to make things better than the present status quo.  When your problem is mostly independent from the rest of the world, and you have some direct control over it, this is a safe bet: pick what you can mess with, and mess with it so it gets better.  It's a worse choice when anything you do will probably have a heap of side effects.  For instance, if you're not feeling well, you could drink a glass of water and take a nap.  This pretty definitely won't cure you, but it's got a good shot at helping a little.
  • Lottery Ticket: Your top priority is to enable a best case scenario.  When the best case scenario is easy and straightforward to attain, this isn't a long shot - but it's also not much of a problem.  This is the strategy to employ when you have a really awesome best case on your hands, or when the worse cases are fairly safe and you're comfortable risking them.  This is distinct from "Cherry on Top" because CoT doesn't allow a large chance for worsening the status quo; it requires the predictable outcome to be an improvement, even if it's not the most fantastic thing that could happen.  As an example, you could sign up for cryonics.  This is guaranteed to cost you financially, but if a string of "ifs" turns out nicely, it might let you be an immortal undead ice zombie with a flying car, which would be very cool (pun intended).
  • Turn Disasters Off: Your top priority is to disable a worst case scenario (or a family of them).  This is the go-to strategy when the disaster in question is really, horrendously awful and you aren't comfortable with it having any appreciable chance of realization.  You might tolerate a guaranteed reduction in the quality of the situation in order to stave off a worse one, and so it's different from "First, Do No Harm".  For instance, you could hand over children to evil aliens in order to avert global catastrophe.

These strategies tolerate plenty of overlap, but in general, the more overlap available in a situation, the less problematic a problem you have.  If you can simultaneously enable the best case, disable the worst case, make it unlikely that anything will deteriorate, and nearly guarantee that things will improve - uh - go ahead and do that, then!  Sometimes, though, it seems like you have to organize these strategies and narrow down your plan in order.  Arrange them however you like, and in the search space each one leaves behind, optimize for the next.
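The closing suggestion, ordering the strategies and optimizing within whatever search space each one leaves behind, can be sketched as a lexicographic filter. This is a toy illustration only: the plans, probabilities, utilities, and thresholds below are all invented for the example.

```python
# Each candidate plan is a list of (probability, utility) outcomes,
# with utility measured relative to a status quo of 0.
# All plans and numbers here are invented for illustration.
plans = {
    "do_nothing":   [(1.0, 0.0)],
    "safe_tweak":   [(0.9, 1.0), (0.1, -0.5)],
    "lottery":      [(0.01, 100.0), (0.99, -1.0)],
    "risky_gambit": [(0.5, 50.0), (0.5, -50.0)],
}

def worst(p):   return min(u for _, u in p)
def best(p):    return max(u for _, u in p)
def p_worse(p): return sum(pr for pr, u in p if u < 0)

# Three of the strategies, expressed as filters over the candidate set,
# in one (arbitrary) priority order.
strategies = [
    ("turn_disasters_off", lambda p: worst(p) > -10),   # no catastrophic branch
    ("first_do_no_harm",   lambda p: p_worse(p) < 0.2), # unlikely to deteriorate
    ("lottery_ticket",     lambda p: best(p) >= 1.0),   # a big win stays possible
]

# "Arrange them however you like, and in the search space each one
# leaves behind, optimize for the next": apply each filter in turn,
# then pick the survivor with the best expected utility (which plays
# the role of Cherry on Top).
surviving = dict(plans)
for name, keep in strategies:
    filtered = {k: v for k, v in surviving.items() if keep(v)}
    if filtered:            # never filter the candidate set down to nothing
        surviving = filtered

expected = lambda p: sum(pr * u for pr, u in p)
choice = max(surviving, key=lambda k: expected(surviving[k]))
print(choice)  # → safe_tweak
```

With this ordering, "risky_gambit" falls to the disaster filter, "lottery" to the no-harm filter, and expected utility breaks the tie among what remains; a different ordering of the same filters can pick a different plan.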

Part 3 of this sequence will conclude it, and will talk about resource evaluation.

 

[1] "The Shadow Question" refers to the question "What do you want?", which was repeatedly asked by creatures called Shadows and their agents during the course of the splendid television show Babylon 5.

44 comments

There's also "All-in", aka "Go for broke", which picks high utility OR high disutility, with a distribution of probability that is less extreme than in the case of a lottery ticket (though not necessarily fifty-fifty chances). For instance "with all the hype and all the expectations I have formed, Watchmen-the-movie is either going to be a joyride or a horrible disappointment."

Assuming I understand what you're aiming at... These four don't quite seem to answer the question itself, but rather how you evaluate the possible answers to the question.

This seems to leave open the real issue, which is how you enumerate possible answers to the question.

To take a concrete example, suppose I am fed up with my job, so fed up that I'd take "something, anything, other than that". That's not literally true - it just feels that way. I'm not going to inquire at the nearest McDonald's, for instance.

In this particular case, which should count as a "problem" by your previous definition, I don't believe I would carve up the search space first in terms of approaches such as the four you offer in this post, i.e. asking what would be a big gain, or how do I guarantee no huge loss, etc.  My very first question would be something like "what are the things I would be trading off against one another?"

My first pass at this, by the availability heuristic, might yield things that are salient properties of the current job (salary, location, etc.). Obviously because that's the most available thing of all, my first pass will include the reason I'm unhappy about the current job: that might be annoying coworkers, a horrible boss, etc.

One of the key skills in problem solving is to also include the less obvious attributes that (possibly) have an even greater weight in my utility function. So my second pass would be "what exactly am I trying to achieve here?" This may start to yield non-obvious insights, such as why I need a job in the first place, and what acceptable substitutes may be.

I might even say that it's better to explore as much of the problem's causal underpinnings as a first pass.

As a budding design engineer, one of the things that has been hammered into me is first to understand the problem in its wider context. Oftentimes just identifying a PROBLEM as opposed to a TASK is not enough: you need to understand the system that enabled the problem to exist. What aspect of the system is directly detrimental? Why is it detrimental? What features of the system influence that detrimental aspect? Why do those features exist in the first place? Can their core function be satisfied through a different principle of operation, or by restructuring the functions and flows of the system, or even by redefining your requirements?

Only once you understand the system holistically and identify functional requirements, causal structure, and your available tools can you really begin to accurately evaluate your options.

"All in", after some thought, looks like a "lottery ticket" special case - without raising the stakes, you can't get at the preferred best-case, so you raise the stakes to enable that outcome.

You've also confirmed my suspicion that I wrote these in the wrong order; I probably should have done the next one before this one.

You're welcome. :)

In what way is "all in" a special case of "lottery ticket"? Or to put it another way, how are you classifying everything that you'd see as a possible approach?

In "lottery ticket" I am guaranteed a tolerable loss, for a tiny chance of a huge gain. When going "all in" what I forsake is any outcome close to zero ("tolerable loss" or "piddling gain"). I am guaranteed an outcome of large magnitude, but the probabilities are much closer to even. Either those are different beasts, or I'm totally confused as to what you're trying to achieve with your classification, and your reply above doesn't help me at all in the latter case. (I could be patient and wait for the next post in the series, however it sounds as if my confusion would be an issue of exposition with the current post.)

While in the actual purchase of a literal lottery ticket, you guarantee a loss to enable a huge gain, the criterion to be a "lottery ticket" case in the Alicorn-loves-cutesy-titles sense is just that the motivation is to make the huge gain possible. Sometimes, you can do this without guaranteeing a loss of any size - all it requires is that you move to open up the possibility of a large gain. Raising the stakes does exactly that: before you raise the stakes, the large gain isn't possible. After you do so, the large gain is possible, although not guaranteed. Presumably, you'd never raise stakes if that never made it possible to win big - you wouldn't raise the stakes on a bet you were certain to lose!

I get it now, thanks.

I'll wait for your next post then, and see how your classification fits in with that.

While I was thinking about your post initially, I envisioned a 2d graph, with "probability" on one axis and "(dis)utility" in another. I was toying with formalizations of your concepts as linked blobs of area at various locations on that graph, and my visualizations (of all-in vs lottery) were quite different. So, if I raise that particular point again, it probably will be in terms of that picture.
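The 2d picture the commenter describes can be made concrete by treating each strategy as a small probability distribution over utility. A toy sketch with invented numbers, showing why "lottery ticket" and "all-in" look like different blobs on that graph:

```python
# Two toy outcome distributions, as (probability, utility) pairs with
# utility measured relative to the status quo at 0. Numbers are invented.
lottery_ticket = [(0.999, -1.0), (0.001, 1000.0)]  # near-certain small loss,
                                                   # tiny chance of a huge gain
all_in         = [(0.5, -100.0), (0.5, 120.0)]     # outcomes near zero forsaken,
                                                   # probabilities close to even

def expected_utility(dist):
    return sum(p * u for p, u in dist)

def spread(dist):
    """Gap between best and worst outcome - a crude measure of the stakes."""
    utils = [u for _, u in dist]
    return max(utils) - min(utils)

for name, dist in [("lottery ticket", lottery_ticket), ("all-in", all_in)]:
    print(f"{name}: EU = {expected_utility(dist):+.3f}, spread = {spread(dist):.0f}")
```

The two differ in where the probability mass sits, not just in expected utility: the lottery ticket concentrates nearly all its mass near zero with a sliver far out on the gain axis, while all-in puts comparable mass on two large-magnitude outcomes.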

Putting a lot of work into a career like acting where there's a low chance of a very high reward strikes me as an "all in" strategy.

Other than cryonics (I'm already a member of Alcor) what are some other accessible decisions that act as a lottery ticket: enabling high pay off -- if unlikely -- future outcomes?

"Accessible," to me, does not include abandoning my career to directly meddle -- yet.

I suppose donating to SIAI might be a lottery ticket, but I'm not entirely convinced that it is such. I honestly have no idea what the SIAI does in their day-to-day business and the material I can find on them doesn't provide much information. I also have no idea how credible the SIAI is among those who might be in a position to turn disasters off, so it's hard to determine how much to value an SIAI donation at compared to alternatives.

Supporting SENS could be a lottery ticket, although to some extent the same concerns with the SIAI apply to SENS -- I don't have enough information to evaluate it compared to alternatives.

Supporting existential risk research in some way seems like a good approach to turning disasters off, since this growing branch of research appears to be creating a solid basis for future risk mitigation methods. I might investigate that further.

I'm sure there are many options I don't know that I don't know.

A neat thing about cryonics is that the disaster (my death) can come to pass, but even after that point I still have a chance to survive. Should I look for things to invest in that share that insurance-like dynamic? It seems powerful. Is insurance against death a more effective investment than trying to resolve the causes of death? I suppose this depends on the amount of knowledge the civilization has at the time you go to make the bet.

Most of the really good "lottery ticket" examples are things like starting a startup company in the hopes of being a millionaire, becoming a drug dealer in hopes of becoming a kingpin, informing a crush of their status as such in hopes of getting to be with them, and anything else on which subject you can imagine some Chicken Soup for the Soul person saying "you miss 100% of the shots you don't take".

Ok, we know that we can't just maximize expected utility, but the four strategies you give seem pretty arbitrary and unlikely to be even close to optimal. Why did you propose them?

Let me suggest another strategy that I think might make more sense. Start by considering what distributions of outcomes are feasible (intuitively). Then, among the set of seemingly feasible distributions, decide which one you most prefer, and try to work out a plan that results in that distribution. If it turns out (while trying to work out the plan) that you were wrong about its feasibility, then adjust your intuition, and reselect the most preferred feasible distribution of outcomes. Repeat this process until you end up with a plan.

This way, you get a plan that at least somewhat approximates optimality, given computational constraints and the fact that you don't know how to express your values as a utility function.
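The loop described above (pick the most preferred feasible-seeming distribution, test it by trying to work out a plan, adjust your intuition when it fails, and repeat) can be sketched as follows. Everything here is hypothetical scaffolding: `prefer` and `plan_for` stand in for the intuitive judgments the comment describes.

```python
def choose_plan(candidates, prefer, plan_for):
    """Pick the most preferred outcome distribution that turns out to
    actually be plannable. `prefer` scores a candidate; `plan_for`
    returns a plan, or None when planning reveals the candidate was
    infeasible. Both are stand-ins for intuitive judgment."""
    remaining = list(candidates)
    while remaining:
        target = max(remaining, key=prefer)
        plan = plan_for(target)
        if plan is not None:
            return target, plan
        remaining.remove(target)   # "adjust your intuition": rule it out
    return None, None

# Hypothetical usage: three coarsely described outcome distributions.
rank = {"ok": 1, "good": 2, "great": 3}
workable = {"good": "take the safe job", "ok": "stay put"}  # no plan reaches "great"
target, plan = choose_plan(rank, rank.get, workable.get)
print(target, "->", plan)  # → good -> take the safe job
```

The loop terminates because each failed attempt removes a candidate, and it degrades gracefully: if nothing is plannable it returns no plan rather than forcing a bad one.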

I'm not sure I know how to consider distributions of outcomes.

That's more rational (and more difficult), but still only about halfway to expectation maximization.

Only 20 geek points? Who do you think you are?

ETA: It was just a joke, but one which anyone who earned the geek points should get.

Only 20 geek points? Who do you think you are?

Where are you going... with this? Do you have anything worth, uh, listening for? ;-)

I hear there's one that went around continually asking "What time is it?"

Wow. That is one hell of an obscure reference. The number of people in the world who would get it is probably in the triple digits.

Going by the scale Alicorn was using for geek points, if getting that Babylon 5 reference gets you 20 geek points, getting this reference should probably give you on the order of 200 000 geek points.

[This comment is no longer endorsed by its author]

...to Mr. Boffo? What were you thinking of?

Huh. Maybe it wasn't a reference to what I thought it was. Let's just say that a while ago I had the rather annoying habit of answering people who asked the time by repeating their question back to them. I assumed that whoever this was drew from the same source, although I now realize I may have been mistaken. (It really is that obscure...)

The thing I was thinking of was this really obscure RPG from more than a decade ago called Continuum (it has a TVTropes page, a Wikipedia page, and a semi-abandoned official website) in which time travellers identify one another by one asking the other for the time, and then the other repeating the question right back. Thus, time-travellers can identify one another, while at worst confusing normal people with strange demands for the time or weird non-answers to the question of what the time is.

So, the obvious thing for a fan to do in order to try to identify nearby time-travelers is to go around asking a lot of people what the time is or answering such questions with the time-traveller recognized response.

As I said, very obscure.

I hear there's one that went around continually asking "What time is it?"

Teatime, of course. Aren't the three Adamsian questions, "How can we eat?", "Why do we eat?" and then, "Where shall we go for a nice lunch?"

This post was very confusing to me. What is the Shadow Question? It was never explained in the post, and it's somewhat hard to understand without knowing. Like wware, I kept thinking "Who knows what evil lurks within the hearts of men?"

The Shadow Question is "What do you want?"

Could this be added to the article? It would make it much clearer.

But then future readers would have no opportunity to win geek points.

A lot more readers will care about clarity than will care about geek points.

Fine, I'll put in a footnote :(

They still would, if you took out the explanation of where the question came from (which I would never have known). I'd suggest putting the question itself in the main body of the article, but taking out the source of the question; that way, people could still have the chance to win points.

And the reason it is The Shadow Question is that it is a reference to Babylon 5.

The other approach is to identify a good next step and to go with that. For example, if you're trying to improve your social skills, you may join meetup.com and go to a few events. Although this is unlikely to solve your problem, it'll probably give you more information as to what the nature of it is.

The Everglades aren't close enough to my sandwich for me to think they're worth explicitly acting to protect, even now that Everglades-destruction has occurred to me as an undesirable potential side effect.

As someone who grew up in Florida, I politely request that you not eat PB&J sandwiches made with jelly made with sugar grown in Florida by conventional means -- the pesticide runoff tends to ruin the Everglades. If it's just PB, the sugar's probably not a big deal, although some brands add a lot of sugar.

I wish that the third and fourth approaches had more “everyday” examples like the first two do.

Let me see if I understand the original post:

Lottery Ticket: examples include ... buying lottery tickets, flirting with a stranger, investing in an adventurous startup.

Turn Disasters Off: examples include ... wearing your seatbelt, buying insurance, taking a taxi home when you've been drinking.

Yes, those are good examples, thanks :)

Some examples of turning (everyday computer-related) disasters off:

  • Setting up a 24/7 automatic off-site backup for your machine
  • Working under a non-admin account to prevent malware infections
  • Choosing website passwords carefully

This post could use an edit to include a link to the third part, especially as the series doesn't seem to have an easily Google'd title for "Foo, Part 3" searches :)

I never actually finished this one, sorry.

I feel significantly better about my failures to find it via Google, at least :)

Suggest editing the post to reflect that, if possible.

I did have to read (only) the first sentence before I got it. Do I get 15 geek points or 20?

By "the first sentence" do you mean "This is part 2 of a sequence on problem solving", or "You have to be careful what you wish for"?

The latter, of course.

I got the reference.

And I've never actually watched more than a couple of episodes of the show...

Ouch, I got it wrong. I thought it was talking about the radio program from my father's childhood. The tagline I had in mind was, "Who knows what evil lurks in the hearts of men? The Shadow knows." Yikes, dating myself.

The silly examples with the library book reminded me of the idea that if you're sitting on a local maximum of the fitness function, any direction you go is down. I think that's why these shadow questions are hard: they are asking you to change your status quo, which almost certainly means coming down (at least temporarily) from a local maximum. I suppose that's why smart people can sometimes seem so over-analytical about big changes. They're smart enough to already be sitting on a pretty good local maximum, and smart enough to recognize that any tradeoffs involved may be complicated.

I got it from the title alone, but skimmed right past the part of the introduction where points were being offered. Guess that means I'm still too unlucky minded!

Rot13 that sort of thing so it doesn't show up in the comments bar. It's a spoiler.