This is part 2 of a sequence on problem solving. Here's part 1, which introduces the vocabulary of "problems" versus "tasks". This post's title is a reference[1] worth 15 geek points if you get it without Googling, and 20 if you can also get it without reading the rest of the post.
You have to be careful what you wish for. You can't just look at a problem, say "That's not okay," and set about changing the world to contain something, anything, other than that. The easiest way to change things is usually to make them worse. If I owe the library fifty cents that I don't have lying around, I can't go, "That's not okay! I don't want to owe the library fifty cents!" and consider my problem solved when I set the tardy book on fire and now owe them, not money, but a new copy of the book. Or I could make things, not worse in the specific domain of my original problem, but bad in some tangentially related department: I could solve my library fine problem by stealing fifty cents from my roommate and giving it to the library. I'd no longer be indebted to the library. But then I'd be a thief, and my roommate might find out and be mad at me. Calling that a solution to the library fine problem would be, if not an outright abuse of the word "solution", at least a bit misleading.
So what kind of solutions are we looking for? How do we answer the Shadow Question? It's hard to turn a complex problem into doable tasks without some idea of what you want the world to look like when you've completed those tasks. You could just say that you want to optimize according to your utility function, but that's a little like saying that your goal is to achieve your goals: no duh, but now what? You probably don't even know what your utility function is; it's not a luminous feature of your mind.
For little problems, the answer to the Shadow Question may not be complete. For instance, I have never before thought to mentally specify, when making a peanut butter sandwich, that I'd prefer that my act of sandwich-making not lead to the destruction of the Everglades. But it's complete enough. The Everglades aren't close enough to my sandwich for me to think they're worth explicitly acting to protect, even now that Everglades-destruction has occurred to me as an undesirable potential side effect. But for big problems, well - we may have a problem...
Here are a few broad approaches you could take in trying to answer the Shadow Question. Somebody please medicate me for my addiction to cutesy reference-y titles for things:
- First, Do No Harm: Your top priority is to avoid making anything worse than the present status quo. This is the strategy to apply if the status quo is more-or-less acceptable but precarious, or if you're in a particularly hazardous location relative to your problem (that is, you can very easily make things go very pear-shaped if you don't tread carefully). For instance, if you aren't a paramedic, you don't move somebody who's just been flung at high speed from a Prius, landed on the shoulder of the highway, and isn't moving. However she's doing, you're most likely to make it worse if you try to drag her somewhere.
- Cherry on Top: Your top priority is to make things better than the present status quo. When your problem is mostly independent of the rest of the world, and you have some direct control over it, this is a safe bet: pick what you can mess with, and mess with it so it gets better. It's a worse choice when anything you do will probably have a heap of side effects. For instance, if you're not feeling well, you could drink a glass of water and take a nap. This pretty definitely won't cure you, but it's got a good shot at helping a little.
- Lottery Ticket: Your top priority is to enable a best case scenario. When the best case scenario is easy and straightforward to attain, this isn't a long shot - but it's also not much of a problem. This is the strategy to employ when you have a really awesome best case on your hands, or when the worse cases are fairly safe and you're comfortable risking them. This is distinct from "Cherry on Top" because CoT doesn't allow a large chance for worsening the status quo; it requires the predictable outcome to be an improvement, even if it's not the most fantastic thing that could happen. As an example, you could sign up for cryonics. This is guaranteed to cost you financially, but if a string of "ifs" turns out nicely, it might let you be an immortal undead ice zombie with a flying car, which would be very cool (pun intended).
- Turn Disasters Off: Your top priority is to disable a worst case scenario (or a family of them). This is the go-to strategy when the disaster in question is really, horrendously awful and you aren't comfortable with it having any appreciable chance of realization. You might tolerate a guaranteed reduction in the quality of the situation in order to stave off a worse one, and so it's different from "First, Do No Harm". For instance, you could hand over children to evil aliens in order to avert global catastrophe.
These strategies tolerate plenty of overlap, but in general, the more overlap available in a situation, the less problematic a problem you have. If you can simultaneously enable the best case, disable the worst case, make it unlikely that anything will deteriorate, and nearly guarantee that things will improve - uh - go ahead and do that, then! Sometimes, though, you have to rank these strategies and narrow down your plan in order. Arrange them however you like, and in the search space each one leaves behind, optimize for the next.
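To make that "narrow down in order" idea concrete, here's a minimal sketch of the ranking as lexicographic filtering. Everything in it is an illustrative assumption, not anything from the post: the Plan record, the made-up utility numbers, and the disaster_floor threshold all stand in for whatever rough scoring you can actually do on your problem.

```python
from dataclasses import dataclass
from typing import List, Optional

STATUS_QUO = 0.0  # utility of doing nothing, used as the reference point


@dataclass
class Plan:
    name: str
    worst_case: float  # utility of the worst outcome this plan allows
    best_case: float   # utility of the best outcome this plan enables
    expected: float    # rough expected utility of the plan


def choose_plan(plans: List[Plan], disaster_floor: float = -10.0) -> Optional[Plan]:
    """Apply the strategies in one fixed order, each narrowing the search space."""
    # Turn Disasters Off: discard any plan whose worst case falls at or
    # below the catastrophe threshold.
    survivors = [p for p in plans if p.worst_case > disaster_floor]
    # First, Do No Harm: prefer plans that can't end up below the status quo;
    # if none qualify, fall back to the disaster-free survivors.
    harmless = [p for p in survivors if p.worst_case >= STATUS_QUO] or survivors
    # Cherry on Top, then Lottery Ticket: take the best expected improvement,
    # breaking ties toward the plan with the better best case.
    return max(harmless, key=lambda p: (p.expected, p.best_case), default=None)


# The library-fine example: book-burning risks far more than it can gain.
plans = [
    Plan("burn the book", worst_case=-20.0, best_case=-1.0, expected=-8.0),
    Plan("steal from roommate", worst_case=-6.0, best_case=-0.5, expected=-2.0),
    Plan("just pay the fine", worst_case=-0.5, best_case=-0.5, expected=-0.5),
]
print(choose_plan(plans).name)  # -> just pay the fine
```

A different ordering of the filters could pick a different plan; the point is only that each strategy prunes the search space before the next one optimizes within what's left.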
Part 3 of this sequence will conclude it, and will talk about resource evaluation.
1"The Shadow Question" refers to the question "What do you want?", which was repeatedly asked by creatures called Shadows and their agents during the course of the splendid television show Babylon 5.
Ouch, I got it wrong. I thought it was talking about the radio program from my father's childhood. The tagline I had in mind was, "Who knows what evil lurks in the hearts of men? The Shadow knows." Yikes, dating myself.
The silly examples with the library book reminded me of the idea that if you're sitting on a local maximum of the fitness function, any direction you go is down. I think that's why these shadow questions are hard: they are asking you to change your status quo, which almost certainly means coming down (at least temporarily) from a local maximum. I suppose that's why smart people can sometimes seem so over-analytical about big changes. They're smart enough to already be sitting on a pretty good local maximum, and smart enough to recognize that any tradeoffs involved may be complicated.