Wicked Problems
Original post: http://bearlamp.com.au/wicked-problems/
Nothing is a wicked problem.
When I started researching problems, problem solving, solutions, and meta-solving processes, I stumbled across the concept of a wicked problem. This is from Wikipedia:
Rittel and Webber's 1973 formulation of wicked problems in social policy planning specified ten characteristics:[3][4]
- There is no definitive formulation of a wicked problem.
- Wicked problems have no stopping rule.
- Solutions to wicked problems are not true-or-false, but good or bad.
- There is no immediate and no ultimate test of a solution to a wicked problem.
- Every solution to a wicked problem is a "one-shot operation"; because there is no opportunity to learn by trial and error, every attempt counts significantly.
- Wicked problems do not have an enumerable (or an exhaustively describable) set of potential solutions, nor is there a well-described set of permissible operations that may be incorporated into the plan.
- Every wicked problem is essentially unique.
- Every wicked problem can be considered to be a symptom of another problem.
- The existence of a discrepancy representing a wicked problem can be explained in numerous ways. The choice of explanation determines the nature of the problem's resolution.
- The social planner has no right to be wrong (i.e., planners are liable for the consequences of the actions they generate).
Conklin later generalized the concept of problem wickedness to areas other than planning and policy. The defining characteristics are:
- The problem is not understood until after the formulation of a solution.
- Wicked problems have no stopping rule.
- Solutions to wicked problems are not right or wrong.
- Every wicked problem is essentially novel and unique.
- Every solution to a wicked problem is a 'one shot operation.'
- Wicked problems have no given alternative solutions.
Defeating a wicked problem
It took me a while to realise what a wicked problem was. It is evil. It's a challenge. It's a one-shot task that you don't really understand until you are attempting to solve it, and then you influence it by trying to solve it. It's wicked. And then I started paying attention to everything around me. And suddenly being a social human was a wicked problem. Every new interaction is unlike the last ones; as soon as you enter the interaction it's too late, and you only have one shot. Any action towards the problem adds more complexity to the problem.
Then I looked to time management. Time management is a wicked problem. You start out knowing nothing. It takes time to work out what takes time. And by the time you think you have a system in place you are already burning more time. Just catching up on a bad system is failing at the wicked problem.
Then I looked to cooking. No two ingredients are the same. Even if you are cooking a thing for the 100th time, the factors of the day, the humidity, temperature, it's going to be different. You can't know what's going to happen.
Then I looked at politics. And that's what wicked problems were invented to describe: social problems where trying to solve the problem changes the problem. And nothing makes it easier.
Then I took my man-with-a-hammer syndrome and I whacked myself on the head with it.
Okay so not everything is a hammer-nail wicked problem. Even wicked problems are not a wicked problem. There are problems out there that are really wicked problems, but it would be rare that you find one.
There is a trick to solving a wicked problem. The trick is to work out how it's not a wicked problem. Sure, if it's wicked by design, so be it. But real problems in the real world are only pretending to be wicked problems.
1. The problem is not understood until after the formulation of a solution.
Yeah, okay. So you don't really get the problem. That's cool. You have solved problems before, and problems like this one too. The worst response to a problem you don't understand is to never attempt it. If you don't understand, it's time to quantify what you do understand and what you don't. After that, look at how much uncertainty you can get away with and how to reduce it. If in doubt, refer to the book How to Measure Anything.
2. Wicked problems have no stopping rule.
Real wicked problems don't have a stopping rule, but real-world problems do. Or you can give them one anyway. How many years is enough years of life? "I don't know, I will decide when I get there." How much money is enough money? "I will first earn my next 10 million dollarydoos and then decide what to do next." Yes, a wicked problem has no stopping rule. But that's not the real world. In the real world even a fake stopping rule is good enough for your purposes.
3. Solutions to wicked problems are not right or wrong.
Okay, maybe a tricky one. Lots of things are not right or wrong. "Should I earn to give, or should I bring around FAI sooner?" Who knows? Right now people are arguing about it, but we don't really know. If you are making decisions based on right or wrong, you probably want to do the right thing. And if you genuinely can't decide, the options are effectively equally good and it doesn't matter which you choose. If you can make one option more right than the other, do that. It's probably not a real wicked problem. "How should I format this Word document?" is not right or wrong, but it's also irrelevant.
4. Every wicked problem is essentially novel and unique.
Yes. If you are facing a truly novel and unique problem there is nothing I can say that can help you. But if you are not, there are many options. You can:
- build a model scenario and test solutions
- look for existing examples of similar problems and find similar solutions
- try to break the problem into smaller known parts
- consider doing nothing about the problem and see if it solves itself
If a problem is truly unique, then you have no reason to fear the unknown, because it was not possible to be prepared. If it's not unique, be prepared (we are all preparing for problems all the time).
5. Every solution to a wicked problem is a 'one shot operation.'
Yes, these are hard. Maybe some of the solutions to 4 will help. Build models; try to search for or create similar scenarios (why do trolley problems exist, other than to test one-shot problems with pre-thought-out examples?). You only get one shot to launch a nuclear missile the first time (and we are very glad that we didn't ignite the atmosphere that time). Nowadays we have computer modelling. We have prediction markets, we have Bayes. We can know what we don't know. And we can make it significantly less dangerous to launch into space when we risk the lives of astronauts.
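The kind of modelling I mean can be very simple. Here is a minimal sketch (all probabilities invented for illustration) of rehearsing a one-shot decision with a Monte Carlo simulation: instead of taking your single real shot blind, you take thousands of simulated shots first and see how often things go well.

```python
import random

def simulate_launch(p_engine_ok: float, p_weather_ok: float) -> bool:
    """One rehearsal of the one-shot event: success requires every factor to hold."""
    return random.random() < p_engine_ok and random.random() < p_weather_ok

def estimate_success(p_engine_ok: float, p_weather_ok: float,
                     trials: int = 100_000) -> float:
    """Estimate the success probability by rehearsing the one-shot event many times."""
    wins = sum(simulate_launch(p_engine_ok, p_weather_ok) for _ in range(trials))
    return wins / trials

random.seed(0)
estimate = estimate_success(0.98, 0.90)
print(round(estimate, 2))  # close to 0.98 * 0.90 = 0.882
```

A toy this small can be solved on paper, but the same pattern scales to problems with many interacting uncertain factors, where simulation is often the only practical way to get a feel for a decision before you spend your one shot.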
6. Wicked problems have no given alternative solutions.
Yes. Wicked problems don't, but real-world problems could, and often do. Find those solutions, or the degrees of freedom in your problem. Search for and try to confirm possible options, find similar scenarios, and use everything you have.
Nothing is a wicked problem.
Meta: This took 1 hour to write and has been on my mind for months. Coming soon: Defining what is a problem
The ladder of abstraction and giving examples
Original post: http://bearlamp.com.au/examples/
When we talk about a concept or a point, it's important to understand the ladder of abstraction. It has been covered before on LessWrong and elsewhere as advice for communicators on how to bridge a gap of knowledge.
Knowing, understanding and feeling the ladder of abstraction prevents things like this:
- Speakers who bury audiences in an avalanche of data without providing the significance.
- Speakers who discuss theories and ideals, completely detached from real-world practicalities.
When you talk to old and wise people, they will sometimes give you stories from their lives: "back in my day...". Seeing that in perspective is a good way to realise it might be their way of shifting around the ladder of abstraction. As an agenty-agent of agenty goodness, your job is to make sense of this occurrence. The ladder of abstraction is very powerful when used effectively and very frustrating when you find yourself on the wrong side of it.
The flipside of this example is when people talk at a highly theoretical level. I suspect this happens to philosophers, as well as hippies. They are very good at telling you about the connections between things like "energy" or "desire", but lack the grounding to explain how that applies to real life. I don't blame them. One day I will be able to think completely abstractly. Today is not that day. Since today is not that day, it is my duty and yours to ask and specify: to explain what the ladder of abstraction is, and then tell them you have no idea what they are talking about. Or, as with the example above, ask them to go up a level on the ladder of abstraction: "If I were to learn something from your experiences - what would it be?"
Lesswrong doing it wrong
I care about adding the conceptual ladder of abstraction to the repertoire for a reason. LW'ers are very good at paying attention to details. A really powerful and important ability. After all, the fifth virtue is argument, the tenth is precision. If you can't be precise about what you are communicating, you fail to value what we value.
Which is why it's great to see critical objections to what OP's provide as examples.
I object when defeating an example does not defeat the rule. Our delightful OP may survey their territory, stride forth, and exclaim that they have a map for it and a few similar mountains or valleys. Correcting the map of those particular mountains and valleys doesn't change the rest of the territory, and it doesn't change the rest of the map.
This does matter. Recently a copy of this dissertation came around the Slack: https://cryptome.org/2013/09/nolan-nctc.pdf. It is a report detailing the ridiculous culture inside the CIA and other US government security institutions. One of the biggest problems within that culture can be shown through this example (page 34 of the report):
The following exchange is a good example, told to me by a CIA analyst who was explaining the rules of baseball to visitors who didn’t know the game:
Analyst A: So there are four bases--
Analyst B: -- Well, no, it’s really three bases plus home plate.
Analyst A: ... Okay, three bases plus home plate. The batter hits the ball and advances through the bases one by one—
Analyst C: -- Well, no, it doesn’t have to be one base at a time.
And these ones on page 35:
The following excerpts from stories people have told me or that I witnessed further illustrate this concept:
John: I see you’ve drawn a star on that draft.
Bridget: Yeah, that’s just my doodle of choice. I just do it unconsciously sometimes.
John: Don’t you mean subconsciously?
Scott: Good morning!
Employee in the parking lot: Well, I don’t know if it’s good, but here we are.
Helene: I am so thirsty today! I seriously have a dehydration problem.
Lucy: Actually, you have a hydration problem.
Victoria: My hopes have been squashed like a pancake.
James: Don’t you mean flattened like a pancake?
For those of us who don't have time to read 215 pages: the point is that analyst culture does this. A lot. From the outside it might seem ridiculous. We can say, with intellectual confidence, that analysts A, B and C in the first example were all right, and that if they had paid attention to the object level of the situation they would have skipped the interruptions and got to the point of explaining how baseball works. But that's not what it feels like when you are on the inside.
The report outlines that these things make analyst culture a difficult one to be a part of or be engaged in because of examples like these.
We do the same thing. We nitpick at examples, and fight over irrelevant things. If I were to change everyone's mind, I would rather see something like this:
Statements including "no one denies that ..." are usually false. Regardless, my goal here was to...
Taken literally, yes. However these statements are not intended to be taken literally...
Turn into:
(*Yes, this is not a very good example of an example; it is an example of a turn of speech that was challenged, but the same effect of nitpicking on irrelevant details is present.)
Nitpicking is not necessary.
Sometimes we forget that we are all in the same boat together, racing down the river at the rate at which we can uncover truth. Sometimes it feels like we are in different boats, racing each other. If that were true, it would make sense to compete and to accuse each other of failures along the journey in order to get ahead. But we do not want to do that.
It's in our nature to compete, the human need to be right! But we don't need to compete against each other, we need to support each other to compete against Moloch, Akrasia, Entropy, Fallacies and biases (among others).
I am guilty myself. In my personal life as well as on LW. If I am laying blame, I blame myself for failing to point this out sooner, more than I blame anyone else for nitpicking examples.
The plan of action.
Next time you go to comment, next time I go to comment: think very carefully about whether you can improve, whether I can improve, the post being commented on, before levelling objections at it. We want to make the world a better place. People wiser, older, sharper and wittier than me have already said it: "if you are looking for where to start... you need only look in the mirror".
Meta: this took 3 hours to write.
Against easy superintelligence: the unforeseen friction argument
In 1932, Stanley Baldwin, prime minister of the largest empire the world had ever seen, proclaimed that "The bomber will always get through". Backed up by most of the professional military opinion of the time, by the experience of the first world war, and by reasonable extrapolations and arguments, he laid out a vision of the future where the unstoppable heavy bomber would utterly devastate countries if a war started. Deterrence - building more bombers yourself to threaten complete retaliation - seemed the only counter.
And yet, things didn't turn out that way. Against all past trends, the light fighter plane surpassed the heavily armed bomber in aerial combat, the development of radar changed the strategic balance, and cities and industry proved much more resilient to bombing than anyone had a right to suspect.
Could anyone have predicted these changes ahead of time? Most probably, no. All of these ran counter to what was known and understood, (and radar was a completely new and unexpected development). What could and should have been predicted, though, was that something would happen to weaken the impact of the all-conquering bomber. The extreme predictions would be unrealistic; frictions, technological changes, changes in military doctrine and hidden, unknown factors, would undermine them.
This is what I call the "generalised friction" argument. Simple predictive models, based on strong models or current understanding, will likely not succeed as well as expected: there will likely be delays, obstacles, and unexpected difficulties along the way.
I am, of course, thinking of AI predictions here, specifically of the Omohundro-Yudkowsky model of AI recursive self-improvements that rapidly reach great power, with convergent instrumental goals that make the AI into a power-hungry expected utility maximiser. This model I see as the "supply and demand curve" of AI prediction: too simple to be true in the form described.
But the supply and demand curves are generally approximately true, especially over the long term. So this isn't an argument that the Omohundro-Yudkowsky model is wrong, but that it will likely not happen as flawlessly as described. Ultimately, "the bomber will always get through" turned out to be true: but only in the form of the ICBM. If you take the old arguments and replace "bomber" with "ICBM", you end up with strong and accurate predictions. So "the AI may not foom in the manner and on the timescales described" is not saying "the AI won't foom".
Also, it should be emphasised that this argument is strictly about our predictive ability, and does not say anything about the capacity or difficulty of AI per se.
Miracle Mineral Supplement
We can always use more case studies of insanity that aren't religion, right?
Well, Miracle Mineral Supplement is my new go-to example for Bad Things happening to people with low epistemic standards. "MMS" is a supposed cure for everything ranging from the common cold to HIV to cancer. I just saw it recommended in another Facebook thread to someone who was worried about malaria symptoms.
It's industrial-strength bleach. Literally just bleach. Usually drunk, sometimes injected, and yes, it often kills you. It is every bit as bad as it sounds if not worse.
This is beyond Poe's Law. Medieval blood draining via leeches was a far more excusable error than this; they had far less evidence that it was a bad idea. I think if I were trying to guess the dumbest alternative medicine on the planet, I still would not have guessed this low. My brain is still not pessimistic enough about human stupidity.
The Aspirin Paradox - a replacement for the Smoking Lesion Problem?
It's been pointed out that the Smoking Lesion problem is a poorly chosen decision theory problem, because in the real world there actually is a direct causal link from smoking to cancer, and people's intuitions are influenced more by that than by the stated parameters of the scenario. In his TDT document, Eliezer concocts a different artificial example (chewing gum and throat abscesses). I recently noticed, though, a potentially good real-world example of the same dynamic: the Aspirin Paradox.
Despite the effectiveness of aspirin in preventing heart attacks, those who regularly take aspirin are at a higher risk of a second heart attack, because those with symptoms of heart disease are more likely than those without symptoms to be taking aspirin regularly. While it turns out this "risk factor" is mostly screened off by other measurable health factors, it's a valid enough correlation for the purposes of decision theory.
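The dynamic is easy to reproduce in a toy simulation (all probabilities below are invented for illustration, not real epidemiology): a hidden disease variable raises both the chance of taking aspirin and the chance of a heart attack, while aspirin itself causally lowers attack risk. The confounding makes aspirin-takers look worse off even though the drug helps.

```python
import random

random.seed(42)

def simulate(n: int = 100_000):
    """Return observed heart-attack rates among aspirin takers and non-takers."""
    takers, non_takers = [], []
    for _ in range(n):
        diseased = random.random() < 0.2                       # hidden confounder
        takes_aspirin = random.random() < (0.8 if diseased else 0.1)
        base_risk = 0.30 if diseased else 0.02                 # disease drives attacks
        risk = base_risk * (0.7 if takes_aspirin else 1.0)     # aspirin causally helps
        attack = random.random() < risk
        (takers if takes_aspirin else non_takers).append(attack)
    return sum(takers) / len(takers), sum(non_takers) / len(non_takers)

taker_rate, non_taker_rate = simulate()
# Aspirin-takers show the *higher* observed attack rate, despite the causal benefit,
# because disease status confounds who takes aspirin.
assert taker_rate > non_taker_rate
```

Conditioning on disease status (the analogue of "screening off" by other measurable health factors) would reveal the true causal direction within each subgroup, which is exactly why the raw correlation is the interesting part for decision theory.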