
Hold Off On Proposing Solutions

38 Post author: Eliezer_Yudkowsky 17 October 2007 03:16AM

From pp. 55-56 of Robyn Dawes's Rational Choice in an Uncertain World.  Bolding added.

Norman R. F. Maier noted that when a group faces a problem, the natural tendency of its members is to propose possible solutions as they begin to discuss the problem.  Consequently, the group interaction focuses on the merits and problems of the proposed solutions, people become emotionally attached to the ones they have suggested, and superior solutions are not suggested.  Maier enacted an edict to enhance group problem solving: "Do not propose solutions until the problem has been discussed as thoroughly as possible without suggesting any."  It is easy to show that this edict works in contexts where there are objectively defined good solutions to problems.

Maier devised the following "role playing" experiment to demonstrate his point.  Three employees of differing ability work on an assembly line.  They rotate among three jobs that require different levels of ability, because the most able—who is also the most dominant—is strongly motivated to avoid boredom.  In contrast, the least able worker, aware that he does not perform the more difficult jobs as well as the other two, has agreed to rotation because of the dominance of his able co-worker.  An "efficiency expert" notes that if the most able employee were given the most difficult task and the least able the least difficult, productivity could be improved by 20%, and the expert recommends that the employees stop rotating.  The three employees and a fourth person designated to play the role of foreman are asked to discuss the expert's recommendation.  Some role-playing groups are given Maier's edict not to discuss solutions until having discussed the problem thoroughly, while others are not.  Those who are not given the edict immediately begin to argue about the importance of productivity versus worker autonomy and the avoidance of boredom.  Groups presented with the edict have a much higher probability of arriving at the solution that the two more able workers rotate, while the least able one sticks to the least demanding job—a solution that yields a 19% increase in productivity.

I have often used this edict with groups I have led—particularly when they face a very tough problem, which is when group members are most apt to propose solutions immediately.  While I have no objective criterion on which to judge the quality of the problem solving of the groups, Maier's edict appears to foster better solutions to problems.

This is so true it's not even funny.  And it gets worse and worse the tougher the problem becomes.  Take Artificial Intelligence, for example.  A surprising number of people I meet seem to know exactly how to build an Artificial General Intelligence, without, say, knowing how to build an optical character recognizer or a collaborative filtering system (much easier problems).  And as for building an AI with a positive impact on the world—a Friendly AI, loosely speaking—why, that problem is so incredibly difficult that an actual majority resolve the whole issue within 15 seconds.  Give me a break.

(Added:  This problem is by no means unique to AI.  Physicists encounter plenty of nonphysicists with their own theories of physics, economists get to hear lots of amazing new theories of economics.  If you're an evolutionary biologist, anyone you meet can instantly solve any open problem in your field, usually by postulating group selection.  Et cetera.)

Maier's advice echoes the principle of the bottom line, that the effectiveness of our decisions is determined only by whatever evidence and processing we did in first arriving at our decisions—after you write the bottom line, it is too late to write more reasons above.  If you make your decision very early on, it will, in fact, be based on very little thought, no matter how many amazing arguments you come up with afterward.

And consider furthermore that We Change Our Minds Less Often Than We Think:  24 people assigned an average 66% probability to the future choice thought more probable, but only 1 in 24 actually chose the option thought less probable.  Once you can guess what your answer will be, you have probably already decided.  If you can guess your answer half a second after hearing the question, then you have half a second in which to be intelligent.  It's not a lot of time.
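A quick back-of-the-envelope check (my own illustration, not a calculation from the original study) shows why that result points to early commitment: if the stated 66% average confidence were well calibrated, we would expect roughly a third of the 24 participants to end up choosing the option they had rated less probable, rather than just one of them.

```python
# If participants' stated confidence (avg. 66%) were calibrated,
# how many of the 24 would we expect to pick the option they rated
# LESS probable?  Compare the expectation against what was observed.
n_participants = 24
mean_confidence = 0.66  # avg. probability assigned to the predicted choice

expected_switchers = n_participants * (1 - mean_confidence)
observed_switchers = 1

print(f"Expected under calibration: {expected_switchers:.1f}")  # about 8.2
print(f"Actually observed:          {observed_switchers}")
```

The gap between roughly 8 expected and 1 observed is what licenses the conclusion that, by the time people could state a probability, most had effectively already decided.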

Traditional Rationality emphasizes falsification—the ability to relinquish an initial opinion when confronted by clear evidence against it.  But once an idea gets into your head, it will probably require way too much evidence to get it out again.  Worse, we don't always have the luxury of overwhelming evidence.

I suspect that a more powerful (and more difficult) method is to hold off on thinking of an answer.  To suspend, draw out, that tiny moment when we can't yet guess what our answer will be; thus giving our intelligence a longer time in which to act.

Even half a minute would be an improvement over half a second.

 

Part of the Seeing With Fresh Eyes subsequence of How To Actually Change Your Mind

Next post: "On Expressing Your Concerns"

Previous post: "We Change Our Minds Less Often Than We Think"

Comments (47)

Comment author: Gray_Area 17 October 2007 04:40:23AM 3 points [-]

What circles do you run in, Eliezer? I meet a fair number of people who work in AI (you can say I "work in AI" myself), and so far I can't think of a single person who was sure of a way to build general intelligence. Is this attitude you observe a common one among people who aren't actually doing AI research, but who think about AI?

Comment author: Eliezer_Yudkowsky 17 October 2007 05:10:46AM 4 points [-]

Oh, I'm not talking about the mainstream AI field. Most of them know better. I mean, say, a random middle or upper-class individual in Silicon Valley, or a random user on an IRC channel.

However, the rule about instantly solving Friendly AI may apply even within the AI field, since it's a more difficult problem.

Comment author: Constant2 17 October 2007 06:06:45AM 10 points [-]

It's obvious how to build AI. You just add complexity. AIs need complexity. :-)

Comment author: Richard_Hollerith 17 October 2007 08:41:08AM 2 points [-]

And a randomness-adder :)

Comment author: Eddieosh 17 October 2007 09:14:48AM 3 points [-]

I've just finished a 3-day training course on TRIZ (http://en.wikipedia.org/wiki/TRIZ) a problem solving technique, one of the recurring themes throughout the course was what to do about all the solutions that come out even before you've figured out what the true problem is you're trying to solve. The advice was to write the solutions down (rather than be diverted by them or try to bat them away), use them to help examine the problem a bit more and then carry on until you have enough information to make useful judgements about all the solutions you've generated; this was very helpful advice. You need to have a sound way of formulating and exploring the problem space, as well as generating solutions, otherwise you'll become too distracted by all the great solutions your brain is generating.

Comment author: logicnazi 17 October 2007 09:21:30AM 0 points [-]

I just want to remark that it is far from obvious on a priori grounds that there is no elegant general AI algorithm that will solve all the other problems quite nicely. We've only learned this by the continued failure of the AI community to find such an algorithm or anything like it, and the continued small successes of more specific, less elegant approaches.

Comment author: Rick_Smith 17 October 2007 10:07:46AM 3 points [-]

AI's need Emergence too. Make sure to add some of that to the soup ;^)

Comment author: Alan_Crowe 17 October 2007 11:55:07AM 2 points [-]

X3J13, the ANSI committee that standardised Common Lisp, had many problems to solve. Kent Pitman credits Larry Masinter with imposing the discipline of separating problem descriptions from proposed solutions, and gives insights into what that meant in practice in a post to comp.lang.lisp

http://tinyurl.com/2hppgs

The general interest lies in the fact that the X3J13 Issues were all written up and are available online.

http://www.lispworks.com/documentation/HyperSpec/Front/X3J13Iss.htm

or

http://www.lisp.org/HyperSpec/FrontMatter/X3J13-Issues.html

so if you wish to study how this works there is a resource you can analyse.

I should confess that my interest has been in content not process. I have been reading these issues to learn Common Lisp. Are these pages really a useful resource for scholars wishing to study the separation of problem descriptions from proposed solutions? I don't know.

Comment author: Tiedemies2 17 October 2007 12:01:16PM 4 points [-]

I think this argument is flawed with respect to the more technology-oriented questions. Most people do not seriously claim to solve AI problems. What most people (like myself) who are slightly educated in the field (I did an undergrad minor in AI, just very simple stuff) will do is they will suggest an approach that they would try if they had to start working on it. Technical questions also usually yield to evidence very quickly whenever it matters, i.e., when someone would start burning money on an implementation. That is not to say some time and resources are not to be saved by using the maxim outlined here.

OTOH, the part about economists is valid, since most people have very strong ideas (usually wrong ones) about what will work, e.g., as a policy. But then again, most people have no way of wasting (other peoples') resources based on these faulty ideas.

No, wait...

Comment author: 17 October 2007 01:42:33PM 1 point [-]

The latest of a number of really good posts from you that directly address the concern of this blog. You seem to be really starting to "grok" the terrifying reality of just how biased we are by the very nature of our thought processes, and coming up with good and useful steps to reduce those biases. Nicely done.

Comment author: michael_vassar3 17 October 2007 02:33:23PM 8 points [-]

This post makes me wonder how much time passed for Eliezer between concluding that a technological singularity was a probable part of the future and deciding that creating an AGI was the best response, and likewise how much time passed between concluding that AGI Friendliness would be a difficult problem and concluding that working on a theory of AGI Friendliness was the best response.

Comment author: Richard_Hollerith 17 October 2007 06:55:12PM 0 points [-]

Eliezer, I get the impression that your recent blog entries will make me a better rationalist or, if not that, a better inventor of software, organizational innovations, and social arrangements that will help people become better rationalists.

Good stuff, I say.

Comment author: anonymous_poster 17 October 2007 07:09:57PM 5 points [-]

A surprising number of people I meet seem to know exactly how to build an Artificial General Intelligence, without, say, knowing how to play the guitar or juggle (much easier problems).

Comment author: danlowlite 29 October 2010 01:58:30PM 2 points [-]

Yes, but while those two topics may be interesting to me, other "easy" problems (home and car maintenance, farming) are not so much even though I recognize their importance. I'm not going to learn how to do everything basic before I am going to learn something complicated. Am I?

Is an AI?

And these problems aren't even easy, really. Like the person who knows how to make an AI, one imagines they "know" how to play guitar. There's a competence level and there is a deeper mastery/creation level. I know three chords; I am not <your favorite guitarist>.

Unless that was your point.

Comment author: Constant2 17 October 2007 07:29:16PM 2 points [-]

My AI will play the guitar and juggle so I won't have to.

Comment author: Constant2 17 October 2007 09:02:12PM 0 points [-]

This advice seems the opposite of, "avoid analysis paralysis." These may be bounding two extremes, neither of which is healthy. Or I may simply be wrong about the relationship.

Comment author: Eliezer_Yudkowsky 17 October 2007 09:03:02PM 1 point [-]

Playing the guitar has human-aesthetic components so it's a subproblem of Friendly AI, not just AGI. Building an AI that juggles is a valid challenge. As for trying to do it yourself, that quite misses the point. A mathematician may not be able to do high-speed mental arithmetic, but ought to know how to build a calculator.

Comment author: Luis_Enrique2 17 October 2007 10:02:27PM 1 point [-]

I remember reading something much like this in I am right and you are wrong by Edward de Bono, who as I recall wrote that we should try to hold on to the "I haven't made my mind up" state much longer than we do, and be prepared to say "I don't know" much more often than we do (I think he even proposed a new word we could use to answer questions with that meant we don't have a reason to think either way yet). This was about 15 years ago so I've probably mis-remembered.

I was a philosophy undergrad at the time, and when I asked my tutors about de Bono, they told me he was a vacuous 'self-help' nitwit I should ignore.

Comment author: GreedyAlgorithm 17 October 2007 10:45:03PM 0 points [-]

"My Ap distribution is rather flat."

Hm, MADIRF? :)

Comment author: Doug_S. 18 October 2007 12:14:35AM 12 points [-]

Completely useless methods for building a general intelligence:

Method 1: Put some bacteria on a lifeless planet with liquid water. Wait until one evolves.

Method 2: Find a fertile human of each gender and induce them to mate. Wait nine months.

Comment author: Douglas_Knight2 18 October 2007 03:08:14AM 0 points [-]

Luis Enrique, See above about "We Change Our Minds Less Often Than We Think"; my interpretation is that the people are trying to believe that they haven't made up their minds, but they are wrong. That is, they seem to be implementing the (first) advice you mention. Maybe one can come up with more practical advice, but these are very difficult problems to fix, even if you understand the errors. On the other hand, the main part of the post is about a successful intervention.

Comment author: Rolf_Nelson2 18 October 2007 05:31:08AM 1 point [-]

Constant, regarding "analysis paralysis," keep in mind there are often two separate questions:

1. How much time should I spend thinking about X?

2. Given I'm allocating T time to think about X, how should I divide up T among different thought subtasks?

Analysis Paralysis would generally be a problem with (1).

The current blog post applies more to (2). In the Maier example, the participants presumably know they have a sizable chunk of time blocked out, and the experimental group presumably gets better results not by spending more time overall, but because they reserved a good chunk of T to spend learning the problem, *without* committing right away to a solution.

Comment author: Lewsome 17 January 2010 12:44:36AM 5 points [-]

The notion of delaying the proposing of 'solutions' as long as possible seems an excellent technique for group work, where stated propositions not only appear prematurely but become entangled with other, perhaps unproductive interpersonal dynamics, and where the energy of the deliberately 'unmade up' group mind can possibly assist the individual to internally change position. The thorny bit for me, however, is the individual trying to 'hold that non-thought'—a challenge that is more or less equivalent to deliberately stopping, or even slowing, the thought process, which is meditation after all—something we mere mortals haven't found all that easy so far. Indeed, some argue that many of us aren't even aware there is an 'internal dialogue', let alone knowing how to stop it. In other words, it's easy to say don't make up your mind, but not so easy to enact.

Comment author: ericn 30 December 2010 06:45:00AM 1 point [-]

It's okay to think up solutions. You just have to write them down and refocus on the problem.

This is how a brainstorming session is supposed to work. The main goal of the facilitator is to keep the group criticism from spinning out of control. Usually, if someone proposes a solution, someone will shout out an objection to it. But we should still be thinking about the problem. Just write down the solution and shush the objection, then return to the problem.

Comment author: xamdam 18 March 2010 04:32:26PM 9 points [-]

"...human mind is a lot like the human egg, and the human egg has a shut-off device. When one sperm gets in, it shuts down so the next one can't get in. The human mind has a big tendency of the same sort."

Charlie Munger

http://vinvesting.com/docs/munger/human_misjudgement.html

Comment author: ericn 30 December 2010 06:38:21AM 6 points [-]

I agree. I really hate our notion that "you shouldn't bring up a problem unless you have a solution".

It is obvious to anyone that solves problems that we should analyze the problem before letting our minds move on to a solution.

Comment author: FriendlyViking 17 March 2011 05:33:34PM 2 points [-]

The people advocating that might be confusing analysis with politics. It's annoying when someone criticises your political idea but offers no alternative; it feels (sometimes accurately) that they're disrupting the conversation but offering no input. So in a political debate, a ground rule might be "don't criticise my solution if you don't have a solution of your own".

Rationally, however, that doesn't excuse not assessing the solution. And it's also important to remember that one potential solution is "do nothing" or "carry on doing what we were doing already". So, in most cases, ANY new solution has an alternative solution to which it can be compared.

Comment author: MarkusQ 11 August 2011 05:51:36AM *  3 points [-]

Are you sure of that citation? I just looked for it in a copy of Dawes's "Rational Choice in an Uncertain World" and again with the full text search in Google books

http://books.google.com/books/about/Rational_choice_in_an_uncertain_world.html?id=rcU1BsfrM2kC

and did not find any mention of Maier's work. Also, though Maier does frequently use the "Changing Work Procedures" problem, I haven't turned up any publication by him that matches this description. (Note that this failure is quite possibly mine; I haven't done an exhaustive search).

-- MarkusQ

Comment author: byrnema 26 August 2011 11:30:33AM 1 point [-]

I'm thinking perhaps it is this book by Norman R.F. Maier:

Problem Solving Discussions and Conferences, published by McGraw-Hill Education (December 1963).

Does anyone know of a more recent journal article on the topic, 'wait before proposing solutions'?

Comment author: mat33 08 October 2011 02:22:20AM *  0 points [-]

"why, that problem is so incredibly difficult that an actual majority resolve the whole issue within 15 seconds.", "We Change Our Minds Less Often Than We Think" and "Cached Thoughts"...

Right. We don't do a lot of "our" thinking ourselves. We aren't individually sentient, not really. We don't notice it, but the actual thinking is going on in our subcultures. The sad and funny thing is, we don't even try to understand the cognition of our subcultures, when we research cognition.

Comment author: stcredzero 03 June 2012 09:04:59PM 3 points [-]

I think I'm sentient. If you're not sentient, I would surmise that you believe you're lucky enough to be in a competent subculture -- one self-aware enough to bring this realization to you.

Could one devise a series of experiments to show that individuals aren't sentient, but "subcultures" are?

Comment author: ameriver 20 May 2012 03:05:38AM 7 points [-]

This is one of the techniques I've always thought sounded really useful, but never had a clear enough picture of to implement for myself. Does anyone have an example (a transcript, or something of the like) of groups and/or individuals successfully discussing a problem for 5 or 10 minutes without proposing any solutions? I have trouble imagining what that would look like.

Comment author: TheOtherDave 20 May 2012 04:23:52AM 18 points [-]

No transcript. But I do this professionally all the time. Clients frequently come to me with a design in mind for a solution, and it's often important to back them up and get them to tell me what the problem actually is.

Usually, I start with the question "How would you be able to tell that this problem had been solved?" and repeat it two or twenty times in different words until someone actually tries to answer it.

On one occasion I handed a client my pen and asked whether it was a solution to their problem. They looked at me funny and said it wasn't. I asked them how they knew that, and after a while one of them said "well, for one thing, it doesn't do X" and I said "great!", took the pen back, and wrote "has to do X". Then I handed them the pen back and said "OK, suppose I add the ability to do X somehow to this pen. Is it a solution to your problem now?" and after a couple of iterations they got it and started actually telling me what their problem was.

The thing that used to astonish me is how often the proposed solution utterly fails to even address the problem articulated by the same person who proposed the solution. I've come to expect it.

Comment author: Jonathan_Graehl 15 June 2012 07:11:54AM 2 points [-]

I start with the question "How would you be able to tell that this problem had been solved?" and repeat it two or twenty times in different words until someone actually tries to answer it.

I handed a client my pen and asked whether it was a solution to their problem

Bleakly funny. Thanks for that. I usually retreat (probably with an angry or pained look on my face) when I notice I'm not really being heard. But sometimes it's better to play and explore.

Comment author: TheOtherDave 15 June 2012 03:18:29PM 7 points [-]

(nods) It's kind of critical in a systems engineering role.

Only vaguely relatedly, one of my favorite lines ever came from my first professional mentor, about a design he was proposing: "It does what you expect, but you have to expect the right things."

Comment author: Epiphany 21 September 2012 04:40:15AM *  1 point [-]

Usually, I start with the question "How would you be able to tell that this problem had been solved?" and repeat it two or twenty times in different words until someone actually tries to answer it.

What a true and hilarious depiction of life. I have the exact same problem doing web development. Because the people giving me projects are not IT people they tend to come up with totally dysfunctional solutions. Yet they almost always start by telling me how they want the problem solved. I have to dig to find out what the problem is first but I just ask them "What result do you want?" or "What purpose do you want this to serve?" and say "I can't make it serve the purpose without knowing what the purpose is." That works for me, without me having to ask them 20 times. Then again maybe you're doing projects in radically different contexts all the time, or with completely different people who vary in their ability to see the point in answering that question. I work with a limited number of people and contexts, all of which I understand pretty well, so my problem clarification process is pretty simple.

Comment author: TheOtherDave 21 September 2012 06:55:38AM 0 points [-]

Yeah, it's different people and a different context every time.

Comment author: Zian 22 January 2013 05:05:08AM *  2 points [-]

What purpose do you want this to serve ... I work with a limited number of people and contexts, all of which I understand pretty well, so my problem clarification process is pretty simple.

In my experience as a programmer (who wore all the software-related hats), I found that even when I understood the domain quite well, inquired about the purpose multiple times, and wrote little stories illustrating my interpretation of the users' desires, I could walk away from early usability tests with major changes to the project.

In one particularly memorable instance, I got all the way through making paper prototypes and making pretend e-mails. Then, I convinced my manager to try out the system. The process started in a pre-existing e-mail package and then routed stuff to the proposed custom software. He sat down, opened up the pretend e-mail, and started to save the attached files. At that point, we discovered that there was no need for the custom software and killed the entire project.

Comment author: Insert_Idionym_Here 10 September 2012 10:05:41PM 11 points [-]

I have attempted using this in more casual decision making situations, and the response I get is nearly always something along the lines of "Okay, just let me propose this one solution, we won't get attached to it or anything, just hear me out..."

Comment author: shminux 10 September 2012 10:13:37PM 5 points [-]

What do you do in this situation? Let them speak? Ask them to write down their solution, to be discussed later?

Oops... Couldn't resist proposing solutions.

Comment author: Insert_Idionym_Here 11 September 2012 05:08:44AM 1 point [-]

To be perfectly honest, at the time I simply planted my face on the table in front of me a few times. I was at a dinner party with friends of my mother's; I would have sounded extremely condescending otherwise.

Comment author: shminux 11 September 2012 04:30:12PM 1 point [-]

Ah yes, status mismatch in a not very rational crowd. Not much you can do there.