The times I was able to get people to do things they felt were too unlikely to be worth committing to were largely about lowering the emotional costs of failure. The context here is a bit different, but it seems likely that some of the same factors apply.
Using “writing HPMoR” as an example, there’s more than one thing failure could be taken to mean. One is “I tested a high-risk, high-reward idea, and it didn’t pan out. I learned something useful about what kinds of things I can’t do (right away, at least), and it still strikes me as having been worth attempting, given what I knew at the time. If I keep trying high-risk, high-reward ideas, one of them is likely to pay off, because the idea that I’m limited by what social expectations would see as ‘modest’ isn’t even worth taking seriously”. A completely different thing it could mean is “I was arrogant to think I had a chance at this. I learned nothing on the object level because I already knew I couldn’t do it, but on the meta level I learned that I was wrong to set this aside and hope. In hindsight, it was a mistake that was never worth trying in the first place, and if I keep trying high-risk, high-reward things I’m just going to keep failing, because social expectations of what I’m capable of are *right*”. The people with the latter anticipation are going to be less thrilled about flipping that coin with a 50% chance of success, because the other 50% hurts a lot more.
The former mindset *sounds* a lot better, and people are going to want to say “yeah, that one sounds right! I believe *that* one!” even when their private thoughts tend towards the latter mindset. If you try to get someone in the latter category to act like they’re in the former category, you’re going to run into motivation problems. You’re going to hear “You’re right, and I want to… I just can’t find the motivation”.
In order to get people to shift from “failure means I should be less confident and try less” to “failure means this particular attempt didn’t pan out, and it’s still worth trying more”, you have to be able to engage with (and pass the ‘ideological Turing test’ of) their impulses to take failure as indicative of a larger problem. There is definitely a skill to this, and it can be tough when you can plainly see that the right answer is to “just try it”. At the same time, it’s a skill that can be learned, and it does work for opening things up for change.
Gambling and interviewing seem like opposite sides of the same "trying a low-probability undertaking" coin. In the former case you want the person to realize, after losing a few times, that winning at gambling is not a fruitful undertaking, except in some very specific circumstances. In the latter case, even if someone has failed to secure an offer several times in a row, you want them to continue trying until they succeed. Yet emotionally the two situations can be similar, or even reversed.
Good point. Note that gambling has the added difficulty of being emotionally adversarial: the casino/bookie designs the game and the environment to confuse the player's estimates of the probability and magnitude of success. Interviewing probably has a little of this too, in that many interviewers are more interested in making themselves feel smart than in hiring the candidates who will contribute most.
In any case, focusing on someone's motivation and their perception of the distribution of successes and failures should be secondary to analyzing the real possible outcomes. For most people, there exist jobs they shouldn't interview for. A blanket "keep trying" is unhelpful without a specific analysis of expected value.
I recently got a chance to interview a couple people about this who'd done product management or similar at bay area tech companies.
They agreed that you can't run projects there unless you project near-certainty that the project will succeed. However, they had a trick that hadn't occurred to me before they described it: find a mid-scale objective that is all of: a) quite likely to be at least somewhat useful in its own right; b) almost certainly doable; and c) a stepping-stone toward the more worthwhile but higher-failure-odds goal. When doing product management for groups, you then work toward the harder goal via a chain of such stepping-stones.
I'll be trying this out, probably. It reminds me of building modular code instead of spaghetti code.
Less confidently, I'd like to lightly recommend reading this blog post about Christopher Alexander's notions of how to make buildings into places where humans like to be, and about analogous ideas for how to do good software design. I don't have concrete take-aways to point to from that one, so read it at your own risk or don't, but it seems to me there may be similar patterns for constructing goals and subgoals such that groups can maintain morale and orientation while building toward them.
Looking at my own experience, the thing that motivated me to do things likely to fail was the expectation of getting other benefits even if the attempt failed. One such thing is "experience", but it could also be "it'll be fun" or "attempting will give you status even if you fail" or any number of other things.
Or, if there is feedback after relatively little effort (you find out after the first few chapters if people like it).
There's just something about "work hard for an extended period of time with no feedback until you find out if you won, which is a binary event with low odds" that turns people off, I guess.
There’s just something about “work hard for an extended period of time with no feedback until you find out if you won, which is a binary event with low odds” that turns people off, I guess.
Indeed.
There is a Russian saying, which I am sure I have quoted before, that goes like this:
“It is very difficult to find a black cat in a dark room—especially if the cat is not there.”
If you are working on a difficult problem, and, after considerable effort, you have not yet solved it, it may be that the problem is simply quite difficult; more effort is required; you have not yet put in enough work. After more work—perhaps, much more work—you will solve it. You need only persevere.
Or, it may be that the problem is unsolvable. It may be that you will not, and cannot, ever solve it. It may be that the problem was ill-posed to begin with. All further work would be wasted, as all work to date has been wasted. You should abandon the task at once.
It is often difficult to tell which of these cases you are facing.
"How hard it is to obtain the truth is a key factor to consider when thinking about secrets. Easy truths are simply accepted conventions. Pretty much everybody knows them. On the other side of the spectrum are things that are impossible to figure out. These are mysteries, not secrets. Take superstring theory in physics, for instance. You can’t really design experiments to test it. The big criticism is that no one could ever actually figure it out. But is it just really hard? Or is it a fool’s errand? This distinction is important. Intermediate, difficult things are at least possible. Impossible things are not. Knowing the difference is the difference between pursuing lucrative ventures and guaranteed failure."
- Peter Thiel’s CS183: Startup - Class 11 Notes Essay - Secrets
This is an interesting enough point that I feel like we should have a shorthand for referring to it. I think the key is that the other benefits are of different kinds, and probably operate at different timescales.
If a two-factor market resists change because its Nash equilibrium is reinforced from multiple points, maybe we could call this "two-factor compensation": it takes compensation of multiple types to motivate someone to persist in the face of bad odds.
Or we could make it more explicit, and call it something like multi-factor rewards, because you need to hit several variables in the reward function.
I'm a bit surprised that most of the previous discussion here was focused on the "okay, so how do you actually motivate people?" aspect of this post.
This post gave me a fairly strong "sit bolt upright in alarm" experience because of its implications for epistemics, and I think those implications are sneaky and far-reaching. I expect this phenomenon to influence people's ability to think and communicate long before anyone actually has a project whose hard parts they are hitting.
People form models of what sort of things they can achieve, and what sort of projects to start, and how likely their friends are to succeed at things, and (I expect) this backpropagates through their entire information system.
The problem may not just affect the people on a given project – it could affect the people nearby (and the people nearby them in turn), in terms of what sort of feedback you can easily give each other.
I've struggled with deciding whether to give people feedback that I don't think their project is likely to work, when I nonetheless think the project is the right thing to be working on, either for EV reasons or for longterm-growth-as-a-person reasons, i.e. their next project will benefit from the skills they're gaining here. I know there are some people who'd lean hard into "information is a key bottleneck; definitely don't withhold info like that." And they may be right. But if so, that still leaves us with a deep, crucial question: how do you integrate honest epistemics with work on hard, long problems in a way that lets you actually win at the instrumental part?
(I don't think "pay people a lot" is actually sufficient, although it obviously helps. I think lack of a clear path to victory can be demoralizing even if you have plenty of money – that seems to be where a lot of 20th-century ennui comes from. Also, many of the key projects here are early-stage, where it's just hard to convince people to give you enough money to escape scarcity mindset.)
I notice I don't have a very explicit model of what's going on – this comment felt more like I was ranting than clearly explaining things, not sure how it comes across to others.
I hope to flesh this out (and hopefully make some empirical predictions) during the review phase.
Hmm, I worry that motivation is only part of the picture. There's also idiosyncratic variation between agents, both in ability and in how well they tolerate different outcomes.
By this I mean that they would not have stuck to writing the Sequences or HPMOR or working on AGI alignment past the first few months of real difficulty,
True enough, though you can replace "would not" with "will not" or "did not".
without assigning odds in the vicinity of 10x what I started out assigning that the project would work
This is not the counterfactual I'd assign the most weight to. "Without being Eliezer" is probably too specific, but "without having Eliezer's history of rewarded iconoclasm" wouldn't be a stretch. It's extremely likely that you _ARE_ orders of magnitude more likely to succeed at these endeavors than the majority of people who say they're interested in the topic.
I'm painfully familiar with the issue of lack of group participation, since I can't even get people to show up to a meetup.
Because of that, I've been doing research on identifying the factors contributing towards this issue and how to possibly mitigate them. I'm not sure if any of this will be new to you, but it might spark more discussion.
These are the first ideas that come to mind:
1. For people to be intrinsically motivated to do something, the process of working on it has to be fun or fulfilling.
2. Extrinsic motivation, as you say, requires either money or a reason to believe the effort will accomplish more than other uses of one's time would. If it's a long-term project, the problem of hyperbolic discounting (see the formula sketched after this list) may lead people to watch TV or [insert procrastination technique here] instead, even if they think the project is likely to succeed.
3. If people already have a habit of performing an activity, then it takes less effort for them to participate in similar activities and they demand less benefit from doing so. Identifying habits that are useful for the task you have in mind can be tricky if it's a complex issue, but successfully doing so can keep productivity consistent and reduce its mental cost.
4. Building a habit requires either intense and consistent motivation, or very small steps that build confidence. Again, though, identifying very small steps that still make for good productivity early on may be tricky.
5. If you have trouble getting people to start joining, it may be good to seek out early adopters to provide social proof. However, the social proof may only work for people who are familiar with those specific early adopters and who take cues from them. In that case, you may need to find some regular early adopters and then identify trendsetters in society (habitual early adopters from whom many people take their cues) you could get on board, after which their followers will consider participating. (Then the danger becomes making sure that the participants understand the project, but at least you have more people to choose from.)
6. It may help to remind people from time to time what they're working towards, even though everyone already knows. Being able to celebrate successes and take some time to review the vision can go quite a ways in relieving stress when people start to feel like their work isn't rewarding.
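For reference on item 2, here is a minimal sketch of the standard hyperbolic-discounting model (Mazur's formula; the parameter value below is mine and purely illustrative):

$$V = \frac{A}{1 + kD}$$

Here $V$ is the felt present value of a reward of undiscounted value $A$ arriving after delay $D$, and $k$ measures an individual's impatience. With, say, $k = 0.1$ per day, a payoff a year away retains only about 3% of its face value, which is why the TV tends to win.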
From item 1, if people think they can get a benefit from working on a project even if the project fails, they might be willing to participate. Socializing with project members and forming personal relationships with them may help in this respect, since they'll enjoy working with people. Alternatively, you could emphasize the skills they'll pick up along the way.
From item 4, I've been working on "mini-habits" (a concept I got from Stephen Guise) to lower my own mental costs for doing things, and it seems to be working fairly well. Then the trick becomes getting enough buy-in (per item 5) so you can get other people started on those mini-habits.
There are probably some other factors I'm overlooking at the moment. Since I haven't been able to get results yet, I can't say for sure what will work, but I hope this provides a helpful starting point for framing the problem.
But… what’s wrong with this?
If:
(a) Someone is “not personally madly driven to accomplish the thing”, and
(b) They are not being paid money to do it, and
(c) They don’t think it’s pretty likely to succeed…
… then… why should they keep working on it?
They don’t think it’s pretty likely to succeed…
How much something is worth doing depends on the product of its success chance and its payoff, but it's not clear that anticipations of goodness scale as strongly as consequences of goodness do, which could lead to predictably unmotivating plans (plans which 'should be' motivating).
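To illustrate with made-up numbers (a toy comparison of my own, not anything from the post): consider two plans,

$$\mathrm{EV}_A = 0.9 \times 100 = 90, \qquad \mathrm{EV}_B = 0.1 \times 1000 = 100.$$

Plan B is slightly better by expected value, but if anticipated satisfaction tracks the success probability more steeply than it tracks the payoff, B will feel far less motivating despite being the better bet.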
This is a reasonable point.
However, I have a question. How would you distinguish a case where anticipations of goodness are not matching expected consequences of goodness (aside: I think “goodness of consequences” is a less awkward / more accurate formulation here, actually), from a case where expected goodness of consequences differs from claimed expected goodness of consequences?
In other words:
Alice: You should work on Project X!
Bob: Why?
Alice: Project X is very important! If accomplished, the consequences will be [stuff]!
Bob: Really?
Alice: Yeah! Because of [reasons]!
Bob, thinking: That sounds dubious but I can’t really explain why…
Bob: I am convinced.
Bob, thinking: I am not convinced…
Alice: Great! Then you’ll work on Project X, right? Because it’s so important?
Bob, thinking: There’s no good reason for me to say no…
Bob: Of course I’ll work on Project X.
Bob, thinking: I won’t work on Project X.
Later:
Alice: Bob, why haven’t you been working on Project X?!
Bob, thinking: If I tell her that I was never convinced in the first place, that will look bad…
Bob: Uh, motivation…al… problems. My, uh, System 1. And stuff. You know how it is.
Alice: Confounded System 1! Don’t worry, Bob, I’ll figure out a way around this problem!
Bob: Great! I look forward to being able to work on Project X, which is important.
Bob, thinking: Phew…
Edit: See also “epistemic learned helplessness” (which, as Scott points out, is exactly the correct response much of the time).
Example: you can think AGI alignment is worth working hard on even if (a) you only assign a 30% probability to success, and (b) you're not incredibly excited and overjoyed to be working on it.
By assumption, this also isn't a case where you find the work so inherently bleh that it's actually not a good fit for you and you shouldn't try. If you'd be sufficiently excited in the world where you thought the success odds were 70%, and your system 2 doesn't think the difference between 70% and 30% odds is decision-relevant in this case, then it seems like something's going wrong if you're insufficiently motivated in the 30% case.
You seem to be conflating “madly driven” with “incredibly excited and overjoyed”, and also “expect to succeed” with “excited”?
I appreciate you being the voice of reason, but I'm actually with Eliezer on this one. (a) isn't an immutable fact about you, it's a matter of your policy - what things you allow yourself to get excited about. And if you end up doing nothing exciting for years on end, like most people, then your policy might be suboptimal.
Er, but… (a) is a stipulation that Eliezer himself specified. In his post. I was quoting him!
Eliezer was asking the question: how do you get people to keep working on something that they’re not “personally madly driven to accomplish”, etc. If you can make yourself (or someone else) “madly driven to accomplish” the thing, well, then… that answers that? I… don’t see how your comment isn’t a non sequitur :(
And if you end up doing nothing exciting for years on end, like most people, then your policy might be suboptimal.
True, of course, I agree with this! But… who on earth said anything about doing nothing exciting?
Suppose you are doing something exciting, but then there’s some other thing that you’re not excited about, but that you (allegedly) think is important to keep working on, yet that you don’t keep working on, due to lack of motivation. Your criticism doesn’t apply, but Eliezer’s question does.
This is a very general question, and I'm not sure there are good general answers that don't make use of the specific types involved, such as 'convincing people who like doing X to do X is easier'. (As opposed to 'hard workers work harder/longer, so get hard workers.')
In absence of an answer, I have speculation.
People don't want to do new things.
Getting started is the hardest part of doing anything.
The end of a project is the hardest part.
Maybe people value successes - not in the sense of risk aversion, but maybe they want a certain number of expected successes? .7+.7=1.4, so doing 2 things that each have a 70% chance of success gets you 1.4 expected successes (arithmetic spelled out at the end of this comment).
Maybe people want a certain fraction of the projects they do to succeed? This would make things difficult.
Maybe it's the type of thing, or something about it (doesn't sound interesting, requires acquiring new skills, or takes too long to produce value*).
*As compared to this, and things like it.
What are you trying to get people to do?
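Spelling out the expected-successes arithmetic from above (linearity of expectation; the 70% figures are just the example numbers):

$$\mathbb{E}[\text{successes}] = \sum_i p_i = 0.7 + 0.7 = 1.4,$$

though note that the chance of at least one success is $1 - 0.3^2 = 0.91$ and the chance both succeed is only $0.7^2 = 0.49$, so "1.4 expected successes" still leaves about a 9% chance of failing both.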
"People don't want to do new things."
Uhm, depends. I think many people are quite enthusiastic if they think they can contribute to something exciting and new, and then lose interest if it turns out to be less exciting, less new, and is hard, boring work.
This post has influenced my evaluations of what I am doing in practice by forcing me to consider lowering the bar of expected success for high-return activities. Despite "knowing" about how to shut up and multiply, and knowing to expect a high failure rate when taking reasonable levels of risk, I didn't consciously place enough weight on these. This post helped move me further in that direction, which has led to both an increased number of failures to get what I hoped for, and a number of mostly unexpected successes when applying for / requesting / attempting things.
It is worth noting that I still need to work on my reaction to failing at these low-cost, high-risk activities. I sometimes have a significant emotional reaction to failing, which is especially problematic because the emotional reaction to failing at a long shot can influence my mood for days or weeks afterwards.
I love how cleanly this brings up its point and asks the question. My answer is essentially that you can do this if and only if you can create an expectation of Successful Failure in some way. Thus, if the failing person's real mission can be the friends they made along the way, or the skills they developed, or the lessons learned, or they still got a healthy paycheck, or the attempt brings them honor, or whatever, that's huge.
Writing a full response is on my list of things to eventually do, which is rare for posts that are over a year old.
Focusing on the chance of success may be missing other factors. You also need to consider their belief that the problem is real and their belief that someone else won't solve it before them.
Someone who has, say, a 30% belief in their ability to contribute to solving a problem will have very different motivations if they're 90% vs. 10% confident that the problem is real (and important), and if they have a 90% vs. 10% belief that someone else is going to solve it first.
I doubt it's as simple as multiplying those estimates, but that's an ok place to start.
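As a rough illustration (the numbers are mine, and I'm treating the three estimates as independent, which they probably aren't):

$$0.3 \times 0.9 \times 0.9 \approx 0.24 \qquad \text{vs.} \qquad 0.3 \times 0.1 \times 0.1 = 0.003,$$

where the factors are P(I can contribute), P(the problem is real), and P(no one else solves it first). Same ability estimate, yet roughly an 80-fold difference in how much the effort seems to matter.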
From my experience working with groups, I've found a few things to work, all of which revolve around incentivizing or promoting positive feedback loops. These include: bribery/payment (while most people think of this as money, since I work with 10-17 year olds it's usually candy or pizza), lowering the bar so that small bits of progress are celebrated greatly, making the actual task less boring (turning on some music, working with friendly people, making it look pretty), humor, taking breaks to promote sanity, having a comfortable working environment, etc.
Luke already wrote that there are at least four factors that feed motivation, and the expectation of success is only one of them. No amount of expectancy can increase drive if the other factors are lacking, and as Eliezer notes, it's not sane to expect only one factor to be 10x the others so that it alone powers the engine.
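For readers without the reference handy, the equation Luke used (from Piers Steel's temporal motivation theory; I'm quoting it from memory, so treat this as a sketch) is

$$\text{Motivation} = \frac{\text{Expectancy} \times \text{Value}}{\text{Impulsiveness} \times \text{Delay}},$$

so raising expectancy scales motivation at best linearly, and can't rescue a plan whose value feels near zero or whose payoff is very far away.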
What Eliezer is asking is basically whether anyone has solved the basic coordination problem of mankind, and I think he knows very well that the answer to his question is no. Also, because we are operating in a relatively small mindspace (humans' system 1), the fact that no one has solved that problem in hundreds of thousands of years of cooperation points strongly toward such a solution not existing.
Venture capital seems to be quite successful at finding startups to fund where the founder has a chance of success of less than 30% and still puts in incredibly hard work.
Most people aren't startup founders, but there are many people who want to found startups and are okay with success chances of less than 30%.
There are a lot of coordination problems where you need to get people to do things that are not in their own interest, which you could also call "the basic coordination problem of mankind".
Is it true that they're 'quite successful'? My impression was "yes, people succeed at this problem, but it's still a hard problem and the resolution isn't really an exact science."
I'm not aware of it being a problem that a lot of startup founders quit their startup because they lack motivation half a year after they get funding, or even two years after they get funding.
You raise a good point about the multiple factors that go into motivation and why it's important to address as many of them as possible.
I'm having trouble interpreting your second paragraph, though. Do you mean that humanity has a coordination problem because there is a great deal of useful work that people are not incentivized to do? Or are you using "coordination problem" in another sense?
I'm skeptical of the idea that a solution is unlikely just because people haven't found it yet. There are thousands of problems that were only solved in the past few decades when the necessary tools were developed. Even now, most of humanity doesn't have an understanding of whatever psychological or sociological knowledge may help with implementing a solution to this type of problem. Those who might have such an understanding aren't yet in a position to implement it. It may just be that no one has succeeded in Doing the Impossible yet.
However, communities and community projects of varying types exist, and some have done so for millennia. That seems to me to serve as proof of concept on a smaller scale. Therefore, for some definitions of "coordinating mankind" I suspect the problem isn't quite as insurmountable as it may look at first. It seems worth some quality time to me.
What really helps is mortality and our innate need to leave a legacy. It is better to pick a project with a low probability of success than none at all. That can help you stick with something you estimate to have only a low chance of success, at least long enough for sunk costs to kick in. Does for me, anyway.
This mechanism may only work for one-man projects, or in tight-knit groups like bands of musicians. Your contribution to a big project doesn't feel like a legacy to the same degree.
Good point. Isn't that a bit neurotic though? I pretty much avoid dreaming about any kind of legacy (apart from kids), because that would be setting myself up for unhappiness in old age.
The core idea of this post has occurred to me more than once when considering plans. I'm still working out how to relate to plans with low chances of success. On the one hand, a low chance of success suggests a bad plan, and being willing to "do the improbable" feels like an excuse for having a bad plan. On the other hand, sometimes you really do want to be pursuing low-probability, high-EV plans. I'm uncertain whether LW counts as this. Sometimes I think we can definitely succeed at big stuff; sometimes it seems more like high-EV, low-probability. I'm not sure.
But all in all, the idea here seems important to think more about. I'd actually like to see more thought on this (and perhaps do my own stuff here).
Also want to note that Anna's post on "going at half-speed" seems at least a bit related.
Maybe it would be more fruitful to find the people who do want to cooperate than it is to convert the people who don't?
If I read Eliezer correctly, he is saying that he has found people who want to cooperate, but those people are unable to maintain the motivation to work on the problem (due to the issues he describes in the post).
You should distinguish between "accomplishing one big thing after lots of work" (like writing HPMoR) and "accomplishing one tenth of similar goals after lots of work" (like getting some kind of application approved). The first is hard, and usually just not something people find they must do. I think that's part of the problem: if everyone else gets by without writing HPMoR, surely I can. The second is annoying, but you can have a drink, forget about needing to be motivated, and just keep at it.
For me, it would be hard even if I believed it necessary etc., because I cannot work alone (I tried). What I do must matter to someone "now", and in a very specific way. Cheering me on only made me feel uncomfortable, even when it was deserved. People cheer you on when they want something from you, not just when they are happy for your sake, so after you hear the "good job" part you wait for the other shoe to drop. Someone finding small bugs in what I do helps much more.
...and there's a difference between "didn't do it, things remained as they were (when otherwise they would be better)" and "didn't do it and suddenly all was lost (when otherwise it would not be)". People do improbable things of the latter kind. I expect that quite a few cancer remissions that do happen are actually improbable, if only because the treatment requires lots of money.
(Cross-posted from Facebook.)
I've noticed that, by my standards and on an Eliezeromorphic metric, most people seem to require catastrophically high levels of faith in what they're doing in order to stick to it. By this I mean that they would not have stuck to writing the Sequences or HPMOR or working on AGI alignment past the first few months of real difficulty, without assigning odds in the vicinity of 10x what I started out assigning that the project would work. And this is not a kind of estimate you can get via good epistemology.
I mean, you can legit estimate 100x higher odds of success than the Modest and the Outside Viewers think you can possibly assign to "writing the most popular HP fanfiction on the planet out of a million contenders on your first try at published long-form fiction or Harry Potter, using a theme of Harry being a rationalist despite there being no evidence of demand for this" blah blah et Modest cetera. Because in fact Modesty flat-out doesn't work as metacognition. You might as well be reading sheep entrails in whatever way supports your sense of social licensing to accomplish things.
But you can't get numbers in the range of what I estimate to be something like 70% as the required threshold before people will carry on through bad times. "It might not work" is enough to force them to make a great effort to continue past that 30% failure probability. It's not good decision theory but it seems to be how people actually work on group projects where they are not personally madly driven to accomplish the thing.
I don't want to have to artificially cheerlead people every time I want to cooperate in a serious, real, extended shot at accomplishing something. Has anyone ever solved this organizational problem by other means than (a) bad epistemology (b) amazing primate charisma?
EDIT: Guy Srinivasan reminds us that paying people a lot of money to work on an interesting problem is also standardly known to help in producing real perseverance.