The social reality of how hard you can reasonably be expected to try/the "standard amount" of trying is actually really important, because it gates the tremendous value of social diversification.
After Hurricane Sandy, when lower Manhattan was without power but I still had power in upper Manhattan, I let a couple of friends sleep in my double bed while I slept on my own couch. In principle they could have applied more dakka to ensure their apartment would be livable in natural disasters, but this would be very expensive and the ability to fall back on mutual aid creates a lot of value by decreasing the need for such extraordinary precautions, mitigating unknown unknowns, and lowering everybody's P(totally fucked). (Especially when this scales up from a temporary local power outage to something like being an international refugee.)
On the other hand, if a couple I knew similarly well had shown up in NYC with no notice asking if they could sleep in my bed while I slept on my couch, I would say no. If they had booked a confirmed Airbnb but the host had flaked out at the last minute, I'd probably say yes. If they had gone to Aqueduct Racetrack expecting to win enough for a hotel room but their horse had lost, I'd say no. It seems to me this mostly comes down to whether they had a prima facie reasonable plan, or whether they were predictably likely to take unfair advantage of mutual aid all along in a way that needs to be timelessly disincentivised. But this means a lot depends on what your particular subculture considers "prima facie reasonable" and what unknowns it considers known.
(Which of the following are prima facie reasonable to rely on in retirement planning such that you can fairly expect aid from more fortunate friends if they fail? 1. bitcoin investments, 2. stock index investments, 3. gold, 4. a home you own in the US, 5. USD in savings accounts, 6. a defined-benefit employer pension, 7. US Social Security, 8. US Medicare, 9. absence of punitive wealth taxes in the US, 10. your ability to get a new job after years out of the workforce, 11. your children's unconditional support, 12. the singularity arriving before you get too old to work. The answer will vary a lot with your social circle's politics and memes with only a very indirect dependence on wider objective reality.)
This is excellent (Zvi, as-the-author, has the ability to push this to frontpage and I think that'd be good, and I'd personally recommend this for Featured when we get to the next Featured Cycle)
It is interesting to note that while I was a recent proponent of gratitude, it still hadn't occurred to me that 'maybe I should just try, like, a fuckton of it instead of 3 things a day and noticing-in-the-moment-when-I-can.'
(I'd be starting to get slightly bored of the Modesty discussions, i.e. 'okay okay I get it or hell maybe I don't get it but let's, like, actually try solving some problems instead of getting ourselves into weird handwringing and/or tribal fights about How/Whether To Modesty'.
By contrast, I liked this post a lot, since it was focused on examining holes in the existing models and proposing solutions that seemed really useful)
I pushed it to the frontpage. I'm always fine with the admins pushing my things to the front page if they think it's a good idea, and the only reason I didn't was that it was Shabbat (I schedule posts for Saturday morning so I won't check for comments or karma continuously until the day has passed, and I can read comments and answer before the weekend ends).
The heuristic generating the bottom line seems to be trying not to stick out. You don’t want to be scapegoatable for not having done your job, but there’s also little appeal and a lot of risk in going above and beyond. So you check the box, and then you stop there.
Except, of course, when metrics and endowment effects combine – e.g. if your initially profitable-seeming business gradually starts failing, you’ll be motivated to generate increasingly creative fraud schemes to keep up appearances. If you used to be a straight-A student, the risk of a B might lead to creative test-taking solutions. Etc.
(Comment cross-posted from here.)
There was someone who was interviewed on Tim Ferriss who recommended finding out what you care about and spending a lot more on that and what you don't care about and spending a lot less on that. In particular, there was a suggestion to think about spending ten times as much on what you care about-- you've got a chance of turning up improvements which aren't nearly that expensive.
In case you were wondering, the interviewee is Ramit Sethi, who was talking about what he calls "money dials".
It has caused me to increase my amount of expressed gratitude in smaller ways, so a marginal increase that has had a positive effect. It has not yet caused me to write more full gratitude letters, although I'm thinking of what they might be and I'm hoping it will actually happen. I'm glad you asked since it increases the chance this will indeed happen.
Excellent post.
By the way, is this Zvi Mowshowitz? If so, I wrote and cited some things you wrote about advantage theory some months ago, and a friend asked if I knew you from the rationalist community. I said I didn't know Zvi was a rationalist; I just liked his writing and was happy to reference it. (The piece was on tracing the hidden logic of things when the things that determine winning and losing aren't explicitly coded parameters.)
If so, well, small world — and thanks, twice. This post is great, and your past archives on logic and decisionmaking were wonderfully insightful reading for me.
Yes, it's me, and you're most welcome. I know exactly the piece you're talking about, and it's still the hardest thing I ever had to write.
In our house we started a tradition of holding hands and taking turns saying something we're grateful for before dinner each night. We then soft-evangelise this by having guests over and including them - most notably hundreds of people at our wedding.
Minor notes:
It'd be great if this post had a one sentence explanation of More Dakka instead of linking to TV tropes (and/or gave a hazard warning for TV tropes)
When you link to Challenging the Difficult, it's now an option to link here:
https://www.lesserwrong.com/sequences/3szfzHZr7EYGSWt92
(Although, come to think of it I'm actually not sure what's intended to happen re: links to lesserwrong.com when we eventually migrate over to lesswrong.com)
Pushed a quick one-line explanation for now, on my blog, which I believe should reach here as well. If I can think of a better one given more time I'll move to a better one, but no reason to let the perfect be enemy of the good here.
If you can confirm that a link to LesserWrong sequences will be evergreen I'll switch over to the new version.
I'm reading this again now because I remember liking it and wanted to link it in something I'm writing, however:
Yes, some countries printed too much money and very bad things happened, but no countries printed too much money because they wanted more inflation. That’s not a thing.
That is absolutely a thing that some governments do. Even if we disregard hyperinflation, when a government's tax brackets, spending commitments and sovereign debt are denominated in nominal currency and it needs more money for stuff, the political cost of high inflation is sometimes less than it would be to raise taxes, cut spending or default on bonds.
In these cases, I do not think such explanations are enough.
Eliezer gives the model of researchers looking for citations plus grant givers looking for prestige, as the explanation for why his SAD treatment wasn’t tested. I don’t buy it. Story doesn’t make sense.
On my model, the lack of exploitability is what allowed the failure to happen, whereas your theory on reasons why people do not try more dakka may be what caused the failure to happen.
If the problem were exploitable in the Eliezer-sense, the market would bulldoze straight through the roadblocks you describe. The fact that the problem is not exploitable allows your roadblocks to have the power they empirically have.
If more light worked, you’d get a lot of citations, for not much cost or effort. If you’re writing a grant, this costs little money and could help many people. It’s less prestigious to up the dosage than be original, but it’s still a big prestige win.
I don't think this is true, empirically. Being the first person to think of a new treatment with a proof of concept is prestigious. Working out all the details and engineering it into something practical is much less so.
Our models differ in the magnitude of prestige effects, but I'm not sure they disagree that much. So I think that yes, you'd get a lot less prestige for working out the details, but that it's still a very 'good deal' in prestige terms given the size of the opportunity. I also think that there's a difference between making little tweaks that improve matters versus making a large tweak that makes things much better; the first one has a much bigger low-prestige problem. Basically I think that yes, you take an order-of-magnitude hit to prestige here, but it's more than made up for by the ease of finding and exploiting the problem.
In terms of the market bulldozing through such things, I have much less faith that markets are so reliably good at this. I think they're very good, but unreliable without several additional constraints that often don't hold or only partially hold. Yes, being exploitable in that sense much improves the chance someone will exploit slash fix the issue. But the search process for things to exploit, the decision process to do so, the requirements for doing so, the opportunity cost of doing so, and so forth make it quite easy for exploitable things to sit there unexploited; for things that are exploitable once you notice them, given the right personal circumstances, to go mostly unnoticed and therefore unexploited; and for many somewhat exploitable things not to get exploited, while other things become what are called 'overdone trades', where too many people try to exploit something that is not exploitable enough.
Much of the time, the real cost that makes something unexploitable is the step of noticing the opportunity and taking the time to analyze it, which isn't an obviously exploitable opportunity for exploration, whereas the actual exploitation process then becomes clearly good. In fact, if exploration of a problem is a marginal decision, you should expect to therefore find exploitable actions from it, just not enough in expectation to justify the opportunity cost of doing that exploration over another.
(Or, markets are always imperfectly competitive and this matters more than we give it credit, even though the perfect markets are often great approximations and using them explains a lot.)
This is the canonical More Dakka post, but it doesn't include the origin.
Quoting the TV Tropes page on More Dakka:
The trope namer is Warhammer 40,000, where it is the Ork onomatopoeia for machine gun firing, and their general term for rapid fire capacity: "dakka-dakka-dakka-dakka...".
Here's a color version of the image on the page, that honestly conveys the heuristic to me quite effectively. It's very stupid, but man it's good to just go more dakka when marginal effort is continuing to improve things.
Can I do more of this? Can I do this better? Put in more effort, more time and/or more money? Might that do the job better? Could that be a good idea? Could that be worth it? How much more? How much better?
Things are rarely sprints, they are marathons. It is more important to be able to put in a consistent amount of effort in over time than it is to go full Dakka and burn yourself out.
Find a place where, if you put energy in, you get a little bit of energy back. Don't expect as much energy back as you put in, but some is good and might grow. Be on the lookout for someone who might be able to be the first person to join you in Dakka.
Sometimes this is about spending energy, and sometimes it's about getting a bajillion more lightbulbs which is only limited by your financial situation.
And occasionally it's about doing less. It's surprising how often people trying things like melatonin or other psychoactive drugs, who respond too strongly, don't consider lowering the dosage - and continuing to lower it until they get either no effect or the desired effect.
I like this generalization/extension a lot. It expands the principle that if you have tools that have the potential to get the job done, optimizing the use of those tools, and maximizing the net benefits, is often remarkably low-hanging fruit that most people never think to do. One could take it a step further to cases like the classic joke of "Doctor, doctor, it hurts when I do that!" "Don't do that."
With melatonin, it's not anywhere as simple as "too strong" an effect. Melatonin is typically sold at high doses that don't really have the proper effect at all, which results in people deciding to increase the dose even more until they get a hard knockout effect which looks like the desired thing if you're desperate and squinting, but... no.
I would just like to point out the trolley problem here.
Yes, pulling the lever will save four people, but the one person who dies would still be alive if it wasn't for us pulling the lever. There's a (somewhat radical) active intervention that makes us responsible for potential bad outcomes.
There seems to be a similar effect with "more dakka". Implementing a modest solution is socially justifiable. Implementing a radical solution may be less so.
It's a dilemma, and even if you're 100% doing the right thing, it may still lead to bad outcomes.
Why would pulling the lever make you more responsible for the outcome than not pulling the lever? Both are options you decide to take once you have observed the situation.
Logically, maybe not that much.
Socially, this is a belief people hold.
I think it's the main argument for people choosing not to pull the lever (almost 20% by a survey I found).
Mentioned in another comment, but not explicitly: this falls under the general optimisation mindset. Not just blindly repeating the set of actions that once led to a positive outcome, but experimenting further to find the sweet spot/area - be it the optimal amount of More Dakka or Less Dakka. For example, in "The How of Happiness", the author explicitly advises doing the gratitude journal once a week rather than every day, to make it a ritual of actually feeling and expressing gratitude, rather than jotting down 3 relatively positive things as quickly as possible; not that the latter isn't effective, but it's less effective than savouring the gratitude. (I think this pitfall is easy to avoid, but it's a good example.) Another example could be sleep: suppose you've tried cutting back on sleep and found that 30 minutes less a day for a week didn't affect your cognition. You could try both cutting back another 30 minutes or going the other way and trying to get an extra hour's sleep - what if increasing your sleep time actually gave you benefits that outweighed the extra hour in the evening?
Thank you for writing this post. This gave me the framework and motivation to overcome the trivial inconveniences of reading a wiki and writing an email to sign up for the free gym room in my dorm. Another inconvenience that had scared me away was an appointment to get introduced to how to use the gym. I will start learning to use the barbell there rather than just using the dumbbell I own. Now that I am writing this it sounds quite insane that I didn't take advantage of this for 3 years.
Epistemic Status: Hopefully enough Dakka
Eliezer Yudkowsky’s book Inadequate Equilibria is excellent. I recommend reading it, if you haven’t done so. Three recent reviews are Scott Aaronson’s, Robin Hanson’s (which inspired You Have the Right to Think and a great discussion in its comments) and Scott Alexander’s. Alexander’s review was an excellent summary of key points, but like many he found the last part of the book, ascribing much modesty to status and prescribing how to learn when to trust yourself, less convincing.
My posts, including Zeroing Out and Leaders of Men have been attempts to extend the last part, offering additional tools. Daniel Speyer offers good concrete suggestions as well. My hope here is to offer both another concrete path to finding such opportunities, and additional justification of the central role of social control (as opposed to object-level concerns) in many modest actions and modesty arguments.
Eliezer uses several examples of civilizational inadequacy. Two central examples are the failure of the Bank of Japan and later the European Central Bank to print sufficient amounts of money, and the failure of anyone to try treating seasonal affective disorder with sufficiently intense artificial light.
In a MetaMed case, a patient suffered from a disease with a well-known reliable biomarker and a safe treatment. In studies, the treatment improved the biomarker linearly with dosage. Studies observed that sick patients whose biomarkers reached healthy levels experienced full remission. The treatment was fully safe. No one tried increasing the dose enough to reduce the biomarker to healthy levels. If they did, they never reported their results.
In his excellent post Sunset at Noon, Raymond points out Gratitude Journals:
“Huh. Do *you* keep a gratitude journal?”
“Lol. No, obviously.”
– Some Guy at the Effective Altruism Summit of 2012
Gratitude journals are awkward interventions, as Raymond found, and we need to find details that make it our own, or it won’t work. But the active ingredient, gratitude, obviously works and is freely available. Remember the last time someone expressed gratitude to you and it made your day worse? Remember the last time you expressed gratitude to someone else, or felt gratitude about someone or something, and it made your day worse?
In my experience it happens approximately zero times. Gratitude just works, unmistakably. I once sent a single gratitude letter. It increased my baseline well-being. Then I didn’t write more. I do try to remember to feel gratitude, and express it. That helps. But I can’t think of a good reason not to do that more, or for anyone I know to not do it more.
In all four cases, our civilization has (it seems) correctly found the solution. We’ve tested it. It works. The more you do, the better it works. There’s probably a level where side effects would happen, but there’s no sign of them yet.
We know the solution. Our bullets work. We just need more. We need More (and better) (metaphorical) Dakka. And then we decide we’re out of bullets. We stop.
If it helps but doesn’t solve your problem, perhaps you’re not using enough.
I
We don’t use enough to find out how much enough would be, or what bad things it might cause. More Dakka might backfire. It also might solve your problem.
The Bank of Japan didn’t have enough money. They printed some. It helped a little. They could have kept printing more money until printing more money either solves their problem or starts to cause other problems. They didn’t.
Yes, some countries printed too much money and very bad things happened, but no countries printed too much money because they wanted more inflation. That’s not a thing.
Doctors saw patients suffer for lack of light. They gave them light. It helped a little. They could have tried more light until it solved their problem or started causing other problems. They didn’t.
Yes, people suffer from too much sunlight, or spending too long in tanning beds, but those are skin conditions (as far as I know) and we don’t have examples of too much of this kind of artificial light, other than it being unpleasant.
Doctors saw patients suffer from a disease in direct proportion to a biomarker. They gave them a drug. It helped a little, with few if any side effects. They could have increased the dose until it either solved the problem or started causing other problems. They didn’t.
Yes, drug overdoses cause bad side effects, but we could find no record of this drug causing any bad side effects at any reasonable dosage, or any theory why it would.
People express gratitude. We are told it improves subjective well-being in studies. Our subjective well-being improves a little. We could express more gratitude, with no real downsides. Almost all of us don’t.
On that note, thanks for reading!
A decision was universally made that enough, despite obviously not being enough, was enough. ‘More’ was never tried.
This is important on two levels.
II
The first level is practical. If you think a problem could be solved or a situation improved by More Dakka, there’s a good chance you’re right.
Sometimes a little more is a little better. Sometimes a lot more is a lot better. Sometimes each attempt is unlikely to work, but improves your chances.
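The last point compounds quickly: if each attempt is independent with success probability p, the chance that at least one of n attempts succeeds is 1 - (1 - p)^n. A minimal sketch (the function name is mine, for illustration only):

```python
def chance_of_success(p: float, n: int) -> float:
    """Probability that at least one of n independent attempts,
    each with success probability p, succeeds."""
    return 1 - (1 - p) ** n

# Ten tries at a 10% shot give roughly a 65% chance of at least one success.
```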
If something is a good idea, you need a reason to not try doing more of it.
No, seriously. You need a reason.
The second level is, ‘do more of what is already working and see if it works more’ is as basic as it gets. If we can’t reliably try that, we can’t reliably try anything. How could you ever say ‘If that worked someone would have tried it’?
You can’t. If no one says they tried it, probably no one tried it. There might be good reasons not to try it. There also might not. There’d still be a good chance no one tried it.
There’s also a chance someone did try it and isn’t reporting the results anywhere you can find. That doesn’t mean it didn’t work, let alone that it can never work.
III
Why would this be an overlooked strategy?
It sounds crazy that it could be overlooked. It’s overlooked.
Eliezer gives three tools to recognize places systems fail, using highly useful economic arguments I recommend using frequently:
1. Cases where the decision lies in the hands of people who would gain little personally, or lose out personally, if they made the decision that would benefit others
2. Cases where decision-makers can’t reliably learn the information they need to make decisions, even though someone else has that information
3. Systems that are broken in multiple places so that no one actor can make them better, even though, in principle, some magically coordinated action could move to a new stable state.
In these cases, I do not think such explanations are enough.
If the Bank of Japan didn’t print more money, that implies the Bank of Japan wasn’t sufficiently incentivized to hit their inflation target. They must have been maximizing primarily for prestige instead. I can buy that, but why didn’t they think the best way to do that was to hit the inflation target? Alexander’s suggested payoff matrix, where printing more money makes failure much worse, isn’t good enough. It can’t be central on its own. The answer was too clear, the payoff worth the odds, and they had the information, as I detail later.
Eliezer gives the model of researchers looking for citations plus grant givers looking for prestige, as the explanation for why his SAD treatment wasn’t tested. I don’t buy it. Story doesn’t make sense.
If more light worked, you’d get a lot of citations, for not much cost or effort. If you’re writing a grant, this costs little money and could help many people. It’s less prestigious to up the dosage than be original, but it’s still a big prestige win.
If you say they want to associate with high status research folk, then they won’t care about the grant contents, so it reduces to a one-factor market, where again researchers should try this.
Alexander noticed the same confusion on that one.
In the drug dosage case, Eliezer’s tools do better. No doctor takes the risk of being sued if something goes wrong, and no company makes money by funding the study and it’s too expensive for a grant, and trying it on your own feels too risky. Maybe. It still does not feel like enough. The paths forward are too easy, too cheap, the payoff too large and obvious. Even one wealthy patient could break through, and it would be worth it. Yet even our patient, as far as we know, didn’t even try it and certainly didn’t report back.
The gratitude case doesn’t fit the three modes at all.
IV
Here is my model. I hope it illuminates when to try such things yourself.
Two key insights here are The Thing and the Symbolic Representation of The Thing, and Scott Alexander’s Concept-Shaped Holes Can Be Impossible To Notice. Both are worth reading, in that order.
I’ll summarize the relevant points.
The standard amount of something, by definition, counts as the symbolic representation of the thing. The Bank of Japan ‘printed money.’ The standard SAD treatment ‘exposes people to light.’ Our patients’ doctors prescribed ‘standard drug.’ Today, various people ‘left with plenty of time,’ ‘came up with a plan,’ ‘were part of a community,’ ‘ate pizza,’ ‘listened to the other person,’ ‘focused on their breath,’ ‘bought enough nipple tops for the baby’s bottles,’ ‘did their job’ and ‘added salt and pepper.’
They got results. A little. Better than nothing. But much less than was desired.
The Bank of Australia printed enough money. Eliezer Yudkowsky exposed his wife to enough light. Our patient was told to take enough of the drug to actually work. Meanwhile, other people actually left with plenty of time, actually came up with a workable plan, actually were part of a community, ate real pizza, actually listened to another person, actually focused on their breath, bought enough nipple tops for the baby’s bottles, actually did their job, and added copious amounts of sea salt and freshly ground pepper.
Some of these are about quality rather than quantity. You could also think of that as a bigger quantity of effort, or willingness to pay more money or devote more time. Still, it’s worth noting that an important variant of ‘use more,’ ‘do more’ or ‘do more often’ is ‘do it better.’
Being part of that second group is harder than it looks:
You need to realize the thing might exist at all.
You need to realize the symbolic representation of the thing isn’t the thing.
You need to ignore the idea that you’ve done your job.
You need to actually care about solving the problem.
You need to think about the problem a little.
You need to ignore the idea that no one could blame you for not trying.
You need to not care that what you’re about to do is unusual or weird or socially awkward.
You need to not care that what you’re about to do might be high status.
You need to not care that what you’re about to do might be low status.
You need to not care that what you’re about to do might not work.
You need to not be concerned that what you’re about to do might work.
You need to not care that what you’re about to do might backfire.
You need to not care that what you’re about to do is immodest.
You need to not instinctively assume that this will backfire because attempting it would be immodest, so the world will find some way to strike you down.
You need to not care about the implicit accusation you’re making against everyone who didn’t try it.
You need to not care that what you’re about to do might be wasteful. Or inappropriate. Or weird. Or unfair. Or morally wrong. Or something.
Why is this list getting so long? What is that answer of ‘don’t do it’ doing on the bottom of the page?
V
Long list is long. A lot of items are related. Some will be obvious, some won’t be. Let’s go through the list.
You need to realize the thing might exist at all.
One cannot do better unless one realizes it might be possible to do better. Scott gives several examples of situations in which he doubted the existence of the thing.
You need to realize the symbolic representation of the thing isn’t the thing.
Scott gives several examples where he thought he knew what the thing was, only to find out he had no idea; what he thought was the thing was actually a symbolic representation, a pale shadow. If you think having a few friends is what a community is, it won’t occur to you to seek out a real one.
You need to ignore the idea that you’ve done your job.
There was a box marked ‘thing’. You’ve checked that box off by getting the symbolic version of the thing. It’s easy to then think you’ve done the job and are somehow done. Even if you’re doing this for yourself or someone you care about, there’s this urge to get to ‘job done’, ‘quest complete’, and not think about the details. You need to realize you’re not doing the job so you can say you’ve done the job, or so you can tell yourself you’ve done the job. Even if you didn’t get what you wanted, your real job was to earn the right to a story you can tell yourself that you tried to get it, right?
You need to actually care about solving the problem.
You’re doing the job so the job gets done. That’s why doing the symbolic version doesn’t mean you’re done. Often people don’t care much about solving the problem. They care whether they’re responsible. They care whether socially appropriate steps have been taken.
You need to ignore the idea that no one could blame you for not trying.
Alexander notes how important this one is, and it’s really big.
People often care primarily about doing that which no one could blame them for. Being blamed or scapegoated is really bad. Even self-blame! We instinctively fear someone will discover and expose us, and make ourselves feel bad. We cover up the evidence and create justifications. Doing the normal means no one could blame you. If you don’t grasp that this is a thing, read as much of Atlas Shrugged as needed until you grasp that. It should only take a chapter or two, but this idea alone is worth a thousand page book in order to get, if that’s what it takes. I’m not kidding.
Blame does happen. The real incentive here is big. The incentive people think they have to do this, even when the chance of being blamed is minimal, is much, much bigger.
You need to think about the problem a little.
People don’t like thinking.
You need to not care that what you’re about to do is unusual or weird or socially awkward.
There’s a primal fear of doing anything unusual or weird. More would be unusual and weird. It might be slightly socially awkward. You’d never know until it actually was awkward. That would be just awful. Can’t have that. No one is watching or cares, but some day someone might find you out and then expose you as no good. We go around being normal, only guessing which slightly weird things would get us in trouble, or that we’d need to get someone else in trouble for! So we try to do none of them. That’s what happens when not operating on object-level causal models full of gears about what will work.
You need to not care that what you’re about to do might be high status.
Doing or trying to do something high status is to claim high status. Claiming status you’re not entitled to is a good way to get into a lot of trouble. Claiming to usefully think, or to know something, is automatically high status. Are you sure you have that right?
You need to not care that what you’re about to do might be low status.
Your status would go down. That’s even worse. If it’s high status you lose, if it’s low status you also lose, and you don’t even know which one it is since no one does it! Might even be both. Better to leave the whole thing alone.
You need to not care that what you’re about to do might not work.
Failing is just awful. Even things that are supposed to mostly fail. Even getting ludicrous odds. Only explicitly carved-out narrow exceptions are permitted, which shrink each year. Otherwise we must, must succeed, or nothing we do will ever work and everyone will know that. I founded a company once*. It didn’t work. Now everyone knows rationalists can’t found companies. Shouldn’t have tried.
* – Well, three times.
You need to not be concerned that what you’re about to do might work.
Even worse, it might work. Then what? No idea. Does not compute. You’d have to keep doing weird thing, or advocate for weird thing. How weird would that be? What about the people you’d prove wrong? What would you even say?
You need to not care that what you’re about to do might backfire.
It might not only not work, it might have real consequences. That’s a thing. Can’t think of why that might happen. Every brainstormed risk seems highly improbable and not that big a deal. But why take that risk?
You need to not care that what you’re about to do is immodest.
By modesty, anything you think of, that’s worth thinking, has been thought of. Anything worth trying has been tried, anything worth doing done. Ignore that there’s a first time for everything. Who are you to claim there’s something worth trying? Who are you to claim you know better than everyone else? Did you not notice all the other people? Are you really high status enough to claim you know better than all of them? Let’s see that hero license of yours, buster. Object-level claims are status claims!
You need to not instinctively assume that this will backfire because attempting it would be immodest, so the world will find some way to strike you down.
The world won’t let you get away with that. It will make this blow up in your face. And laugh. At you. People know this. They’ll instinctively join the conspiracy making it happen, coordinating seamlessly. Their alternative is thinking for themselves, or letting other people think for themselves rather than playing imitation games. Unthinkable. Let’s scapegoat someone and reinforce norms.
You need to not care about the implicit accusation you’re making against everyone who didn’t try it.
You’re not only calling them wrong. You’re saying the answer was in front of their face the whole time. They had an obvious solution and didn’t take it. You’re telling them they didn’t have a good reason for that. They’re gonna be pissed.
You need to not care that what you’re about to do might be wasteful. Or inappropriate. Or unfair. Or low status. Or lack prestige. Or be morally wrong. Or something. There’s gotta be something!
The answer is right there at the bottom of the page. This isn’t done, so don’t do it. Find a reason. If there isn’t a good one, go with what you got. Flail around as needed.
That’s what the Bank of Japan was actually afraid of. Nothing. A vague feeling they were supposed to be afraid of something, so they kept brainstorming until something sounded plausible.
Printing money might mean printing too much! The opposite is true. Not printing money now means having to print even more later, as the economy suffers.
Printing money would destroy their credibility! The opposite is true. Not printing money destroyed their credibility.
People don’t like it when we print too much money! The opposite is true. Everyone was yelling at them to print more money.
The markets don’t like it when we print too much money! The opposite is true. We have real time data. The Nikkei goes up on talk of printing money, down on talk of not printing money, and goes wild on actual unexpected money printing. It’s almost as if the market thinks printing money is awesome and has a rational expectations model. The bond market? The rising interest rates? Not a peep.
Printing money wouldn’t be prestigious! It would hurt bank independence! The opposite is true. Not printing money forced Prime Minister Shinzo Abe to threaten them into printing more money. They were seen as failures. Everyone respects the Reserve Bank of Australia because they did print more money.
This same vague fear, combined with trivial inconveniences, is what stops the other solutions, too.
Not only are these trivial fears that shouldn’t stop us, they’re not even things that would happen. When you try the thing, almost nothing bad of this sort ever happens at all.
At all. These are low risks of shockingly mild social disapproval. Ignore.
These worries aren’t real. They’re in your head.
They’re in my head, too. The voice of Pat Modesto is in your head. It is insidious. It says whatever it has to. It lies. It cheats. It is the opposite of useful.
If someone else has these concerns, the concerns are in their head, whispering in their ear. Don’t hold it against them. Help them.
Some such worries are real. They can point to real costs and benefits. Check! But they’re mostly trying to halt thinking about the object level, to keep you from being the nail that sticks up and gets hammered down. When someone else raises them, mostly they’re the hammer. The fears are mirages we’ve been trained and built to see.
You don’t have that problem, you say? Great! Other people do have that problem. Sympathize and try to help. Otherwise, keep doing what you’re doing, only more so. And congratulations.
VI
My practical suggestion is that if you do, buy, or use a thing, and it seems like that was a reasonable thing to do, you should ask yourself:
Can I do more of this? Can I do this better? Put in more effort, more time and/or more money? Might that do the job better? Could that be a good idea? Could that be worth it? How much more? How much better?
Make a quick object level model of what would happen. See what it looks like. Discount your chances a little if no one does it, but only a little. Maybe half, tops. Less if those who succeeded wouldn’t say anything. In some cases, the thing you’re about to try is actually done all the time, but no one talks about it. If you suspect that, definitely try it.
You’ll hear the voice. This isn’t done. There must be a reason. When you hear that, get excited. You might be on to something.
If you’re getting odds to try, try. Use the Try Harder, Luke! You can do this. Pull out More Dakka.
It’s also worth looking back on things you’ve done in the past and asking the same question.
I’ve linked several times to the Challenging the Difficult sequence, but none of this need be difficult. Often all that’s needed, but never comes, is an ordinary effort.
The bigger picture point is also important. These are the most obvious things. Those bad reasons stop actual everyone from trying things that cost little, on any level, with little risk, on any level, and that carry huge benefits. For other things, they stop almost everyone. When someone does try them and reports back that it worked, they’re ignored.
Something possibly being slightly socially awkward, or causing a likely nominal failure, acts as a veto. Rationalizations for this are created as needed.
Adding that to the economic model of inadequate equilibria, and the fact that almost no one got as far as considering this idea at all, is it any wonder that you can beat ‘consensus’ by thinking of and trying object-level things?
Why wouldn’t that work?