Why "Changing the World" is a Horrible Phrase
Steve Jobs famously convinced John Sculley of Pepsi to join Apple Computer with the line, “Do you want to sell sugared water for the rest of your life? Or do you want to come with me and change the world?” This sounds convincing until one thinks closely about it.
Steve Jobs was a famous salesman. He was known for his selling ability, not his honesty. His terminology here was interesting. ‘Change the world’ is a phrase that both sounds important and is difficult to argue with. Arguing about whether Apple was really ‘changing the world’ would have been pointless, because the phrase is so ambiguous that there would be little to discuss. On paper, of course Apple is changing the world, but then of course any organization or any individual is also ‘changing’ the world. A real discussion of whether Apple ‘changes the world’ would lead to a discussion of what ‘changing the world’ actually means, which would lead to obscure philosophy, steering the conversation away from the actual point.
‘Changing the world’ is an effective marketing tool that’s useful for building the feeling of consensus. Steve Jobs used it heavily, as have countless businesses, conferences, nonprofits, and TV shows. It’s used because it sounds good and is typically not questioned, so I’m here to question it. I believe that the popularization of this phrase creates confused goals and perverse incentives among people who believe they are doing good things.
Problem 1: ‘Changing the World’ Leads to Television Value over Real Value
It leads nonprofit workers to passionately chase feeble things. I’m amazed by the variety that I see in people who try to ‘change the world’. Some grow organic food, some research rocks, some play instruments. They do basically everything.
Few people protest this variety. There are millions of voices making the appeal to ‘change the world’ in ways that would validate many radically diverse pursuits.
TED, for many the modern symbol of the intellectual elite, is itself a grab bag of ways to ‘change the world’, without any sense of scale between pursuits. People tell comedic stories, sing songs, recount personal adventures, and so on. In TED Talks, all presentations are shown side by side with the same lighting and staging. Yet in real life some projects produce orders of magnitude more output than others.
At 80,000 Hours, I read many applications for career consulting. I got the sense that there are many people out there trying to live their lives in order to eventually produce a TED talk. To them, that is what ‘changing the world’ means. These are often very smart and motivated people with very high opportunity costs.
I would see an application expressing interest in starting an orphanage in Uganda, creating a women’s movement in Ohio, or building a conservatory in Costa Rica. It was clear that the applicants were trying to ‘change the world’ in a very vague and TED-oriented way.
I believe that ‘changing the world’ is promoted by TED, but internally acts mostly as a Schelling point. Agreeing on the importance of ‘changing the world’ is a good way of coming to a consensus without having to decide on moral philosophy. ‘Changing the world’ is simply the lowest common denominator of what that community can agree upon. This is a useful social tool, but an unfortunate side effect is that it inspires many others to pursue this Schelling point itself. Please don’t make the purpose of your life the lowest common denominator of a specific group of existing intellectuals.
It leads businesses to gain employees and media attention without having to commit to anything. I live in Silicon Valley, and ‘change the world’ is an incredibly common phrase for new and old startups. Silicon Valley (the TV show) made fun of it, as does much of the media. They should, but I think much of the time they miss the point; the problem here is not that the companies are dishonest, but that their honesty itself just doesn’t mean much. Declaring that a company is ‘changing the world’ isn’t really declaring anything.
Hiring conversations that begin and end with the motivation of ‘changing the world’ are like hiring conversations that begin and end with making ‘lots’ of money. If one couldn’t compare salaries between different companies, they would likely select poorly for salary. In terms of social benefit, most companies don’t attempt to quantify their costs and benefits on society except in very specific and positive ways for them. “Google has enabled Haiti disaster recovery” for social proof sounds to me like saying “We paid this other person $12,000 in July 2010” for salary proof. It sounds nice, but facts selected by a salesperson are simply not complete.
Problem 2: ‘Changing the World’ Creates Black and White Thinking
The idea that one wants to ‘change the world’ implies that there is such a thing as ‘changing the world’ and such a thing as ‘not changing the world’. It implies that there are ‘world changers’ and people who are not ‘world changers’. It implies that there is one group of ‘important people’ out there and then a lot of ‘useless’ others.
This directly supports the ‘Great Man’ theory, a 19th-century idea that history and future actions are led by a small number of ‘great men’. There’s not a lot of academic research supporting this theory, but there’s a lot of attention paid to it, and it’s a lot of fun to pretend it is true.
But it’s not. There is typically a lot of unglamorous work behind every successful project or organization. Behind every Steve Jobs are thousands of very intelligent and hard-working employees, and millions of smart people who have created a larger ecosystem. If one only pays attention to Steve Jobs, one leaves out most of the work, praises Steve Jobs far too highly, and disregards the importance of unglamorous labor.
Typically, much of the best work is also the most unglamorous: making WordPress websites, sorting facts into analysis, cold-calling donors. Many of the best ideas for organizations may be very simple and may have been done before. However, for someone looking to get to TED conferences or become a superstar, it is very easy to overlook comparatively menial labor. This means not only that it won’t get done, but that the people who do it feel worse about themselves.
So some people do important work and feel bad because it doesn’t meet the TED standard of ‘change the world’. Others try ridiculously ambitious things outside their own capabilities, fail, and then give up. Others don’t even try, because their perceived threshold is too high for them. The very idea of a threshold and a ‘change or don’t change the world’ approach is simply false, and believing something that’s both false and fundamentally important is really bad.
In all likelihood, you will not make the next billion-dollar nonprofit. You will not make the next billion-dollar business. You will not become the next congressperson in your district. This does not mean that you have not done a good job, and it should not demoralize you if you fail to do these things.
Finally, I would like to ponder what happens if one does decide they have changed the world. What now? Should one change it again?
It’s not obvious. Many retire or settle down after feeling accomplished. However, this is exactly when trying is the most important. People with the best histories have the best potential. No matter how much a U.S. President may achieve, they can still achieve significantly more after the end of their term. There is no ‘enough’ line for human accomplishment.
Conclusion
In summary, the phrase ‘change the world’ provides a lack of clear direction and encourages black-and-white thinking that distorts behavior and motivation. However, I do believe that the phrase can act as a stepping stone towards a more concrete goal. ‘Change the world’ can act as an idea that requires a philosophical continuation. It’s a start for a goal, but it should be recognized that it’s far from a good ending.
Next time someone tells you about ‘changing the world’, ask them to follow through by telling you the specifics of what they mean. Make sure they understand that they need to go further in order to mean anything.
And more importantly, do this for yourself. Choose a specific axiomatic philosophy or set of philosophies and aim towards those. Your ultimate goal in life is too important to be based on an empty marketing term.
Effective Writing
Granted, writing is not very effective. But some of us just love writing...
Earning to Give Writing: Which outlets pay $1 or more per word?
Mind-Changing Writing: What books need to be written that could actually help people effectively change the world?
Clarification Writing: What needs to be written because it is only through writing that these ideas will emerge in the first place?
Writing About Efficacy: Maybe nothing else needs to be written on this.
What should we be writing about if we have already spent a long time training in the craft? What has not yet been written; what is the new thing?
The world surely won't save itself through writing, but it surely won't write itself either.
Pascal's wager
I started this as a comment on "Being half wrong about Pascal's wager is even worse", but it's really long, so I'm posting it in Discussion instead.
Also, I illustrate here using negative examples (hell and equivalents) for the sake of followability, and I'm a little worried about inciting some paranoia, so I'm reminding you here that every negative example has an equal and opposite positive partner. For example, Pascal's wager has an opposite where accepting sends you to hell, and it also has an opposite where refusing sends you to heaven. I haven't mentioned any positive equivalents or opposites below. Also, all of these possibilities have effectively zero probability, so don't worry.
"For so long as I can remember, I have rejected Pascal's Wager in all its forms on sheerly practical grounds: anyone who tries to plan out their life by chasing a 1 in 10,000 chance of a huge pay-off is almost certainly doomed in practice. This kind of clever reasoning never pays off in real life..."
Pascal's wager shouldn't be in the reference class of real life. It is a unique situation that would never crop up in real life as you're using it. In the world in which Pascal's wager is correct, you would still see people who plan out their lives on a 1 in 10,000 chance of a huge pay-off fail 9,999 times out of 10,000. Also, this doesn't work for actually excluding Pascal's wager: if Pascal's wager starts off excluded from the category 'real life', you've already made up your mind, so this cannot quite be the actual order of events.
In this case, 9,999 times out of 10,000 you waste your Christianity, and 1 time in 10,000 you avoid going to hell for eternity. Hell is, at a vast understatement, much worse than 10,000 times as bad as worshipping God, even accounting for the sanity it costs to force a change in belief, the damage it does to your psyche to live as a victim of self-inflicted Stockholm syndrome, and any other non-obvious cost. With these premises, choosing to believe in God produces infinitely better consequences on average.
Luckily, the premises are wrong. 1/10,000 is about 1/10,000 too high for the relevant probability, which is:
the probability that the wager or an equivalent (anything whose acceptance would prevent you from going to hell is equivalent) is true
MINUS
the probability that its opposite or an equivalent (anything which would send you to hell for accepting it is equivalent) is true
1/10,000 is also way too high even if you're not accounting for opposite possibilities.
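The net-probability and expected-value arithmetic above can be sketched in a few lines of Python. This is purely illustrative: all the numbers are made up, and hell is modeled as a large finite cost rather than an infinite one.

```python
# A toy sketch of the argument above. All numbers are illustrative;
# 'hell' is modeled as a large finite cost for the sake of arithmetic.

def net_wager_probability(p_wager_true, p_opposite_true):
    """The relevant probability: P(the wager or an equivalent is true)
    minus P(its opposite or an equivalent is true)."""
    return p_wager_true - p_opposite_true

def expected_value_of_accepting(p_net, cost_of_worship, cost_of_hell):
    """Expected value of accepting the wager relative to refusing it,
    given the net probability that acceptance actually saves you."""
    # Accepting always pays the worship cost; it avoids hell only with
    # probability p_net.
    return p_net * cost_of_hell - cost_of_worship

# If the wager and its opposite are judged equally likely, the net
# probability is zero and accepting is pure loss:
p_net = net_wager_probability(1e-9, 1e-9)
ev = expected_value_of_accepting(p_net, cost_of_worship=1.0, cost_of_hell=1e12)
print(p_net, ev)  # 0.0 -1.0
```

The point the sketch makes concrete is that the wager's force depends entirely on `p_net` being positive; if opposite wagers are equally likely, the huge `cost_of_hell` term cancels out and only the cost of worship remains.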
Equivalence here refers to what behaviours a wager punishes or rewards. I used hell because it appears in the most popular wager, but this applies to all wagers. To illustrate: if it's true that there is one god, ANTIPASCAL GOD, who sends you to hell for accepting any Pascal's wager, then that's equivalent to any Pascal's wager you hear having an opposite (no more "or equivalent"s will be typed, but they still apply) which is true, because if you accept any Pascal's wager you go to hell. Conversely, if PASCAL GOD is the only god and he sends you to hell unless you accept any Pascal's wager, that's equivalent to any Pascal's wager you hear being true.
The real trick of Pascal's wagers is that they're generally no more likely than their opposites. For example, there are lots of good, fun reasons to assign the Christian Pascal's wager a lower probability than its opposite, even engaging on a Christian level:
Hell is a medieval invention/translation error: the eternal-torture thing isn't even in modern Bibles.
The belief-or-hell rule is hella evil, and it gains credibility from the same source (Christians, not the Bible) who also claim, as a more fundamental belief, that God is good, which directly contradicts the belief-or-hell rule.
The Bible claims that God hates people eating shellfish, taking his name in vain, and jealousy. Apparently taking his name in vain is the only unforgivable sin. So if they're right about the evil stuff, you're probably going to hell anyway.
It makes no sense that god would care enough about your belief and worship to consign people to eternal torture but not enough to show up once in a while.
It makes no sense to reward people for dishonesty.
The evilness really can't be overstated. Eternal torture as a response to a mistake which is at its worst due to stupidity (but actually not even that: just a stacked-deck scenario) outdoes pretty much everyone in terms of evilness. It is worse than pretty much every fucked-up thing every other god is reputed to have done, put together. The psychopath in the Bible doesn't come close to coming close.
The problem with the general case of religious Pascal's wagers is that people make stuff up (usually unintentionally), and which made-up stuff gains traction has nothing to do with what is true. When both Christianity and Hinduism are taken seriously by millions (as were the Roman/Greek gods, the Viking gods, the Aztec gods, and all sorts of other gods at different times, by large percentages of people), mass religious belief is zero evidence. At most one religion set (e.g. Greek/Roman, Christian/Muslim/Jewish, etc.) is even close to right, so at least the rest are popular independently of truth.
The existence of a religion does not elevate the possibility that the god they describe exists above the possibility that the opposite exists because there is no evidence that religion has any accuracy in determining the features of a god, should one exist.
You might intuitively lean towards religions having better-than-zero accuracy if a god exists, but remember there's a lot of fictional evidence out there to generalize from. It is a matter of judgement here: there's no logical proof of zero or worse accuracy (other than it being the default, and the lack of evidence), but negative accuracy is a possibility, and you've probably played priest classes in video games, or simply seen how respected religions are, and been primed to overestimate religion's accuracy in that hypothetical. Also, if there is a god, it has not shown itself publicly in a very long time, or ever, so it seems to have a preference for not being revealed. Also, humans tend to be somewhat evil and read into others what they see in themselves, and I assume any high-tier god (one with the power to create and maintain a hell, detect disbelief, preserve immortal souls, and put people in hell) would not be evil. Being evil or totally unscrupulous has benefits among humans which a god would not get. I think that without bad peers or parents there's no reason to be evil; people are mostly evil in relation to other people. So I give religions a slight positive accuracy in the scenario where there is a god, but it does not exceed the priors against Pascal's wager (another one is that the wagers are pettily human), or perhaps even the god's desire to stay hidden.
Even if God itself whispered Pascal's wager in your ear, there would be no incentive for it to actually carry out the threat:
There is only one iteration.
These threats aren't being made in person by the deity. They are either second-hand or independently discovered, so:
The deity has no use for making the threat true in order to claim it more believably, as it might if it were an imperfect liar (at a level detectable by humans) that made the threats in person.
The deity has total plausible deniability.
Which adds up to this: all of the benefits of the threat have already been extracted by the time the punishment is due, and there is no possibility of a reputation hit (which wouldn't matter anyway).
So, all else being equal, i.e. unless the god is the god of threats or of Pascal's wagers (whose opposites are equally likely):
If God is good (+EV on human happiness, -EV on human sadness, that sort of thing), actually carrying out the threats has negative value.
If God is scarily-doesn't-give-a-shit-neutral to humans, it still has no incentive to actually carry out the threat, and a non-zero energy cost.
If God gives the tiniest, most infinitesimal shit about humans, its incentive to actually carry out the threat is negative.
If God is evil you're fucked anyway:
The threat gains no power by being true, so the only incentive a God can have for following through is that it values human suffering. If it does, why would it not send you to hell if you believed in it? (remember that the god of commitments is as likely as the god of breaking commitments)
Despite the increased complexity of a human mind, I think the most likely motivational system for a god which would make it honour the wager (not saying it's at all likely, just that all the others are obviously wrong) is that the god thinks like a human and therefore would keep its commitment out of spite or gratitude or some other human reason. So here's why I think that one is wrong. It's generalizing from fictional evidence: humans aren't that homogeneous (and one without peers would be less so), and if a god gains likelihood of keeping a commitment from humanness, it also gains not-designed-to-be-evil-ness that would make it less likely to make evil wagers. It also has no source for spite or gratitude, having no peers. Finally, could you ever feel spite towards a bug? Or gratitude? We are not just ants compared to a god; we're ant-ant-ant-etc-ants.
There are also reasons that refusing can actually get you in trouble: bullies don't get nicer when their demands are met. It's often not the suffering they're after but the dominance, at which point the suffering becomes an enjoyable illustration of that dominance. As we are ant-ant-etc-ants, this probability is lower, but the fact that we aren't all already in hell suggests that if God is evil, it is not raw suffering that it values. Hostages are often executed even when the ransom is paid. Even if it is evil, it could be any kind of evil: its preferences cannot have been homogenized by memes and consensus.
There's also the rather cool possibility that if a human-like god is sending people to hell, maybe it's for lack of understanding. If it wants belief, it can take it more effectively than this. If it wants to hurt you, it will hurt you anyway. Perhaps, peerless, it was never prompted to think through the consequences of making others suffer. Maybe God, in the absence of peers, just needs someone to explain that it's not nice to let people burn in hell for eternity. I for one remember suddenly realising that those other fleshbags hosted people. I figured it out for myself, but if I had grown up alone as the master of the universe, maybe I would have needed someone to explain it to me.
Let's create a market for cryonics
My uncle works in insurance. I recently mentioned that I'm planning to sign up for cryonics.
"That's amazing," he said. "Convincing a young person to buy life insurance? That has to be the greatest scam ever."
I took the comment lightly, not caring to argue about it. But it got me thinking - couldn't cryonics be a great opportunity for insurance companies to make a bunch of money?
Consider:
- Were there a much stronger demand for cryonics, cryonics organizations would flourish through competition, outside investment, and internal reinvestment. Costs would likely fall, and this would be good for cryonicists in general.
- If cryonics organizations flourish, this increases the probability of cryonics working. I can think of a bunch of ways in which this could happen; perhaps, for example, it would encourage the creation of safety nets whereby the failure of individual companies doesn't result in anyone getting thawed. It would increase R&D on both perfusion and revivification, encourage entrepreneurs to explore new related business models, etcetera.
- Increasing the demand for cryonics increases the demand for life insurance policies; thus insurance companies have a strong incentive to increase the demand for cryonics. Many large insurance companies would like nothing more than to usher in a generation of young people that want to buy life insurance.1
- The demand for cryonics could be increased by an insightful marketing campaign by an excellent marketing agency with an enormous budget... like those used by big insurance companies.2 A quick Googling says that ad spending by insurance companies exceeded $4.15 billion in 2009.
Almost a year ago, Strange7 suggested that cryonics organizations could run this kind of marketing campaign. I think he's wrong - there's no way CI or Alcor have the money. But the biggest insurance companies do have the money, and I'd be shocked if these companies or their agencies aren't already dumping all kinds of money into market research.
What would doing this require?
- That an open-minded person in the insurance industry who is in the position to direct this kind of funding exists. I don't have a sense of how likely this is.
- That we can locate/get an audience with the person from step 1. I think research and networking could get this done, especially if the higher-status among us are interested.
- That we can find someone who is capable and willing to explain this clearly and convincingly to the person from step 1. I'm not sure it would be that difficult. In the startup world, strangers convince strangers to speculatively spend millions of dollars every week. Hell, I'll do it.
I want to live in a world where cryonics ads air on TV just as often as ads for everything else people spend money on. I really can see an insurance company owning this project - if they can a) successfully revamp the image of cryonics and b) become known as the household name for it when the market gets big, they will make lots of money.
What do you think? Where has my reasoning failed? Does anyone here know anyone powerful in insurance?
Lastly, taking a cue from ciphergoth: this is not the place to rehash all the old arguments about cryonics. I'm asking about a very specific idea about marketing and life insurance, not requesting commentary on cryonics itself. Thanks!
1 Perhaps modeling the potential size of the market would offer insight here. If it turns out that this idea is not insane, I'll find a way to make it happen. I could use your help.
2 Consider what happened with diamonds in the 1900s:
... N. W. Ayer suggested that through a well-orchestrated advertising and public-relations campaign it could have a significant impact on the "social attitudes of the public at large" and thereby channel American spending toward larger and more expensive diamonds instead of "competitive luxuries." Specifically, the Ayer study stressed the need to strengthen the association in the public's mind of diamonds with romance. Since "young men buy over 90% of all engagement rings" it would be crucial to inculcate in them the idea that diamonds were a gift of love: the larger and finer the diamond, the greater the expression of love. Similarly, young women had to be encouraged to view diamonds as an integral part of any romantic courtship.