Perhaps a better form factor for Meetups vs Main board posts?
I like to read posts on "Main" from time to time, including ones that haven't been promoted. However, lately, these posts get drowned out by all the meetup announcements.
It seems like this could lead to a cycle where people comment less on recent non-promoted posts (because they fall off the Main non-promoted area quickly), which leads to less engagement, fewer posts, and so on.
Meetups are also very important, but here's the rub: I don't think a text-based announcement in the Main area is the best possible way to showcase meetups.
So here's an idea: how about creating either a calendar of upcoming meetups, or a map with pins marking every place holding a meetup in the next three months?
This could be embedded on the front page of lesswrong.com -- that'd let people find meetups more easily (they could look by timeframe or check whether their region is represented), and would give more space to new non-promoted posts, which would hopefully encourage more discussion, engagement, and new posts.
Thoughts?
Communicating concepts in value learning
Epistemic status: Trying to air out some thoughts for feedback, we'll see how successfully. May require some machine learning background to make sense, and may require my level of ignorance to seem interesting.
Many current proposals for value learning are garden-variety regression (or its close cousin, classification). The agent doing the learning starts out with some model for what human values look like (a utility function over states of the world, or a reward function in a Markov decision process, or an expected utility function over possible actions), and receives training data that tells it the right thing to do in a lot of different situations. And so the agent finds the parameters of the model that minimize some loss function with the data, and Learns Human Values.
All these models of "the right thing to do" I mentioned are called parametric models, because they have some finite template that they update based on the data. Non-parametric models, on the other hand, have to keep a record of the data they've seen - prediction with a non-parametric model often looks like taking some weighted average of nearby known examples (though not always), while a parametric model would (often) fit some curve to the data and predict using that. But we'll get back to this later.
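To make the parametric/non-parametric distinction concrete, here is a minimal sketch (a toy illustration using scikit-learn, not taken from any actual value-learning proposal): the parametric model fits a small fixed set of parameters and could then discard the data, while the non-parametric model keeps the examples around and predicts from the nearby ones.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Toy "training data": situations (1-D features) and the right thing to do (a score).
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)

# Parametric: a fixed template (here, a line) whose few parameters are fit to the data;
# once fitted, the training examples could be thrown away.
parametric = LinearRegression().fit(X, y)

# Non-parametric: prediction keeps the training examples and averages the nearest ones.
nonparametric = KNeighborsRegressor(n_neighbors=5).fit(X, y)

x_new = np.array([[0.5]])
print("parametric prediction:    ", parametric.predict(x_new))
print("non-parametric prediction:", nonparametric.predict(x_new))
```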
An obvious problem with current proposals is that it's very resource-intensive to communicate a category or concept to the agent. An AI might be able to automatically learn a lot about the world, but if we want to define its preferences, we have to somehow pick out the concept of "good stuff" within the representation of the world learned by the AI. Current proposals for this look like supervised learning, where huge amounts of labeled data are needed to specify "good stuff," and for many proposals I'm concerned that we'll actually end up specifying "stuff that humans can be convinced is good," which is not at all the same. Humans are much better learners than these supervised learning systems - they learn from fewer examples, and have a better grasp of the meaning and structure behind examples. This hints that there are some big improvements to be made in value learning.
This comparison to humans also leads to my vaguer concerns. It seems like the labeled examples are too crucial, and the unlabeled data not crucial enough. We want a value learner to understand concepts based on just a few examples so long as it has unlabeled data to fill in the gaps, and be able to learn more about morality from observation as a core competency, not as a pale shadow of its learning from labeled data. It seems like fine-tuning the model for the labeled data with stochastic gradient descent is missing something important.
To digress slightly, there are additional problems (e.g. corrigibility) once you build an agent that has an output channel instead of merely sponging up information, and these problems are harder if we want value learning from observation. If we want a value learning agent that could learn a simplified version of human morality, and then use that to learn the full version, we might need something like the Bayesian guarantee of Dewey 2011, or a functional analogue thereof.
One inspiration for alternative learning schemes might be clustering. As a toy example, imagine finding literal clusters in thing-space by k-means clustering. If you want to specify a cluster, you can do something like pick a small sample of examples and force them to be in the same cluster, and allow the number of clusters you try to find in the data to vary so that the statistics of the mandatory cluster are not very different from any other's. The huge problem here is that the idea of "thing-space" elides the difficulty of learning a representation of the world (or equivalently, elides how really, really complicated the cluster boundaries are in terms of observations).
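As a very rough sketch of that toy example (my own illustration, not a worked-out proposal; the "force them into the same cluster" constraint is implemented crudely here by seeding one centroid with the mean of the labeled examples, which is weaker than a true must-link constraint):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Toy "thing-space": points already living in a nice feature space,
# which is exactly the hard part being glossed over.
X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

# A handful of examples we insist belong to the same concept/cluster.
labeled = X[:5]

def cluster_with_seed(X, labeled, k):
    # Seed one centroid at the mean of the mandatory examples; start the rest at random points.
    rng = np.random.default_rng(0)
    others = X[rng.choice(len(X), size=k - 1, replace=False)]
    init = np.vstack([labeled.mean(axis=0), others])
    return KMeans(n_clusters=k, init=init, n_init=1).fit(X)

# Vary k and check whether the seeded cluster's statistics look unremarkable next to the others.
for k in range(2, 7):
    km = cluster_with_seed(X, labeled, k)
    sizes = np.bincount(km.labels_, minlength=k)
    print(f"k={k}: seeded cluster size={sizes[0]}, all sizes={sizes.tolist()}")
```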
Because learning how to understand the world already requires you to be really good at learning things, it's not obvious to me what identifying and using clusters in the data will entail. One might imagine that if we modeled the world using a big pile of autoencoders, this pile would already contain predictors for many concepts we might want to specify, but that if we use examples to try and communicate a concept that was not already learned, the pile might not even contain the features that make our concept easy to specify. Further speculation in this vein is fun, but is likely pointless at my current level of understanding. So even though learning well from unlabeled data is an important desideratum, I'm including this digression on clustering because I think it's interesting, not because I've shed much light.
Okay, returning to the parametric/non-parametric thing. The problem of being bad at learning from unlabeled data shows up in diverse proposals like inverse reinforcement learning and Hibbard 2012's two-part example. And in these cases it's not due to the learning algorithm per se, but for the simple reason that at some point the representation of the world is treated as fixed - the value learner is assumed to understand the world, and then proceeds to learn or be told human values in terms of that understanding. If you can no longer update your understanding of the world, naturally this causes problems with learning from observation.
We should instead design agents that are able to keep learning about the world. And this brings us back to the idea of communicating concepts via examples. The most reasonable way to update learned concepts in light of new information seems to be to just store the examples and re-apply them to the new understanding. This would be a non-parametric model of learned concepts.
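Here is a minimal sketch of what that might look like (entirely my own toy framing: `encode` stands in for whatever world-model the agent currently has, and a "concept" is just a nearest-neighbour rule over stored raw examples, re-fit whenever the world-model changes):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

class StoredConcept:
    """Keep the raw labeled examples; re-interpret them whenever the world-model updates."""

    def __init__(self, raw_examples, labels):
        self.raw_examples = np.asarray(raw_examples)
        self.labels = np.asarray(labels)
        self.classifier = None

    def rebuild(self, encode):
        # 'encode' is the agent's *current* representation of the world.
        features = encode(self.raw_examples)
        self.classifier = KNeighborsClassifier(n_neighbors=3).fit(features, self.labels)

    def judge(self, raw_inputs, encode):
        return self.classifier.predict(encode(np.asarray(raw_inputs)))

# Toy usage: the representation changes, but the stored examples carry over.
raw = np.random.default_rng(0).normal(size=(20, 4))
labels = (raw[:, 0] > 0).astype(int)
concept = StoredConcept(raw, labels)

encode_v1 = lambda x: x                      # initial world-model: raw features
encode_v2 = lambda x: x @ np.eye(4)[:, :2]   # later world-model: a different (2-D) representation

concept.rebuild(encode_v1)
print(concept.judge(raw[:3], encode_v1))
concept.rebuild(encode_v2)                   # re-apply the same examples to the new understanding
print(concept.judge(raw[:3], encode_v2))
```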
What concepts to learn and how to use them to make decisions is not at all known to me, but as a placeholder we might consider the task of learning to identify "good actions," given proposed actions and some input about the world (similar to the "Learning from examples" section of Christiano's Approval Directed Agents).
Polling Thread - Tutorial
After some hiatus, here is another installment of the Polling Thread.
This is your chance to ask that multiple-choice question you always wanted to throw in. Get numeric feedback on your comments. Post fun polls.
Additionally, this is your chance to learn to write polls. This installment is devoted to trying out polls for the cautious and curious.
These are the rules:
- Each poll goes into its own top level comment and may be commented there.
- You must at least vote in all polls that were posted earlier than your own. This ensures participation in all polls and also limits the total number of polls. You may of course vote without posting a poll.
- Your poll should include a 'don't know' option (to avoid conflict with rule 2). I don't know whether we need to add a troll-catch option here, but we will see.
If you don't know how to make a poll in a comment look at the Poll Markup Help.
This is a somewhat regular thread. If it is successful, I may post again. Or you may. In that case, do the following:
- Use "Polling Thread" in the title.
- Copy the rules.
- Add the tag "poll".
- Link to this Thread or a previous Thread.
- Create a top-level comment saying 'Discussion of this thread goes here; all other top-level comments should be polls or similar'
- Add a second top-level comment with an initial poll to start participation.
Concept Safety: Producing similar AI-human concept spaces
I'm currently reading through some relevant literature for preparing my FLI grant proposal on the topic of concept learning and AI safety. I figured that I might as well write down the research ideas I get while doing so, so as to get some feedback and clarify my thoughts. I will be posting these in a series of "Concept Safety"-titled articles.
A frequently-raised worry about AI is that it may reason in ways which are very different from us, and understand the world in a very alien manner. For example, Armstrong, Sandberg & Bostrom (2012) consider the possibility of restricting an AI via "rule-based motivational control" and programming it to follow restrictions like "stay within this lead box here", but they raise worries about the difficulty of rigorously defining "this lead box here". To address this, they go on to consider the possibility of making an AI internalize human concepts via feedback, with the AI being told whether or not some behavior is good or bad and then constructing a corresponding world-model based on that. The authors are however worried that this may fail, because
Humans seem quite adept at constructing the correct generalisations – most of us have correctly deduced what we should/should not be doing in general situations (whether or not we follow those rules). But humans share a common of genetic design, which the OAI would likely not have. Sharing, for instance, derives partially from genetic predisposition to reciprocal altruism: the OAI may not integrate the same concept as a human child would. Though reinforcement learning has a good track record, it is neither a panacea nor a guarantee that the OAIs generalisations agree with ours.
Addressing this, a possibility that I raised in Sotala (2015) was that possibly the concept-learning mechanisms in the human brain are actually relatively simple, and that we could replicate the human concept learning process by replicating those rules. I'll start this post by discussing a closely related hypothesis: that given a specific learning or reasoning task and a certain kind of data, there is an optimal way to organize the data that will naturally emerge. If this were the case, then AI and human reasoning might naturally tend to learn the same kinds of concepts, even if they were using very different mechanisms. Later on in the post, I will discuss how one might try to verify that similar representations had in fact been learned, and how to set up a system to make them even more similar.
Word embedding
A particularly fascinating branch of recent research relates to the learning of word embeddings, which are mappings of words to very high-dimensional vectors. It turns out that if you train a system on one of several kinds of tasks, such as being able to classify sentences as valid or invalid, this builds up a space of word vectors that reflects the relationships between the words. For example, there seems to be a male/female dimension to words, so that there's a "female vector" that we can add to the word "man" to get "woman" - or, equivalently, which we can subtract from "woman" to get "man". And it so happens (Mikolov, Yih & Zweig 2013) that we can also get from the word "king" to the word "queen" by adding the same vector to "king". In general, we can (roughly) get to the male/female version of any word vector by adding or subtracting this one difference vector!
Why would this happen? Well, a learner that needs to classify sentences as valid or invalid needs to classify the sentence "the king sat on his throne" as valid while classifying the sentence "the king sat on her throne" as invalid. So including a gender dimension on the built-up representation makes sense.
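If you want to play with this yourself, the arithmetic is easy to reproduce with off-the-shelf tools. Here is a minimal sketch using gensim's downloader (the specific pretrained model name is just one that gensim happens to ship; any word2vec/GloVe-style embedding behaves similarly):

```python
import gensim.downloader as api

# Download a small pretrained embedding; any word2vec/GloVe model works similarly.
vectors = api.load("glove-wiki-gigaword-100")

# king - man + woman ≈ queen
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# The same "difference vector" trick for another relationship (capital-of):
print(vectors.most_similar(positive=["paris", "italy"], negative=["france"], topn=3))
```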
But gender isn't the only kind of relationship that gets reflected in the geometry of the word space. Here are a few more:
[Image omitted: further examples of word-pair relationships represented as difference vectors.]
It turns out (Mikolov et al. 2013) that with the right kind of training mechanism, a lot of relationships that we're intuitively aware of become automatically learned and represented in the concept geometry. And as Olah (2014) comments:
It’s important to appreciate that all of these properties of W are side effects. We didn’t try to have similar words be close together. We didn’t try to have analogies encoded with difference vectors. All we tried to do was perform a simple task, like predicting whether a sentence was valid. These properties more or less popped out of the optimization process.
This seems to be a great strength of neural networks: they learn better ways to represent data, automatically. Representing data well, in turn, seems to be essential to success at many machine learning problems. Word embeddings are just a particularly striking example of learning a representation.
It gets even more interesting, for we can use these for translation. Since Olah has already written an excellent exposition of this, I'll just quote him:
We can learn to embed words from two different languages in a single, shared space. In this case, we learn to embed English and Mandarin Chinese words in the same space.
We train two word embeddings, Wen and Wzh in a manner similar to how we did above. However, we know that certain English words and Chinese words have similar meanings. So, we optimize for an additional property: words that we know are close translations should be close together.
Of course, we observe that the words we knew had similar meanings end up close together. Since we optimized for that, it’s not surprising. More interesting is that words we didn’t know were translations end up close together.
In light of our previous experiences with word embeddings, this may not seem too surprising. Word embeddings pull similar words together, so if an English and Chinese word we know to mean similar things are near each other, their synonyms will also end up near each other. We also know that things like gender differences tend to end up being represented with a constant difference vector. It seems like forcing enough points to line up should force these difference vectors to be the same in both the English and Chinese embeddings. A result of this would be that if we know that two male versions of words translate to each other, we should also get the female words to translate to each other.
Intuitively, it feels a bit like the two languages have a similar ‘shape’ and that by forcing them to line up at different points, they overlap and other points get pulled into the right positions.
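The "forcing points to line up" step can be made concrete. One simple way to do it (my illustration, not necessarily the method used in the work Olah describes) is to learn an orthogonal map between the two embedding spaces from a small dictionary of known translation pairs, e.g. with scipy's orthogonal Procrustes solver:

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)

# Stand-ins for the two trained embeddings: rows are words, columns are dimensions.
W_en = rng.normal(size=(1000, 50))                          # "English" word vectors
R_true, _ = np.linalg.qr(rng.normal(size=(50, 50)))          # a hidden rotation
W_zh = W_en @ R_true + 0.01 * rng.normal(size=(1000, 50))    # "Chinese" vectors: same shape, rotated

# A small bilingual dictionary: indices of words we know are translations of each other.
dict_en, dict_zh = np.arange(100), np.arange(100)

# Solve for the rotation that best lines up the known pairs...
R, _ = orthogonal_procrustes(W_en[dict_en], W_zh[dict_zh])

# ...and check that a word *outside* the dictionary now lands near its translation.
mapped = W_en[500] @ R
nearest = np.argmin(np.linalg.norm(W_zh - mapped, axis=1))
print("nearest 'Chinese' vector to mapped 'English' word 500:", nearest)  # expect 500
```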
After this, it gets even more interesting. Suppose you had this space of word vectors, and then you also had a system which translated images into vectors in the same space. If you have images of dogs, you put them near the word vector for "dog". If you have images of Clippy you put them near the word vector for "paperclip". And so on.
You do that, and then you take some class of images the image-classifier was never trained on, like images of cats. You ask it to place the cat-image somewhere in the vector space. Where does it end up?
You guessed it: in the rough region of the "cat" words. Olah once more:
This was done by members of the Stanford group with only 8 known classes (and 2 unknown classes). The results are already quite impressive. But with so few known classes, there are very few points to interpolate the relationship between images and semantic space off of.
The Google group did a much larger version – instead of 8 categories, they used 1,000 – around the same time (Frome et al. (2013)) and has followed up with a new variation (Norouzi et al. (2014)). Both are based on a very powerful image classification model (from Krizhevsky et al. (2012)), but embed images into the word embedding space in different ways.
The results are impressive. While they may not get images of unknown classes to the precise vector representing that class, they are able to get to the right neighborhood. So, if you ask it to classify images of unknown classes and the classes are fairly different, it can distinguish between the different classes.
Even though I’ve never seen an Aesculapian snake or an Armadillo before, if you show me a picture of one and a picture of the other, I can tell you which is which because I have a general idea of what sort of animal is associated with each word. These networks can accomplish the same thing.
These algorithms made no attempt at being biologically realistic in any way. They didn't try classifying data the way the brain does it: they just tried classifying data using whatever worked. And it turned out that this was enough to start constructing a multimodal representation space where a lot of the relationships between entities were similar to the way humans understand the world.
How useful is this?
"Well, that's cool", you might now say. "But those word spaces were constructed from human linguistic data, for the purpose of predicting human sentences. Of course they're going to classify the world in the same way as humans do: they're basically learning the human representation of the world. That doesn't mean that an autonomously learning AI, with its own learning faculties and systems, is necessarily going to learn a similar internal representation, or to have similar concepts."
This is a fair criticism. But it is mildly suggestive of the possibility that an AI that was trained to understand the world via feedback from human operators would end up building a similar conceptual space. At least assuming that we chose the right learning algorithms.
When we train a language model to classify sentences by labeling some of them as valid and others as invalid, there's a hidden structure implicit in our answers: the structure of how we understand the world, and of how we think of the meaning of words. The language model extracts that hidden structure and begins to classify previously unseen things in terms of those implicit reasoning patterns. Similarly, if we gave an AI feedback about what kinds of actions counted as "leaving the box" and which ones didn't, there would be a certain way of viewing and conceptualizing the world implied by that feedback, one which the AI could learn.
Comparing representations
"Hmm, maaaaaaaaybe", is your skeptical answer. "But how would you ever know? Like, you can test the AI in your training situation, but how do you know that it's actually acquired a similar-enough representation and not something wildly off? And it's one thing to look at those vector spaces and claim that there are human-like relationships among the different items, but that's still a little hand-wavy. We don't actually know that the human brain does anything remotely similar to represent concepts."
Here we turn, for a moment, to neuroscience.
Multivariate Cross-Classification (MVCC) is a clever neuroscience methodology used for figuring out whether different neural representations of the same thing have something in common. For example, we may be interested in whether the visual and tactile representation of a banana have something in common.
We can test this by having several test subjects look at pictures of objects such as apples and bananas while sitting in a brain scanner. We then feed the scans of their brains into a machine learning classifier and teach it to distinguish between the neural activity of looking at an apple, versus the neural activity of looking at a banana. Next we have our test subjects (still sitting in the brain scanners) touch some bananas and apples, and ask our machine learning classifier to guess whether the resulting neural activity is the result of touching a banana or an apple. If the classifier - which has not been trained on the "touch" representations, only on the "sight" representations - manages to achieve a better-than-chance performance on this latter task, then we can conclude that the neural representation for e.g. "the sight of a banana" has something in common with the neural representation for "the touch of a banana".
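In machine-learning terms this is just "train on one condition, test on the other". Here is a minimal sketch with synthetic data (the arrays below merely stand in for voxel patterns; nothing here comes from the actual studies):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200

# Fake voxel patterns: a shared "apple vs banana" signal plus modality-specific noise.
signal = rng.normal(size=n_voxels)
labels_sight = rng.integers(0, 2, n_trials)     # 0 = apple, 1 = banana
labels_touch = rng.integers(0, 2, n_trials)
sight = np.outer(labels_sight * 2 - 1, signal) + rng.normal(size=(n_trials, n_voxels))
touch = np.outer(labels_touch * 2 - 1, signal) + rng.normal(size=(n_trials, n_voxels))

# Train on the "sight" trials only...
clf = LogisticRegression(max_iter=1000).fit(sight, labels_sight)

# ...then test on the "touch" trials it has never seen.
accuracy = clf.score(touch, labels_touch)
print(f"cross-modal accuracy: {accuracy:.2f}")  # well above 0.5 implies shared structure
```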
A particularly fascinating experiment of this type is that of Shinkareva et al. (2011), who showed their test subjects both the written words for different tools and dwellings, and, separately, line-drawing images of the same tools and dwellings. A machine-learning classifier was both trained on image-evoked activity and made to predict word-evoked activity and vice versa, and achieved a high accuracy on category classification for both tasks. Even more interestingly, the representations seemed to be similar between subjects. Training the classifier on the word representations of all but one participant, and then having it classify the image representation of the left-out participant, also achieved a reliable (p<0.05) category classification for 8 out of 12 participants. This suggests a relatively similar concept space between humans of a similar background.
We can now hypothesize some ways of testing the similarity of the AI's concept space with that of humans. Possibly the most interesting one might be to develop a translation between a human's and an AI's internal representations of concepts. Take a human's neural activation when they're thinking of some concept, and then take the AI's internal activation when it is thinking of the same concept, and plot them in a shared space similar to the English-Mandarin translation. To what extent do the two concept geometries have similar shapes, allowing one to take a human's neural activation of the word "cat" to find the AI's internal representation of the word "cat"? To the extent that this is possible, one could probably establish that the two share highly similar concept systems.
One could also try to more explicitly optimize for such a similarity. For instance, one could train the AI to make predictions of different concepts, with the additional constraint that its internal representation must be such that a machine-learning classifier trained on a human's neural representations will correctly identify concept-clusters within the AI. This might force internal similarities on the representation beyond the ones that would already be formed from similarities in the data.
Next post in series: The problem of alien concepts.
Wear a Helmet While Driving a Car
A 2006 study showed that “280,000 people in the U.S. receive a motor vehicle induced traumatic brain injury every year” so you would think that wearing a helmet while driving would be commonplace. Race car drivers wear helmets. But since almost no one wears a helmet while driving a regular car, you probably fear that if you wore one you would look silly, attract the notice of the police for driving while weird, or the attention of another driver who took your safety attire as a challenge. (Car drivers are more likely to hit bicyclists who wear helmets.)
The $30+shipping Crasche hat is designed for people who should wear a helmet but don’t. It looks like a ski cap, but contains concealed lightweight protective material. People who have signed up for cryonics, such as myself, would get an especially high expected benefit from using a driving helmet because we very much want our brains to “survive” even a “fatal” crash. I have been using a Crasche hat for about a week.
Calibration Test with database of 150,000+ questions
Hi all,
I put this calibration test together this morning. It pulls from a trivia API of over 150,000 questions so you should be able to take this many, many times before you start seeing repeats.
http://www.2pih.com/caltest.php
A few notes:
1. The questions are "Jeopardy" style questions so the wording may be strange, and some of them might be impossible to answer without further context. On these just assign 0% confidence.
2. As the questions are open-ended, there is no answer-checking mechanism. You have to be honest with yourself as to whether or not you got the right answer. Because what would be the point of cheating at a calibration test?
I can't think of anything else. Please let me know if there are any features you would want to see added, or if there are any bugs, issues, etc.
**EDIT**
As per suggestion I have moved this to the main section. Here are the changes I'll be making soon:
- Label the axes and include an explanation of calibration curves.
- Make it so you can reverse your last selection in the event of a misclick.
Here are changes I'll make eventually:
- Create an account system so you can store your results online.
- Move trivia DB over to my own server to allow for flagging of bad/unanswerable questions.
Here are the changes that are done:
- Changed 0% to 0.1% and 99% to 99.9%.
- Added a second graph which shows the frequency of your confidence selections.
- Color-coded the "right" and "wrong" buttons and made them farther apart to prevent misclicks.
- Results are now stored locally so that you can see your calibration over time.
- Blank questions are now detected and skipped.
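For anyone curious what the calibration graph is actually plotting, it's essentially per-confidence-bucket accuracy. A minimal sketch of the computation (my own illustration, not the site's actual code):

```python
import numpy as np

# (stated confidence, whether the answer was actually right) for each question
results = [(0.6, True), (0.9, True), (0.7, False), (0.99, True),
           (0.5, False), (0.8, True), (0.6, True), (0.3, False)]

confidences = np.array([c for c, _ in results])
correct = np.array([r for _, r in results], dtype=float)

# Bucket by stated confidence and compare against the observed frequency of being right.
bins = np.array([0.0, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (confidences >= lo) & (confidences < hi)
    if mask.any():
        print(f"stated {lo:.0%}-{hi:.0%}: answered {mask.sum()}, "
              f"actually right {correct[mask].mean():.0%}")
# Perfect calibration: the 'actually right' column falls inside the stated range.
```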
Slate Star Codex: alternative comment threads on LessWrong?
Like many Less Wrong readers, I greatly enjoy Slate Star Codex; there's a large overlap in readership. However, the comments there are far worse, not worth reading for me. I think this is in part due to the lack of LW-style up and downvotes. Have there ever been discussion threads about SSC posts here on LW? What do people think of the idea of occasionally having them? Does Scott himself have any views on this, and would he be OK with it?
Update:
The latest from Scott:
I'm fine with anyone who wants reposting things for comments on LW, except for posts where I specifically say otherwise or tag them with "things i will regret writing"
In this thread some have also argued for not posting the most hot-button political writings.
Would anyone be up for doing this? Ataxerxes started with "Extremism in Thought Experiments is No Vice"
Attempted Telekinesis
Related to: Compartmentalization in epistemic and instrumental rationality; That other kind of status.
A discussion of heroic responsibility
[Originally posted to my personal blog, reposted here with edits.]
Introduction
“You could call it heroic responsibility, maybe,” Harry Potter said. “Not like the usual sort. It means that whatever happens, no matter what, it’s always your fault. Even if you tell Professor McGonagall, she’s not responsible for what happens, you are. Following the school rules isn’t an excuse, someone else being in charge isn’t an excuse, even trying your best isn’t an excuse. There just aren’t any excuses, you’ve got to get the job done no matter what.” Harry’s face tightened. “That’s why I say you’re not thinking responsibly, Hermione. Thinking that your job is done when you tell Professor McGonagall—that isn’t heroine thinking. Like Hannah being beat up is okay then, because it isn’t your fault anymore. Being a heroine means your job isn’t finished until you’ve done whatever it takes to protect the other girls, permanently.” In Harry’s voice was a touch of the steel he had acquired since the day Fawkes had been on his shoulder. “You can’t think as if just following the rules means you’ve done your duty.” –HPMOR, chapter 75.
Something Impossible
Bold attempts aren't enough, roads can't be paved with intentions...
You probably don’t even got what it takes,
But you better try anyway, for everyone's sake
And you won’t find the answer until you escape from the
Labyrinth of your conventions.
It’s time to just shut up, and do the impossible.
Can’t walk away...
Gotta break off those shackles, and shake off those chains
Gotta make something impossible happen today...
The Well-Functioning Gear
I feel like maybe the hospital is an emergent system that has the property of patient-healing, but I’d be surprised if any one part of it does.
Suppose I see an unusual result on my patient. I don’t know what it means, so I mention it to a specialist. The specialist, who doesn’t know anything about the patient beyond what I’ve told him, says to order a technetium scan. He has no idea what a technetium scan is or how it is performed, except that it’s the proper thing to do in this situation. A nurse is called to bring the patient to the scanner, but has no idea why. The scanning technician, who has only a vague idea why the scan is being done, does the scan and spits out a number, which ends up with me. I bring it to the specialist, who gives me a diagnosis and tells me to ask another specialist what the right medicine for that is. I ask the other specialist – who has only the sketchiest idea of the events leading up to the diagnosis – about the correct medicine, and she gives me a name and tells me to ask the pharmacist how to dose it. The pharmacist – who has only the vague outline of an idea who the patient is, what test he got, or what the diagnosis is – doses the medication. Then a nurse, who has no idea about any of this, gives the medication to the patient. Somehow, the system works and the patient improves.
Part of being an intern is adjusting to all of this, losing some of your delusions of heroism, getting used to the fact that you’re not going to be Dr. House, that you are at best going to be a very well-functioning gear in a vast machine that does often tedious but always valuable work. –Scott Alexander
Recursive Heroic Responsibility
Heroic responsibility for average humans under average conditions
I can predict at least one thing that people will say in the comments, because I've heard it hundreds of times–that Swimmer963 is a clear example of someone who should leave nursing, take the meta-level responsibility, and do something higher-impact, for the usual reasons. Because she's smart. Because she's rational. Whatever.
Fine. This post isn't about me. Whether I like it or not, the concept of heroic responsibility is now a part of my value system, and I probably am going to leave nursing.
But what about the other nurses on my unit, the ones who are competent and motivated and curious and really care? Would familiarity with the concept of heroic responsibility help or hinder them in their work? Honestly, I predict that they would feel alienated, that they would assume I held a low opinion of them (which I don't, and I really don't want them to think that I do), and that they would flinch away and go back to the things that they were doing anyway, the role where they were comfortable–or that, if they did accept it, it would cause them to burn out. So as a consequentialist, I'm not going to tell them.
And yeah, that bothers me. Because I'm not a special snowflake. Because I want to live in a world where rationality helps everyone. Because I feel like the reason they would react that way isn't because of anything about them as people, or because heroic responsibility is a bad thing, but because I'm not able to communicate to them what I mean. Maybe stupid reasons. Still bothers me.
Why "Changing the World" is a Horrible Phrase
Steve Jobs famously convinced John Sculley from Pepsi to join Apple Computer with the line, “Do you want to sell sugared water for the rest of your life? Or do you want to come with me and change the world?”. This sounds convincing until one thinks closely about it.
Steve Jobs was a famous salesman. He was known for his selling ability, not his honesty. His terminology here was interesting. ‘Change the world’ is a phrase that both sounds important and is difficult to argue with. Arguing about whether Apple was really ‘changing the world’ would have been pointless, because the phrase was so ambiguous that there would be little to discuss. On paper, of course Apple is changing the world, but then of course any organization or any individual is also ‘changing’ the world. A real discussion of whether Apple ‘changes the world’ would lead to a discussion of what ‘changing the world’ actually means, which would lead to obscure philosophy, steering the conversation away from the actual point.
‘Changing the world’ is an effective marketing tool that’s useful for building a feeling of consensus. Steve Jobs used it heavily, as have endless numbers of businesses, conferences, nonprofits, and TV shows. It’s used because it sounds good and is typically not questioned, so I’m here to question it. I believe that the popularization of this phrase creates confused goals and perverse incentives among people who believe they are doing good things.
Problem 1: 'Changing the World' Leads to Television Value over Real Value
It leads nonprofit workers to passionately chase feeble things. I’m amazed by the variety that I see in people who try to ‘change the world’. Some grow organic food, some research rocks, some play instruments. They do basically everything.
Few people protest this variety. There are millions of voices making the appeal to ‘change the world’ in a way that validates radically diverse pursuits.
TED, the modern symbol of the intellectual elite for many, is itself a grab bag of ways to ‘change the world’, without any sense of scale between pursuits. People tell comedic stories, sing songs, discuss tales of personal adventures and so on. In TED Talks, all presentations are shown side-by-side with the same lighting and display. Yet in real life some projects produce orders of magnitude more output than others.
At 80,000 Hours, I read many applications for career consulting. I got the sense that there are many people out there trying to live their lives in order to eventually produce a TED talk. To them, that is what ‘changing the world’ means. These are often very smart and motivated people with very high opportunity costs.
I would see an application that would express interest in either starting an orphanage in Uganda, creating a woman's movement in Ohio, or making a conservatory in Costa Rica. It was clear that they were trying to ‘change the world’ in a very vague and TED-oriented way.
I believe that ‘Changing the World’ is promoted by TED, but internally acts mostly as a Schelling point. Agreeing on the importance of ‘changing the world’ is a good way of coming to a consensus without having to decide on moral philosophy. ‘Changing the world’ is simply the lowest common denominator for what that community can agree upon. This is a useful social tool, but an unfortunate side effect was that it inspired many others to follow this Schelling point itself. Please don’t make the purpose of your life the lowest common denominator of a specific group of existing intellectuals.
It leads businesses to gain employees and media attention without having to commit to anything. I’m living in Silicon Valley, and ‘Change the World’ is an incredibly common phrase for new and old startups. Silicon Valley (the TV show) made fun of it, as does much of the media. They should, but I think much of the time they miss the point; the problem here is not one where the companies are dishonest, but one where their honesty itself just doesn’t mean much. Declaring that a company is ‘changing the world’ isn’t really declaring anything.
Hiring conversations that begin and end with the motivation of ‘changing the world’ are like hiring conversations that begin and end with making ‘lots’ of money. If one couldn’t compare salaries between different companies, they would likely select poorly for salary. In terms of social benefit, most companies don’t attempt to quantify their costs and benefits on society except in very specific and positive ways for them. “Google has enabled Haiti disaster recovery” for social proof sounds to me like saying “We paid this other person $12,000 in July 2010” for salary proof. It sounds nice, but facts selected by a salesperson are simply not complete.
Problem 2: ‘Changing the World’ Creates Black and White Thinking
The idea that one wants to ‘change the world’ implies that there is such a thing as ‘changing the world’ and such a thing as ‘not changing the world’. It implies that there are ‘world changers’ and people who are not ‘world changers’. It implies that there is one group of ‘important people’ out there and then a lot of ‘useless’ others.
This directly supports the ‘Great Man’ theory, a 19th century idea that history and future actions are led by a small number of ‘great men’. There’s not a lot of academic research supporting this theory, but there’s a lot of attention to it, and it’s a lot of fun to pretend it’s true.
But it’s not. There is typically a lot of unglamorous work behind every successful project or organization. Behind every Steve Jobs are thousands of very intelligent and hard-working employees and millions of smart people who have created a larger ecosystem. If one only pays attention to Steve Jobs they will leave out most of the work. They will praise Steve Jobs far too highly and disregard the importance of unglamorous labor.
Typically much of the best work is also the most unglamorous. Making WordPress websites, sorting facts into analysis, cold calling donors. Many of the best ideas for organizations may be very simple and may have been done before. However, for someone looking to get to TED conferences or become a superstar, it is very easy to overlook comparatively menial labor. This means that not only will it not get done, but the people who do it will feel worse about themselves.
So some people do important work and feel bad because it doesn’t meet the TED standard of ‘change the world’. Others try ridiculously ambitious things outside their own capabilities, fail, and then give up. Others don’t even try, because their perceived threshold is too high for them. The very idea of a threshold and a ‘change or don’t change the world’ approach is simply false, and believing something that’s both false and fundamentally important is really bad.
In all likelihood, you will not make the next billion-dollar nonprofit. You will not make the next billion-dollar business. You will not become the next congressperson in your district. This does not mean that you have not done a good job. Failing to do these things should not demoralize you in any way.
Finally, I would like to ponder what happens if and when one decides they have changed the world. What now? Should one change it again?
It’s not obvious. Many retire or settle down after feeling accomplished. However, this is exactly when trying is the most important. People with the best histories have the best potentials. No matter how much a U.S. President may achieve, they still can achieve significantly more after the end of their terms. There is no ‘enough’ line for human accomplishment.
Conclusion
In summary, the phrase ‘change the world’ provides little clear direction and encourages black-and-white thinking that distorts behavior and motivation. However, I do believe that the phrase can act as a stepping stone towards a more concrete goal. ‘Change the World’ can act as an idea that requires a philosophical continuation. It’s a start for a goal, but it should be recognized that it’s far from a good ending.
Next time someone tells you about ‘changing the world’, ask them to follow through with telling you the specifics of what they mean. Make sure that they understand that they need to go further in order to mean anything.
And more importantly, do this for yourself. Choose a specific axiomatic philosophy or set of philosophies and aim towards those. Your ultimate goal in life is too important to be based on an empty marketing term.