Related to: Trusting Expert Consensus
In the sequences, Eliezer tells the story of how in childhood he fell into an affective death spiral around intelligence. In his telling, his mistakes were failing to understand until he was much older that intelligence does not guarantee morality, and that very intelligent people can still end up believing crazy things because of human irrationality.
I have my own story about learning the limits of intelligence, but I ended up learning a very different lesson than the one Eliezer learned. It also started somewhat differently. It involved no dramatic death spiral, just being extremely smart and knowing it from the time I was in kindergarten, to the point that I grew up with the expectation that, when it came to doing anything mental, sheer smarts would be enough to make me crushingly superior to all the other students around me and many of the adults.
In Harry Potter and the Methods of Rationality, Harry complains of having once had a math teacher who didn't know what a logarithm was. I wonder if this is autobiographical on Eliezer's part. I have an even better story, though: in second grade, I had a teacher who insisted there was no such thing as negative numbers. The experience of knowing I was right about this, when the adult authority figure was so very wrong, was probably not good for my humility.
But such brushes with stupid teachers probably weren't the main thing that drove my early self-image. It was enough to be smarter than the other kids around me, and know it. Looking back, there's little that seems worth bragging about. I learned calculus at age 15, not age 8. But that was still younger than any of the other kids I knew took calculus (if they took it at all). And knowing I didn't know any other kids as smart as me did funny things to my view of the world.
I'm honestly not sure I realized there were any kids in the whole world smarter than me until sophomore year, when I qualified to go to a national-level math competition. That was something that no one else at my high school managed to do, not even the seniors... but at the competition itself, I didn't do particularly well. It was one of the things that made me realize that I wasn't, in fact, going to be the next Einstein. But all I took from the math competition was that there were people smarter than me in the world. It didn't, say, occur to me that maybe some of the other competitors had spent more time practicing really hard math problems.
Eliezer once said, "I think I should be able to handle damn near anything on the fly." That's a pretty good description of how I felt at this point in my life. At least as long as we were talking about mental challenges and not sports, and assuming I wasn't going up against someone smarter than myself.
I think my first memory of getting some inkling that maybe sufficient intelligence wouldn't lead to automatically being the best at everything comes from... *drum roll* ...playing Starcraft. I think it was probably junior or senior year that I got into the game, and at first I just did the standard campaign playing against the computer, but then I got into online play, and promptly got crushed. And not just by one genius player I encountered on a fluke, but in virtually every match.
This was a shock. I mean, I had friends who could beat me at Super Smash Bros, but Starcraft was a strategy game, which meant it should be like chess, and I'd never had any trouble beating my friends at chess. Sure, when I'd gone to local chess tournaments back in grade school, I'd gotten soundly beat by many of the older players then, but it's not like I'd ever expected all older people to be as stupid as my second grade teacher. But by the time I'd gotten into Starcraft, I was almost an adult, so what was going on?
The answer of course was that most of the other people playing online had played a hell of a lot more Starcraft than me. Also, I'd thought I'd figured out the game designer's game-design philosophy (I hadn't), which had led me to make all kinds of incorrect assumptions about the game, assumptions which I could have found out were false if I'd tested them, or (probably) if I'd just looked for an online guide that reported the results of other people's tests.
It all sounds very silly in retrospect, and it didn't change my worldview overnight. But it was among the first of a series of events that made me realize that trying to master something just by thinking about it tends to go badly wrong. That when untrained brilliance goes up against domain expertise, domain expertise will generally win.
A whole bunch of caveats here. I'm not denying that being smart is pretty awesome. As a smart person, I highly recommend it. And acquiring domain expertise requires a certain minimum level of intelligence, which varies from field to field. It's only once you get beyond that minimum that more intelligence doesn't help as much as expertise. Finally, I'm talking about human-scale intelligence here: the gap between the village idiot and Einstein is tiny compared to the gap between Einstein and possible superintelligences, so maybe a superintelligence could school any human expert in anything without acquiring any particular domain expertise.
Still, when I hear Eliezer say he thinks he should be able to handle anything on the fly, it strikes me as incredibly foolish. And I worry when I see fellow smart people who seem to think that being very smart and rational gives them grounds to dismiss other people's domain expertise. As Robin Hanson has said:
I was a physics student and then a physics grad student. In that process, I think I assimilated what was the standard worldview of physicists, at least as projected on the students. That worldview was that physicists were great, of course, and physicists could, if they chose to, go out to all those other fields, that all those other people keep mucking up and not making progress on, and they could make a lot faster progress, if progress was possible, but they don’t really want to, because that stuff isn’t nearly as interesting as physics is, so they are staying in physics and making progress there...
Surely you can look at some little patterns but because you can’t experiment on people, or because it’ll be complicated, or whatever it is, it’s just not possible. Partly, that’s because they probably tried for an hour, to see what they could do, and couldn’t get very far. It’s just way too easy to have learned a set of methods, see some hard problem, try it for an hour, or even a day or a week, not get very far, and decide it’s impossible, especially if you can make it clear that your methods definitely won’t work there. You don’t, often, know that there are any other methods to do anything with because you’ve learned only certain methods...
As one of the rare people who have spent a lot of time learning a lot of different methods, I can tell you there are a lot out there. Furthermore, I’ll stick my neck out and say most fields know a lot. Almost all academic fields where there’s lots of articles and stuff published, they know a lot.
(For those who don't know: Robin spent time doing physics, philosophy, and AI before landing in his current field of economics. When he says he's spent a lot of time learning a lot of different methods, it isn't an idle boast.)
Finally, what about the story that Eliezer says set off his childhood death spiral around intelligence?
My parents always used to downplay the value of intelligence. And play up the value of—effort, as recommended by the latest research? No, not effort. Experience. A nicely unattainable hammer with which to smack down a bright young child, to be sure. That was what my parents told me when I questioned the Jewish religion, for example. I tried laying out an argument, and I was told something along the lines of: "Logic has limits, you'll understand when you're older that experience is the important thing, and then you'll see the truth of Judaism." I didn't try again. I made one attempt to question Judaism in school, got slapped down, didn't try again. I've never been a slow learner.
I think concluding experience isn't all that great is the wrong response here. Experience is important. The right response is to ask whether all older, more experienced people see the truth of Judaism. The answer of course is that they don't; a depressing number stick with whatever religion they grew up with (which usually isn't Judaism), a significant number end up non-believers, and a few convert to a new religion. But when almost everyone with a high level of relevant experience agrees on something, beware thinking you know better than them based on your superior intelligence and supposed rationality.
New meetups (or meetups with a hiatus of more than a year) are happening in:
- [Leipzig] Secular Solstice Celebration! (And the Inauguration of the LW Leipzig Community): 21 December 2013 05:05PM
- Mumbai Meetup: 15 December 2013 03:00PM
- Newcastle-upon-Tyne meetup, December: 07 December 2013 12:00PM
- Utrecht: 14 December 2013 02:00PM
Other irregularly scheduled Less Wrong meetups are taking place in:
- Berlin: 01 January 2019 01:30PM
- Frankfurt meetup: 08 December 2013 02:00PM
- Helsinki Meetup: 15 December 2013 03:00PM
- Moscow, The First Winter One: 08 December 2013 04:00PM
- Munich Meetup: 07 December 2013 02:00PM
- Saint Petersburg, Russia. On discussions and some social skills: 08 December 2013 04:00PM
- San Francisco / App Academy meetup [LOCATION CHANGE]: 07 December 2014 07:00PM
- Urbana-Champaign fun and games: 07 December 2013 02:00PM
- [Vienna] The Return of the Rationalists!: 14 December 2013 03:00PM
- Austin, TX: 07 December 2019 01:30PM
- Bay Area Solstice: 07 December 2013 06:00PM
- Boston/Cambridge - The Attention Economy: 08 December 2013 02:00PM
- Brussels monthly meetup: time!: 14 December 2013 01:00PM
- London Practical Meetup - Calibration Training!: 08 December 2013 02:00AM
- Washington DC Fermi Estimates Meetup: 08 December 2013 03:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Brussels, Cambridge, MA, Cambridge UK, Columbus, London, Madison WI, Melbourne, Mountain View, New York, Philadelphia, Research Triangle NC, Salt Lake City, Seattle, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.
I think there's a decent chance that governments will be the first to build artificial general intelligence (AI). International hostility, especially an AI arms race, could exacerbate risk-taking, hostile motivations, and errors of judgment when creating AI. If so, then international cooperation could be an important factor to consider when evaluating the flow-through effects of charities. That said, we may not want to popularize the arms-race consideration too openly lest we accelerate the race.
Will governments build AI first?
AI poses a national-security threat, and unless the militaries of powerful countries are very naive, it seems to me unlikely they'd allow AI research to proceed in private indefinitely. At some point the US military would confiscate the project from Google or Goldman Sachs, if the US military isn't already ahead of them in secret by that point. (DARPA already funds a lot of public AI research.)
There are some scenarios in which private AI research wouldn't be nationalized:
- An unexpected AI foom before anyone realizes what is coming.
- The private developers stay underground for long enough not to be caught. This becomes less likely the more government surveillance improves (see "Arms Control and Intelligence Explosions").
- AI developers move to a "safe haven" country where they can't be taken over. (It seems like the international community might prevent this, however, in the same way it now seeks to suppress terrorism in other countries.)
It seems that all of these scenarios would be exacerbated by international conflict. Greater hostility means countries are more inclined to use AI as a weapon. Indeed, whoever builds the first AI can take over the world, which makes building AI the ultimate arms race. A USA-China race is one reasonable possibility.
Arms races encourage risk-taking -- being willing to skimp on safety measures to improve your odds of winning ("Racing to the Precipice"). In addition, the weaponization of AI could lead to worse expected outcomes in general. CEV seems to have less hope of success in a Cold War scenario. ("What? You want to include the evil Chinese in your CEV??") (ETA: With a pure CEV, presumably it would eventually count Chinese values even if it started with just Americans, because people would become more enlightened during the process. However, when we imagine more crude democratic decision outcomes, this becomes less likely.)
Ways to avoid an arms race
Averting an AI arms race seems to be an important topic for research. It could be partly informed by the Cold War and other nuclear arms races, as well as by other efforts at nonproliferation of chemical and biological weapons.
Apart from more robust arms control, other factors might help:
- Improved international institutions like the UN, allowing for better enforcement against defection by one state.
- In the long run, a scenario of global governance (i.e., a Leviathan or singleton) would likely be ideal for strengthening international cooperation, just like nation states reduce intra-state violence.
- Better construction and enforcement of nonproliferation treaties.
- Improved game theory and international-relations scholarship on the causes of arms races and how to avert them. (For instance, arms races have sometimes been modeled as iterated prisoner's dilemmas with imperfect information.)
- Improved verification, which has historically been a weak point for nuclear arms control. (The concern is that if you haven't verified well enough, the other side might be arming while you're not.)
- Moral tolerance and multicultural perspective, aiming to reduce people's sense of nationalism. (In the limit where neither Americans nor Chinese cared which government won the race, there would be no point in having the race.)
- Improved trade, democracy, and other forces that historically have reduced the likelihood of war.
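The iterated-prisoner's-dilemma framing mentioned above can be made concrete with a toy simulation. This is a minimal sketch, not any specific model from the arms-race literature: the payoff numbers, the strategies, and the way "imperfect information" is modeled (each side's observation of the other's last move is randomly flipped, standing in for unreliable verification) are all illustrative assumptions.

```python
import random

# Hypothetical payoffs for one round of an arms-race dilemma:
# "C" = cooperate (respect arms control), "D" = defect (race).
PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3,  # mutual restraint
    ("C", "D"): 0,  # I restrain, they race
    ("D", "C"): 5,  # I race, they restrain
    ("D", "D"): 1,  # mutual racing
}

def play(strategy_a, strategy_b, rounds=1000, noise=0.0, seed=0):
    """Iterated game with imperfect information: each side observes the
    other's last move, but with probability `noise` the observation is
    flipped (a stand-in for unreliable verification)."""
    rng = random.Random(seed)
    obs_a = obs_b = "C"  # what each side believes the other did last round
    score_a = score_b = 0
    flip = lambda m: "D" if m == "C" else "C"
    for _ in range(rounds):
        move_a, move_b = strategy_a(obs_a), strategy_b(obs_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        obs_a = flip(move_b) if rng.random() < noise else move_b
        obs_b = flip(move_a) if rng.random() < noise else move_a
    return score_a, score_b

tit_for_tat = lambda last_obs: last_obs  # mirror what you observed
always_defect = lambda last_obs: "D"     # race unconditionally

print(play(tit_for_tat, tit_for_tat, noise=0.0))  # perfect verification
print(play(tit_for_tat, tit_for_tat, noise=0.1))  # noisy verification
```

With perfect verification, two tit-for-tat players sustain cooperation indefinitely; with even modest observation noise, a single misread triggers cycles of retaliation and both sides end up worse off. That is a toy version of why the verification bullet above matters.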
Are these efforts cost-effective?
World peace is hardly a goal unique to effective altruists (EAs), so we shouldn't necessarily expect low-hanging fruit. On the other hand, projects like nuclear nonproliferation seem relatively underfunded even compared with anti-poverty charities.
I suspect more direct MIRI-type research has higher expected value, but among EAs who don't want to fund MIRI specifically, encouraging donations toward international cooperation could be valuable, since it's certainly a more mainstream cause. I wonder if GiveWell would consider studying global cooperation specifically beyond its indirect relationship with catastrophic risks.
Should we publicize AI arms races?
When I mentioned this topic to a friend, he pointed out that we might not want the idea of AI arms races too widely known, because then governments might take the concern more seriously and therefore start the race earlier -- giving us less time to prepare and less time to work on FAI in the meantime. From David Chalmers, "The Singularity: A Philosophical Analysis" (footnote 14):
When I discussed these issues with cadets and staff at the West Point Military Academy, the question arose as to whether the US military or other branches of the government might attempt to prevent the creation of AI or AI+, due to the risks of an intelligence explosion. The consensus was that they would not, as such prevention would only increase the chances that AI or AI+ would first be created by a foreign power. One might even expect an AI arms race at some point, once the potential consequences of an intelligence explosion are registered. According to this reasoning, although AI+ would have risks from the standpoint of the US government, the risks of Chinese AI+ (say) would be far greater.
I recently ran across Nick Bostrom’s idea of subjecting your strongest beliefs to a hypothetical apostasy in which you try to muster the strongest arguments you can against them. As you might have figured out, I believe strongly in effective altruism—the idea of applying evidence and reason to finding the best ways to improve the world. As such, I thought it would be productive to write a hypothetical apostasy on the effective altruism movement.
(EDIT: As per the comments of Vaniver, Carl Shulman, and others, this didn't quite come out as a hypothetical apostasy. I originally wrote it with that in mind, but decided that a focus on more plausible, more moderate criticisms would be more productive.)
- How to read this post
- Philosophical difficulties
- Poor cause choices
- Efficient markets for giving
- Inconsistent attitude towards rigor
- Poor psychological understanding
- Historical analogues
- Community problems
- Movement building issues
- Are these problems solvable?
How to read this post
(EDIT: the following two paragraphs were written before I softened the tone of the piece. They're less relevant to the more moderate version that I actually published.)
Hopefully this is clear, but as a disclaimer: this piece is written in a fairly critical tone. This was part of an attempt to get “in character”. This tone does not indicate my current mental state with regard to the effective altruism movement. I agree, to varying extents, with some of the critiques I present here, but I’m not about to give up on effective altruism or stop cooperating with the EA movement. The apostasy is purely hypothetical.
Also, because of the nature of a hypothetical apostasy, I’d guess that for effective altruist readers, the critical tone of this piece may be especially likely to trigger defensive rationalization. Please read through with this in mind. (A good way to counteract this effect might be, for instance, to imagine that you’re not an effective altruist, but your friend is, and it’s them reading through it: how should they update their beliefs?)
(End less relevant paragraphs.)
Finally, if you’ve never heard of effective altruism before, I don’t recommend making this piece your first impression of it! You’re going to get a very skewed view because I don’t bother to mention all the things that are awesome about the EA movement.
Effective altruism is, to my knowledge, the first time that a substantially useful set of ethics and frameworks to analyze one’s effect on the world has gained a broad enough appeal to resemble a social movement. (I’d say these principles are something like altruism, maximization, egalitarianism, and consequentialism; together they imply many improvements over the social default for trying to do good in the world—earning to give as opposed to doing direct charity work, working in the developing world rather than locally, using evidence and feedback to analyze effectiveness, etc.) Unfortunately, as a movement effective altruism is failing to use these principles to acquire correct nontrivial beliefs about how to improve the world.
By way of clarification, consider a distinction between two senses of the word “trying” I used above. Let’s call them “actually trying” and “pretending to try”. Pretending to try to improve the world is something like responding to social pressure to improve the world by querying your brain for a thing which improves the world, taking the first search result and rolling with it. For example, for a while I thought that I would try to improve the world by developing computerized methods of checking informally-written proofs, thus allowing more scalable teaching of higher math, democratizing education, etc. Coincidentally, computer programming and higher math happened to be the two things that I was best at. This is pretending to try. Actually trying is looking at the things that improve the world, figuring out which one maximizes utility, and then doing that thing. For instance, I now run an effective altruist student organization at Harvard because I realized that even though I’m a comparatively bad leader and don’t enjoy it very much, it’s still very high-impact if I work hard enough at it. This isn’t to say that I’m actually trying yet, but I’ve gotten closer.
Using this distinction between pretending and actually trying, I would summarize a lot of effective altruism as “pretending to actually try”. As a social group, effective altruists have successfully noticed the pretending/actually-trying distinction. But they seem to have stopped there, assuming that knowing the difference between fake trying and actually trying translates into ability to actually try. Empirically, it most certainly doesn’t. A lot of effective altruists still end up satisficing—finding actions that are on their face acceptable under core EA standards and then picking those which seem appealing because of other essentially random factors. This is more likely to converge on good actions than what society does by default, because the principles are better than society’s default principles. Nevertheless, it fails to make much progress over what is directly obvious from the core EA principles. As a result, although “doing effective altruism” feels like truth-seeking, it often ends up being just a more credible way to pretend to try.
Rationality quotes time!
The usual rules:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
When I was a teenager, I picked up my mom's copy of Dale Carnegie's How to Win Friends and Influence People. One of the chapters that most made an impression on me was titled "You Can't Win an Argument," in which Carnegie writes:
Nine times out of ten, an argument ends with each of the contestants more firmly convinced than ever that he is absolutely right.
You can’t win an argument. You can’t because if you lose it, you lose it; and if you win it, you lose it. Why? Well, suppose you triumph over the other man and shoot his argument full of holes and prove that he is non compos mentis. Then what? You will feel fine. But what about him? You have made him feel inferior. You have hurt his pride. He will resent your triumph. And -
"A man convinced against his will
"Is of the same opinion still."
In the next chapter, Carnegie quotes Benjamin Franklin saying how he had made it a rule never to contradict anyone. Carnegie approves: he thinks you should never argue with or contradict anyone, because you won't convince them (even if you "hurl at them all the logic of a Plato or an Immanuel Kant"), and you'll just make them mad at you.
It may seem strange to hear this advice cited on a rationalist blog, because the atheo-skeptico-rational-sphere violates it on a routine basis. In fact I've never tried to follow Carnegie's advice—and yet, I don't think the rationale behind it is completely stupid. Carnegie gets human psychology right, and I fondly remember reading his book as the moment when I first really got clued in about human irrationality.
At the recent CFAR Workshop in NY, someone mentioned that they were uncomfortable with pauses in conversation, and that got me thinking about different conversational styles.
Growing up with friends who were disproportionately male and disproportionately nerdy, I learned that it was a normal thing to interrupt people. If someone said something you had to respond to, you’d just start responding. Didn’t matter if it “interrupted” further words – if they thought you needed to hear those words before responding, they’d interrupt right back.
Occasionally some weird person would be offended when I interrupted, but I figured this was some bizarre fancypants rule from before people had places to go and people to see. Or just something for people with especially thin skins or delicate temperaments, looking for offense and aggression in every action.
Then I went to St. John’s College – the talking school (among other things). In Seminar (and sometimes in Tutorials) there was a totally different conversational norm. People were always expected to wait until whoever was talking was done. People would apologize not just for interrupting someone who was already talking, but for accidentally saying something when someone else looked like they were about to speak. This seemed totally crazy. Some people would just blab on unchecked, and others didn’t get a chance to talk at all. Some people would ignore the norm and talk over others, and nobody interrupted them back to shoot them down.
But then a few interesting things happened:
1) The tutors were able to moderate the discussions, gently. They wouldn’t actually scold anyone for interrupting, but they would say something like, “That’s interesting, but I think Jane was still talking,” subtly pointing out a violation of the norm.
2) People started saying less at a time.
#1 is pretty obvious – with no enforcement of the social norm, a no-interruptions norm collapses pretty quickly. But #2 is actually really interesting. If talking at all is an implied claim that what you’re saying is the most important thing that can be said, then polite people keep it short.
With 15-20 people in a seminar, this also meant that people rarely tried to force the conversation in a certain direction. When you’re done talking, the conversation is out of your hands. This can be frustrating at first, but with time, you learn to trust not your fellow conversationalists individually, but the conversation itself, to go where it needs to. If you haven’t said enough, then you trust that someone will ask you a question, and you’ll say more.
When people are interrupting each other – when they’re constantly tugging the conversation back and forth between their preferred directions – then the conversation itself is just a battle of wills. But when people just put in one thing at a time, and trust their fellows to only say things that relate to the thing that came right before – at least, until there’s a very long pause – then you start to see genuine collaboration.
And when a lull in the conversation is treated as an opportunity to think about the last thing said, rather than an opportunity to jump in with the thing you were holding onto from 15 minutes ago because you couldn’t just interrupt and say it – then you also open yourself up to being genuinely surprised, to seeing the conversation go somewhere that no one in the room would have predicted, to introduce ideas that no one brought with them when they sat down at the table.
By the time I graduated, I’d internalized this norm, and the rest of the world seemed rude to me for a few months. Not just because of the interrupting – but more because I’d say one thing, politely pause, and then people would assume I was done and start explaining why I was wrong – without asking any questions! Eventually, I realized that I’d been perfectly comfortable with these sorts of interactions before college. I just needed to code-switch! Some people are more comfortable with a culture of interrupting when you want to, and accepting interruptions. Others are more comfortable with a culture of waiting their turn, and courteously saying only one thing at a time, not trying to cram in a whole bunch of arguments for their thesis.
Now, I’ve praised the virtues of wait culture because I think it’s undervalued, but there’s plenty to say for interrupt culture as well. For one, it’s more robust in “unwalled” circumstances. If there’s no one around to enforce wait culture norms, then a few jerks can dominate the discussion, silencing everyone else. But someone who doesn’t follow “interrupt” norms only silences themselves.
Second, it’s faster and easier to calibrate how much someone else feels the need to talk, when they’re willing to interrupt you. It takes willpower to stop talking when you’re not sure you were perfectly clear, and to trust others to pick up the slack. It’s much easier to keep going until they stop you.
So if you’re only used to one style, see if you can try out the other somewhere. Or at least pay attention and see whether you’re talking to someone who follows the other norm. And don’t assume that you know which norm is the “right” one; try it the “wrong” way and maybe you’ll learn something.
Cross-posted at my personal blog.
Note: Originally posted in Discussion, edited to take comments there into account.
Yes, politics, boo hiss. In my defense, the topic of this post cuts across usual tribal affiliations (I write it as a liberal criticizing other liberals), and has a couple strong tie-ins with main LessWrong topics:
- It's a tidy example of a failure to apply consequentialist / effective altruist-type reasoning. And while it's probably true that the people I'm critiquing aren't consequentialists by any means, it's a case where failing to look at the consequences leads people to say some particularly silly things.
- I think there's a good chance this is a political issue that will become a lot more important as more and more jobs are replaced by automation. (If the previous sentence sounds obviously stupid to you, the best I can do without writing an entire post on that is vaguely gesturing at gwern on neo-luddism, though I don't agree with all of it.)
The issue is this: recently, I've seen a meme going around to the effect that companies like Walmart that have a large number of employees on government benefits are the "real welfare queens" or somesuch, with the implied message that all companies have a moral obligation to pay their employees enough that they don't need government benefits. (I mention Walmart because it's the most frequently cited villain in this meme, but others, like McDonald's, get mentioned too.)
My initial awareness of this meme came from it being all over my Facebook feed, but when I went to Google to track down examples, I found it coming out of the mouths of some fairly prominent congresscritters. For example Alan Grayson:
In state after state, the largest group of Medicaid recipients is Walmart employees. I'm sure that the same thing is true of food stamp recipients. Each Walmart "associate" costs the taxpayers an average of more than $1,000 in public assistance.
Or Bernie Sanders:
The Walmart family... here's an amazing story. The Walmart family is the wealthiest family in this country, worth about $100 billion, owning more wealth than the bottom 40 percent of the American people, and yet here's the incredible fact.

Because their wages and benefits are so low, they are the major welfare recipients in America, because many, many of their workers depend on Medicaid, depend on food stamps, depend on government subsidies for housing. So, if the minimum wage went up for Walmart, it would be a real cut in their profits, but it would be a real savings, by the way, for taxpayers, who would not have to subsidize Walmart employees because of their low wages.
Now here's why this is weird: consider Grayson's claim that each Walmart employee costs the taxpayers on average $1,000. In what sense is that true? If Walmart fired those employees, it wouldn't save the taxpayers money: if anything, it would increase the strain on public services. Conversely, it's unlikely that cutting benefits would force Walmart to pay higher wages: if anything, it would make people more desperate and willing to work for low wages. (Cf. this excellent critique of the anti-Walmart meme.)
Or consider Sanders' claim that it would be better to raise the minimum wage and spend less on government benefits. He emphasizes that Walmart could take a hit in profits to pay its employees more. It's unclear to what degree that's true (see again the previous link), and unclear whether there's a practical way for the government to force Walmart to do it, but even ignoring those issues, it's worth pointing out that you could also just raise taxes on rich people generally to increase benefits for low-wage workers. The idea seems to be that Walmart employees should be primarily Walmart's moral responsibility, and not so much the responsibility of (the more well-off segment of) the population in general.
But the idea that employing someone gives you a general responsibility for their welfare (beyond, say, not tricking them into working for less pay or under worse conditions than you initially promised) is also very odd. It suggests that if you want to be virtuous, you should avoid hiring people, so as to keep your hands clean and avoid the moral contagion that comes with employing low-wage workers. Yet such a policy doesn't actually help the people who might want jobs from you. This is not to deny that, plausibly, wealthy owners of Walmart stock have a moral responsibility to the poor. What's implausible is that non-Walmart stock owners have significantly less responsibility to the poor.
This meme also worries me because I lean towards thinking that while the minimum wage isn't a terrible policy, we'd be better off replacing it with a guaranteed basic income (or an otherwise more lavish welfare state). And guaranteed basic income could be a really important policy to have as more and more jobs are replaced by automation (again see gwern if that seems crazy to you). I worry that this anti-Walmart meme could lead to an odd left-wing resistance to GBI/a more lavish welfare state, since the policy would be branded as a subsidy to Walmart.
Existential risks—risks that, in the words of Nick Bostrom, would "either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential"—are a significant threat to the world as we know it. In fact, they may be one of the most pressing issues facing humanity today.
The likelihood of some risks may stay relatively constant over time—a basic view of asteroid impact is that there is a certain probability that a "killer asteroid" hits the Earth and that this probability is more or less the same every year. This is what I refer to as a "stable risk."
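To make the "stable risk" idea concrete, here is a minimal sketch of how a small constant annual probability compounds over long horizons (the annual probability used here is an illustrative placeholder, not an actual asteroid-impact estimate):

```python
def cumulative_risk(p_annual, years):
    """Probability the catastrophe occurs at least once in `years` years,
    given a constant ("stable") annual probability p_annual."""
    return 1 - (1 - p_annual) ** years

# Illustrative number only: a 1-in-10,000 chance per year.
p = 1e-4
for years in (100, 1000, 10000):
    print(years, cumulative_risk(p, years))
```

Even a risk that is negligible on the scale of a human lifetime becomes likely on the scale of millennia, which is part of why stable risks still matter.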
However, the likelihood of other existential risks seems to fluctuate, often quite dramatically. Many of these "unstable risks" are related to human activity.
For instance, the likelihood of a nuclear war at sufficient scale to be an existential threat seems contingent on various geopolitical factors that are difficult to predict in advance. That said, the likelihood of this risk has clearly changed throughout recent history. Nuclear war was obviously not an existential risk before nuclear weapons were invented, and was fairly clearly more of a risk during the Cuban Missile Crisis than it is today.
Many of these unstable, human-created risks seem based largely on advanced technology. Potential risks like gray goo rely on theorized technologies that have yet to be developed (and indeed may never be developed). While this is good news for the present day, it also means that we have to be vigilant for the emergence of potential new threats as human technology increases.
GiveWell's recent conversation with Carl Shulman contains some arguments as to why the risk of human extinction may be decreasing over time. However, it strikes me as perhaps more likely that the risk of human extinction is increasing over time—or at the very least becoming less stable—as technology increases the amount of power available to individuals and civilizations.
After all, the very concept of human-created unstable existential risks is a recent one. Even if Julius Caesar, Genghis Khan, or Queen Victoria for some reason decided to destroy human civilization, it seems almost certain that they would fail, even given all the resources of their empires.
The same cannot be said for Kennedy or Khrushchev.
In the previous article in this sequence, I conducted a thought experiment in which simple probability was not sufficient to choose how to act. Rationality required reasoning about meta-probabilities, the probabilities of probabilities.
Relatedly, lukeprog has a brief post that explains how this matters; a long article by HoldenKarnofsky makes meta-probability central to utilitarian estimates of the effectiveness of charitable giving; and Jonathan_Lee, in a reply to that, has used the same framework I presented.
In my previous article, I ran thought experiments that presented you with various colored boxes you could put coins in, gambling with uncertain odds.
The last box I showed you was blue. I explained that it had a fixed but unknown probability of a twofold payout, uniformly distributed between 0 and 0.9. The overall probability of a payout was 0.45, so the expectation value for gambling was 0.9—a bad bet. Yet your optimal strategy was to gamble a bit to figure out whether the odds were good or bad.
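The value of that exploration can be checked with a minimal Monte Carlo sketch. The explore/exploit split, the 0.5 win-rate threshold, and the trial counts below are illustrative choices, not an optimal policy:

```python
import random

def play_box(n_explore=10, n_exploit=100, threshold=0.5, trials=20_000):
    """Average net coins won per box under a simple explore-then-exploit rule.

    Each box pays out 2 coins per 1-coin gamble with probability p,
    where p is drawn uniformly from [0, 0.9] (mean 0.45, so blind
    gambling loses 0.1 coins per coin on average)."""
    total = 0.0
    for _ in range(trials):
        p = random.uniform(0.0, 0.9)
        # Exploration: gamble a few coins to estimate p.
        wins = sum(random.random() < p for _ in range(n_explore))
        net = 2 * wins - n_explore
        # Exploit only if the observed win rate suggests a favorable box.
        if wins / n_explore > threshold:
            wins2 = sum(random.random() < p for _ in range(n_exploit))
            net += 2 * wins2 - n_exploit
        total += net
    return total / trials
```

Blindly gambling all 110 coins loses about 11 coins per box on average, and never gambling wins nothing; spending ten coins to estimate the odds and continuing only in favorable boxes comes out positive on average.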
Let’s continue the experiment. I hand you a black box, shaped rather differently from the others. Its sealed faceplate is carved with runic inscriptions and eldritch figures. “I find this one particularly interesting,” I say.