On Walmart, And Who Bears Responsibility For the Poor
Note: Originally posted in Discussion, edited to take comments there into account.
Yes, politics, boo hiss. In my defense, the topic of this post cuts across usual tribal affiliations (I write it as a liberal criticizing other liberals), and has a couple strong tie-ins with main LessWrong topics:
- It's a tidy example of a failure to apply consequentialist / effective altruist-type reasoning. And while it's probably true that the people I'm critiquing aren't consequentialists by any means, it's a case where failing to look at the consequences leads people to say some particularly silly things.
- I think there's a good chance this is a political issue that will become a lot more important as more and more jobs are replaced by automation. (If the previous sentence sounds obviously stupid to you, the best I can do without writing an entire post on that is vaguely gesturing at gwern on neo-luddism, though I don't agree with all of it.)
The issue is this: recently, I've seen a meme going around to the effect that companies like Walmart that have a large number of employees on government benefits are the "real welfare queens" or some such, with the implied message that all companies have a moral obligation to pay their employees enough that they don't need government benefits. (I mention Walmart because it's the most frequently cited villain in this meme, but others, like McDonald's, come up too.)
My initial awareness of this meme came from it being all over my Facebook feed, but when I went to Google to track down examples, I found it coming out of the mouths of some fairly prominent congresscritters. For example, Alan Grayson:
In state after state, the largest group of Medicaid recipients is Walmart employees. I'm sure that the same thing is true of food stamp recipients. Each Walmart "associate" costs the taxpayers an average of more than $1,000 in public assistance.
Or Bernie Sanders:
The Walmart family... here's an amazing story. The Walmart family is the wealthiest family in this country, worth about $100 billion, owning more wealth than the bottom 40 percent of the American people, and yet here's the incredible fact.
Because their wages and benefits are so low, they are the major welfare recipients in America, because many, many of their workers depend on Medicaid, depend on food stamps, depend on government subsidies for housing. So, if the minimum wage went up at Walmart, it would be a real cut in their profits, but it would be a real savings, by the way, for taxpayers, who would not have to subsidize Walmart employees because of their low wages.
Now here's why this is weird: consider Grayson's claim that each Walmart employee costs the taxpayers on average $1,000. In what sense is that true? If Walmart fired those employees, it wouldn't save the taxpayers money: if anything, it would increase the strain on public services. Conversely, it's unlikely that cutting benefits would force Walmart to pay higher wages: if anything, it would make people more desperate and willing to work for low wages. (Cf. this excellent critique of the anti-Walmart meme.)
Or consider Sanders' claim that it would be better to raise the minimum wage and spend less on government benefits. He emphasizes that Walmart could take a hit in profits to pay its employees more. It's unclear to what degree that's true (see again the previous link), and unclear whether there's a practical way for the government to force Walmart to do that, but even ignoring those issues, it's worth pointing out that you could also just raise taxes on rich people generally to increase benefits for low-wage workers. The idea seems to be that Walmart employees should be primarily Walmart's moral responsibility, and not so much the moral responsibility of (the more well-off segment of) the population in general.
But the idea that employing someone gives you a general responsibility for their welfare (beyond, say, not tricking them into working for less pay or under worse conditions than you initially promised) is also very odd. It suggests that if you want to be virtuous, you should avoid hiring people, so as to keep your hands clean and avoid the moral contagion that comes with employing low-wage workers. Yet such a policy doesn't actually help the people who might want jobs from you. This is not to deny that, plausibly, wealthy owners of Walmart stock have a moral responsibility to the poor. What's implausible is that people who don't own Walmart stock have significantly less responsibility to the poor.
This meme also worries me because I lean towards thinking that while the minimum wage isn't a terrible policy, we'd be better off replacing it with a guaranteed basic income (or an otherwise more lavish welfare state). And a guaranteed basic income could be a really important policy to have as more and more jobs are replaced by automation (again, see gwern if that seems crazy to you). I worry that this anti-Walmart meme could lead to an odd left-wing resistance to a GBI or more lavish welfare state, since the policy would be branded as a subsidy to Walmart.
Links: so-called "knockout game" a "myth" and a "bogus trend."
When I started seeing stories about the "knockout game" (supposedly, teenagers playing a game where they try to knock out random strangers) a few days ago, I immediately resolved to avoid paying attention to them, because it sounded like a classic case of people taking a few isolated incidents and blowing them up into a big scary trend.
And then this morning, I see this blog post, which links back to an article from two years ago titled: "Knockout King: Kids call it a game. Academics call it a bogus trend. Cops call it murder." Turns out my knowledge of human biases has served me well... and it's especially significant that the article is from two years ago; this is not the first time the media has tried to get people scared about this "trend." From the article (emphasis added):
Mike Males, a research fellow at the nonprofit Center on Juvenile and Criminal Justice who runs the website YouthFacts.org, says the media have made a habit of cherry-picking isolated instances of "knockout games" in order to gin up sensational stories that demonize youth. "This knockout-game legend is a fake trend," Males contends.
Given that 4.3 million violent attacks were reported by U.S. citizens in 2009, according to the National Crime Victimization Survey, Males says reporters should know better than to highlight a handful of random attacks by kids and call it journalism. It's the same thing as plucking a few instances of attackers with Jewish surnames who beat up non-Jews and declaring it a "troubling new trend," he argues.
Still, over the years a handful of reports of "knockout" have emerged from cities in Missouri, Illinois, Massachusetts and New Jersey. And most criminologists and youth experts agree that unprovoked attacks by teenagers on strangers are a real, if extremely rare, phenomenon.
Mainstream Epistemology for LessWrong, Part 1: Feldman on Evidentialism
Richard Feldman's Epistemology is a widely-used philosophy textbook published in 2003. I've decided to write a series of posts summarizing its contents, because it contains some surprisingly reasonable views (given what you may have heard about mainstream philosophy), and it also counters some common myths about what all philosophers supposedly know about evidence, the problem of induction, and so on. This installment briefly covers the first three chapters before moving on to Feldman's discussion of the philosophical view he calls evidentialism.
AI Policy?
Here's a question: are there any policies that could be worth lobbying for to improve humanity's chances re: AI risk?
In the near term, it's possible that not much can be done. Human-level AI still seems a long way off (and probably is), making it both hard to craft effective policy for and hard to convince people it's worth doing something about. The US government currently funds work on what it calls "AI" and "nanotechnology," but that mostly means stuff that might be realizable in the near term, not human-level AI or molecular assemblers. Still, if anyone has ideas on what can be done in the near term, they'd be worth discussing.
Furthermore, I suspect that as human-level AI gets closer, there will be a lot the US government will be able to do to affect the outcome. For example, there's been talk of secret AI projects, but I suspect such projects would be hard to keep secret from a determined US government, especially if you believe (as I do) that larger organizations will have a much better shot at building AI than smaller ones.
The lesson of Snowden's NSA revelations seems to be that, while in theory there are procedures humans can use to keep secrets, in practice humans are so bad at implementing those procedures that secrecy will fail against a determined attacker. Ironically, this applies both to the government and to everyone the government has spied on. However, the ability of people outside the US government to find out about hypothetical secret government AI projects seems less predictable, depending on the decisions of individual would-be leakers.
And it seems like, as long as the US government is aware of an AI project, there will be a lot it will be able to do to shut the project down if desired. For foreign projects, there will be the possibility of a Stuxnet-style attack, though the government might be reluctant to do that against a nuclear power like China or Russia (or would it?). However, I expect the US to lead the world in innovation for a long time to come, so I don't expect foreign AI projects to be much of an issue in the early stages of the game.
The real issue is US gov vs. private US groups working on AI. And there, given the current status quo for how these things work in the US, my guess is that if the government ever became convinced that an AI project was dangerous, they would find some way to shut it down citing "national security" and basically that would work. However, I can see big companies with an interest in AI lobbying the government to make that not happen. I can also see them deciding to pack their AI operations off to Europe or South Korea or something.
And on top of all this is simply the fact that, if it becomes convinced that AI is important, the US government has a lot of money to throw at AI research.
These are just some very hastily sketched thoughts, so don't take them too seriously; there's probably a lot more that could be said. I do strongly suspect, however, that those of us who are concerned about risks from AI ignore the government at our peril.
The Evolutionary Heuristic and Rationality Techniques
Nick Bostrom and Anders Sandberg (2008) have proposed what they call the "evolutionary heuristic" for evaluating possible ways to enhance humans. It begins with posing a challenge, the "evolutionary optimality challenge" or EOC: "if the proposed intervention would result in an enhancement, why have we not already evolved to be that way?"
They write that there seem to be three main categories of answers to this challenge (what follows are abbreviated quotes, see original paper for full explanation):
- Changed tradeoffs: "Evolution 'designed' the system for operation in one type of environment, but now we wish to deploy it in a very different type of environment..."
- Value discordance: "There is a discrepancy between the standards by which evolution measured the quality of her work, and the standards that we wish to apply..."
- Evolutionary restrictions: "We have access to various tools, materials, and techniques that were unavailable to evolution..."
In their original paper, Bostrom and Sandberg are interested in biological interventions like drugs and embryo selection, but it seems that their heuristic could also tell us a lot about "rationality techniques," i.e. methods of trying to become more rational that can be expressed in the form of how-to advice, like what you often find advocated here at LessWrong or by CFAR.
Applying the evolutionary heuristic to rationality techniques supports the value of things like statistics, science, and prediction markets. However, it also gives us reason to doubt that a rationality technique is likely to be effective when it doesn't have any good answer to the EOC.
Academic Cliques
In my article on trusting expert consensus, I talked about the value of having hard data on the opinions of experts in a given field. The unspoken subtext was that you should be careful of claims of expert consensus that don't have hard data to back them up. I've joked that when a philosopher says there's a philosophical consensus, what he really means is "I talked to a few of my friends about this and they agreed with me."
What's often really happening, though (at least in philosophy) is that the "consensus" really reflects the opinions of a particular academic clique. A sub-group of experts in the field spend a disproportionate amount of time talking to each other, and end up convincing themselves they represent the consensus of the entire profession. A rather conspicuous example of this is what I've called the Plantinga clique on my own blog—theistic philosophers who've convinced themselves that the opinions of Alvin Plantinga represent the consensus of philosophy.
But it isn't just theistic philosophers who do this. When I was in school, it was still possible to hear fans of Quine claim that everyone knew Quine had refuted the analytic-synthetic distinction. Post-PhilPapers survey, hopefully people have stopped claiming this. And one time, I heard a philosophy blogger berating scientists for being ignorant of the findings in philosophy that all philosophers agree on. I asked him for examples of claims that all philosophers agree on, and when he gave some, I responded with examples of philosophers who rejected them. "Ah," he said, "but they don't count. Let me tell you whose opinions matter..." (I'm paraphrasing, but that was what it amounted to.)
I strongly suspect this happens in other disciplines: supposed "consensuses of experts" are really just the opinions of one clique within a discipline. Thus, I tend to approach claims of consensus in any discipline with skepticism when they're not backed up by hard data. But I don't actually know of verifiable examples of this problem outside of philosophy. Have people with backgrounds in other disciplines noticed things like this?
Yes, Virginia, You Can Be 99.99% (Or More!) Certain That 53 Is Prime
TL;DR: though you can't be 100% certain of anything, a lot of the people who go around talking about how you can't be 100% certain of anything would be surprised at how often you can be 99.99% certain. Indeed, we're often justified in assigning odds ratios well in excess of a million to one to certain claims. Realizing this is important for avoiding certain rookie Bayesian mistakes, as well as for thinking about existential risk.
53 is prime. I'm very confident of this. 99.99% confident, at the very least. How can I be so confident? Because of the following argument:
If a number is composite, it must have a prime factor no greater than its square root. Because 53 is less than 64, sqrt(53) is less than 8. So, to find out if 53 is prime or not, we only need to check if it can be divided by primes less than 8 (i.e. 2, 3, 5, and 7). 53's last digit is odd, so it's not divisible by 2. 53's last digit is neither 0 nor 5, so it's not divisible by 5. The nearest multiples of 3 are 51 (=17x3) and 54, so 53 is not divisible by 3. The nearest multiples of 7 are 49 (=7^2) and 56, so 53 is not divisible by 7. Therefore, 53 is prime.
(My confidence in this argument is helped by the fact that I was good at math in high school. Your confidence in your math abilities may vary.)
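For readers who'd rather see the check run mechanically, here's a minimal Python sketch of the same trial-division argument (the is_prime function is my own illustration, not anything from the original post):

```python
def is_prime(n: int) -> bool:
    # A composite n must have a prime factor no greater than sqrt(n),
    # so trial division only needs to test divisors d with d * d <= n.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# For 53 this tests d = 2 through 7, mirroring the hand check against
# 2, 3, 5, and 7 (4 and 6 get tested too, but can't divide 53 if 2 doesn't).
print(is_prime(53))  # True
```

The code just mechanizes the argument above; the confidence question in what follows is about the hand-run version.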
I mention this because in his post Infinite Certainty, Eliezer writes:
Suppose you say that you're 99.99% confident that 2 + 2 = 4. Then you have just asserted that you could make 10,000 independent statements, in which you repose equal confidence, and be wrong, on average, around once. Maybe for 2 + 2 = 4 this extraordinary degree of confidence would be possible: "2 + 2 = 4" is extremely simple, and mathematical as well as empirical, and widely believed socially (not with passionate affirmation but just quietly taken for granted). So maybe you really could get up to 99.99% confidence on this one.
I don't think you could get up to 99.99% confidence for assertions like "53 is a prime number". Yes, it seems likely, but by the time you tried to set up protocols that would let you assert 10,000 independent statements of this sort—that is, not just a set of statements about prime numbers, but a new protocol each time—you would fail more than once. Peter de Blanc has an amusing anecdote on this point, which he is welcome to retell in the comments.
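As an aside, the arithmetic behind "10,000 independent statements, wrong around once" is easy to spell out; here's a quick sketch (mine, not Eliezer's):

```python
# 10,000 independent statements, each held at 99.99% confidence.
n = 10_000
p_wrong = 1 - 0.9999

expected_errors = n * p_wrong            # about 1: wrong "around once" on average
p_at_least_one = 1 - (1 - p_wrong) ** n  # ~0.632: chance of at least one error

print(round(expected_errors, 3), round(p_at_least_one, 3))
```

The dispute that follows isn't with this arithmetic, but with whether statements like "53 is prime" really cap out below that level of confidence.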
I think this argument that you can't be 99.99% certain that 53 is prime is fallacious. Stuart Armstrong explains why in the comments.
Is the orthogonality thesis at odds with moral realism?
Continuing my quest to untangle people's confusions about Eliezer's metaethics... I've started to wonder if maybe some people have the intuition that the orthogonality thesis is at odds with moral realism.
I personally have a very hard time seeing why anyone would think that, perhaps in part because of my experience in philosophy of religion. Theistic apologists would love to be able to say, "moral realism, therefore a sufficiently intelligent being would also be good." It would help patch some obvious holes in their arguments and help them respond to things like Stephen Law's Evil God Challenge. But they mostly don't even try to argue that, for whatever reason.
You did see philosophers claiming things like that back in the bad old days before Kant, which raises the question of what's changed. I suspect the reason is fairly mundane, though: before Kant (roughly), it was not only dangerous to be an atheist, it was dangerous to question whether the existence of God could be proven through reason (because doing so would get you suspected of being an atheist). It was even dangerous to advocate philosophical views that might possibly undermine the standard arguments for the existence of God. That guaranteed that philosophers could use whatever half-baked premises they wanted in constructing arguments for the existence of God, with little fear of being contradicted.
Besides, even if you think an all-knowing being would also necessarily be perfectly good, it still seems perfectly possible to have an otherwise all-knowing being with a horrible blind spot regarding morality.
On the other hand, in the comments of a post on the orthogonality thesis, Stuart Armstrong mentions that:
I've read the various papers [by people who reject the orthogonality thesis], and they all orbit around an implicit and often unstated moral realism. I've also debated philosophers on this, and the same issue rears its head - I can counter their arguments, but their opinions don't shift. There is an implicit moral realism that does not make any sense to me, and the more I analyse it, the less sense it makes, and the less convincing it becomes. Every time a philosopher has encouraged me to read a particular work, it's made me find their moral realism less likely, because the arguments are always weak.
This is not super-enlightening, partly because Stuart is talking about people whose views he admits he doesn't understand... but on the other hand, maybe Stuart agrees that there is some kind of conflict there, since he seems to imply that he himself rejects moral realism.
I realize I'm struggling a bit to guess what people could be thinking here, but I suspect some people are thinking it, so... anyone?
No Universally Compelling Arguments in Math or Science
Last week, I started a thread on the widespread sentiment that people don't understand the metaethics sequence. One of the things that surprised me most in the thread was this exchange:
Commenter: "I happen to (mostly) agree that there aren't universally compelling arguments, but I still wish there were. The metaethics sequence failed to talk me out of valuing this."
Me: "But you realize that Eliezer is arguing that there aren't universally compelling arguments in any domain, including mathematics or science? So if that doesn't threaten the objectivity of mathematics or science, why should that threaten the objectivity of morality?"
Commenter: "Waah? Of course there are universally compelling arguments in math and science."
Now, I realize this is just one commenter. But the most-upvoted comment in the thread also perceived "no universally compelling arguments" as a major source of confusion, suggesting that it was perceived as conflicting with morality not being arbitrary. And today, someone mentioned having "no universally compelling arguments" cited at them as a decisive refutation of moral realism.
After the exchange quoted above, I went back and read the original No Universally Compelling Arguments post, and realized that while it had been obvious to me when I read it that Eliezer meant it to apply to everything, math and science included, it was rather short on concrete examples, perhaps in violation of Eliezer's own advice. The concrete examples can be found in the sequences, though... just not in that particular post.
Lone Genius Bias and Returns on Additional Researchers
One thing that most puzzles me about Eliezer's writings on AI is his apparent belief that a small organization like MIRI is likely to be able to beat larger organizations like Google or the US Department of Defense to building human-level AI. In fact, he seems to believe such larger organizations may have no advantage at all over a smaller one, and perhaps will even be at a disadvantage. In his 2011 debate with Robin Hanson, he said:
As far as I can tell what happens when the government tries to develop AI is nothing. But that could just be an artifact of our local technological level and it might change over the next few decades. To me it seems like a deeply confusing issue whose answer is probably not very complicated in an absolute sense. Like we know why it’s difficult to build a star. You’ve got to gather a very large amount of interstellar hydrogen in one place. So we understand what sort of labor goes into a star and we know why a star is difficult to build. When it comes to building a mind, we don’t know how to do it so it seems very hard. We like query our brains to say “map us a strategy to build this thing” and it returns null so it feels like it’s a very difficult problem. But in point of fact we don’t actually know that the problem is difficult apart from being confusing. We understand the star-building problem so we know it’s difficult. This one we don’t know how difficult it’s going to be after it’s no longer confusing.
So to me the AI problem looks like a—it looks to me more like the sort of thing that the problem is finding bright enough researchers, bringing them together, letting them work on that problem instead of demanding that they work on something where they’re going to produce a progress report in two years which will validate the person who approved the grant and advance their career. And so the government has historically been tremendously bad at producing basic research progress in AI, in part because the most senior people in AI are often people who got to be very senior by having failed to build it for the longest period of time. (This is not a universal statement. I’ve met smart senior people in AI.)
But nonetheless, basically I’m not very afraid of the government because I don’t think it’s a throw warm bodies at the problem and I don’t think it’s a throw warm computers at the problem. I think it’s a good methodology, good people selection, letting them do sufficiently blue sky stuff, and so far historically the government has been tremendously bad at producing that kind of progress. (When they have a great big project to try to build something it doesn’t work. When they fund long-term research it works.)
I admit, I don't feel like I fully grasp all the reasons for the disagreement between Eliezer and myself on this issue. Some of the disagreement, I suspect, comes from slightly different views on the nature of intelligence, though I'm having trouble pinpointing what those differences might be. But some of the difference, I think, comes from the fact that I've become convinced humans suffer from a Lone Genius Bias: a tendency to over-attribute scientific and technological progress to the efforts of lone geniuses.
Disclaimer: My understanding of Luke's current strategy for MIRI is that it does not hinge on whether MIRI itself eventually builds AI. It seems to me that as long as MIRI keeps publishing research that could potentially help other people build FAI, MIRI is doing important work. Therefore, I wouldn't advocate anything in this post being taken as a reason not to donate to MIRI. I've donated recently, and will probably [edit: see below] continue to do so in the future.