On Walmart, And Who Bears Responsibility For the Poor

13 ChrisHallquist 27 November 2013 05:08AM

Note: Originally posted in Discussion, edited to take comments there into account.


Yes, politics, boo hiss. In my defense, the topic of this post cuts across usual tribal affiliations (I write it as a liberal criticizing other liberals), and has a couple strong tie-ins with main LessWrong topics:

  • It's a tidy example of a failure to apply consequentialist / effective altruist-type reasoning. And while it's probably true that the people I'm critiquing aren't consequentialists, it's a case where failing to look at the consequences leads people to say some particularly silly things.
  • I think there's a good chance this is a political issue that will become a lot more important as more and more jobs are replaced by automation. (If the previous sentence sounds obviously stupid to you, the best I can do without writing an entire post on that is vaguely gesturing at gwern on neo-luddism, though I don't agree with all of it.)

The issue is this: recently, I've seen a meme going around to the effect that companies like Walmart that have a large number of employees on government benefits are the "real welfare queens" or somesuch, with the implied message that all companies have a moral obligation to pay their employees enough that they don't need government benefits. (I mention Walmart because it's the most frequently named villain in this meme, but others, like McDonald's, get mentioned too.)

My initial awareness of this meme came from it being all over my Facebook feed, but when I went to Google to track down examples, I found it coming out of the mouths of some fairly prominent congresscritters. For example Alan Grayson:

In state after state, the largest group of Medicaid recipients is Walmart employees. I'm sure that the same thing is true of food stamp recipients. Each Walmart "associate" costs the taxpayers an average of more than $1,000 in public assistance.

Or Bernie Sanders:

The Walmart family... here's an amazing story. The Walmart family is the wealthiest family in this country, worth about $100 billion, owning more wealth than the bottom 40 percent of the American people, and yet here's the incredible fact.

Because their wages and benefits are so low, they are the major welfare recipients in America, because many, many of their workers depend on Medicaid, depend on food stamps, depend on government subsidies for housing. So, if the minimum wage went up for Walmart, it would be a real cut in their profits, but it would be a real savings, by the way, for taxpayers, who would not have to subsidize Walmart employees because of their low wages.

Now here's why this is weird: consider Grayson's claim that each Walmart employee costs the taxpayers on average $1,000. In what sense is that true? If Walmart fired those employees, it wouldn't save the taxpayers money: if anything, it would increase the strain on public services. Conversely, it's unlikely that cutting benefits would force Walmart to pay higher wages: if anything, it would make people more desperate and willing to work for low wages. (Cf. this excellent critique of the anti-Walmart meme.)

Or consider Sanders' claim that it would be better to raise the minimum wage and spend less on government benefits. He emphasizes that Walmart could take a hit in profits to pay its employees more. It's unclear to what degree that's true (see again the previous link), and unclear whether there's a practical way for the government to force Walmart to do it, but even setting those issues aside, it's worth pointing out that you could also just raise taxes on rich people generally to increase benefits for low-wage workers. The idea seems to be that, morally, Walmart employees should be primarily Walmart's responsibility, and not so much the responsibility of (the more well-off segment of) the population in general.

But the idea that employing someone gives you a general responsibility for their welfare (beyond, say, not tricking them into working for less pay or under worse conditions than you initially promised) is also very odd. It suggests that if you want to be virtuous, you should avoid hiring people, so as to keep your hands clean and avoid the moral contagion that comes with employing low-wage workers. Yet such a policy doesn't actually help the people who might want jobs from you. This is not to deny that, plausibly, wealthy owners of Walmart stock have a moral responsibility to the poor. What's implausible is that non-owners of Walmart stock have significantly less responsibility to the poor.

This meme also worries me because I lean towards thinking that the minimum wage isn't a terrible policy but we'd be better off replacing it with guaranteed basic income (or an otherwise more lavish welfare state). And guaranteed basic income could be a really important policy to have as more and more jobs are replaced by automation (again see gwern if that seems crazy to you). I worry that this anti-Walmart meme could lead to an odd left-wing resistance to GBI/more lavish welfare state, since the policy would be branded as a subsidy to Walmart.

Wait vs Interrupt Culture

71 Benquo 27 November 2013 03:38PM

At the recent CFAR Workshop in NY, someone mentioned that they were uncomfortable with pauses in conversation, and that got me thinking about different conversational styles.

Growing up with friends who were disproportionately male and disproportionately nerdy, I learned that it was a normal thing to interrupt people. If someone said something you had to respond to, you’d just start responding. Didn’t matter if it “interrupted” further words – if they thought you needed to hear those words before responding, they’d interrupt right back.

Occasionally some weird person would be offended when I interrupted, but I figured this was some bizarre fancypants rule from before people had places to go and people to see. Or just something for people with especially thin skins or delicate temperaments, looking for offense and aggression in every action.

Then I went to St. John’s College – the talking school (among other things). In Seminar (and sometimes in Tutorials) there was a totally different conversational norm. People were always expected to wait until whoever was talking was done. People would apologize not just for interrupting someone who was already talking, but for accidentally saying something when someone else looked like they were about to speak. This seemed totally crazy. Some people would just blab on unchecked, and others didn’t get a chance to talk at all. Some people would ignore the norm and talk over others, and nobody interrupted them back to shoot them down.

But then a few interesting things happened:

1) The tutors were able to moderate the discussions, gently. They wouldn’t actually scold anyone for interrupting, but they would say something like, “That’s interesting, but I think Jane was still talking,” subtly pointing out a violation of the norm.

2) People started saying less at a time.

#1 is pretty obvious – with no enforcement of the social norm, a no-interruptions norm collapses pretty quickly. But #2 is actually really interesting. If talking at all is an implied claim that what you’re saying is the most important thing that can be said, then polite people keep it short.

With 15-20 people in a seminar, this also meant that people rarely tried to force the conversation in a certain direction. When you’re done talking, the conversation is out of your hands. This can be frustrating at first, but with time, you learn to trust not your fellow conversationalists individually, but the conversation itself, to go where it needs to. If you haven’t said enough, then you trust that someone will ask you a question, and you’ll say more.

When people are interrupting each other – when they’re constantly tugging the conversation back and forth between their preferred directions – then the conversation itself is just a battle of wills. But when people just put in one thing at a time, and trust their fellows to only say things that relate to the thing that came right before – at least, until there’s a very long pause – then you start to see genuine collaboration.

And when a lull in the conversation is treated as an opportunity to think about the last thing said, rather than an opportunity to jump in with the thing you were holding onto from 15 minutes ago because you couldn’t just interrupt and say it – then you also open yourself up to being genuinely surprised, to seeing the conversation go somewhere that no one in the room would have predicted, to introduce ideas that no one brought with them when they sat down at the table.

By the time I graduated, I’d internalized this norm, and the rest of the world seemed rude to me for a few months. Not just because of the interrupting – but more because I’d say one thing, politely pause, and then people would assume I was done and start explaining why I was wrong – without asking any questions! Eventually, I realized that I’d been perfectly comfortable with these sorts of interactions before college. I just needed to code-switch! Some people are more comfortable with a culture of interrupting when you want to, and accepting interruptions. Others are more comfortable with a culture of waiting their turn, and courteously saying only one thing at a time, not trying to cram in a whole bunch of arguments for their thesis.

Now, I’ve praised the virtues of wait culture because I think it’s undervalued, but there’s plenty to say for interrupt culture as well. For one, it’s more robust in “unwalled” circumstances. If there’s no one around to enforce wait culture norms, then a few jerks can dominate the discussion, silencing everyone else. But someone who doesn’t follow “interrupt” norms only silences themselves.

Second, it’s faster and easier to calibrate how much someone else feels the need to talk, when they’re willing to interrupt you. It takes willpower to stop talking when you’re not sure you were perfectly clear, and to trust others to pick up the slack. It’s much easier to keep going until they stop you.

So if you’re only used to one style, see if you can try out the other somewhere. Or at least pay attention and see whether you’re talking to someone who follows the other norm. And don’t assume that you know which norm is the “right” one; try it the “wrong” way and maybe you’ll learn something.

Cross-posted at my personal blog.

The sun reflected off things

-8 polymathwannabe 22 November 2013 02:59PM

An insight I had a while ago:

When I'm out in the daylight, and I see a tree, what I actually see is not the tree itself. What I see is the sun reflected off the tree. Likewise with rocks, grass and birds: it's always the sun I'm seeing reflected off them. This is possible because the sun emits all visible colors (or rather, our eyes evolved to perceive almost all EM frequencies that almost all solid matter reflects). I'm not seeing the things. I'm seeing the light. We live surrounded by the sun.

Is this too obvious? Inconsequential? Redundant?

No Universally Compelling Arguments in Math or Science

30 ChrisHallquist 05 November 2013 03:32AM

Last week, I started a thread on the widespread sentiment that people don't understand the metaethics sequence. One of the things that surprised me most in the thread was this exchange:

Commenter: "I happen to (mostly) agree that there aren't universally compelling arguments, but I still wish there were. The metaethics sequence failed to talk me out of valuing this."

Me: "But you realize that Eliezer is arguing that there aren't universally compelling arguments in any domain, including mathematics or science? So if that doesn't threaten the objectivity of mathematics or science, why should that threaten the objectivity of morality?"

Commenter: "Waah? Of course there are universally compelling arguments in math and science."

Now, I realize this is just one commenter. But the most-upvoted comment in the thread also perceived "no universally compelling arguments" as a major source of confusion, suggesting that it was perceived as conflicting with morality not being arbitrary. And today, someone mentioned having "no universally compelling arguments" cited at them as a decisive refutation of moral realism.

After the exchange quoted above, I went back and read the original No Universally Compelling Arguments post, and realized that while it had been obvious to me when I read it that Eliezer meant it to apply to everything, math and science included, it was rather short on concrete examples, perhaps in violation of Eliezer's own advice. The concrete examples can be found in the sequences, though... just not in that particular post.


What makes us think _any_ of our terminal values aren't based on a misunderstanding of reality?

17 bokov 25 September 2013 11:09PM

Let's say Bob's terminal value is to travel back in time and ride a dinosaur.

It is instrumentally rational for Bob to study physics so he can learn how to build a time machine. As he learns more physics, Bob realizes that his terminal value is not only utterly impossible but meaningless. By definition, someone in Bob's past riding a dinosaur is not a future evolution of the present Bob.

There are a number of ways to create the subjective experience of having gone into the past and ridden a dinosaur. But to Bob, it's not the same because he wanted both the subjective experience and the knowledge that it corresponded to objective fact. Without the latter, he might as well have just watched a movie or played a video game.

So if we took the original, innocent-of-physics Bob and somehow calculated his coherent extrapolated volition, we would end up with a Bob who has given up on time travel. The original Bob would not want to be this Bob.

But, how do we know that _anything_ we value won't similarly dissolve under sufficiently thorough deconstruction? Let's suppose for a minute that all "human values" are dangling units; that everything we want is as possible and makes as much sense as wanting to hear the sound of blue or taste the flavor of a prime number. What is the rational course of action in such a situation?

PS: If your response resembles "keep attempting to XXX anyway", please explain what privileges XXX over any number of other alternatives other than your current preference. Are you using some kind of pre-commitment strategy to a subset of your current goals? Do you now wish you had used the same strategy to precommit to goals you had when you were a toddler?

Torture vs Dust Specks Yet Again

-2 sentientplatypus 20 August 2013 12:06PM

The first time I read Torture vs. Specks about a year ago I didn't read a single comment because I assumed the article was making a point that simply multiplying can sometimes get you the wrong answer to a problem. I seem to have had a different "obvious answer" in mind.

And don't get me wrong, I generally agree with the idea that math can do better than moral intuition in deciding questions of ethics. Take this example from Eliezer’s post Circular Altruism which made me realize that I had assumed wrong:

Suppose that a disease, or a monster, or a war, or something, is killing people. And suppose you only have enough resources to implement one of the following two options:
1. Save 400 lives, with certainty.
2. Save 500 lives, with 90% probability; save no lives, 10% probability.

I agree completely that you pick number 2. For me that was just manifestly obvious: of course the math trumps the feeling that you shouldn't gamble with people's lives… but then we get to torture vs. dust specks, and that just did not compute. So I've read nearly every argument I could find in favor of torture (there are a great many, and I might have missed something critical), but... while I totally understand the argument (I think), I'm still horrified that people would choose torture over dust specks.
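The arithmetic behind picking number 2 is just expected value; a quick sketch in Python, using the numbers from Eliezer's example above:

```python
# Expected lives saved under each option from the Circular Altruism example.
certain_option = 400                  # option 1: 400 lives, with certainty
gamble_option = 0.9 * 500 + 0.1 * 0  # option 2: 500 lives at 90%, 0 at 10%

# Option 2 wins in expectation: 450 > 400.
print(certain_option, gamble_option)
```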

I feel that the way math predominates over intuition begins to fall apart when the problem compares trivial individual suffering with massive individual suffering, in a way very much analogous to the way Pascal's Mugging stops working when you make the credibility really low but the threat really high. Like this. Except I find the answer to torture vs. dust specks to be much easier...

 

Let me give some examples to illustrate my point.

Can you imagine Harry killing Hermione because Voldemort threatened to plague all sentient life with one barely noticed dust speck each day for the rest of time? Can you imagine killing your own best friend/significant other/loved one to stop the powers of the Matrix from hitting 3^^^3 sentient beings with nearly inconsequential dust specks? Of course not. No. Snap decision.
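For a sense of scale, 3^^^3 is Knuth's up-arrow notation. A direct implementation, purely illustrative since 3^^^3 itself could never actually be computed, shows how fast the notation explodes:

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ^(n) b: one arrow (n=1) is plain exponentiation,
    and each additional arrow iterates the previous operation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 2, 3))  # 3^^3 = 3**(3**3) = 7625597484987
# 3^^^3 = up_arrow(3, 3, 3) is a power tower of 3s roughly 7.6 trillion
# levels tall: far beyond anything computable, which is the point.
```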

Eliezer, would you seriously, given the choice by Alpha, the alien superintelligence that always carries out its threats, give up all your work, and horribly torture some innocent person, all day for fifty years, in the face of the threat of 3^^^3 insignificant dust specks barely inconveniencing sentient beings? Or be tortured for fifty years yourself to avoid the dust specks?

I realize that this is much more personally specific than the original question: but it is someone's loved one, someone's life. And if you wouldn't make the sacrifice what right do you have to say someone else should make it? I feel as though if you want to argue that torture for fifty years is better than 3^^^3 barely noticeable inconveniences you had better well be willing to make that sacrifice yourself.

And I can’t conceive of anyone actually sacrificing their life, or themselves to save the world from dust specks. Maybe I'm committing the typical mind fallacy in believing that no one is that ridiculously altruistic, but does anyone want an Artificial Intelligence that will potentially sacrifice them if it will deal with the universe’s dust speck problem or some equally widespread and trivial equivalent? I most certainly object to the creation of that AI. An AI that sacrifices me to save two others - I wouldn't like that, certainly, but I still think the AI should probably do it if it thinks their lives are of more value. But dust specks on the other hand....

This example made me immediately think that some sort of rule is needed to limit morality coming from math in the development of any AI program. When the problem reaches a certain low level of suffering and is multiplied by an unreasonably large number, it needs to take some kind of huge penalty, because otherwise an AI would find it vastly preferable that the whole of Earth be blown up than that 3^^^3 people suffer a mild slap to the face.

And really, I don’t think we want to create an Artificial Intelligence that would do that.

I’m mainly just concerned that some factor be incorporated into the design of any Artificial Intelligence that prevents it from murdering me and others for trivial but widespread causes. Because that just sounds like a sci-fi book of how superintelligence could go horribly wrong.

One way to manipulate your level of abstraction related to a task

26 Andy_McKenzie 19 August 2013 05:47AM

In construal level theory, ideas can be classified along a spectrum from concrete ("near" in Robin Hanson's terminology) to abstract ("far"). As a summary, here is the abstract from a 2010 review (pdf): 

People are capable of thinking about the future, the past, remote locations, another person’s perspective, and counterfactual alternatives. Without denying the uniqueness of each process, it is proposed that they constitute different forms of traversing psychological distance. Psychological distance is egocentric: Its reference point is the self in the here and now, and the different ways in which an object might be removed from that point—in time, in space, in social distance, and in hypotheticality— constitute different distance dimensions. Transcending the self in the here and now entails mental construal, and the farther removed an object is from direct experience, the higher (more abstract) the level of construal of that object. Supporting this analysis, research shows (a) that the various distances are cognitively related to each other, (b) that they similarly influence and are influenced by level of mental construal, and (c) that they similarly affect prediction, preference, and action.

Now, what if you want to think about some thing in a more or less near or far way? Here's one well-studied strategy to do so (e.g., see pdf here).

To think about a task in more concrete terms, ask yourself how you would do it. Then, however you answer that question, ask yourself how would you do that. Do this two (or so) more times, and you will be thinking about that task significantly more concretely. 

To think about a task in more abstract terms, ask yourself why you would do it. Then ask yourself why you would want that 3 (or so) more times. 

An excerpt from the 2007 study in the second link to give an example of how this would work: 

Suppose you indicate “taking a vacation” as one of your goals. Please write the goal in the uppermost square. Then, think why you would like to go on vacation, and write your answer in the square underneath. Suppose that you write “in order to rest.” Now, please think why you would like to rest, and write your answer in the third square. Suppose that you write “in order to renew your energy.” Finally, write in the last square why you would like to renew your energy.
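The why/how laddering procedure above can be sketched as a small helper. The `why` answers below are hypothetical stand-ins for whatever you'd actually write in the squares, mirroring the vacation example:

```python
def ladder(goal, next_step, steps=3):
    """Construal-level laddering: repeatedly apply `next_step` (an
    'ask why' function to go more abstract, or an 'ask how' function
    to go more concrete) and collect the resulting chain."""
    chain = [goal]
    for _ in range(steps):
        chain.append(next_step(chain[-1]))
    return chain

# Hypothetical 'why' answers, as in the vacation example above.
why = {"take a vacation": "rest",
       "rest": "renew my energy",
       "renew my energy": "stay healthy"}.get

print(ladder("take a vacation", why))
# ['take a vacation', 'rest', 'renew my energy', 'stay healthy']
```

An "ask how" function plugged into the same helper would walk the ladder in the concrete direction instead.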

Humans are utility monsters

67 PhilGoetz 16 August 2013 09:05PM

When someone complains that utilitarianism1 leads to the dust speck paradox or the trolley-car problem, I tell them that's a feature, not a bug. I'm not ready to say that respecting the utility monster is also a feature of utilitarianism, but it is what most people everywhere have always done. A model that doesn't allow for utility monsters can't model human behavior, and certainly shouldn't provoke indignant responses from philosophers who keep right on respecting their own utility monsters.


Interesting new Pew Research study on American opinions about radical life extension

10 [deleted] 09 August 2013 05:26PM

This new study by Pew Research on American opinions about radical life extension turned up some interesting results:

Asked whether they, personally, would choose to undergo medical treatments to slow the aging process and live to be 120 or more, a majority of U.S. adults (56%) say no. But roughly two-thirds (68%) think that most other people would.

Asked about the consequences for society if new medical treatments could slow the aging process and allow the average person to live decades longer, to at least 120 years old, about half of U.S. adults (51%) say the treatments would be a bad thing for society, while 41% say they would be a good thing.

An overwhelming majority believes that everyone should be able to get these treatments if they want them (79%). But two-thirds think that in practice, only wealthy people would have access to the treatments... About two-thirds agree that longer life expectancies would strain our natural resources and that medical scientists would offer the treatment before they fully understood how it affects people's health. And about six-in-ten (58%) say these treatments would be fundamentally unnatural.

About two-thirds of adults (63%) say medical advances that prolong life are generally good because they allow people to live longer, while about three-in-ten (32%) say medical advances are bad because they interfere with the natural cycle of life.

The survey contains a number of null findings that may be surprising. It turns out, for example, that many standard measures of religious beliefs and practices, including belief in God and frequency of attendance at religious services, are related to views on radical life extension only weakly, if at all. Nor is there a strong relationship in the survey between the gender, education or political party identification of respondents and what they say about longer human life spans... At least one question that deals directly with death, however, is correlated with views on radical life extension. People who oppose the death penalty are more inclined to say that longer life spans would be good for society.

I also find the demographic splits on page 3 to be surprising. On the question of whether treatments to extend life by decades would be a good thing for society, whites are significantly less likely to agree: 36% of whites agree whereas 48% of Hispanics and 56% of blacks do. There is a negative correlation with age (48% of adults 18-29, 46% of adults 30-49, 37% of adults 50-64, 31% of adults 65 and older) and with income (47% of those earning 30k and less, 42% of those earning from 30k-75k, and 39% of those earning 75k+). The income result in particular surprises me, as my intuition was that people with a higher quality of life would be significantly more pro-life extension. 

 

The Power of Pomodoros

48 elharo 14 May 2013 10:36AM

Until recently, I hadn't paid much attention to Pomodoro, though I'd heard of it for a few years. "Uncle Bob" Martin seemed to like it, and he's usually worth paying attention to in such matters. However, it mostly seemed to me like a way of organizing a variety of tasks and avoiding procrastination, and I've never had much trouble with that.

However after the January CFAR workshop suggested it in passing, I decided to give it a try; and I realized I had it all wrong. Pomodoros aren't (for me) a means of avoiding procrastination or dividing time among projects. They're a way of blasting through Ugh fields.

The Pomodoro technique is really simple compared to more involved systems like Getting Things Done (GTD). Here it is:

  1. Set a timer for 25 minutes
  2. Work on one thing for that 25 minutes, nothing else. No email, no phone calls, no snack breaks, no Twitter, no IM, etc.
  3. Take a five minute break
  4. Pick a new project, or the same project, if you prefer.
  5. Repeat

That's pretty much it. You can buy a book or a special timer for this; but there's really nothing else to it. It takes longer to explain the name than the technique. (When Francesco Cirillo invented this technique in the 1980s, he was using an Italian kitchen timer shaped like a tomato. Pomodoro is Italian for tomato.)
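The five steps above are simple enough to encode directly. Here's a minimal sketch (the task names are just placeholders):

```python
def pomodoro_schedule(tasks, work_minutes=25, break_minutes=5):
    """Expand a list of tasks into alternating work/break blocks,
    one Pomodoro per task."""
    schedule = []
    for task in tasks:
        # Steps 1-2: one task, one timer, nothing else for 25 minutes.
        schedule.append(("work", task, work_minutes))
        # Step 3: a five-minute break before picking the next task.
        schedule.append(("break", None, break_minutes))
    return schedule

print(pomodoro_schedule(["clean desk", "draft chapter"]))
```

In practice you'd pair this with an actual timer (a kitchen timer, a phone alarm, or `time.sleep`); the structure is the whole technique: one task per block, no exceptions.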

I got interested in Pomodoro when I realized I could use it to clean my office/desk/apartment. David Allen's GTD system appealed to me, but I could never maintain it, and the 2+ days it needed to get all the way to a clean desk was always a big hurdle to vault. However, spending 25 minutes at a time, followed by a break and another project seemed a lot more manageable.

I tried it, and it worked. My desk stack quickly shrank, not to empty, but at least to a point where an accidental elbow swing no longer launched avalanches of paper onto the floor as I typed.

So I decided to try Pomodoro on my upcoming book. The publisher was using a new authoring system and template that I was unfamiliar with. There were a dozen little details to figure out about the new system--how to check out files in git, how to create a section break, whether to use hard or soft wrapping, etc.--and I just worked through them one by one. 25 minutes later I'd knocked them all out, and was familiar enough with the new system to begin writing in earnest. I didn't know everything about the software, but I knew enough that it was no longer aversive. Next I spent 25 minutes on a chapter that was challenging me, and Pomodoro got me to the point where I was in the flow.

That's when I realized that Pomodoro is not a system for organizing time or avoiding procrastination (at least not for me). What it is, is an incredibly effective way to break through tasks that look too hard: code you're not familiar with, an office that's too cluttered, a chapter you don't know how to begin.

The key is that a Pomodoro forces you to focus on the unfamiliar, difficult, aversive task for 25 minutes. 25 minutes of focused attention without distractions from other, easier tasks is enough to figure out many complex situations or at least get far enough along that the next step is obvious.

Here's another example. I had a task to design a GWT widget and plug it into an existing application, and I have never done any work with GWT. Every time I looked at the frontend application code, it seemed like a big mess of confused, convoluted, dependency injected, late bound, spooky-action-at-a-distance spaghetti. Now doubtless there wasn't anything fundamentally more difficult about this code than the server side code I have been writing; and if my career had taken just a slightly different path over the last six years, frontend GWT code might be my bread and butter. But my career didn't take that path, and this code was a big Ugh field for me. So I set the Pomodoro timer on my smartphone and started working. Did I finish? No, but I got started, made progress, and proved to myself that GWT wasn't all that challenging after all. The widget is still difficult enough and GWT complex enough that I may need several more Pomodoros to finish the job, but I did get way further and learn more in 25 minutes of intense focus than I would have done in a day or even a week without it.

I don't use the Pomodoro technique exclusively. Once I get going on a project or a chapter, I don't need the help; and five minute breaks once I'm in the flow just distract me. So some days I just do 1 or 2 or 0 Pomodoros, whatever it takes to get me rolling again and past the blocker.

I also don't know if this works for genuinely difficult problems. For instance, I don't know if it will help with a difficult mathematical proof I've been struggling with for months (though I intend to find out). But for subjects that I know I can do, but can't quite figure out how to do, or where to start, the power of focusing 25 minutes of real attention on just that one problem is astonishing.
