
Comment author: fubarobfusco 16 January 2017 01:46:08AM *  3 points [-]

It's been commented on before, once or twice!

Hitherto [1848] it is questionable if all the mechanical inventions yet made have lightened the day's toil of any human being. They have enabled a greater population to live the same life of drudgery and imprisonment, and an increased number of manufacturers and others to make fortunes. They have increased the comforts of the middle classes. But they have not yet begun to effect those great changes in human destiny, which it is in their nature and in their futurity to accomplish. Only when, in addition to just institutions, the increase of mankind shall be under the deliberate guidance of judicious foresight, can the conquests made from the powers of nature by the intellect and energy of scientific discoverers become the common property of the species, and the means of improving and elevating the universal lot.

— John Stuart Mill, Principles of Political Economy

Like every other increase in the productiveness of labour, machinery is intended to cheapen commodities, and, by shortening that portion of the working-day in which the labourer works for himself, to lengthen the other portion that he gives, without an equivalent, to the capitalist.

— Karl Marx, Capital

Comment author: Gram_Stone 16 January 2017 01:51:48AM 0 points [-]

Not sure if this should make me feel better or worse.

Comment author: TiffanyAching 15 January 2017 06:43:01PM 2 points [-]

'Twas ever thus. Bob at WidgetCorp says to his boss "You are demanding an unfair amount of labor from me for the wages you offer. You are demanding that I make 20 widgets a day. 15 widgets - or a raise - would be more fair." Management says "don't fret Bob! We've just installed the new Widgetron™. Making widgets will take half the time! By the way... we now need you to make 40 widgets a day. Hop to it."

I think you're giving the bosses here too much credit - or too little, I guess, depending on how you look at it. The main template describes how people may deceive themselves about their motivations for doing X. But surely in your example the employers never believed that reducing workload and stress were why they were getting the new big-ass printer? They got it because it was an investment in a new piece of equipment that would increase their profits by allowing their workers to make more signs in the same amount of time.

I mean, unless your employers are very different to mine, the "they realize" and "instead" parts of your subtemplate don't apply. There's no instead. They got the printer to maximize profits and then came up with something to tell their workforce about how it would reduce stress and workload. They lied - though the individuals involved might have euphemistically described it as "positive spin" or whatever.

Now, maybe your employers are nicer than mine - maybe your bosses only realized after they bought the BAP that they could use it to increase profits - but in my experience, top brass don't sign off on spending a bunch of money on new technology unless it has been thoroughly demonstrated that the new technology will help the bottom line. I would have said that usually it goes:

Employer realizes that technological artifact X can be used to increase productivity and decides to introduce it. In an attempt to maintain workplace morale, they bullshit their workers about how, as an added benefit, technological artifact X will reduce physical and/or cognitive demands.

Though this strategy can backfire when the workforce realize that T.A. X reduces physical demands while increasing cognitive demands, or reduces cognitive demands while increasing physical demands, or reduces neither, or worst of all, reduces the necessary workforce. Still, by the time they realize this it's often too late to do anything about it and T.A. X is the new normal.

Nobody ever tells Bob that his new target is, say, 35 widgets - an actual decrease in real workload - unless Bob has some leverage, whether it be labor laws or his union or his ability to take his skills and experience across the road to Widgets'R'Us.

Comment author: Gram_Stone 15 January 2017 09:10:38PM 0 points [-]

Yeah, post hoc rationalization or deception makes more sense than what I said.

X Is Not About Y: Technological Improvements and Cognitive-Physical Demands

1 Gram_Stone 15 January 2017 05:49PM

(I, the author, no longer endorse this article. I find it naive in hindsight.)

 

Recall the following template:

In some cases, human beings have evolved in such fashion as to think that they are doing X for prosocial reason Y, but when human beings actually do X, other adaptations execute to promote self-benefiting consequence Z.

I work in the sign industry, and it's worth knowing that the sign industry mostly involves printing images on cast sheets of polyvinyl chloride with adhesive on the back. This allows you to stick a graphic just about anywhere. Good old-fashioned signs are now just a special case of vinyl application where the surface is a quadrilateral.

But sometimes, it seems like you could cut out the vinyl installation process: if you just wanted a solid white sign with some black text, and the substrate you're going to apply the vinyl to is already white, wouldn't it be nice if you could just print some black text directly on the substrate?

That's what a flatbed printer is for, which you can imagine as your standard HP desktop printer at 100x magnification with an unusually long air hockey table where the paper slot should be.

Now, when the management was trying to get the workforce excited about this new technological artifact, they would say things like, "This new artifact will reduce the amount of time that you spend on vinyl application, leaving you less stressed and with a decreased workload."

But when we actually started to use the artifact, our jobs didn't actually become less stressful, and our workloads didn't actually decrease.

I mean, yeah, we could technically produce the same number of signs in less time, but a corollary of this statement is that we could produce more signs in the same amount of time, which is what we actually did.

So, I propose the subtemplate:

Employer proposes the introduction of technological artifact X, ostensibly to reduce physical or cognitive demands, but when the employer actually introduces technological artifact X, they realize it can be used to increase output and do that instead.

I wonder if anyone else has more examples?

Comment author: Jiro 14 January 2017 06:00:31PM 0 points [-]

My point is that using fiction to sneak ideas about the real world past people is a cheat. It is possible to be certain about something fictional in a way in which one cannot be certain about the real world.

Comment author: Gram_Stone 14 January 2017 11:41:37PM 0 points [-]

Stipulation is obviously sometimes a cheat. I would be surprised if it was always one.

Comment author: TiffanyAching 12 January 2017 05:39:48AM 5 points [-]

I like this post. Sneaking "scary" ideas into fiction, where they can be faced in a context that feels safer - that makes a lot of sense to me. And while I think you're right that it's tricky to consciously use the technique on yourself, I've certainly had it happen that way for me accidentally. (Though I think it's worth mentioning that the moment of realization - the moment it hit me that the logical or moral conclusion I had accepted in a fictional context was also valid/applicable in real life - was still sometimes painful or at least jarring.)

You asked about other ways to "reduce the perceived hedonic costs of truthseeking". I have an example of my own that might be relevant, especially to the word "perceived". Have you ever seen that trick where someone pulls a tablecloth off a table quickly and smoothly enough that all the plates and glasses and things stay right where they were?

I was speaking to a friend-of-a-friend to whom I had just been introduced - call her Jenny. In casual conversation, Jenny brought up her belief in crystal healing and asked me directly what I thought of it. Our mutual friend winced in horror because she knows how I feel about woo and anticipated a scathing response, or at least a condescending lecture about evidence-based medicine.

I'm not completely tactless, and Jenny was nice. I didn't want to ruin her evening over some stupid crystals. I had an idea. I said, as near as I can recall, this:

"Oh, yes, I think crystal healing is amazing! Gosh, when you think that just by looking at a little piece of quartz or hematite or topaz and thinking about things like mental clarity or relaxation, we have the power to lower our anxiety levels, lessen our feelings of fatigue, even reduce our own blood pressure - I mean it's such a beautiful example of the power of the human mind, isn't it?"

And more in the same vein. Basically I gushed for five minutes about how cool the placebo effect is (without once using the term "placebo effect") and how cool the natural world is, and how cool it is that we're constantly learning more about things that used to be mysteries, and so on.

My friend was relieved and Jenny was nodding - a little hesitantly, like she was slightly bewildered by something she couldn't quite put her finger on, but she was listening and she wasn't upset or defensive or annoyed and the party proceeded without awkwardness or rancor.

I didn't tell any lies. Crystal healing does work, in the sense that it's better than nothing. Of course almost anything that doesn't do active harm or negate the effects of real treatments works better than nothing - that's the beauty of the placebo. Doesn't really matter if it's administered via sugar pill or amethyst or homeopathic milkshake, if the belief is there (and I've seen some intriguing evidence to suggest that even true belief isn't necessary, by the way - you might only need hope).

See what I mean about the tablecloth trick? I was able to introduce Jenny to a less-wrong way of thinking about crystals without the hedonic cost of totally dismantling her beliefs. Now, I don't think I convinced her that crystals aren't filled with mysterious healing energy, and we never got near the fact that real medicine should work better than a placebo, but it still felt like a win - because I slipped a line of retreat into her head without setting off her intruder-alert. I gave her the plans for a model where her beloved crystals are cool and interesting and not-useless and not-lame that doesn't rely on them being magic. I showed her that you could take away the tablecloth and leave her good china in place.

It's a small example but I think there's an argument for minimizing perceived hedonic cost by demonstrating to someone that the absence of one cherished belief does not necessarily mean that every cherished belief or value that apparently rests upon it must come crashing down. Relinquishing belief in the magic of crystals doesn't mean Jenny has to throw out her collection of pretty rocks. Relinquishing belief in God doesn't mean a life without joy or meaning or domestic felicity and I think that's the kind of thing a lot of people are really afraid of losing, not the abstract idea of God's existence itself. They need to know there's a table under there.

Comment author: Gram_Stone 14 January 2017 06:23:42AM 1 point [-]

(Upvoted.) Just wanted to say, "Welcome to LessWrong."

Comment author: Jiro 12 January 2017 05:19:10AM 2 points [-]

Entertaining such ideas in reality may be subject to the problem that you are a fallible human and incapable of the certainty needed to justify making the choice. It may be that fictional dictatorships could be better than fictional democracies, but in a real-life situation I could never know that the dictatorship is better with enough certainty to be able to choose the dictatorship.

Comment author: Gram_Stone 14 January 2017 06:12:07AM 0 points [-]

I think this is worth pointing out because it would be an easy mistake to use my reasoning to justify dictatorship. I also think this is an example of two ships passing in the night. Eliezer was talking about a meta-level/domain-general ethical injunction. When I was talking to the student, I was talking about how to avoid screwing up the object-level/domain-specific operationalization of the phrase 'good governance'.

My argument was that if you're asking yourself the question, "What does the best government look like?", assuming that that is indeed a right question, then you should be suspicious if you find yourself confidently proposing the answer, "My democracy." The first reason is that 'democracy' can function as a semantic stopsign, which would stop you dead in your tracks if you didn't have the motivation to grill yourself and ask, "Why does the best government look like my democracy?" The second reason is that the complement of the set containing the best government would be much larger than the set containing the best government, so if you use the mediocrity heuristic, then you should conclude that any given government in your hypothesis space is plausibly not the best government. If you consider it highly plausible that your democracy is the end state of political progress, then you're probably underestimating the plausibility of the true hypothesis. And lastly, we hope that we have thereby permitted ourselves to one day generate an answer that we expect to be better than what we have now, but that does not require the seizure of power by any individual or group.
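
To make that mediocrity step concrete, here is a minimal sketch (the number N is mine, purely for illustration): if your hypothesis space contains N candidate governments and you have no evidence that singles any one of them out, the mediocrity heuristic suggests something like a uniform prior:

$$P(\text{the best government} = \text{my democracy}) \approx \frac{1}{N}$$

So for any sizeable hypothesis space, "plausibly not the best" is just the arithmetic of the prior, and confidence much above 1/N has to be bought with evidence rather than assumed.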

If, in the course of your political-philosophical investigations, you find yourself attempting to determine your preference ordering over the governments in your hypothesis space, and, through further argumentation, you come to the separate and additional conclusion that dictatorship is preferable to democracy, then the ethical injunction, "Do not seize power for the good of the tribe," should kick in, because no domain is supposed to be exempt from an ethical injunction. It just so happens that you should also be suspicious of that conclusion on epistemic grounds, because the particular moral error that that particular ethical injunction is intended to prevent may often be caused by an act of self-deception. And if you add a new government to your hypothesis space, and this government somehow doesn't fit into the category 'dictatorship', but also involves the seizure of power for the good of the tribe, then the ethical injunction should kick in then too, and you should once more be suspicious on epistemic grounds as well.

What do you think about all of that?

Comment author: Elo 12 January 2017 10:37:45PM 5 points [-]

Always a guideline. I am still uneasy about the link being here, and would prefer to make it clear, rather than be silent.

Comment author: Gram_Stone 12 January 2017 10:49:57PM 1 point [-]

Thanks for clarifying. It was easy for me to forget that as well as being a moderator, you're also just another user with a stake in what happens to LW.

Comment author: Elo 12 January 2017 09:54:07PM 4 points [-]

I am uneasy about this link being here because Brexit was politics. I am not removing it yet.

Comment author: Gram_Stone 12 January 2017 10:24:57PM 1 point [-]

Genuine question: Did the Apolitical Guideline become an Apolitical Rule? Or have I always been mistaken about it being a guideline?

Planning the Enemy's Retreat

14 Gram_Stone 11 January 2017 05:44AM

Related: Leave a Line of Retreat

When I was smaller, I was sitting at home watching The Mummy, with my mother, ironically enough. There's a character by the name of Bernard Burns, and you only need to know two things about him. The first thing you need to know is that the titular antagonist steals his eyes and tongue because, hey, eyes and tongues spoil after a while you know, and it's been three thousand years.

The second thing is that Bernard Burns was the spitting image of my father. I was terrified! I imagined my father, lost and alone, certain that he would die, unable to see, unable even to properly scream!

After this frightening ordeal, I had the conversation in which it is revealed that fiction is not reality, that actions in movies don't really have consequences, that apparent consequences are merely imagined and portrayed.

Of course I knew this on some level. I think the difference between the way children and adults experience fiction is a matter of degree and not kind. And when you're an adult, suppressing those automatic responses to fiction has itself become so automatic that you experience fiction as a thing compartmentalized. You always know that the description of consequences in the fiction will not by magic have fire breathed into them, that Imhotep cannot gently step out of the frame and really remove your real father's real eyes.

So even though we often use fiction to engage, to make things feel more real, I think that once we grow, fiction also gives us the chance to entertain formidable ideas at a comfortable distance.

A great user once said, "Vague anxieties are powerful anxieties." Related to this is the simple rationality technique of Leaving a Line of Retreat: before evaluating the plausibility of a highly cherished or deeply frightening belief, one visualizes the consequences of the highly cherished belief being false, or of the deeply frightening belief being true. We hope that it will thereby become just a little easier to evaluate the plausibility of that belief, for if we are wrong, at least we know what we're doing about it. Sometimes, if not often, what you'd really do about it isn't as bad as your intuitions would have you think.

If I had to put my finger on the source of that technique's power, I would name its ability to reduce the perceived hedonic costs of truthseeking. It's hard to estimate the plausibility of a charged idea because you expect your undesired outcome to feel very bad, and we naturally avoid this. The trick is in realizing that, in any given situation, you have almost certainly overestimated how bad it would really feel.

But Sun Tzu didn't just plan his own retreats; he also planned his enemies' retreats. What if your interlocutor has not practiced the rationality technique of Leaving a Line of Retreat? Well, Sun Tzu might say, "Leave one for them."

As I noted in the beginning, adults automatically compartmentalize fiction away from reality. It is simply easier for me to watch The Mummy than it was when I was eight. The formidable idea of my father having his eyes and tongue removed is easier to hold at a distance.

Thus, I hypothesize, truth in fiction is hedonically cheap to seek.

When you recite the Litany of Gendlin, you do so because it makes seemingly bad things seem less bad. I propose that the idea generalizes: when you're experiencing fiction, everything seems less bad than its conceivably real counterpart; it's stuck inside the book, and any ideas within will then seem less formidable. The idea is that you can use fiction as an implicit line of retreat, that you can use it to make anything seem less bad by making it make-believe, and thus, safe. The key, though, is that not everything inside of fiction is stuck inside of fiction forever. Sometimes conclusions that are valid in fiction also turn out to be valid in reality.

This is hard to use on yourself, because you can't make a real scary idea into fiction, or shoehorn your scary idea into existing fiction, and then make it feel far away. You'll know where the fiction came from. But I think it works well on others.

I don't think I can really get the point across in the way that I'd like without an example. This proposed technique was an accidental discovery, like popsicles or the Slinky:

A history student friend of mine was playing Fallout: New Vegas, and he wanted to talk to me about which ending he should choose. The conversation seemed mostly optimized for entertaining one another, and, hoping not to disappoint, I tried to intertwine my fictional ramblings with bona fide insights. The student was considering giving power to a democratic government, but he didn't feel very good about it, mostly because this fictional democracy was meant to represent anything that anyone has ever said is wrong with at least one democracy, plausible or not.

"The question you have to ask yourself," I proposed to the student, "is 'Do I value democracy because it is a good system, or do I value democracy per se?' A lot of people will admit that they value democracy per se. But that seems wrong to me. That means that if someone showed you a better system that you could verify was better, you would say 'This is good governance, but the purpose of government is not good governance, the purpose of government is democracy.' I do, however, understand democracy as a 'current best bet' or local maximum."

I have in fact gotten wide-eyed stares for saying things like that, even granting the closing ethical injunction on democracy as local maximum. I find that unusual, because not conflating democracy with good governance seems like one of the first steps you would take towards thinking about politics clearly. If you were further in the past, when the fashionable political system was not democracy but monarchy, and you, like many others, considered democracy preferable to monarchy, then upon a future human revealing to you the notion of a modern democracy, you would find yourself saying, regrettably, "This is good governance, but the purpose of government is not good governance, the purpose of government is monarchy."

But because we were arguing over fictional governments, our autocracies, or monarchies, or whatever non-democratic governments heretofore unseen, could not by magic have fire breathed into them. For me to entertain the idea of a non-democratic government in reality would have elicited incredulous stares. For me to entertain the idea in fiction is good conversation.

The student is one of two people with whom I've had this precise conversation, and I do mean in the particular sense of "Which Fallout ending do I pick?" I snuck this opinion into both, and both came back weeks later to tell me that they spent a lot of time thinking about that particular part of the conversation, and that the opinion I shared seemed deep.

Also, one of them told me that they had recently received some incredulous stares.

So I think this works, at least sometimes. It looks like you can sneak scary ideas into fiction, and make them seem just non-scary enough for someone to arrive at an accurate belief about that scary idea.

I do wonder though, if you could generalize this even more. How else could you reduce the perceived hedonic costs of truthseeking?

Comment author: Elo 11 January 2017 03:23:28AM 1 point [-]

I see Eliezer_Yudkowsky as the account that it was posted from. Unsure what you are seeing.

Comment author: Gram_Stone 11 January 2017 04:41:39AM 0 points [-]

Additional data point: I see [deleted].
