Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: fubarobfusco 16 January 2017 01:46:08AM *  3 points [-]

It's been commented on before, once or twice!

Hitherto [1848] it is questionable if all the mechanical inventions yet made have lightened the day's toil of any human being. They have enabled a greater population to live the same life of drudgery and imprisonment, and an increased number of manufacturers and others to make fortunes. They have increased the comforts of the middle classes. But they have not yet begun to effect those great changes in human destiny, which it is in their nature and in their futurity to accomplish. Only when, in addition to just institutions, the increase of mankind shall be under the deliberate guidance of judicious foresight, can the conquests made from the powers of nature by the intellect and energy of scientific discoverers become the common property of the species, and the means of improving and elevating the universal lot.

— John Stuart Mill, Principles of Political Economy

Like every other increase in the productiveness of labour, machinery is intended to cheapen commodities, and, by shortening that portion of the working-day in which the labourer works for himself, to lengthen the other portion that he gives, without an equivalent, to the capitalist.

— Karl Marx, Capital

Comment author: Gram_Stone 16 January 2017 01:51:48AM 0 points [-]

Not sure if this should make me feel better or worse.

Comment author: TiffanyAching 15 January 2017 06:43:01PM 2 points [-]

'Twas ever thus. Bob at WidgetCorp says to his boss "You are demanding an unfair amount of labor from me for the wages you offer. You are demanding that I make 20 widgets a day. 15 widgets - or a raise - would be more fair." Management says "don't fret Bob! We've just installed the new Widgetron™. Making widgets will take half the time! By the way... we now need you to make 40 widgets a day. Hop to it."

I think you're giving the bosses here too much credit - or too little, I guess, depending on how you look at it. The main template describes how people may deceive themselves about their motivations for doing X. But surely in your example the employers never believed that reducing workload and stress were why they were getting the new big-ass printer? They got it because it was an investment in a new piece of equipment that would increase their profits by allowing their workers to make more signs in the same amount of time.

I mean, unless your employers are very different to mine, the "they realize" and "instead" parts of your subtemplate don't apply. There's no instead. They got the printer to maximize profits and then came up with something to tell their workforce about how it would reduce stress and workload. They lied - though the individuals involved might have described it euphemistically as "positive spin" or whatever.

Now, maybe your employers are nicer than mine - maybe your bosses only realized after they bought the BAP that they could use it to increase profits - but in my experience, top brass don't sign off on spending a bunch of money on new technology unless it has been thoroughly demonstrated that the new technology will help the bottom line. I would have said that usually it goes:

Employer realizes that technological artifact X can be used to increase productivity and decides to introduce it. In an attempt to maintain workplace morale, they bullshit their workers about how, as an added benefit, technological artifact X will reduce physical and/or cognitive demands.

Though this strategy can backfire when the workforce realizes that T.A. X reduces physical demands while increasing cognitive demands, or reduces cognitive demands while increasing physical demands, or reduces neither, or worst of all, reduces the necessary workforce. Still, by the time they realize this it's often too late to do anything about it and T.A. X is the new normal.

Nobody ever tells Bob that his new target is, say, 35 widgets - an actual decrease in real workload - unless Bob has some leverage, whether it be labor laws or his union or his ability to take his skills and experience across the road to Widgets'R'Us.

Comment author: Gram_Stone 15 January 2017 09:10:38PM 0 points [-]

Yeah, post hoc rationalization or deception makes more sense than what I said.

Comment author: Jiro 14 January 2017 06:00:31PM 0 points [-]

My point is that using fiction to sneak ideas about the real world past people is a cheat. It is possible to be certain about something fictional in a way in which one cannot be certain about the real world.

Comment author: Gram_Stone 14 January 2017 11:41:37PM 0 points [-]

Stipulation is obviously sometimes a cheat. I would be surprised if it was always one.

Comment author: TiffanyAching 12 January 2017 05:39:48AM 5 points [-]

I like this post. Sneaking "scary" ideas into fiction, where they can be faced in a context that feels safer - that makes a lot of sense to me. And while I think you're right that it's tricky to consciously use the technique on yourself, I've certainly had it happen that way for me accidentally. (Though I think it's worth mentioning that the moment of realization - the moment it hit me that the logical or moral conclusion I had accepted in a fictional context was also valid/applicable in real life - was still sometimes painful or at least jarring.)

You asked about other ways to "reduce the perceived hedonic costs of truthseeking". I have an example of my own that might be relevant, especially to the word "perceived". Have you ever seen that trick where someone pulls a tablecloth off a table quickly and smoothly enough that all the plates and glasses and things stay right where they were?

I was speaking to a friend-of-a-friend to whom I had just been introduced - call her Jenny. In casual conversation, Jenny brought up her belief in crystal healing and asked me directly what I thought of it. Our mutual friend winced in horror because she knows how I feel about woo and anticipated a scathing response, or at least a condescending lecture about evidence-based medicine.

I'm not completely tactless, and Jenny was nice. I didn't want to ruin her evening over some stupid crystals. I had an idea. I said, as near as I can recall, this:

"Oh, yes, I think crystal healing is amazing! Gosh, when you think that just by looking at a little piece of quartz or hematite or topaz and thinking about things like mental clarity or relaxation, we have the power to lower our anxiety levels, lessen our feelings of fatigue, even reduce our own blood pressure - I mean it's such a beautiful example of the power of the human mind, isn't it?"

And more in the same vein. Basically I gushed for five minutes about how cool the placebo effect is (without once using the term "placebo effect") and how cool the natural world is, and how cool it is that we're constantly learning more about things that used to be mysteries, and so on.

My friend was relieved and Jenny was nodding - a little hesitantly, like she was slightly bewildered by something she couldn't quite put her finger on, but she was listening and she wasn't upset or defensive or annoyed and the party proceeded without awkwardness or rancor.

I didn't tell any lies. Crystal healing does work, in the sense that it's better than nothing. Of course almost anything that doesn't do active harm or negate the effects of real treatments works better than nothing - that's the beauty of the placebo. Doesn't really matter if it's administered via sugar pill or amethyst or homeopathic milkshake, if the belief is there (and I've seen some intriguing evidence to suggest that even true belief isn't necessary, by the way - you might only need hope).

See what I mean about the tablecloth trick? I was able to introduce Jenny to a less-wrong way of thinking about crystals without the hedonic cost of totally dismantling her beliefs. Now, I don't think I convinced her that crystals aren't filled with mysterious healing energy, and we never got near the fact that real medicine should work better than a placebo, but it still felt like a win - because I slipped a line of retreat into her head without setting off her intruder-alert. I gave her the plans for a model where her beloved crystals are cool and interesting and not-useless and not-lame that doesn't rely on them being magic. I showed her that you could take away the tablecloth and leave her good china in place.

It's a small example but I think there's an argument for minimizing perceived hedonic cost by demonstrating to someone that the absence of one cherished belief does not necessarily mean that every cherished belief or value that apparently rests upon it must come crashing down. Relinquishing belief in the magic of crystals doesn't mean Jenny has to throw out her collection of pretty rocks. Relinquishing belief in God doesn't mean a life without joy or meaning or domestic felicity and I think that's the kind of thing a lot of people are really afraid of losing, not the abstract idea of God's existence itself. They need to know there's a table under there.

Comment author: Gram_Stone 14 January 2017 06:23:42AM 1 point [-]

(Upvoted.) Just wanted to say, "Welcome to LessWrong."

Comment author: Jiro 12 January 2017 05:19:10AM 2 points [-]

Entertaining such ideas in reality may be subject to the problem that you are a fallible human and incapable of the certainty needed to justify making the choice. It may be that fictional dictatorships could be better than fictional democracies, but in a real-life situation I could never know that the dictatorship is better with enough certainty to be able to choose the dictatorship.

Comment author: Gram_Stone 14 January 2017 06:12:07AM 0 points [-]

I think this is worth pointing out because it seems like an easy mistake to use my reasoning to justify dictatorship. I also think this is an example of two ships passing in the night. Eliezer was talking about a meta-level/domain-general ethical injunction. When I was talking to the student, I was talking about how to avoid screwing up the object-level/domain-specific operationalization of the phrase 'good governance'.

My argument was that if you're asking yourself the question, "What does the best government look like?", assuming that that is indeed a right question, then you should be suspicious if you find yourself confidently proposing the answer, "My democracy." The first reason is that 'democracy' can function as a semantic stopsign, which would stop you dead in your tracks if you didn't have the motivation to grill yourself and ask, "Why does the best government look like my democracy?" The second reason is that the complement of the set containing the best government would be much larger than the set containing the best government, so if you use the mediocrity heuristic, then you should conclude that any given government in your hypothesis space is plausibly not the best government. If you consider it highly plausible that your democracy is the end state of political progress, then you're probably underestimating the plausibility of the true hypothesis. And lastly, we hope that we have thereby permitted ourselves to one day generate an answer that we expect to be better than what we have now, but that does not require the seizure of power by any individual or group.

If, in the course of your political-philosophical investigations, you find yourself attempting to determine your preference ordering over the governments in your hypothesis space, and, through further argumentation, you come to the separate and additional conclusion that dictatorship is preferable to democracy, then the ethical injunction, "Do not seize power for the good of the tribe," should kick in, because no domain is supposed to be exempt from an ethical injunction. It just so happens that you should also be suspicious of that conclusion on epistemic grounds, because the particular moral error that that particular ethical injunction is intended to prevent may often be caused by an act of self-deception. And if you add a new government to your hypothesis space, and this government somehow doesn't fit into the category 'dictatorship', but also involves the seizure of power for the good of the tribe, then the ethical injunction should kick in then too, and you should once more be suspicious on epistemic grounds as well.

What do you think about all of that?

Comment author: Elo 12 January 2017 10:37:45PM 5 points [-]

Always a guideline. I am still uneasy about the link being here, and would prefer to make it clear, rather than be silent.

Comment author: Gram_Stone 12 January 2017 10:49:57PM 1 point [-]

Thanks for clarifying. It was easy for me to forget that as well as being a moderator, you're also just another user with a stake in what happens to LW.

Comment author: Elo 12 January 2017 09:54:07PM 4 points [-]

I am uneasy about this link being here because Brexit was politics. I am not removing it yet.

Comment author: Gram_Stone 12 January 2017 10:24:57PM 1 point [-]

Genuine question: Did the Apolitical Guideline become an Apolitical Rule? Or have I always been mistaken about it being a guideline?

Comment author: Elo 11 January 2017 03:23:28AM 1 point [-]

I see Eliezer_Yudkowsky as the account that it was posted from. Unsure what you are seeing.

Comment author: Gram_Stone 11 January 2017 04:41:39AM 0 points [-]

Additional data point: I see [deleted].

Comment author: Gram_Stone 08 January 2017 09:49:20PM 1 point [-]

I know this is on the blogroll right now, but since it was originally on Facebook I thought it might be nice to start a place for discussion on LW. Linkposts are also quite a bit more visible than the blogroll.

Comment author: Gram_Stone 08 January 2017 05:03:11PM *  11 points [-]

This post is already getting too long so I deleted the section on lessons to be learned, but if there is interest I'll do a followup. Let me know what you think in the comments!

I at least would be interested in hearing anything else that you have to say about this topic. I'm not averse to private conversation on the matter either; most such conversations of mine are private.

Hypothesis: Fiction silently allows people to switch into truthseeking mode about politics.

A history student friend of mine was playing Fallout: New Vegas, and he wanted to talk to me about which ending he should choose for the game's narrative. The conversation was mostly optimized for entertaining one another, but I found that this was a situation where I could slip in my real opinions on politics without getting wide-eyed stares! Like this one:

The question you have to ask yourself is "Do I value democracy because it is a good system, or do I value democracy per se?" A lot of people will admit that they value democracy per se. But that seems wrong to me. That means that if someone showed you a better system that you could verify was better, you would say "This is good governance, but the purpose of government is not good governance, the purpose of government is democracy." (I do, however, understand democracy as a 'current best bet' or 'local maximum'.)

I have in fact gotten wide-eyed stares for saying things like that, even granting the final pragmatic injunction on democracy as local maximum. I find that weird, because refusing to conflate democracy with good governance seems like one of the first steps you would take toward thinking clearly about politics, not even as cognitive work but as a way of avoiding cognitive anti-work. If you were further in the past and the fashionable political system were not democracy but monarchy, and you, like many others, considered democracy preferable to monarchy, then upon a future human revealing to you the notion of a modern democracy, you would find yourself saying, regrettably, "This is good governance, but the purpose of government is not good governance, the purpose of government is monarchy."

But because we were arguing for fictional governments, I seemed to be sending an imperceptibly weak signal that I would defect in a real tossup between democracy and something else, and thus my conversation partner could entertain my opinion whilst looking through truthseeking goggles instead of political ones.

The student is one of two people with whom I've had this precise conversation, and I do mean in the particular sense of "Which Fallout ending do I pick?" I slipped this opinion into both, and both came back weeks later to tell me that they spent a lot of time thinking about that particular part of the conversation and that the opinion I shared seemed deep. If Eliezer's hypothesis about the origin of feelings of deepness is true, then this is because they were actually truthseeking when they evaluated my opinion, and the opinion really got rid of a real cached thought: "Democracy is a priori unassailable."

In the spirit of doing accidentally effective things deliberately, if you ever wanted to flip someone's truthseeking switch, you might do it by placing the debate within the context of a fictional universe.
