Rationality Quotes September 2012
Here's the new thread for posting quotes, with the usual rules:
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself
- Do not quote comments/posts on LW/OB
- No more than 5 quotes per person per monthly thread, please.
Comments (1088)
Steven Johnson, Everything Bad is Good For You
(His book argues that pop culture is increasing intelligence, not dumbing it down. He argues that plot complexity has increased and that keeping track of large storylines is now much more commonplace, and that these skills manifest themselves in increased social intelligence (which in turn might manifest itself in overall intelligence, though I'm not sure). Here, he's specifically discussing video games and the internet.)
I highly recommend the book; it's interesting in terms of cognitive science as well as cultural and social analysis. I thought it sounded only mildly interesting when I first picked it up, but now I find it extremely interesting. At least give it a try, because it's difficult to describe what makes it so good.
Really? I thought it was very short and not in depth at all; yeah, his handful of graphs of episodes was interesting from the data visualization viewpoint, but most of his arguments, such as they were, were qualitative and hand-wavey. (What, there are no simplistic shows these days?)
It was rather broad and not very in depth, but it was largely conceptually oriented. He conceded that there were simplistic shows, but argued that the simplistic shows of today tend to be more complicated than the simplistic shows of yesterday. If you disagree...
I don't know how I'd refute him - there are so many TV shows, both now and then! One can cherry-pick pretty much anything one likes, although I don't personally watch TV anymore and couldn't do it.
(I'm reminded how people online sometimes say 'anime really sucked in time period X', because they're only familiar with anime released in the '00s and '10s, while if you look at an actual full 30+ strong roster of one of their example 'sucking' years like eg. 1991, you'll often see a whole litany of great or influential series like Nadia, City Hunter, Ranma 1/2, Dragon Ball Z, and Gundam 0083: Stardust Memory. Well, yeah, if you forget entirely about them, I suppose 1991 seems like a really sucky year compared to 2010 or whatever.)
You could analyze the way that people in the TV business think and talk about complexity, while assuming that they know what they're doing. He seemed to do a bit of this.
Does he look at the possibility that people are getting more intelligent for some other reason, and popular art is the result of creators serving a more intelligent audience rather than more complex art making people smarter?
-- Nate Silver, The Signal and the Noise
-- W.H. Press et al., Numerical Recipes, Sec. 15.1
-- Iain McKay et al., An Anarchist FAQ, Sec. F.2.1
-- G.K. Chesterton
-- Bryan Caplan
Hitler was at least a hypocrite - he got his Jewish friends to safety, and accepted same-sex relationships in himself and people he didn't want to kill yet. The kind of corruption Caplan is pointing at is a willingness to compromise with anyone who makes offers, not any kind of ignoring your principles. And Nazis were definitely against that - see the Duke in Jud Süß.
?
Please provide evidence for this bizarre claim?
Spared Jews:
Whether Hitler batted for both teams is hotly debated. There are suspected relationships (August Kubizek, Emil Maurice) but any evidence could as well have been faked to smear him.
Hitler clearly knew that Ernst Röhm and Edmund Heines were gay and didn't care until it was Long Knives time. I'm less sure he knew about Karl Ernst's sexuality.
Wittgenstein paid a huge bribe to allow his family to leave Germany. Somewhere I read that this particular agreement was approved personally by Hitler (or someone very senior in the hierarchy).
That doesn't contradict the general point that Nazi Germany was generally willing to kill and steal from its victims (especially during the war) rather than accept bribes for escape.
This may have happened some of the time, but everything I read suggests it was the exception and not the rule.
The reason Jews did not emigrate out of Germany during the 30s was that Germany had a big foreign balance problem, and managed tight government control over allocation of foreign currency. Jews (and Germans) could not convert their Reichsmarks to any other currency, either in Germany or out of it, and so they were less willing to leave. And no other country was willing to take them in in large numbers (since they would be poor refugees). This continued during the war in the West European countries conquered by Germany. (Ref: Wages of Destruction, Adam Tooze)
Later, all Jewish property was expropriated and the Jews sent to camps, so there was no more room for bribes - the Jews had nothing to offer since the Nazis took what they wanted by force.
The last bit is most famously true of Röhm, though of course there's a dozen different things going on there.
Nietzsche, The Gay Science
Salman Rushdie, explaining identity politics
I think "identity politics" is a term of art which covers things other than that which aren't bad, like minority struggles.
You've got a point, and it's one that gets into hard issues. It can be quite hard for some people to decide whether they're being unfairly mistreated and to act on it, and the people for whom the decision is easy aren't necessarily sensible. Emotions are not a reliable tool for telling whether acting on a feeling of being unfairly mistreated makes sense.
How do you tell to what extent a particular instance of people feeling outraged is them just getting worked up for the fun of it over something they should endure, and to what extent it's them building up enough allies and emotional energy to deal with a problem which (by utilitarian standards?) needs to be dealt with?
I don't know how to tell legitimate movements from illegitimate ones, but the term of art "identity politics" refers to both. ID politics is a specific kind of political advocacy, and there are both good ID politics arguments and bad ones. You'd probably just have to investigate the claims they're making on a case by case basis.
But, I wasn't trying to interrogate whether defining yourself by outrage can be good in some instances, I was trying to point out that the term "ID politics" refers to things outside of defining yourself in relation to outrage. Maybe I just misinterpreted what you were saying, but I thought your comment unintentionally hinted that you were unaware the phrase is a specific term of art. There are many types of identity politics that aren't about outrage or opposition.
You're quite right, I didn't know about it as a term of art.
I suppose I've mostly heard about the outrage variety of identity politics-- it tends to be more conspicuous.
Bill Clinton
Marcus Aurelius
Meh, there are worse things to be than a mean man.
There are considerably more worse things to be than a noble one.
(Baruch Spinoza)
Pierre Proudhon, to Karl Marx
More from Scott Adams:
-- xkcd 667
I cannot tell if this is rationality or anti-rationality:
Steve Ballmer
I'd say telling an interviewer you have sufficient confidence in your product to not need a backup plan is rational; actually not having one isn't.
I'm reminded of a quote in Lords of Finance (which I finished yesterday) which went something like 'Only a fool asks a central banker about the currency and expects an honest answer'. Since confidence is what keeps banks and currencies going...
See, if instead of "I'm not paid to have doubts." he said "I am paid to address all doubts before a product is released", that would have made more sense.
This comes across as inauthentic and slightly scared to me. At best, he's not great at PR. At worst, he doesn't have any backup plan. So that would support calling it irrationality.
Well. I was thinking about it, and it seems like not having a backup plan is the kind of thing that would send bad signals to investors and whatnot. It's not clear to me that he's better off doing this than explaining how Microsoft is a fantastically professional company that's innovating and reaching into new frontiers, etc.
I don't know specifically what alternate products would potentially be good ideas for them though. I agree that backup plans are good in general but I don't know if they're good for Microsoft specifically, based on the resources they have. Windows is kind of their thing, I don't know if they could execute on anything else.
-William of Ockham
This is an interesting quote for historical reasons but it is not a rationality quote.
It makes a very important reply to anyone who claims that e.g. you should stick with Occam's original Razor and not try to rephrase it in terms of Solomonoff Induction because SI is more complicated.
Humans and their silly ideas of what's complicated or not.
What I find ironic is that SI can be converted into a similarly terse commandment. "Shorter computable theories have more weight when calculating the probability of the next observation, using all computable theories which perfectly describe previous observations" -- Wikipedia.
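That terse commandment can be made concrete with a toy model. A minimal sketch, assuming a small invented hypothesis class with made-up description lengths, rather than all computable theories:

```python
# A toy, finite stand-in for the weighting scheme quoted above: among
# hypotheses that perfectly reproduce the observations so far, shorter
# descriptions get weight 2^-length. The hypotheses and their lengths
# are invented for illustration; real Solomonoff induction ranges over
# all computable programs.

# (description length in bits, rule mapping position -> predicted bit)
HYPOTHESES = [
    (2, lambda i: 0),                    # "always 0"
    (2, lambda i: 1),                    # "always 1"
    (3, lambda i: i % 2),                # "alternate 0, 1"
    (5, lambda i: 1 if i == 3 else 0),   # "0 everywhere except position 3"
]

def predict_next(observed):
    """P(next bit = 1), mixing the still-consistent hypotheses by 2^-length."""
    consistent = [(length, rule) for length, rule in HYPOTHESES
                  if all(rule(i) == bit for i, bit in enumerate(observed))]
    total = sum(2.0 ** -length for length, _ in consistent)
    ones = sum(2.0 ** -length for length, rule in consistent
               if rule(len(observed)) == 1)
    return ones / total

print(predict_next([0, 0, 0]))  # ~0.11: the short "always 0" hypothesis dominates
```

After seeing 0, 0, 0, both "always 0" and "0 except at position 3" remain consistent, but the shorter one carries most of the weight, so the mixture predicts 0 next with high probability.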
I read this as a reminder not to add anything to that map that won't help you navigate the territory. How is this not a rationality quote? Are you rejecting it merely because of the third disjunct?
The quote doesn't say that, this is (only) a fact about your reading.
I'm not especially impressed with the first two either, nor the claim to be exhaustive (thus excluding other valid evidence). It basically has very little going for it. It is bad epistemic advice. It is one of many quotes which require abandoning most of the content and imagining other content that would actually be valid. I reject it as I reject all such examples.
— Robert A. Heinlein
I think that quote is much too broad with the modifier "might." If you should procrastinate based on a possibility of improved odds, I doubt you would ever do anything. At least a reasonable degree of probability should be required.
Not to mention that the natural inclination of most people toward procrastination means that they should be distrustful of feelings that delaying will be beneficial; it's entirely likely that they are misjudging how likely the improvement really is.
That's not, of course, to say that we should always do everything as soon as possible, but I think that to the extent that we read the plain meaning from this quote, it's significantly over-broad and not particularly helpful.
There's also natural inclinations towards haste and impatience. (They probably mostly crop up around different things / in different people than procrastinatory urges, but the quote is not specific about what it is you could put off.)
I'm reminded of the saying, "A weed is just a plant in the wrong place." Different people require different improvements to their strategies.
That's certainly a fair point.
I suppose it's primarily important to know what your own inclinations are (and how they differ in different areas) and then try to adjust accordingly.
Do it today, and fix/retry tomorrow on failure?
Perhaps it's a one-time thing.
Edward Tufte, "Beautiful Evidence"
...what else?
Quantum physics
-- Jianzhi Sengcan
Edit: Since I'm not Will Newsome (yet!) I will clarify. There are several useful points in this, but I think the key one is the virtue of keeping one's identity small. Speaking it out loud as a sort of primer, meditation, or prayer before approaching difficult or emotional subjects has for me proven a useful ritual for avoiding motivated cognition.
For the curious, it's the opening of 信心铭 (Xinxin Ming), whose authorship is disputed (probably not the Zen patriarch Jianzhi Sengcan). In Chinese, that part goes:
(The Wikipedia article lists a few alternate translations of the first verses, with different meanings)
Case in point:
-- Ro-Man
Albert Einstein (maybe)
Cf. this and this.
Warning: Your mileage may vary.
"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man." -George Bernard Shaw
Sadly, duplicate.
-- Kaiki Deishū, Episode 7 of Nisemonogatari.
-- Ludwig Wittgenstein, Philosophical Investigations
-- Dienekes Pontikos, Citizen Genetics
Jon Skeet
I'll risk a bit of US politics, just because I like the quote:
Scott Adams on one of the two presidential candidates being skilled at the art of winning (with some liberal use of dark arts).
Ambrose Bierce
Ernest Hemingway
Excellent. A shortcut to nobility. One day of being as despicable as I can practically manage and I'm all set.
It does not state which (!) former self, so I would expect some sort of median or mean or summary of your former self and not just the last day. So I'm sorry but there is no shortcut ;-)
Tenzin Gyatso, 14th Dalai Lama
That's intriguing, but it also sounds like a case of non-apples.
Well, it is a necessary step to find other fruits.
"Junior", FIRE JOE MORGAN
"If your plan is for one year plant rice. If your plan is for 10 years plant trees. If your plan is for 100 years educate children" - Confucius
...If your plan is for eternity, invent FAI?
Depends how you interpret the proverb. If you told me the Earth would last a hundred years, it would increase the immediate priority of CFAR and decrease that of SIAI. It's a moot point since the Earth won't last a hundred years.
Sorry, Earth won't last a hundred years?
The idea seems to be that even if there is a friendly singularity, Earth will be turned into computronium or otherwise transformed.
I am surprised that this claim surprises you. A big part of SI's claimed value proposition is the idea that humanity is on the cusp of developing technologies that will kill us all if not implemented in specific ways that non-SI folk don't take seriously enough.
Of course you're right. I guess I haven't noticed the topic come up here for a while, and haven't seen the apocalypse predicted so straightforwardly (and quantitatively) before so am surprised in spite of myself.
Although, in context, it sounds like EY is saying that the apocalypse is so inevitable that there's no need to make plans for the alternative. Is that really the consensus at EY's institute?
I have no idea what the consensus at SI is.
Nanotech and/or UFAI.
Buckaroo Banzai
...
...
dur....
....
What?
I'll take the new -5 karma hit to point out that this comment shouldn't be downvoted. It is an interesting critique of the post it replies to.
interesting?
Probably it would be even more interesting if I could understand it.
Eliezer posted a comment that's essentially devoid of content. This satirizes the original quote's claim that one should be of "no mind whatsoever" by illustrating that mindlessness isn't particularly useful-- a truly mindless individual (like that portrayed in the comment) would have no useful contributions to make.
"No mind" is ordinary mind.
That went completely over my head. (I guessed he was alluding to some concept whose name began with “dur”, but I couldn't think of any relevant one.)
How is it a critique? The quote is an adequate expression of Eliezer's own third virtue of rationality, and I daresay if anyone had responded as uncharitably as that to his "Twelve Virtues", he would have considered 'dur' to be an adequate summary of that person's intellect.
How is it uncharitable? Eliezer is emptying his mind as recommended by Doctor Banzai. Not sure how it's a "critique" though.
See a priori, No Universally Compelling Arguments.
The critique is of the phrase "but to be of no mind whatsoever."
The uncharitable interpretation is that something without a mind is a rock; the charitable interpretation is to take "mind" as "opinion."
I ended up downvoting the criticism because it doesn't apply to the substance of the quote, but to its word choice, and is itself not as clear as it could be.
The criticism is that a martial artist or scientist is actually trying to attain a highly specific brain-state in which neurons have particular patterns in them; a feeling of emptiness, even if part of this brain state, is itself a neural pattern and certainly does not correspond to the absence of a mind.
The zeroth virtue or void - insofar as we believe in it - corresponds to particular mode of thinking; it's certainly not an absence of mind. Emptiness, no-mind, the Void of Musashi, all these things are modes of thinking, not the absence of any sort of reified spiritual substance. See also the fallacy of the ideal ghost of perfect emptiness in philosophy.
Cf. Mushin
And this critique I upvoted, because it is both clear and a valuable point. I still think you're using an uncharitable definition of the word "mind," but as assuming charity could lead to illusions of transparency it's valuable to have high standards for quotes.
You've mentioned this before, and I don't really know where it comes from. Do you have any specific philosopher or text in mind, or is this just a habit you perceive in philosophical argument? If so, in whose argument? Professional or historical or amateur philosophers?
Aside from some early-modern empiricists, and maybe Stoicism, I can't think of anything.
I'm amazed how you guys manage to get all that from "dur". My communication skills must be worse than I thought.
I agree that the response was not particularly charitable, but it's nevertheless generally a type of post that I would like to see more of on LessWrong-- I think that style of reply can be desirable and funny. See also this comment.
This is my home, the country where my heart is;
Here are my hopes, my dreams, my sacred shrine.
But other hearts in other lands are beating,
With hopes and dreams as true and high as mine.
My country’s skies are bluer than the ocean,
And sunlight beams on cloverleaf and pine.
But other lands have sunlight too and clover,
And skies are everywhere as blue as mine.
-Lloyd Stone
obviously he never visited the British Isles :D
Duplicate, please delete the other.
Michael Lewis, Moneyball, ch. 4 ("Field of Ignorance")
CS Lewis, The Screwtape Letters
-- Iain McKay et al., An Anarchist FAQ, section C.7.3
Matt Ridley, in The Origins of Virtue
-- Tenzin Gyatso, 14th Dalai Lama
-Bryan Caplan, Selfish Reasons to Have More Kids
I'd pick gripping lies over most nonfictional shows, which are mainly irrelevant or misleading truths.
-- GoodDamon (this may skirt the edge of the rules, since it's a person reacting to a sequence post, but a person who's not a member of LW.)
...and, more importantly, not on LessWrong.com.
Charles Handy describing the Vietnam-era measurement policies of Secretary of Defense Robert McNamara
Douglas Hubbard, How to Measure Anything
― Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance: An Inquiry Into Values
Well. Surely that's only part of the real purpose of the scientific method.
Charles Kettering
A problem sufficiently well-stated is a problem fully solved.
Wow, I didn't even know that's a quote from someone! I had inferred that (mini)lesson from a lecture I heard, but it wasn't stated in those terms, and I never checked if someone was already known for that.
--Zhuangzi, being a trendy metacontrarian post-rationalist in the 4th century BC
Zhuangzi says knowledge has no limit, one could spend his entire life making a good map of a vast and diverse territory and it would not be enough to make a good map.
If one does not know this and makes maps for travel, he may be travelling to safe lands. This is weak evidence one is in danger.
If one knows this and still makes such maps, this is strong evidence one is in danger, for to travel to safe lands he would not make such foolhardy attempts.
Douglas Hubbard, How to Measure Anything
This is the second time I've come across you mentioning Hubbard. Is the book good and, if so, what audience is it good for?
How to Measure Anything is surprisingly good, so I added it here.
That's a modest thing to say for a vain person. It even sounds a bit like Moore's paradox - I need advice, but I don't believe I do.
(Not that I'm surprised. I've met ambivalent people like that and could probably count myself among them. Being aware that you habitually make a mistake is one thing, not making it any more is another. Or, if you have the discipline and motivation, one step and the next.)
I love New Peter. He's so interesting and twisted and bizarre.
The following quotes were heavily upvoted, but then turned out to be made by a Will Newsome sockpuppet who edited the quote afterward. The original comments have been banned. The quotes are as follows:
— Aristosophy
— Aristosophy
If anyone objects to this policy response, please PM me so as to not feed the troll.
I do find some of Will Newsome's contributions interesting. OTOH, this behaviour is pretty fucked up. (I was wondering how hard it would be to implement a software feature to show the edit history of comments.)
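For what it's worth, the data model for such a feature is simple; a minimal sketch (the names and structure here are my own invention, not LW's actual schema):

```python
# Minimal sketch of a comment-edit-history feature: keep every superseded
# revision instead of overwriting the comment body in place.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Comment:
    body: str
    revisions: list = field(default_factory=list)  # (edited_at, old_body) pairs

    def edit(self, new_body):
        # Record the superseded text (with a timestamp) before replacing it.
        self.revisions.append((datetime.now(timezone.utc), self.body))
        self.body = new_body

    def history(self):
        """All versions of the comment, oldest first, current version last."""
        return [old for _, old in self.revisions] + [self.body]

c = Comment("Original quote")
c.edit("will_newsome is awesome.")
print(c.history())  # both versions survive the edit
```

The hard part of the real feature is presumably not storage but UI and moderation policy, not the data model.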
Edited how?
If I remember correctly the second quote was edited to be something along the lines of "will_newsome is awesome."
Defection too far. Ban Will.
Will is a cute troll.
Hmm, after observing it a few times on various forums I'm starting to consider that having a known, benign resident troll might keep away more destructive ones. No idea how it works but it doesn't seem that far-fetched given all the strange territoriality-like phenomena occasionally encountered in the oddest places.
I've heard this claimed.
This behavior isn't cute.
This would be somewhat in fitting with findings in Cialdini. One defector kept around and visibly punished or otherwise looking low status is effective at preventing that kind of behavior. (If not Cialdini, then Greene. Probably both.)
If only the converse were true...
[oops; this was a repeat]
Repeat.
Luther Sloan to Julian Bashir in Star Trek: Deep Space Nine, “Inquisition”, written by Bradley Thompson and David Weddle, created by Rick Berman and Michael Piller
Presuming that Starfleet Medical has limited enrollment, and that if he hadn't lied, a superior candidate would have enrolled, then that superior candidate would have saved those hundreds or thousands, and then a few more.
He was lying about having had gene therapy. He was a superior candidate by virtue of same but it would have kept him out because Starfleet is anti-gene-therapy-ist. (At least I assume so - I remember the character had the therapy and had to hide it, but not whether it came out in that episode or something else did.)
That is much more justifiable than the standard case of lying on applications.
I can imagine Star Robin Hanson writing an angry blog post about what this implies about Starfleet's priorities.
Robin Hanson has some interesting things to say about Battlestar Galactica.
Have you seen any Star Trek? Star Robin Hanson would have a lot of angry posts to write.
Some, as a child.
There was a (flimsy) historical reason - there had been wars about "augments" in the past; the anti-augments won (somehow), determined the war was about "people setting themselves above their fellow humans", and discouraged more people augmenting themselves/their children in this way by (ineffectively) making it a net negative.
I read somewhere that, in Star Trek land, genetic engineering of intelligent beings is highly correlated with evil, either because it's being done for an evil purpose to begin with or because the engineered beings themselves end up as arrogant, narcissistic jerks with a strong tendency toward becoming evil. The latter implies that there's a technical problem with the genetic engineering of humans that hasn't been solved yet, which Bashir was lucky to have avoided.
It might not be a technical problem. It might merely be that most augments are raised by people who keep telling them that they're genetically superior to everyone else and therefore create in them a sense of arrogance and entitlement. Which is only made worse by the fact that they actually are stronger, healthier and smarter than everyone else (but not by as big a margin as they tend to imagine).
Solzhenitsyn
Duplicate.
— Steven Kaas
If only it were a line. Or even a vague boundary between clearly defined good and clearly defined evil. Or if good and evil were objectively verifiable notions.
I think the intermediate value theorem covers this. Meaning if a function has positive and negative values (good and evil) and it is continuous (I would assume a "vague boundary" or "grey area" or "goodness spectrum" to be continuous) then there must be at least one zero value. That zero value is the boundary.
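Stated precisely (with an assumed "goodness" function $g$ standing in for the spectrum, an invention for illustration):

```latex
% Intermediate value theorem, as invoked above: a continuous goodness
% function that takes both negative (evil) and positive (good) values
% must cross zero somewhere.
\text{If } g \colon [a,b] \to \mathbb{R} \text{ is continuous and } g(a) < 0 < g(b),
\quad \text{then there exists } c \in (a,b) \text{ with } g(c) = 0.
```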
It would indeed cover this if the goodness spectrum were a regular function, not a set-valued map. Unfortunately, the same thoughts and actions can correspond to different shades of good and evil, even in the mind of the same person, let alone of different people. Often at the same time, too.
This shows that there is disagreement & confusion about what is good & what is evil. That no more proves good & evil are meaningless, than disagreement about physics shows that physics is meaningless.
Actually, disagreement tends to support the opposite conclusion. If I say fox-hunting is good and you say it's evil, although we disagree on fox-hunting, we seem to agree that only one of us can possibly be right. At the very least, we agree that only one of us can win.
You don't think even a vague boundary can be found? To me it seems pretty self-evident by looking at extremes; e.g., torturing puppies all day is obviously worse than playing with puppies all day.
By no means am I secure in my metaethics (i.e., I may not be able to tell you in exquisite detail WHY the former is wrong). But even if you reduced my metaethics down to "whatever simplicio likes or doesn't like," I'd still be happy to persecute the puppy-torturers and happy to call them evil.
Animal testing.
And even enjoying torturing puppies all day is merely considered "more evil" because it's a predictor of psychopathy.
John Perry, introduction to Identity, Personal Identity, and the Self
He bought the present ox along with the future ox. He could have just bought the present ox, or at least a shorter interval of one. This is known as "renting".
Which future ox did he buy?
Baruch Spinoza Ethics
http://lesswrong.com/lw/i0/are_your_enemies_innately_evil/
-- Tim Kreider
The interesting part is the phrase "which sounds like the kind of lunatic notion that’ll be considered a basic human right in about a century, like abolition, universal suffrage and eight-hour workdays." If we can anticipate what the morality of the future would be, should we try to live by it now?
If we can afford it.
Moral progress proceeds from economic progress.
Morality is contextual.
If we have four people on a life boat and food for three, morality must provide a mechanism for deciding who gets the food. Suppose that decision is made, then Omega magically provides sufficient food for all - morality hasn't changed, only the decision that morality calls for.
Technological advancement has certainly caused moral change (consider society after introduction of the Pill). But having more resources does not, in itself, change what we think is right, only what we can actually achieve.
That's an interesting claim. Are you saying that true moral dilemmas (i.e. a situation where there is no right answer) are impossible? If so, how would you argue for that?
My view is that a more meaningful question than ‘is this choice good or bad’ is ‘is this choice better or worse than other choices I could make’.
I think they are impossible. Morality can say "no option is right" all it wants, but we still must pick an option, unless the universe segfaults and time freezes upon encountering a dilemma. Whichever decision procedure we use to make that choice (flip a coin?) can count as part of morality.
I take it for granted that faced with a dilemma we must do something, so long as doing nothing counts as doing something. But the question is whether or not there is always a morally right answer. In cases where there isn't, I suppose we can just pick randomly, but that doesn't mean we've therefore made the right moral decision.
Are we ever damned if we do, and damned if we don't?
When someone is in a situation like that, they lower their standard for "morally right" and try again. Functional societies avoid putting people in those situations because it's hard to raise that standard back to its previous level.
The question is, can we? Does anyone happen to have any empirical data about how good, for example, Greco-Romans were at predicting the moral views of the Middle Ages?
Additionally, is merely sounding "like the kind of lunatic notion that’ll be considered a basic human right in about a century" really a strong enough justification for us to radically alter our political and economic systems? If I had to guess, I'd predict that Kreider already believes divorcing income from work to be a good idea, for reasons that may or may not be rational, and is merely appealing to futurism to justify his bottom line.
If we had eight-hour workdays a century ago, we wouldn't have been able to support the standard of living expected a century ago. I'm not sure we could have even supported living. The same applies to full unemployment. We may someday reach a point where we are productive enough that we can accomplish all we need when we just do it for fun, but if we try that now, we'll all starve.
Is that true? (Technically, a century ago was 1912.)
Wikipedia on the eight-hour day:
The quote seemed to imply we didn't have them a century ago. Just use two centuries or however long.
My point is that we didn't stop working as long because we realized it was a good idea. We did because it became a good idea. What we consider normal now is something we could not have instituted a century ago, and attempting to institute now what will be normal a century from now would be a bad idea.
One of these things is not like the others.
Yes, no state has ever implemented truly universal suffrage (among minors).
In Jasay's terminology, the first is a liberty (a relation between a person and an act) and the rest are rights {relations between two or more persons (at least one rightholder and one obligor) and an act}. I find this distinction useful for thinking more clearly about these kinds of topics. Your mileage may vary.
I was actually referring to the third being what I might call an anti-liberty, i.e., you aren't allowed to work more than eight hours a day, which is most definitely neither enforced nor widely considered a human right.
How is that different from pointing out that you're not allowed to sell yourself into slavery (not even partially, as in signing a contract to work for ten years and not being able to legally break it), or that you're not allowed to sell your vote?
I'd say each of the three can be said to be unlike the others:
So "all of these things are not like the others".
I thought eight-hours workdays were about employers not being allowed to demand that employees work more than eight hours a day; I didn't know you weren't technically allowed to do that at all even if you're OK with it.
See Lochner v. New York. Within the last five years there was a French strike (riot? don't remember exactly) over a law that would limit the workweek of bakers, which would have the impact of driving small bakeries out of business, since they would need to employ (and pay benefits on) 2 bakers rather than just 1. Perhaps a French LWer remembers more details?
To see why, assume that without any restrictions on workday length, workers supply more than 8 hours. Let's say, without loss of generality, that they supply 10. (In other words, the equilibrium quantity supplied is ten.)
If employers can't demand the equilibrium quantity, but they're still willing to pay to get it, then employees will have the incentive to supply it. In their competition for jobs (finding them and keeping them), employees will supply labor up to the equilibrium quantity, regardless of whether the bosses demand it.
Working more looks good. Everyone knows that; you don't need your boss to tell you. So if there's competition for your spot or for a spot that you want, it would serve you well to work more.
So if your goal is to prevent ten-hour days, you'd better stop people from supplying them.
At least, this makes sense to me. But I'm no microeconomist. Perhaps we have one on LW who can state this more clearly (or who can correct any mistakes I've made).
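The ratcheting dynamic described above can be sketched numerically. In this toy best-response model (the security value and the disutility function are illustrative assumptions, not data), employers may only demand eight hours, but each worker keeps adding time as long as the job-security benefit of out-working the norm exceeds the rising disutility of another half-hour:

```python
# A toy best-response model of the competition described above. The law
# stops employers from demanding more than 8 hours, but workers who
# out-work their peers keep their jobs, so each worker adds time as long
# as the job-security benefit beats the rising disutility of another
# half-hour. All numbers and functional forms are illustrative assumptions.

SECURITY_VALUE = 5.0  # assumed value (per hour) of out-working the norm

def disutility_of_hour(h):
    # assumed marginal cost of the h-th hour, increasing in h
    return 0.5 * h

hours = 8.0  # everyone starts at the legal cap
while disutility_of_hour(hours + 0.5) <= SECURITY_VALUE:
    hours += 0.5  # each worker matches, so the norm ratchets up

print(f"hours converge to {hours}")  # -> hours converge to 10.0
```

With these made-up parameters, hours drift from the legal cap of 8 back up to the 10-hour equilibrium, which is the point of the argument: capping what employers may demand does not cap what competing workers will supply.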
It would be very hard to distinguish when people were doing it because they wanted to and when employers were demanding it. Suppose some employees work that extra time, but one doesn't. The one who doesn't happens to be fired later, for unrelated reasons. How do you determine that the worker's unwillingness to work extra hours was not one of the reasons? Whether it was or not, the firing will likely encourage workers to go beyond the eight hours, because the last one who didn't got fired, and a connection will be drawn whether there is one or not.
Not if it's actually the same morality, but depends on technology. For example, strong prohibitions on promiscuity are very sensible in a world without cheap and effective contraceptives. Anyone who tried to live by 2012 sexual standards in 1912 would soon find they couldn't feed their large horde of kids. Likewise, if robots are doing all the work, fine; but right now if you just redistribute all money, no work gets done.
Lack of technology was not the reason condoms weren't as widely available in 1912.
Right idea, not a great example. People used to have many more kids than now, most dying in childhood. The majority of women of childbearing age (gay or straight) were married and having children as often as their bodies allowed, so promiscuity would not have changed much. Maybe a minor correction for male infertility and sexual boredom in a standard marriage.
You seem to have rather a different idea of what I meant by "2012 standards". Even now we do not really approve of married people sleeping around. We do, however, approve of people not getting married until age 25 or 30 or so, but sleeping with whoever they like before that. Try that pattern without contraception.
You might. I don't. This is most probably a cultural difference. There are people in the world today who see nothing wrong with having multiple wives, given the ability to support them (for example, Jacob Zuma).
Strong norms against promiscuity out of wedlock still made sense though, since having lots of children without a committed partner to help care for them would usually have been impractical.
Not if they were gay.
Are you sure you can? It's remarkably easy to make retroactive "predictions", much harder to make actual predictions.
If you are a consequentialist, you should think about the consequences of such a decision.
For example, imagine a civilization where an average person has to work nine hours to produce enough food to survive. Now the pharaoh makes a new law saying that (a) all produced food has to be distributed equally among all citizens, and (b) no one can be compelled to work more than eight hours; you can work longer as a volunteer, but all the food you produce is redistributed equally.
What would happen in such a situation? In my opinion, this would be a mass Prisoner's Dilemma where people would gradually stop cooperating (because the additional hour of work gives them only epsilon benefits) and start going hungry. There would be no legal solution; people would try to make some food in their free time illegally, but the unlucky ones would simply starve and die.
The law would seem great in far mode, but its near mode consequences would be horrible. Of course, if the pharaoh is not completely insane, he would revoke the law; but there would be a lot of suffering meanwhile.
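The pharaoh scenario can be put in numbers. The sketch below (all figures are made-up assumptions) shows the Prisoner's Dilemma structure: the extra hour is individually irrational under full redistribution, even though everyone is better off if everyone works it:

```python
# A toy version of the pharaoh's law: each worker chooses whether to work
# a 9th hour. All food is pooled and split equally among N citizens, so a
# worker who puts in the extra hour keeps only 1/N of its output but bears
# its full effort cost. All numbers are illustrative assumptions.

N = 1000            # citizens
EXTRA_OUTPUT = 1.0  # food produced by the 9th hour
EFFORT_COST = 0.2   # disutility of working that hour

# Marginal payoff to *me* of volunteering the extra hour:
my_gain = EXTRA_OUTPUT / N - EFFORT_COST

# Per-person payoff if *everyone* volunteers the extra hour:
group_gain = EXTRA_OUTPUT - EFFORT_COST

print(f"private payoff: {my_gain:+.3f}")      # -> private payoff: -0.199
print(f"collective payoff: {group_gain:+.3f}")  # -> collective payoff: +0.800
```

The private payoff is negative while the collective payoff is positive, so defection dominates for each individual: exactly the mass Prisoner's Dilemma described above.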
If people had "a basic human right to have enough money without having to work", the situation could progress similarly. It depends on many things -- for example, how much of the working people's money you would have to redistribute to the non-working ones, and how much they could keep. Assuming that one's basic human right is to have $500 a month, but if you work you can keep $3000 a month, some people could still prefer to work. But there is no guarantee it would work long-term. For example, there would be a positive feedback loop -- the more people are not working, the more votes politicians can gain by promising to increase their "basic human right income", the higher the taxes, and the smaller the incentives to work. Also, it could work for the starting generation but corrupt the next one... imagine yourself as a high-school student knowing that you will never have to work; how much effort would an average student put into studying, instead of e.g. internet browsing, PlayStation gaming, or discos and sex? Years later, the same student will be unable to keep a job that requires education.
Also, the fewer people who work, the more work goes undone. For example, it will take longer to find a cure for cancer. How would you like a society where no one has to work, but if you become sick you can't find a doctor? Yes, there would be some doctors, but not enough for the whole population, and most of them would have less education and less experience than today. You would have to pay them a lot of money, because they would be rare, and because most of the money you paid them would go back to the state as tax, so even everything you have might not be enough to motivate them.
If you are a bayesian, you should think about how much evidence your imagination constitutes.
For example, imagine a civilization where an average person gains little or no additional productivity by working over 8 hours per day. Imagine, moreover, that in this civilization, working 10 hours a day doubles your risk of coronary heart disease, the leading cause of death in this civilization. Finally, imagine that, in this civilization, a common way for workers to signal their dedication to their jobs is by staying at work long hours, regardless of the harm it does both to their company and themselves.
In this civilization, a law preventing individuals from working over 8 hours per day is a tremendous social good.
Work hour skepticism leaves out the question of the cost of mistakes. It's one thing to have a higher proportion of defective widgets on an assembly line (though even that can matter, especially if you want a reputation for high quality products), another if the serious injury rate goes up, and a third if you end up with the Exxon Valdez.
You mean “incentives to fully report your income”, right? ;-) (There are countries where a sizeable fraction of the economy is underground. I come from one.)
The same they give today. Students not interested in studying mostly just cheat.
Systems that don't require people to work are only beneficial if non-human work (or human work not motivated by need) is still producing enough goods that the humans are better off not working and being able to spend their time in other ways. I don't think we're even close to that point. I can imagine societies in a hundred years that are at that point (I have no idea whether they'll happen or not), but it would be foolish for them to condemn our lack of such a system now since we don't have the ability to support it, just as it would be foolish for us to condemn people in earlier and less well-off times for not having welfare systems as encompassing as ours.
I'd also note that issues like abolition and universal suffrage are qualitatively distinct from the issue of a minimum guaranteed income (what the quote addresses). Even the poorest of societies can avoid holding slaves or placing women or men in legally inferior roles. The poorest societies cannot afford the "full unemployment" discussed in the quote, and neither can even the richest of modern societies right now (they could certainly come closer than the present, but I don't think any modern economy could survive the implementation of such a system in the present).
I do agree, however, about it being a solid goal, at least for basic amenities.
To avoid having slaves, the poorest society could decide to kill all war captives, and to let everyone unable to pay their debts starve to death. Yes, this would avoid legal discrimination. Is it therefore a morally preferable solution?
How do you envision living by this model now working?
That is, suppose I were to embrace the notion that having enough resources to live a comfortable life (where money can stand in as a proxy for other resources) is something everyone ought to be guaranteed.
What ought I do differently than I'm currently doing?
Not if the morality you anticipate coming into favour is something you disagree with. If it's something you agree with, it's already yours, and predicting it is just a way of avoiding arguing for it.