Here's the new thread for posting quotes, with the usual rules:

  • Please post all quotes separately, so that they can be voted up/down separately.  (If they are strongly related, reply to your own comments.  If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself
  • Do not quote comments/posts on LW/OB
  • No more than 5 quotes per person per monthly thread, please.

 

Rationality Quotes July 2012

Person: "It's not paranoia if they're really out to get you."
Robot: " ... Paranoia is such a childish emotion. You're an adult. Why aren't all your enemies dead by now?"

-- RStevens

The words "I am..." are potent words; be careful what you hitch them to. The thing you're claiming has a way of reaching back and claiming you.

--A.L. Kitselman

See also Paul Graham's essay Keep Your Identity Small, on the same subject.

Just explained the Higgs Boson to my friend even though I don't understand it myself. He was very convinced. I bet this is how religions get started.

-Rob DenBleyker

4ChrisNJ
Ha! I was in a checkout line at the mall and pulled up a science blog to see the developments on the Higgs boson. When I heard the 99.9999% confidence figure I literally could not hold in my verbal amazement. Well, no one around me (mother, sister, scared check-out girl) had the slightest clue what it was about, and explaining only led to resentment and confusion (despite using an apologetic, light tone, i.e. leaving out the "God Particle" association).
4MixedNuts
I'm betting on psychotic episodes. Any way to settle it?
9DanArmak
Induce psychotic episodes in some people, explain Higgs boson to others, compare outcome religiosity.
5A1987dM
Now I'm reminded of when my mother phoned me asking “what's this God particle they've found that everyone's talking about? Does it prove that God exists, or that God doesn't exist?” and I told her not to mind journalists, as they don't understand a thing and are just trying to sell newspapers, and to look at the cover picture on my Facebook profile instead. (It shows the Lagrangian of the Standard Model before symmetry breaking.) She was a bit disappointed by that. ;-)

I often tried plays that looked recklessly daring, maybe even silly. But I never tried anything foolish when a game was at stake, only when we were far ahead or far behind. I did it to study how the other team reacted, filing away in my mind any observations for future use.

— Ty Cobb

4Miller
I wonder if he let his teammates know this at the time. They were unlikely to approve, and then what would he do? I'd wager this was more about creating drama around him and his team than about studying the opponent. I've done this kind of thing in online multiplayer contexts, and the feedback you receive from it is weighted substantially more toward your own team than toward the opponents.

We find it difficult and disturbing to hold in our minds arguments of the form ‘On the one hand, on the other.’ If we are for capital punishment we want it to be good in all respects, with no serious drawbacks; if we are against it, we want it to be bad in all respects, with no serious advantages. We want the world of facts to dictate to us, virtually, how to act; but this it will never do. We always have to make a choice.

-- Theodore Dalrymple, article in "Library of Law and Liberty".

It's strange that we have many phrases like "on the one/other hand", "pros and cons", and "both sides of the story", then.

8ScottMessick
These phrases are mainly used in near mode, or when trying to induce near mode. The phenomenon described in the quote is a feature (or bug) of far mode.
7Fyrius
Not wanting to take a principle to heart is not the same thing as denying that's the way things work, though. I think most people acknowledge (or at least pay lip service to the idea) that being able to be objective is virtuous and often important, even the ones who are rubbish at actually being so in real life. And of course it's entirely possible to be blatantly one-sided about capital punishment, but still want to hear both sides of the story when your kids are having an argument. It's also entirely possible to realise you should be objective, even if that's more difficult and disturbing and less satisfying: you can just grit your teeth and tell your need for one-sidedness to shut up and let you think properly.
2MixedNuts
True, though we're still treating objectivity as fairness in arguments rather than even-handedness in truth inquiries. All these phrases refer to two sides, not more.
-1NoSignalNoNoise
Because in an argument between their kids, people haven't already made up their minds.
-1Eugine_Nier
No, those phrases exist to help patch the flaw in human reasoning the parent describes. In fact it would be strange that we had those phrases and the corresponding flaw didn't exist.

Here is a hand. How do I know? Look closely, asshole, it's clearly a hand.

Look, if you really insist on doubting that here is a hand, or anything else, there's nothing really I can say to convince you otherwise. What the tits would the world even look like if this weren't a hand? What sort of system is your doubt endorsing? After all, you can't just say "It's not true that here is a hand." You have to be endorsing some other picture of the world. [...]

So it turns out when I say things like "Here is a hand" I'm not really making a claim about the world, I'm laying down some rules for discussion. If you doubt there's a hand here, then fuck you and that's all there is to it. We can't really talk about anything now, because we can't even agree on something as simple as a goddamn hand. When we all agree here is a hand, then we can go about discussing our world in meaningful ways. Skepticism just undermines a foundation and replaces it with nothing; it's paralyzing. The grounds for such radical skepticism don't exist; it presupposes and relies on the very certainty it tries to undermine.

This is more practical than you realize. There are people who actually believ

[...]

If you doubt there is a hand, I'll use it to smush a banana on your face. If you end up looking ridiculous with banana on your face, then there was in fact a hand and my foundation is better than yours. If I end up looking ridiculous trying to grab a banana of doubtful existence with no hands, I promise to admit your foundation is better than mine. If we disagree on what happens, why am I even aware of your existence?

4roystgnr
In grade school, I recall there being more than one occasion when I slapped a friend in the back of the head for such instructional purposes when he became too solipsistic. (this wouldn't disprove solipsism, of course, but it would imply a "masochistic solipsism", and it turned out he strongly preferred realism over that) In hindsight I wonder why he remained such a steadfast friend, and now I wonder whether, if I had ever had a banana handy, that would have been the last straw.
3duckduckMOO
People who are experiencing scepticism should have bananas smushed in their faces, is what you're saying? And apparently that's worth 12 upvotes.
1MixedNuts
I've got a worse one: people who are experiencing skepticism should have their children taken away, forcibly stabbed with a syringe needle, injected with chemicals chosen by the government, and returned only if they will allow an institution they hate to keep stuffing their kids with chemicals. Edit: Wait, that is controversial? Huh. Is LW unusually opposed to mandatory vaccination or am I wrong about the mainstream?
1Alicorn
...where did that come from?
2MixedNuts
Anti-vaccers.
-5duckduckMOO
4[anonymous]
It's a reasonably accurate translation of the spirit of the original into colorful English.
0thomblake
I concur.
0Bugmaster
Certain methods for obtaining beliefs are better than others, though. It turns out that one method gives you a ~4.5 billion year old Earth, but also cellphones, computer networks, plentiful food, eradication of many diseases, spaceflight, lolcats, and so on and so forth. The other method gives you an Earth that's as old as you want it to be, and good feelings, and... what else? One of the many problems with solipsism is that it lacks an application.
-2Eugine_Nier
I deny that those things exist. ;)
-2Bugmaster
http://www.gamefaqs.com/ps2/466217-grand-theft-auto-iii/faqs/16584
0[anonymous]

If the "assumption" is so obvious and near-universal, why does Singer go on pompously to announce it.

Possibly for the same sorts of reasons that Eliezer wrote this big long thing to "restore a naïve view of truth", or that Nick Bostrom wrote this big long thing to explain why death is bad ... namely, that people have come up with all kinds of non-obvious, idiosyncratic rationalizations to justify the status-quo of starvation, ignorance, and death; that these rationalizations have, over the centuries, become cached thoughts; and that therefore getting back to "obvious and near-universal" basics is both desirable and nontrivial.

6Peter Wildeford
Yeah, this is what I had in mind. Another lesson I think it teaches is that it is easy to get caught up in long, drawn-out debates about positions that are nearly impossible to conclusively refute (think theism).
0[anonymous]
.

Fun fact: William Lane Craig has rigorously argued that it's best to one-box on Newcomb's problem.

If the rule you followed led you to this, of what use was the rule?

— Anton Chigurh

7[anonymous]
.
-3mwengler
Recced because it is funny and relevant; I am actually quite enjoying the Chigurh quotes. Although I am tired of Will always bringing in Catholic stuff. :)
5Will_Newsome
Lane Craig isn't Catholic, and I didn't bring him up.
3Oligopsony
That question has some surprising correlations more generally (at least for someone who's been trained to cluster LW positions together into a natural set.)
5Luke_A_Somers
At best, don't do that for the same reasons he did - but even there I'm sure that he's right even on the reasoning some of the time.
0[anonymous]
.
0A1987dM
I hadn't heard of him before. [follows the link] Don't we?
[anonymous]

Suppose we find a society which lacks our understanding of human physiology, and that speaks a language just like English, except for one curious family of idioms. When they are tired they talk of being beset by fatigues, of having mental fatigues, muscular fatigues, fatigues in the eyes and fatigues of the spirit. Their sports lore contains such maxims as 'too many fatigues spoils your aim' and 'five fatigues in the legs are worth ten in the arms'. When we encounter them and tell them of our science, they want to know what fatigues are. They have been puzzling over such questions as whether numerically the same fatigue can come and go and return, whether fatigues have a definite location in matter and space and time, whether fatigues are identical with some physical states or processes or events in their bodies, or are made of some sort of stuff. We can see that they are off to a bad start with these questions, but what should we tell them? One thing we can tell them is that there simply are no such things as fatigues - they have a confused ontology. We can expect them to retort: 'You don't think there are fatigues? Run around the block a few times and you'll know better! There are many things your science might teach us, but the non-existence of fatigues isn't one of them!'

--Dan Dennett, Brainstorms: Philosophical Essays on Mind and Psychology

1Alejandro1
That's one of my favorite Dennett passages. A similarly great anthropological metaphor is his tale of the forest god Feenoman and the "Feenomanologists" who study this religion. I have not been able to find it online, but it is in the essay "Two approaches to mental images", in the same book.
[anonymous]

We are all aware that the senses can be deceived, the eyes fooled. But how can we be sure our senses are not being deceived at any particular time, or even all the time? Might I just be a brain in a tank somewhere, tricked all my life into believing in the events of this world by some insane computer? And does my life gain or lose meaning based on my reaction to such solipsism?

--- Project PYRRHO, Specimen 46, Vat 7. Activity recorded M.Y. 2302.22467. (TERMINATION OF SPECIMEN ADVISED)

From Sid Meier's Alpha Centauri

"Buddhism IS different. It's the followers who aren’t."

-- A Dust Over India.

Commentary: Reading this made me realize that many religions genuinely are different from each other. Christianity is genuinely different from Judaism, Islam is genuinely different from Christianity, Hinduism is genuinely different from all three. It's religious people who are the same everywhere; not the same as each other, obviously, but drawn from the same distribution. Is this true of atheistic humanists? Of transhumanists? Could you devise an experiment to test whether it was so? Would you bet on the results of that experiment? Will they say the same of LessWrongers, someday? And if so, what's the point?

Now that I think on it, though, there might be a case for scientists being drawn from a different distribution, or computer programmers, or for that matter science fiction fans (are those all the same distributions as each other, I wonder?). It's not really hopeless.

2brahmaneya
I don't think his comment about Buddhist people being not different is even true. They are, for example, on average less violent than Muslims. They're simply not different to the extent he expected them to be.
2ChristianKl
I don't think that the claim is really supported by the observations that he made in the article. In Buddhism lying isn't as bad as it is in Christianity. Using violence is more accepted in Christian culture than in Buddhism. As a result the followers do act differently: they are less likely to use violence against him but more likely to lie to him. Why do you think that people are the same everywhere? And what do you mean by "the same"?
5fubarobfusco
How much of this difference can actually be attributed to the followers attempting to obey religious precepts, and how much is simply floating in the sea of cultural memes in the parts of the world where Buddhism and Christianity respectively happen to be common? Would you expect practicing Christians in Japan, Korea, China, or India (who are ethnically Japanese, Korean, etc.) to behave more like your model of "Buddhists" or of "Christians"?
5ChristianKl
Religion is more than obeying general precepts. During the time my Catholic grandmother was in school she wanted to read some book. Before reading it she asked her priest for permission, because it was on the Catholic Index of forbidden books. Following the religion seriously and not reading anything that's on the Index has an effect that goes beyond the general precepts. A lot of Buddhists are vegetarians. A lot of Buddhists meditate. Those practices have effects. Your question assumes that people in Japan can be either "Christians" or "Buddhists" but can't be both. Even though the Christians in Malta pray to Allah, you can't be a Muslim and a Christian at the same time. There's no similar problem with being a Zen Buddhist and a Christian at the same time. I think there's a correlation, but I'm not sure about the extent to which Far East Christians resemble Western Christians. Making a decision to convert to Christianity when you live in China has a lot of aspects that don't exist when someone who lives in a Christian town simply decides to stay Christian.
2fubarobfusco
I'm not sure I understand your response. Let me restate what I was getting at above, in responding to this assertion: This claim makes a prediction regarding the rates of lying and violence among "followers" of Buddhism and Christianity. But what counts as a data point for or against this claim depends on what could be meant by "the followers" of these religions. Two possible interpretations: 1. "People who explicitly consider themselves to be Buddhists or Christians, and who attempt to live according to what they think the precepts of Buddhism or Christianity are"; 2. "People who come from those cultures which we call 'Buddhist' or 'Christian' respectively, regardless of whether those individuals consider themselves observant or religious at all." For instance, I consider myself an atheist, but I was raised in a Christian family and live in a society where Christianity is the predominant religious influence. I have read the Gospels (and most of the rest of the Bible); by contrast I have not read the Qur'an, the Tripitaka, the Vedas, or the Talmud. I don't pray, attend church, or listen to the teachings of priests or pastors. By interpretation 1, I am not a Christian; and whether I happen to lie or do violence would not count for or against the claim above. (It would also not count regarding Buddhism; although I've done Zen meditation more recently than I've done Christian worship ...) By interpretation 2, my cultural background counts me as a Christian; and my tendencies to lie or do violence would count for or against the claim above. So, I'm asking: What would count as evidence for or against the claim regarding the rate of lying and violence among Christians and Buddhists?
-2ChristianKl
I don't think you understand what Buddhism happens to be. If I go into something rumored to be a Buddhist monastery and ask the inhabitants whether they attempt to live according to the precepts of Buddhism, there's a fairly good chance that the answer is no. Attempting stuff means having attachment to it. Buddhism is about moving beyond such attachments. What's my empirical claim? log((time spent in Buddhist rituals + X) / (time spent in Christian rituals + X)) correlates with log((rate of lying + Y) / (rate of being violent + Y)). The formula is only supposed to give a general idea; there's probably a better way to express it.
2Viliam_Bur
That's evidence that the religion does not change people too much. Which might be a good thing. Religious cults do change people. An average Scientologist does not behave the same way as an average Christian. You could measure the influence of the religion by measuring how the distribution of personalities changes. On the other hand, let's not reverse stupidity here. Changing personality is generally a bad thing, but that is not necessary, just very probable.
2wedrifid
It's also evidence that religion may change people in the same way regardless of details.
2Eugine_Nier
If LW-rationality goes mainstream, its followers will then be drawn from the same distribution.
0faul_sname
I find it unlikely that we'll have the opportunity to observe this.
0Eugine_Nier
I think it's plausible that LW-rationality, or rather a third hand version of it, will go mainstream.
1kajro
You could define equivalence relations on the set of religious people (RP) and the set of atheistic humanists (AH). In most cases, the people in the sets only interact with (or are at least influenced by) other members of the same or similar sets. Turn these interactions into operations on members of the set (a,b in RP, a*b = "a makes b feel awkward/scared/unhappy around a", or maybe something based on social relationships between members). These operations would create new "people" whose characteristics are similar to those of the person who has been molded by the defined social interaction(s). Starting from a certain subset of RP, these operations could possibly generate the entire set of members (i.e. a*b = c in RP, where c has the same disposition as someone who has interacted with b under some applicable equivalence relation). Do the same for AH (using the same equivalence relation), and compare the structures. Under different types of interactions between members, this could reveal some interesting group-theoretical properties. Maybe there is a generating set for RP and not for AH if we keep the equivalence relations from getting too specific. I guess what I'm getting at is that the structural elements of a certain set of people could tell us something about the distribution that the set was pulled from, or even invalidate the need to look at the distribution at all. Maybe the structure is even more important; these sets could pull from the same distribution, but the ideologies that formed these sets could result in drastically different results from operations (social interactions or relationships) between members of the set. Or we could see if only the generating members of the set were pulled from the same distribution, but the social interactions between them created a set member not from the original distribution, resulting in the set having to pull from that distribution also. Anyway, this is probably not coherent or useful at all, but if nothing else

Human behavior is economic behavior. The particulars may vary, but competition for limited resources remains a constant.

– CEO Nwabudike Morgan in Alpha Centauri

I find it troubling how much I want to upvote you just because you're quoting SMAC.

7mwengler
I finally wikipediaed this and see you are talking about a Sid Meier video game. I played Civilization once for about an hour (where I was amazed when my 10-year-old consultant on the game told me I was an idiot for going democratic, and that I would have had a much better military if I'd gone communist and then built a Statue of Liberty, or something like that). I have spent countless hours on Railroad Tycoon back before Steve Jobs got fired. Do I want to get SMAC and risk ruining my life? Perhaps have myself lashed to a mast before I try it? Is SMAC addictive?

SMAC is my favorite of the Civilization series for two reasons:

The first is that it's just a very well-made game- it has lots of features and internal mechanics which took Civilization over a decade to catch up to (and still doesn't do as well).

The second is that it starts at slightly-future tech, and proceeds to singularity. I find that way more satisfying than starting at agriculture and proceeding to slightly-future tech, partly because I like sci-fi more than I like history, and partly because it lets you consider more interesting questions.

For example, the seven factions in the game aren't split on racial lines, but on ideological lines: there are seven competing views for how society should be organized and what the future should look like, and each of them has benefits and penalties that are the reasonable consequences of their focuses.

SMAC is deeply flawed for three reasons:

The AI is over a decade old, and so it's difficult to be challenged once you know how the game works. (This was also before they had figured out a good way to hamstring ICS, and so ICS is the dominant yet unfun strategy.)

The multiplayer code is over a decade old, and so not only are the AI difficult to [...]

5sixes_and_sevens
This is making me feel old. Me and a few college mates had a SMAC multiplayer game running for the better part of a year. If someone told me now that I could have a multiplayer game experience by taking my turn, zipping up the game file and emailing it to the next person in the cycle, I would laugh in their face.
1RobertLumley
Alien Crossfire added 7 more civilizations, two of which are even more imbalanced than University. Which I wasn't sure was possible.
1Vaniver
Right- I tended to play SMAC instead of SMACX because of the balance issues (or, at least, play it with just the original 7 because it did add new buildings and secret projects) and the new 7 had weird divisions. The Corporation and the University seem like natural divides- but, say, the Angels were just odd ("We're super hackers!" "Wouldn't that make sense for a gang inside another civilization, rather than a full civilization?").
5TheOtherDave
It's also worth noting that the game allows for creating custom factions... the faction definitions are just parameters in a text file. So one can self-medicate the balance issues if desired.
3RobertLumley
Yeah, I tend to agree. My favorite mix is playing as University, with the Gaians, Peacekeepers, Cybernetic, Planet Cult, and Believers, with one of the progenitor factions to make things interesting.
7RobertLumley
SMAC is the crown jewel of the series, if you ask me. The expansion, Alien Crossfire, is almost impossible to find legally, though, and adds a lot to the game. Is it addictive? I don't know, largely because it's difficult to specify what is "addictive" and what isn't. The best answer I can give you is yes, in bursts. I'll play it for eight hours in a row one day and then not touch it for a month.
5sixes_and_sevens
I got the original SMAC and SMAX in one set on Amazon a few years ago. A quick google reveals it's still available. Less than $5.
3RobertLumley
That's good. I heard somewhere it was really rare. Guess it's not.
4Bugmaster
I don't know about "addictive", but I can tell you that playing SMAC with 5 to 7 human players, and no AIs, will definitely have a... transformative... effect on your life. You will be amazed at how quickly things go from [...] to "Trust no one."
-1Eugine_Nier
I've successfully played Diplomacy games with friends without it ruining any friendships.
2Bugmaster
Alpha Centauri is much more conducive to abject paranoia than Diplomacy, though -- at least, the way we played it. We would start a game by taking turns on the same machine, for the first 10 turns or so, during lunch. Then, we would go back to work, and take our turn on that machine when it came up (we'd VNC into it). This way, the game doesn't disturb our actual work too much, and each player can take as long to micromanage his cities as he wants. Thus, all the player-to-player interaction takes place on back channels -- through email, or clandestine meetings. This fact, combined with the knowledge that one tech advance, or one airstrike at the right time, could shift the entire balance of power, results in truly Cold War-grade levels of paranoia. It is an exhilarating experience, in a way. I should probably mention that no relationships were ruined by our games, either, as far as I can tell. A game is still only a game, after all.
-1Eugine_Nier
The Diplomacy games I'm referring to were also played one move a day.
4Oligopsony
Yes.
2DaFranker
Yes to all of those questions.
2TheOtherDave
My experience of the single-player game was that it was fun, but the AI was sufficiently stupid that (a) it was trivial to beat unless I was extremely unlucky in the first twenty years or so, and (b) it rewarded tedious amounts of micromanaging. There are various "play with one hand tied behind your back" style variants that can extend the fun for a little while, but that sort of thing only goes so far. So, no, it wasn't especially addictive... I played it a lot for a little while, played it a little for a longer while, and haven't looked at it in years. I never got into the multiplayer version, but can see where it might be a lot more addictive.
4RobertLumley
I recently rediscovered it and realized how many quotes fit into LW memes. And apparently there was an expansion too. I never knew that until about a month ago.
4tgb
Another amusing one from Alpha Centauri:
5MBlume
This actually seems wrong. Clicking "retry" seems to map to "make the same attempt, in the same way, and hope things go better". It's worth trying once or twice, but eventually you have to update towards the possibility that the strategy you're trying is fundamentally flawed, that it will never work, abort, and come at things from a completely different angle.

Doing the same thing over and over again in the hopes of eventually getting a different result is, I'm told, one definition of insanity.

It is also, in my experience, an important aspect of physical therapy.

I've never understood that saying. Most real-life actions are, practically speaking, nondeterministic. I've often found it worthwhile to test each course of action 10 times and keep track of what fraction of the time it worked (if the course of action is quick and easy to test).

4MBlume
Right, which is why sometimes you need help -- sometimes a domain expert tells you that yes, you might naively think that, having tried the same thing 25 times, you can reasonably give up, but that's not true in this case because of these biological mechanisms.
3wedrifid
In lieu of (and in most cases in precedence over) biological mechanisms I would take testimony from the expert that, for example, "30 of the 50 people I have seen learn this took 30 or more attempts and I don't know of a better way to try than what you are doing".
2[anonymous]
If you really did the same thing in the same environment and expected a different result, it would be insane. Realistically, I never expect the world to respond to my actions the same way twice, so that saying holds about as much weight as any other truism.
0mwengler
Well, there will always be a difference in the readings on the clocks on the wall for each try; it is hard for one person to do the same thing 10 times simultaneously. So if you allow the "except for things you didn't think could possibly matter, or were unaware of" to remain implicit, do you get a better feeling about it?
0mwengler
I've just been doing a 750-piece jigsaw puzzle with the kids while on vacation. I can't tell you how many pieces didn't fit until about the 7th time I tried them. Anybody who thinks doing a 750-piece jigsaw puzzle has nothing to do with the philosophy of science or engineering either has not done a 750-piece jigsaw puzzle, or has not done science or engineering, or is not thinking optimally. I think, as with everything in practice, theory is quite different from reality. It is the philosopher's noble task to narrow that difference, even as improvements in practice widen it faster than it can ever be narrowed.

I never felt I was studying the stupidity of mankind in the third person. I always felt I was studying my own mistakes.

-Daniel Kahneman, winner of the Nobel Prize in Economics

baiter

"New rule: If you handle snakes to prove they won't bite you because God is real, and then they bite you -- do the math."

– Bill Maher, Real Time with Bill Maher, 6/8/2012

video article

(An important lesson, but I wonder if it's wise to teach it in the context of politics. Among other things, I worry that the messages "boo religion!", "yay updating on evidence!", "boo religious conservatives!", "yay pointing out my enemies are inferior to me!", "yay rationality!", "yay my side for being comparatively rational!", &c. will become mixed up and seen as constituting a natural category even if they objectively shouldn't be. (Related.))

Sure. But if I handle snakes to prove they won't bite me because God is real, and they don't bite me -- you do the math.

More seriously, though: the sentiment expressed in the quote is flawed, IMHO. Evidence isn't always symmetrical. Any particular transitional fossil is reasonable evidence for evolution; not finding a particular transitional fossil isn't strong evidence against it. A person perjuring themselves once is strong evidence against their honesty; a person once declining to perjure themselves is not strong evidence in favour of their honesty; et cetera.

I think this might have something to do with the prior, actually: The stronger your prior probability, the less evidence it should take to drastically reduce it.

Edit: Nope, that last conclusion is wrong. Never mind.

3Jack
Right. Sensitivity does not equal specificity. Maher makes the mistake of assuming the rate of false positives and false negatives for the 'snakebite test for god' are equal. The transitional fossil test for evolution and the perjury test for honesty both have high false negative rates and low false positive rates.
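A minimal numerical sketch of the asymmetry described in the two comments above, using the transitional-fossil example; every probability (and the helper function) is invented purely for illustration:

```python
import math

# Hypothetical likelihoods for the transitional-fossil example (not from the thread):
# H = "evolution is true", E = "this particular transitional fossil is found".
p_e_given_h = 0.10       # fossils are rare even if H is true
p_e_given_not_h = 0.001  # but they essentially never turn up if H is false

def log10_likelihood_ratio(p_under_h, p_under_not_h):
    """How many orders of magnitude the odds for H shift after seeing the outcome."""
    return math.log10(p_under_h / p_under_not_h)

shift_if_found = log10_likelihood_ratio(p_e_given_h, p_e_given_not_h)
shift_if_not_found = log10_likelihood_ratio(1 - p_e_given_h, 1 - p_e_given_not_h)

print(f"Fossil found:     odds shift by {shift_if_found:+.2f} orders of magnitude")    # +2.00
print(f"Fossil not found: odds shift by {shift_if_not_found:+.3f} orders of magnitude")  # -0.045
```

With these numbers, one outcome shifts the odds by two orders of magnitude while the other barely moves them; whether the snakebite case is similarly lopsided, and in which direction, depends entirely on the false-positive and false-negative rates one assumes.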
1Fyrius
Hm, I thought that reasoning argued against your own non-serious first paragraph rather than what Bill said. If the idea is "if God is real (and won't let snakes bite me), then they won't bite me", then being bitten shows that the first part is false, but not being bitten doesn't say anything about the first part being true or false. Or if you don't want to get hung up on formal logic, then it's valid but very weak evidence, like a hypothesis not being falsified in a test.
1Ezekiel
What Bill Maher said was that if a person claims that ~Bite is significant evidence for God, they must admit that Bite is significant evidence for ~God. I'm saying I don't think that's accurate. The sentiment that one should update on the evidence is obviously great, but I think we should keep an eye on the maths.
0Fyrius
Fair enough, if the premise is that ¬Bite → God exists.
0[anonymous]
It's more clearly apparent when you
7Desrtopa
Strictly speaking, the Bible says of Jesus's followers "they will pick up serpents." It doesn't say "they will pick up serpents and not get bitten." Of course, it does also say they can drink deadly poison without being harmed. As it happens, I am related to and share my last name with this guy.
1wedrifid
Seems like this calls for either preventative antidotes and something to prove or a serious selective breeding program!
3lavalamp
Why does this have 12 upvotes? The fact that this is slightly funny and for our "side" doesn't make it good logic. We've no reason to think snakebites and deities ought to be correlated at all. Reversed stupidity is not intelligence and all that. This ought to be below the visibility threshold.
6Desrtopa
But if you do think that snakebites and deities are correlated, then the correlation has to run both ways. I didn't upvote since it's more politics than rationality, but there is a useful lesson there.
0lavalamp
Sure, it has to go both ways. And it is evidence against some sort of snake-handling god. But not against gods in general.
4Desrtopa
If a snake handler supposes that their ability to safely handle snakes is evidence that they're protected by the Christian god as a disciple of Jesus, then they must suppose in turn that their inability to handle snakes safely is evidence that they aren't protected by the Christian god as a disciple of Jesus. At least some part of the edifice has to take a hit. I don't know if any non-Christian religion uses snake handling as a religious ritual. In Christianity, it's practiced in some minor denominations as an interpretation of a specific line.
1pragmatist
Yes, but it is entirely consistent for a snake handler to think that handled snakes not biting is very strong evidence for the existence of a Christian god while also thinking that handled snakes biting is very weak evidence against the existence of a Christian god, weak enough not to significantly dent their faith. So Maher's argument as stated doesn't work. "Doing the math" here might just mean very very slightly reducing your credence in God's existence.
0lavalamp
I don't disagree with this and am not arguing against it. My point is, there are lots of people (including probably a very large majority of Christians) who don't conceive of God as caring particularly one way or the other about snake handlers. For these people, Maher's argument doesn't hold at all. Of course, the snake handlers themselves should update (modulo what pragmatist said).
0hairyfigment
That seems like a lot of comments you made there. Are you saying that if people consistently held poisonous snakes without getting bitten, and on inspection only their vocal faith distinguished them from other people, you personally would not increase your belief in literal god(s) one whit? If so, how do you justify this seemingly anti-Bayesian position?
0lavalamp
All pro-snake-handling gods are gods, but not all gods are pro-snake-handling gods. Evidence against pro-snake-handling gods is evidence against such a tiny slice of god-space that I'm calling it a rounding error. Evidence in the other direction would have a drastically different effect, of course. In the hypothetical world where snake-handling ability was perfectly correlated with stated belief and all confounding factors have been accounted for, I would massively increase the probability mass I give to pro-snake-handling gods (and consequently, gods in general).
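A toy Bayes update illustrating the "rounding error" point above; the three-way hypothesis partition and all numbers are invented for illustration, not drawn from the thread:

```python
# Toy Bayes update: a snakebite is strong evidence against a protective god,
# yet barely moves the probability that *some* god exists, because protective
# gods occupy only a tiny slice of the prior on god-space. All numbers invented.
priors = {
    "no god": 0.500,
    "god who protects snake handlers": 0.001,
    "god indifferent to snake handling": 0.499,
}
p_bite_given = {                       # P(handler gets bitten | hypothesis)
    "no god": 0.30,
    "god who protects snake handlers": 0.01,
    "god indifferent to snake handling": 0.30,
}

p_bite = sum(priors[h] * p_bite_given[h] for h in priors)
posteriors = {h: priors[h] * p_bite_given[h] / p_bite for h in priors}

print(f"P(some god)       before: {1 - priors['no god']:.4f}  after bite: {1 - posteriors['no god']:.4f}")
print(f"P(protective god) before: {priors['god who protects snake handlers']:.5f}  "
      f"after bite: {posteriors['god who protects snake handlers']:.5f}")
# P(some god) moves from 0.5000 to roughly 0.4995, while P(protective god)
# drops by about a factor of 30.
```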
-1wedrifid
Is that still true at the limit of zero gods existing? It certainly precludes a counter-example!
0lavalamp
Does it help if I add the qualifier "hypothetical" or "possible"? I.e., All possible pro-snake-handling gods are gods, but not all possible gods are pro-snake-handling gods. Otherwise I'm not sure I follow what you're saying...
0wedrifid
Yes. (And I only mentioned the exception because it surprised me that the near tautology had a counterexample.)
2Richard_Kennaway
It is promised that "these signs will follow those who believe". So if they do bite you, then God is still real, but you didn't have enough faith. Just doing this.

A computer is like a violin. You can imagine a novice trying first a phonograph and then a violin. The latter, he says, sounds terrible. That is the argument we have heard from our humanists and most of our computer scientists. Computer programs are good, they say, for particular purposes, but they aren't flexible. Neither is a violin, or a typewriter, until you learn how to use it.

-Marvin Minsky

Thinking of your brain (and yourself) as an instrument to be played might be useful for instrumental rationality.

0[anonymous]
Upvoted, but there might be some partiality involved due to the (perceived) dig at those hated Greens. ;)

All mushrooms are edible. But some of them you can eat only once.

From Paleohacks.

It seems like the author is defying the common usage without a reason here. The common usage of edible is "safe to eat", or more precisely "able to be eaten without killing you", and I don't see what use redefining it to mean "able to be swallowed" is. It just seems like a trite, definitional argument that is primarily about status.

Nonetheless, the sentiment "You can do X, but only once" seems broadly useful.

5RobertLumley
Can you explain how so? This does not seem obvious to me. It seems broadly true, but not broadly useful. (And I'm not really sure what you mean by useful anyway.)

My model of Eliezer says: "You can launch AGI, but only once."

5MixedNuts
I think I get it. If you have a big weapon of doom that will ruin everything, it's not useless; you can use it when you're absolutely desperate. So options that sound completely stupid are worth looking at when you need a last resort.
4sketerpot
Having a scary desperate option, along with clear, publicly-known criteria which will trigger it, can prevent things from deteriorating to the point where you'll be tempted to use that desperate option. A honeybee will die if it stings you, but it will sting you if it feels too threatened, so people try to avoid antagonizing honeybees, and the bees don't end up dead because people didn't antagonize them. Related: Thomas Schelling's "Strategy of Conflict".
1mwengler
Just because you can do something doesn't mean the price for doing it is acceptable. Just because the price for doing something is your own death (or consignment to non-volatile ROM) doesn't mean the price is unacceptable.

I agree with the sense of your comment but wish to nitpick - I think "nontoxic" means you can eat it without it killing you. Crayons fit this definition, but are not properly called "edible"; many flowers can be eaten without killing you but "edible flowers" are the ones you might actually want to eat on purpose. "Edible" is narrower.

8NancyLebovitz
I take "All mushrooms are edible. But some of them you can eat only once." to be a useful warning, hopefully made more memorable by being framed as a joke.
7duckduckMOO
Apart from the hilarious joke, this quote makes the point that "will kill you" is not actually the same as impossible to eat, which more generally points out that "impossible" is often used in place of "really bad idea." I read edible as a synonym for eatable. Poisonous mushrooms: edible. Rocks: not edible. That's how that word is attached in my head. I assume you read it as non-poisonous/fit to eat, so it feels like a crass and overt redefinition. If the guy who wrote that reads the word the same way I assume you do, it's a really cheap joke. If he doesn't, the quote makes a lot of sense.
4nshepperd
Sure. It's really an amusing play on words more than a rationality quote.
3komponisto
You and Alicorn are confusing denotation and connotation here. "Edible" simply means "able to be eaten"; it is used instead of "eatable", because the latter is for some reason not considered a "standard" or "legitimate" word. As such, it possesses exactly the same semantics as "eatable" would; in fact, a sufficiently supercilious English teacher will correct you to "edible" if you say "eatable". (Similarly "legible" instead of "readable", although "readable" seems to be increasingly accepted these days.) Yes, it's true that people usually only apply the word to a more restricted subset of things than those which won't kill the eater; but such a behavioral tendency should not be confused with the actual semantics of the word. The sense of the quote is exactly the same as if it had been: "All mushrooms can be eaten. But some of them you can eat only once." In this case, it would hardly be legitimate to complain that "can be eaten" means "safe to be eaten". The fact is that the phrase is ambiguous, and the quote is a play on that ambiguity. Likewise in its original form, with "edible". You've just provided a reasonable first-approximation analysis of wit!

Of course 'edible' does literally mean 'can be eaten', and equally of course, it is normally interpreted as 'fit to be eaten'. That's why paleohacks writes it that way. It's a joke!

5tut
When did this turn into the jokes thread?
3scav
If you're not having fun, why bother?

(Similarly "legible" instead of "readable", although "readable" seems to be increasingly accepted these days.)

Something "illegible" cannot have its component characters distinguished or identified. Something that is merely "unreadable" might just have ridiculously convoluted syntax or something.

6Shmi
The standard definition of edible is fit to be eaten, not "able to be eaten".
gwern

Indeed. Given people like Monsieur Mangetout or disorders like pica, it's hard to see why we would even bother using the word 'edible' if it didn't mean fit to be eaten.

-5komponisto
4A1987dM
I've seen a distinction being made between “legible” applying to typography etc. and “readable” applying to grammar etc., so that a über-complicated technical text typeset in LaTeX would be legible but not readable, and a story for children written in an awful handwriting would be readable but not legible.
2bentarm
To claim that the actual semantics of a word can be defined by anything other than the behavioural tendencies of its users is, at best, highly controversial. Whatever you or I may think, "irregardless" just is a (near) synonym for "regardless" and, to judge from my own experience (and the majority of comments from native speakers on the thread) "edible" actually means "safe to eat" (although, as Alicorn says, it's a little bit more complicated than that). Words mean exactly what people use them to mean - there is no higher authority (in English, at least, there isn't even a plausible candidate for a higher authority).
2TheOtherDave
I'm advisedly ignoring the original context, but I'm curious about the idea that your behavioral tendencies in particular (and mine) with respect to the usage of "irregardless" don't affect the actual semantics of the word. At best, it seems that "irregardless" both is and is not a synonym for "regardless"... as well as both being and not being an antonym of it. Unless only some usages count? Perhaps there's some kind of mechanism for extrapolating coherent semantics from the jumble of conflicting usages. Is it simple majoritarianism?
0komponisto
On the contrary, it's trivially true. If semantics depended exclusively on behavior patterns, then novel thoughts would not be expressible. The meaning of the word "yellow" does not logically depend solely on which yellow objects in the universe accidentally happen to have been labeled "yellow" by humans. It is entirely possible that, sitting on a planet somewhere in the Andromeda galaxy, is a yellow glekdoftx. Under the negation-of-my-theory (I'll try not to strawman you by saying "under your theory"), that would be impossible, because, due to the fact that humans have never previously described a glekdoftx as "yellow", the extension of that term does not include any glekdoftxes. Examples like this should suffice to demonstrate that semantic information does not just contain information about verbal behavior; it also contains information about logical relationships. Guess what: I agree! Here, indeed, is my proof of this fact: 1. "Edible" means "able to be eaten". 2. In the relevant contexts, "able to be eaten" means "safe to eat". 3. Therefore, "edible" means "safe to eat". See how easy that was? And yet, here I am, dealing with a combinatorial explosion of hostile comments (and even downvotes), all because I dared to make a mildly nontrivial, ever-so-slightly inferentially distant point! Insert exclamation of frustration here. Yes, that thought is in my cache too. It doesn't address my point, which is more subtle.
0TimS
It's reasonable to play with the expected meanings - but playing with the expected meanings in this case seems inconsistent with applying the label "Rationality Quote." The quote is isomorphic to "Don't eat poisonous things - and some things are poisonous." That quote won't get upvotes if posted as a Rationality Quote - why should its equivalent?
-3komponisto
I don't see the equivalence. But remember, I'm not defending the quote as a Rationality Quote. I'm only defending the quote against the charge of inappropriate word choice.
0Eugine_Nier
Upvoted for this.
0RobertLumley
I don't think I'm confusing the two, I'm saying the connotation is what's important when the connotation is what is almost always used. And I'm not claiming that the quote is wrong, just that it's not really a rationality quote.
4komponisto
Unfortunately, this sentence itself seems to betray some confusion: "connotation" is not a kind of alternative definition; hence it makes no sense to say that "the connotation is what is almost always used". Rather, both denotation and connotation are always present whenever a word is used. "Connotation" refers to implications a word has outside of its meaning. For example, the words "copulate" and "fuck" have the same meaning (denotation), but differing connotations. The crucial difference is that, while changing the denotation of a word (or getting it wrong) can change the truth-value of a statement, merely changing the connotation never can. Instead, it merely changes the register, signaling-value, or "appropriateness" of the statement. A scientist, in the ordinary course of affairs, might report having observed two lizards copulating; but it would be rather shocking to read in a scientific paper about lizards fucking, and one virtually never does. However, if a scientist ever were to write such a thing, the complaint would not be that they had claimed something false; it would be merely that they had made an inappropriate choice of language. A lot of verbal humor results from using "inappropriate" connotations. The "edible" quote is an example of this, in fact. The listener understands that the sentence is true but still "off" in some way. Using an inappropriate connotation is not a misuse of the word, otherwise the humor wouldn't work (or at least, it wouldn't work in the same way -- there are other forms of verbal humor which do involve incorrect usage). Well, I agree about that -- but that doesn't really seem to have been the main thrust of your comment. Your claim seemed to be that the quotee had redefined the word "edible"; and this is what I am disputing.
1Username
This is a silly argument.
-8komponisto

Reminds me of advice to people who want to know if they can sue someone: You can always sue. You just can't always expect to win.

Glendower: I can call spirits from the vasty deep.
Hotspur: Why, so can I, or so can any man; But will they come when you do call for them?

-- Shakespeare, Henry IV, Part 1

9DanielLC
Similarly, from The Seventy Maxims of Maximally Effective Mercenaries: "Everything is air-droppable at least once." I don't really see the point of either of these quotes. Edit: Fixed. Thanks.
3[anonymous]
It's not air-droppable if there's no aircraft capable of lifting it!
3arundelo
Because Markdown renumbers numbered lists for you (making it easier for you to re-order them). Prevent it with a backslash before the period: > 11\. Everything is air-droppable at least once.
2MBlume
Are the maxims actually collected somewhere, or just referenced piecemeal in the comic?
4DanielLC
Wikipedia has them.

Religion begins by being taken for granted; after a time, it is elaborately proved; at last comes a time (the present) when the whole effort is to induce people to let it alone.

--John Stuart Mill (1854).

However lousy it is to sit in your basement and pretend to be an elf, I can tell you from personal experience it's worse to sit in your basement and try to figure if Ginger or Mary Ann is cuter.

-- Clay Shirky

From the same page:

if you take Wikipedia as a kind of unit, all of Wikipedia, the whole project--every page, every edit, every talk page, every line of code, in every language that Wikipedia exists in--that represents something like the cumulation of 100 million hours of human thought. [...] And television watching? Two hundred billion hours, in the U.S. alone, every year. Put another way, now that we have a unit, that's 2,000 Wikipedia projects a year spent watching television. Or put still another way, in the U.S., we spend 100 million hours every weekend, just watching the ads. This is a pretty big surplus. People asking, "Where do they find the time?" when they're looking at things like Wikipedia don't understand how tiny that entire project is

This gives me a new perspective on human insanity, or more positively, on how much relatively low-hanging fruit is out there.

2faul_sname
This seems ridiculously low. That's an average of less than one minute per person worldwide.
1Eugine_Nier
Most people don't contribute to Wikipedia.
0thomblake
I think I've spent about a minute contributing to Wikipedia - and I'm one of those rare humans with access to a computer and clean water. EDIT: Wait, including talk pages... probably several minutes.
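A quick arithmetic check of the Shirky figures and the "less than one minute per person" estimate above; the world-population figure is an assumption (roughly right for 2012):

```python
# Back-of-the-envelope check of the numbers quoted above. The population
# figure is an assumption (approximately the 2012 world population).
wikipedia_hours = 100e6        # Shirky: total human thought invested in Wikipedia
us_tv_hours_per_year = 200e9   # Shirky: TV watched in the US per year
world_population = 7e9         # assumed

print(us_tv_hours_per_year / wikipedia_hours)    # 2000.0 "Wikipedias" of TV per year
print(wikipedia_hours * 60 / world_population)   # ~0.86 minutes of Wikipedia per person, ever
```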

I begin with the assumption that suffering and death from lack of food, shelter, and medical care are bad. I think most people will agree about this, although one may reach the same view by different routes. I shall not argue for this view. People can hold all sorts of eccentric positions, and perhaps from some of them it would not follow that death by starvation is in itself bad. It is difficult, perhaps impossible, to refute such positions, and so for brevity I will henceforth take this assumption as accepted. Those who disagree need read no further.

Peter Singer

1gwern
Reminds me a little of Avicenna.

"A good plan violently executed now is better than a perfect plan executed next week."

General Patton

Obviously not true in all cases, but good advice for folks that have trouble getting things done despite being extremely intelligent (which this community has more than its fair share of).

0matabele
2Vaniver
Consider Eisenhower: other humans must be interacted with in real time. Consider a non-military analogy: a good comeback confidently issued now is better than a perfect comeback issued next week. It also works for computing. Compare languages which have a REPL with those that don't: for many applications, good code executed now is better than perfect code executed next week. This is often because requirements change over time and the future cannot be predicted: the customers won't know what module they want next until they've used the current module.
2TheOtherDave
A fellow director is fond of saying, as she puts together rehearsal plans for the show she's about to direct, that while 95% of what she ends up doing in rehearsal is pulled out of her ass rather than planned, rehearsal plans are nevertheless an indispensable part of preparing her ass for rehearsals.

Why do you insist that the human genetic code is "sacred" or "taboo"? It is a chemical process and nothing more. For that matter—we—are chemical processes and nothing more. If you deny yourself a useful tool simply because it reminds you uncomfortably of your mortality, you have uselessly and pointlessly crippled yourself.

– Chairman Sheng-ji Yang in Alpha Centauri

For that matter—we—are chemical processes and nothing more.

While this is in some sense true, it doesn't add up to normality; it is an excuse for avoiding the actual moral issues. Humans are chemical processes; humans are morally significant; therefore at least some chemical processes have moral significance even if we don't, currently, understand how it arises, and you cannot dismiss a moral question by saying "Chemistry!" any more than you can do so by saying "God says so!"

I don't think it's an excuse - it's an aside from the rest of the quote. If you take out that sentence, the quote still makes sense. I think the moral question (from a consequentialist point of view, at least) is put aside when he assumes (accurately, in my opinion) that the tool is "useful". Its usefulness to humans is all that matters, which is his point.

5Multiheaded
In-game, Yang does view it as an excuse, though, because he's more or less a totalitarian, nihilistic sociopath.
2DanArmak
Moral significance is not a fact about morally significant humans. It's a fact about the other humans who view them as morally significant. Our brains' moral reasoning doesn't know about, or depend on, the chemical implementations of morally significant humans' bodies. Therefore there are no moral questions about chemistry, including human biochemistry. The original quote is correct: DNA should not be held sacred; DNA-related therapy is a tool like any biological or medical procedure. It has no moral status, and should not be assigned qualities like sacredness. Only specific applications of tools have moral status. As I said, morality is in the eye of the beholder; one might therefore think it's possible to assign moral status to anything one wishes. However, assigning moral status to tools, methods, nonspecific operations, generally leads to repugnant conclusions and/or contradictions. Some people nevertheless say certain tools are immoral in their eyes. Other people value e.g. logical consistency higher than moral instincts. It's a matter of choice.
-2RolfAndreassen
I suspect that, if I propose to drip an unknown liquid into your eyes, you will find the question of its chemistry very morally significant indeed. Since our morality is embedded in, and arises from, physics, the moral questions are indeed at some level about chemistry even if the current black-box reasoning we use has no idea how to deal with information expressed in chemical terms. When we fully understand morality, we will be able to take apart the high-level reasoning that our brains implement into reasoning about the moral significance of individual atoms.
0DanArmak
As I said: "Only specific applications of tools have moral status." The action of dripping liquid into my eyes has moral status. The chemical formula of the liquid, whatever it may be, does not. The only chemistry really relevant to morality is the chemistry of our brains that assign moral status to other things. I know other formulations of "what is morally significant" are possible and sometimes seem useful, but they also seem to lead to the conclusion that everything is morally significant - e.g. assigning moral value to entire universe-states - which does away with the useful concept of some smaller thing being morally significant vs. amoral.
2RolfAndreassen
Right. Which is the same as the point I was originally making: At least one chemical process has moral significance.
9DanArmak
That's true. It seems I've been arguing past you or at a strawman. Sorry.

"It is every citizen's final duty to step into the Tanks, and become one with all the people."

-- Recycling Tanks, Chairman Sheng-ji Yang, Alpha Centauri

6[anonymous]
Great quote; especially the last line should be emphasised. Awesome audio of Yang quotes. The comments are also surprisingly entertaining and interesting, especially considering this is on YouTube. [...]
5[anonymous]
-- Chairman Sheng-ji Yang, "Essays on Mind and Matter" This argument may have influenced my thoughts several years later.
5dspeyer
Susan: Oh, that's just --
Death of Rats: WHAT DO YOU MEAN, 'JUST'?

--Terry Pratchett, Hogfather, tweaked for greater generality

In the original, Susan finishes her line with "an old story", but by having DoR cut her off she could just as easily have said "chemistry" or "data" or something like that.
3DanArmak
Technically, he just said SQUEAK. Which is even more general.
5alex_zag_al
Lineage has been considered sacred since before it was known what chemicals made it up - think royal families, horror at the idea of racial intermixing, etc. And I don't see why that should change because we know what it's made of - for other reasons maybe, but not that.
1scav
Because some of the crazy reasons people have had for believing in stupidities like the divine right of kings, blood purity, racial supremacy etc. DO NOT SURVIVE EXAMINATION when you understand the underlying process. Once you re-name racial purity as, at best, a vulnerable monoculture (and at worst, inbreeding), racism becomes harder to defend intellectually. I have no idea how much its declining social acceptability is related to that. Probably not much. My intuition is that most "inbreeders" (as I call 'em) are not very intellectual about their racism anyway.
4[anonymous]
I love the Alpha Centauri quotes; the game probably infected me with lots of the memes that made LW appealing. For the longest time I couldn't see any virtue or weirdtopia in Yang's Human Hive society, but I eventually came to see that the dystopian possibilities of it are no greater than those of the other factions. Also in the context of the difficulty of a positive singularity (transcendence in the game) it has pragmatic arguments in its favour.

Many difficulties which nature throws our way, may be smoothed away by the exercise of intelligence.

--Titus Livius

Reality is the ultimate arbiter of truth. If your thoughts, beliefs, and actions aren't aligned with truth, your results will suffer.

--Steve Pavlina

Or, because running into heavy objects is a good intuition pump:

Reality is what trips you up when you run around with your eyes closed.

I think this was in a book by James P. Hogan, but a bit of Googling only reveals one or two other people quoting it but not remembering where it came from.

2CannibalSmith
Haven't followed Steve since about 2008. What has he been up to? Is he still newageous?
5ChristianKl
Yes, he is. Steve's idea of truth differs a bit from the lesswrong consensus.
3Vaniver
I haven't looked into his new material in around a year, now, and even then I was focused on his old stuff (I found him through researching Uberman, I think). I believe the answer is "even more than he was then." That quote is from his 2009 book.

I'd like to propose a new guideline for rationality quotes:

  • Please don't post multiple quotes from the same source.

I enjoy the Alpha Centauri quotes, but I think posting 5 of them at once is going a bit overboard. It dominates the conversation. I'm fine with them all getting posted eventually. If they're good quotes, they can wait a couple months.

[-][anonymous]130

And here we tinker with metal, to try to give it a kind of life, and suffer those who would scoff at our efforts. But who's to say that, if intelligence had evolved in some other form in past millennia, the ancestors of these beings would not now scoff at the idea of intelligence residing within meat?

--Prime Function Aki Zeta-5, Sid Meier's Alpha Centauri

0mwengler
@Konkvistador you will enjoy They're Made Out Of Meat
[-]Shmi130

Rational politician:

It certainly isn't the government's job to educate voters. Our system is designed to make candidates compete for votes, and the most effective way to compete is by appealing to emotion and ignorance. The last thing a politician wants is to be labeled professorial. That's the same as boring.

Dilbert blog

1Emile
I don't see the implied link between ... and ... The fact that appealing to emotion works to get elected doesn't mean that elected politicians have any incentive one way or the other towards educating voters.
2NoSignalNoNoise
That's the point. They have no incentive one way or another, so it's not their job. The quote doesn't say it's their job not to, just that it isn't their job.

It's better to light a candle than curse the darkness.

-Chinese proverb

Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.

John Ioannidis Why Most Published Research Findings Are False

1DanArmak
Combining the two statements, many research findings are inaccurate measures of the prevailing bias.
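The arithmetic behind Ioannidis's claim is easy to sketch. Below is a minimal illustration, not taken from the paper itself: it assumes 80% power and a 0.05 significance threshold and ignores bias and multiple testing, and it shows how the positive predictive value of a "significant" result collapses once true hypotheses are rare among those tested.

```python
# Illustrative sketch (not from Ioannidis's paper): how a low prior probability
# of true hypotheses plus modest power can make most "positive" findings false.
def positive_predictive_value(prior, power=0.8, alpha=0.05):
    """Fraction of statistically significant results that are actually true."""
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

for prior in (0.5, 0.1, 0.01):
    print(f"prior={prior:<5} PPV={positive_predictive_value(prior):.2f}")
# prior=0.5   PPV=0.94
# prior=0.1   PPV=0.64
# prior=0.01  PPV=0.14  -> most claimed findings false when true hypotheses are rare
```

Adding a bias term, as the paper does, only makes the picture worse; the point of the sketch is just that the prior does most of the work.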

Likewise people have their rituals in argument evaluation. Philosophers like to set out the premises in an orderly numbered fashion, and tend to regard this as making an argument clear. Whether or not it actually does so depends; unless the argument is being made from scratch, this procedure involves rearrangement and interpretation, so whether it actually does make things more clear, in terms of increasing understanding, seems to vary considerably. But it still feels like you are bringing order and clarity to a disordered muddle, so you find people who will swear by it, even though it's not difficult to find cases where it clearly introduced a distortion. There's an argument to be made -- it would, of course, be controversial among those who engage in this kind of practice -- that such people are taking the ritual itself to be a kind of clarity, by sympathetic magic, and are taking arguments in this form to be better arguments simply because they conform to ritual expectation. It may even have good practical results, if so; a ritual might well put one in the right state of mind for a certain kind of work, and there's no reason to think that philosophical thinking doesn't sometim

... (read more)
0Luke_A_Somers
Clarity is subjective. By reformatting something into a familiar pattern, it can easily become clearer to them, but muddier to someone else. But yes, sometimes, such systems don't do anyone any real good.

In the department of economy, an act, a habit, an institution, a law, gives birth not only to an effect, but to a series of effects. Of these effects, the first only is immediate; it manifests itself simultaneously with its cause -- it is seen. The others unfold in succession -- they are not seen: it is well for us, if they are foreseen. Between a good and a bad economist this constitutes the whole difference -- the one takes account of the visible effect; the other takes account both of the effects which are seen, and also of those which it is necessary to foresee. Now this difference is enormous, for it almost always happens that when the immediate consequence is favourable, the ultimate consequences are fatal, and the converse. Hence it follows that the bad economist pursues a small present good, which will be followed by a great evil to come, while the true economist pursues a great good to come, -- at the risk of a small present evil.

In fact, it is the same in the science of health, arts, and in that of morals. It often happens, that the sweeter the first fruit of a habit is, the more bitter are the consequences. Take, for example, debauchery, idleness, prodigality. When, therefore, a man absorbed in the effect which is seen has not yet learned to discern those which are not seen, he gives way to fatal habits, not only by inclination, but by calculation.

--From the introduction of Frederic Bastiat's "That Which is Seen, and That Which is Not Seen".

...there is nothing more difficult to take in hand, more perilous to conduct, or more uncertain in its success, than to take the lead in the introduction of a new order of things. Because the innovator has for enemies all those who have done well under the old conditions, and lukewarm defenders in those who may do well under the new. This coolness arises partly from ... the incredulity of men, who do not readily believe in new things until they have had a long experience of them.

Niccolo Machiavelli

-1Eugine_Nier
As well they should.

Intellectuals may like to think of themselves as people who "speak truth to power" but too often they are people who speak lies to gain power.

-- Thomas Sowell

4TimS
Depending on the speaker, this quote has the potential for reinforcing substantial status quo bias, since taking it seriously would dramatically reduce the frequency of truly attempting to speak truth to power. In other words, the quote seems tailor-made for justifying a generalized counter-argument to all speak-truth-to-power actions.
3hairyfigment
— Epimenides the Cretan
[-][anonymous]90

"Man's unfailing capacity to believe what he prefers to be true rather than what the evidence shows to be likely and possible has always astounded me. We long for a caring Universe which will save us from our childish mistakes, and in the face of mountains of evidence to the contrary we will pin all our hopes on the slimmest of doubts. God has not been proven not to exist, therefore he must exist." Academician Prokhor Zakharov, Alpha Centauri

-3Danfly
Wasn't a temporary moratorium called on smac quotes recently? I have to admit this was one of my favourites from it though.
2[anonymous]
Oops. I didn't see anything about a moratorium.
1Danfly
Ah. I see what my mistake was now. It was just a recommendation by AngryParsley. It wasn't anything official. As I'm still something of a newbie here, I figured it was said by someone with a bit more clout.

Scientific theories are judged by the coherence they lend to our natural experience and the simplicity with which they do so.

– Commissioner Pravin Lal in Alpha Centauri

2Ezekiel
Eloquent! What.
0RobertLumley
I can see how that second sentence is a bit confusing. FWIW, my interpretation is "Our understanding of the fundamental laws of nature delicately balances on our observations." But in retrospect, I agree it is better without that sentence.
0mwengler
I'm sorry I had to downvote this because I just read Popper. Biblical creationism and moral theory is a remarkably simple and coherent guide to our natural experience. It certainly isn't the Bible's accuracy or utility for designing working stuff that makes it so popular.
9RobertLumley
If you think creationism is a simple explanation for existence, you don't really have a great grasp on Occam's Razor. Saying "God did it" sounds nice and simple in English words. But it's one heck of a lot more complicated if you actually want to simulate that happening.

Never do anything on principle alone. If the principle of the thing is the only reason to do it, don't.

-- Bill Bryson

3Nominull
I think this is a bad principle to try to uphold. It means you have to understand the motivations behind all your principles, rather than just knowing that they are good principles. Which may be valuable for a small class of philosophers, but it's wasted effort for the general population.
0Joe
I doubt this is being put forward as a "principle to uphold" since that would be self-contradictory. It is probably aimed at the sorts of cases where someone might say "well I wouldn't have bothered but it was the principle of the thing".
4Eugine_Nier
And in most of those cases "the principle of the thing" refers to what we would call TDT/UDT-type considerations.

Much of the social history of the Western world over the past three decades has involved replacing what worked with what sounded good. In area after area - crime, education, housing, race relations - the situation has gotten worse after the bright new theories were put into operation. The amazing thing is that this history of failure and disaster has neither discouraged the social engineers nor discredited them.

-- Thomas Sowell

Without having a date on the quote, it's hard to know exactly which three decades he's referring to, but we certainly seem to be in a better position regarding crime, housing and race relations than three decades ago. Education, probably not so much. This sounds to me like just a meta-contrarian longing for a return to the imagined "good old days".

Without having a date on the quote, it's hard to know exactly which three decades he's referring to,

He published that in 1993, which was about at the historic peak of violent crime in the US since 1960. The situation has improved a lot since then, but through the decades of 1960-1990, things looked pretty grim.

1NoSignalNoNoise
Good to know. Updated.
[-][anonymous]100

Crime.

In the US at least the murder rates today are comparable to those of the 1960s only because of advances in trauma medicine.

Another important reason is that Americans have in the meantime embraced a lifestyle that would have struck earlier generations as an incredibly paranoid siege mentality. (But which is completely understandable given the realities of the crime wave in the second half of the 20th century.)

Yet another reason is, of course, the draconian toughening of law enforcement and criminal penalties.

2[anonymous]
To clarify, I was commenting on murder rates specifically, in light of how trauma medicine has reduced the fraction of violent assaults that cause death. The factors you describe seem more along the lines of avoiding violent assault in the first place. Controlling for improvements in trauma medicine, today's murder rate would be three times that of the 1960s, but the numbers would still be better than the similarly controlled 1990s numbers, which were five times 1960s levels. In other words, yes, in the past 20 years Americans seem to be getting assaulted less, and I think all of what you describe played a role. There is also the unfortunate problem of police sometimes having nasty incentives to misclassify crimes, so some of the drop might be fictional.
-1Eugine_Nier
Which would, nevertheless, be considered absurdly lenient by the standards of any pre-20th century society.

I wouldn't call the present U.S. system "absurdly lenient." The system is bungling, inefficient, and operating under numerous absurd rules and perverse incentives imposed by ideology and politics. At the same time, it tries to compensate for this, wherever possible, by ever harsher and more pitiless severity. It also increasingly operates with the mentality and tactics of an armed force subduing a hostile population, severed from all normal human social relations.

The end result is a dysfunctional system, unable to reduce crime to a reasonable level and unable to ensure a tolerable level of public safety -- but if you're unlucky enough to attract its attention, guilty or innocent, "absurd leniency" is most definitely not what awaits you.

6Alicorn
Interesting. Where did you find this fact? Are there others like it there?
[-][anonymous]120

Murder and Medicine: The Lethality of Criminal Assault 1960-1999

Despite the proliferation of increasingly dangerous weapons and the very large increase in rates of serious criminal assault, since 1960, the lethality of such assault in the United States has dropped dramatically. This paradox has barely been studied and needs to be examined using national time-series data. Starting from the basic view that homicides are aggravated assaults with the outcome of the victim’s death, we assembled evidence from national data sources to show that the principal explanation of the downward trend in lethality involves parallel developments in medical technology and related medical support services that have suppressed the homicide rate compared to what it would be had such progress not been made. We argue that research into the causes and deterrability of homicide would benefit from a “lethality perspective” that focuses on serious assaults, only a small proportion of which end in death.

To be clear there are other possible explanations for why violent assault as recorded has become less lethal, I just think this one is by far the most plausible.

I always think it's weird on cop shows and the like where an assaulter is in custody, the victim is in the hospital, and someone says "If he dies, you're in big trouble!". The criminal has already done whatever he did, and now somehow the severity of that doing rests with the competence of doctors.

1TheOtherDave
I can see the logic of treating the severity of the crime as contingent on the actions (and perhaps intentions) of the criminal rather than the actual results, such that the fact that someone dies as a consequence of my battering them doesn't make it an act of murder. But that also applies to shorter-window consequences, like when I shoot someone and they dodge to the left and the bullet hits them in the shoulder vs. I shoot someone, they dodge to the right, and the bullet hits them in the throat. Treating the severity of the crime as contingent on consequences in the firing-a-gun case and contingent on actions in the battering-someone case would seem equally weird to me.
0asparisi
It makes sense as an interrogation tactic, at any rate. If you're going for a confession and the person is distraught (either by what they did or by getting caught doing what they did) then it's a variation on "confess now or you'll get a worse sentence" with the added bonus that the timeline on the "confess" is both out of the interrogator's hands and it doesn't seem artificial to the suspect.
0komponisto
Indeed, this seems to be an area where the legal system opts for a consequentialist approach; no surprise, then, that you would find it weird.
0scav
Um, I thought consequentialism was about evaluating the goodness of a course of action based on its probable consequences. If all it amounts to is hindsight then it's not much use for making ethical decisions about future actions. But I think that would be a straw man. If you apply that crazy approach to consequentialism then I should be allowed to stand on a roof heaving bricks out into the street, and I'm not doing anything wrong unless and until one of them actually hits somebody.
3komponisto
Consequentialism is about deriving the ethical value of actions from their consequences. If someone thinks that the badness of an action is not determined until the consequences are known (like the police in Alicorn's example, or more to the point the legal system they represent), then, necessarily, they are applying consequentialist moral intuitions, and not deontological moral intuitions. No one said anything about "all it amounts to" being "hindsight". Your second paragraph is a straw man. While it is true that if someone believes that, they must be a consequentialist, it does not follow that a consequentialist must necessarily believe that.
1scav
I did say that it would be a straw man version of consequentialism. But I think I misunderstood what you were saying, or at least where your emphasis was, so I was kind of talking past you there :( Thankfully in other areas the law is not concerned only with the contingent consequences of actions in general. Conspiracy to commit a crime is a crime. Attempted murder is a crime. Blackmail is a crime even if the victim refuses to be bullied and the blackmailer doesn't follow through on their threat. Kidnapping isn't considered to be babysitting if the victim is released unharmed. So yeah. I think anyone could find it a little weird with or without calling it consequentialist.
2komponisto
Perhaps, but my point was that, since it does presuppose consequentialism, non-consequentialists such as Alicorn would be particularly disposed to find it weird (whether or not some consequentialists would also have a similar reaction).
0ArisKatsaris
Well, I suppose it's easier to prove that the victim could have died from the violence inflicted, if they do actually die.... but yeah, on the whole I agree.
3Alicorn
If we're relying on doctor competence anyway here, we could see about getting official professional opinions on to what extent the injuries could have been lethal. Like retroactive triage.
1Nornagest
I've no idea of the data's provenance, but this table claims aggravated assault rates of 86/100,000 in 1960, 440/100,000 in 1993, and 252/100,000 in 2010 if I've got my math right. Murder rates are 5.08/100,000, 9.51/100,000, and 4.77/100,000 respectively. So the decline in murder since 1993 has outpaced the decline in assault (it also rose less steeply between '60 and '93), and trauma medicine's a plausible cause, but both declines are quite real: I wouldn't say the comparison to the 1960s is valid only because of medical improvements. In any case, 1960 was more like fifty years ago. The per-100,000 aggravated assault rate in 1980 was just under 300 -- substantially over the 2010 numbers.
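As a rough check on the lethality argument, here is a small sketch using only the per-100,000 figures quoted in the comment above (whose provenance, as that comment notes, is uncertain): murders per reported aggravated assault fall steadily across the three years, which is consistent with, though not proof of, the trauma-medicine explanation.

```python
# Rough lethality index from the per-100,000 figures quoted above
# (provenance uncertain, as the comment itself notes).
rates = {
    1960: {"assault": 86,  "murder": 5.08},
    1993: {"assault": 440, "murder": 9.51},
    2010: {"assault": 252, "murder": 4.77},
}
for year, r in rates.items():
    lethality = r["murder"] / r["assault"]   # murders per reported aggravated assault
    print(f"{year}: {lethality:.3f} murders per reported aggravated assault")
# 1960: 0.059, 1993: 0.022, 2010: 0.019 -- lethality per assault fell steadily
```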
-1Multiheaded
Downvoted for several obvious reasons. Seriously, just fucking THINK of the quote in this context a little bit!

Are you assuming that Thomas Sowell is defending, say, racial discrimination? If so, then you'd be wrong. He's talking about things like affirmative action which are intended to help disadvantaged groups, but which he contends have had the exact opposite effect.

If you meant something else, then please say it instead of assuming that it's obvious.

0Multiheaded
Oh, sorry! I did indeed assume just that (and some things about the general racial supremacist attitude of Western societies before decolonization, etc), while totally overlooking that he's an American and that they indeed have that curious issue. In fact... yeah, not to defend my brashness or anything, but mentioning "race relations" in that context so off-handedly is indeed bound to make people think of one as a Segregationist or something!

totally overlooking that [Thomas Sowell is] an American

He also happens to be black, if that's relevant.

0[anonymous]
Not me.
6TheOtherDave
Can you expand on what additional information you believe you're providing when you "explain" a downvote in this way, rather than just downvoting silently? From where I sit the "explanation" seems purely an attempt to shame Viliam_Bur in public, and by extension to shame anyone who might agree with that quote or think it at all compelling. Is that what you have in mind?

...’Twas a nest full of young birds on the ground
The cutter-bar had just gone champing over
(Miraculously without tasting flesh)
And left defenseless to the heat and light.
...The way the nest-full every time we stirred
Stood up to us as to a mother-bird
Whose coming home has been too long deferred,
Made me ask would the mother-bird return
And care for them in such a change of scene
And might our meddling make her more afraid?
That was a thing we could not wait to learn.
We saw the risk we took in doing good,
But dared not spare to do the best we could
Though harm sh

... (read more)
[-]Shmi70

I distrust those people who know so well what God wants them to do, because I notice it always coincides with their own desires.

Susan B. Anthony

7sketerpot
That is not always true.
6MixedNuts
Mortification of the flesh is at least a mixed case. Delicious kinky endorphins.

Hope is the confusion of the desire for a thing with its probability.

Arthur Schopenhauer

3Fyrius
If that's how it works, then I suspect paranoia is the same thing, but with fear instead of desire.

[H]ow you get to Carnegie Hall is you sell out Town Hall twice in a year, and now you sell enough tickets to do a show at Carnegie Hall.

-- Louis C.K.

Misunderstanding of probability may be the greatest of all impediments to scientific literacy.

Stephen Jay Gould

In the case of any person whose judgement is really deserving of confidence, how has it become so? Because he has kept his mind open to criticism of his opinions and conduct. Because it has been his practice to listen to all that could be said against him; to profit by as much of it as was just, and expound to himself...the fallacy of what was fallacious.

–John Stuart Mill

Every argument must start from some unargued (because commonly accepted) assumptions. This is both logically necessary (as with axioms) and crucial for brevity of the argument. Since many truths are entangled, explaining all assumptions would eventually lead to listing a lot of our knowledge from many domains, and including many other arguments.

0mwengler
If all truths are entangled, does that mean in some worlds these truths are false?

Anything can be an instrument, Chigurh said. Small things. Things you wouldnt even notice. They pass from hand to hand. People dont pay attention. And then one day there's an accounting. And after that nothing is the same. Well, you say. It's just a coin. For instance. Nothing special there. What could that be an instrument of? You see the problem. To separate the act from the thing. As if the parts of some moment in history might be interchangeable with the parts of some other moment. How could that be? Well, it's just a coin. Yes. That's true. Is it?

— Cormac McCarthy, No Country for Old Men

7shokwave
--- the character Chigurh, from the same novel and author. It's almost like a koan for me - thinking about what in my history I have lost on a coin toss is a great jumping point into more introspection.
-8mwengler
4gwern
--The Ones Who Walk Toward Acre

It's worth remembering, especially for people who are interested in meta-ethics, that often you can skip the confusion and jump ahead to Rough Consensus and Running Code. On rare occasions this will come back to bite you, but usually not (and if it does, you can jump back to the beginning).

We are accustomed to thinking of evolution in a biological context, but modern evolutionary theory views evolution as something much more general. Evolution is an algorithm; it is an all-purpose formula for innovation, a formula that, through its special brand of trial and error, creates new designs and solves difficult problems. Evolution can perform its tricks not just in the "substrate" of DNA, but in any system that has the right information processing and information-storage characteristics. In short, evolution's simple recipe of "diff

... (read more)
0DaFranker
This piques my curiosity on a certain point of interest: Has the argument "It's just an algorithm" ever been used as a counter to the claim that Evolution as a biological phenomenon should not be conflated with "Technological Evolution", "Corporate Evolution", "Personal Evolution", etc.? More importantly, would there be an efficient way of defusing this potentially mind-killer-route argument without misleading the other party into thinking their assumption is correct when the inferential distance is too large for a technical explanation of the misuse of categories and labels (AKA They're not even aware of Lesswrong's existence and are not trained in scientific thought or rationality)?
0TimS
Can you be more clear about what types of conflation you find problematic? If I do a better job representing my clients, then when new lawyer hiring decisions are made, I expect to receive additional clients. Do you feel it is unclear to call that natural selection, or evolution? I agree that using "evolution" as a synonym for change (i.e. Personal Evolution vs. Personal Self-Improvement) is problematic, but I'm not sure that the quote under discussion helps that issue.
1DaFranker
I can certainly try to! Well, I do find problematic any types of conflation that lead to incorrect assumptions and unreasonable predictions, but that's a little unclear too. In general day-to-day interactions, the most common problem is where someone with whom I'm discussing will know of "darwinian evolution" and, of course, the phrase "Survival of the fittest!", but will have no technical understanding of the actual algorithm. Thus, what they see is that when a species lives where there are a lot of large, tough, and highly nutritious hard-shelled nuts, that species will gradually get longer beaks or stronger claws to pierce through the shell and get at the tasty bits. They don't see how all kinds of other possible variations also get tried, and get rejected because they die and the ones more adapted keep reproducing. Thus, in their model, it's as if the entire membership of the species suddenly started growing longer beaks. The approximate generalization is fairly accurate on evolutionary timescales, but misrepresents the cause of the change, which is where things start going wrong. They then translate it to being the same in, say, better lawyers, to steal your example. The misunderstanding often mixes with hindsight bias, in my experience, to produce beliefs that lawyers who fail to survive in a fictive environment where clients like cookie-bribes are incompetent, by virtue of being unable to evolve and adapt. Those lawyers were clearly incompetent. It's simple Evolution theory that you should be more adapted and provide cookie-bribes to your clients if the environment is like that. That was obvious. Beyond this, however, I now notice that something is wrong because I'm unable to clarify the exact issue further, which suggests that I may myself be wrongly unifying or conflating several things in my mental model and in my memory of related events. Perhaps if I re-read (once I find it) the article by Eliezer that talked about something similar, I
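A toy sketch may make both points concrete: the quote's claim that evolution is a substrate-neutral algorithm, and the variation-and-selection dynamic described in the comment above (most variants are discarded; nothing "decides" to grow a longer beak). Everything here is invented for illustration: bit-string "genomes", a made-up fitness function, and arbitrary parameters.

```python
import random

# Toy sketch of evolution-as-algorithm ("differentiate, select, amplify").
random.seed(0)

def mutate(genome, rate=0.01):
    # variation: each bit flips independently with small probability
    return [1 - g if random.random() < rate else g for g in genome]

def fitness(genome):
    # stand-in for "how well this organism cracks hard-shelled nuts"
    return sum(genome)

population = [[0] * 50 for _ in range(100)]   # start uniformly unadapted
for generation in range(200):
    # differentiate: offspring are imperfect copies of randomly chosen parents
    offspring = [mutate(random.choice(population)) for _ in range(100)]
    # select + amplify: only the fitter half fills the next generation
    population = sorted(offspring, key=fitness, reverse=True)[:50] * 2

print(max(fitness(g) for g in population))    # climbs toward 50 over the run
```

Nothing in the loop refers to DNA; any representation with heritable variation and differential copying would do, which is the quote's point.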

A perennial favourite: "If you torture the data enough, they will confess."

Often attributed to Ronald Coase; however, his version was likely: "If you torture the data long enough, nature will confess" - perhaps implying a confession of truth. Another version, attributed to Paolo Magrassi: "If you torture the data enough, it will confess anything" - perhaps implying a confession of falsehood.

Personally, I find the ambiguous version of greater interest.

-1A1987dM
But if you torture them too long, they will confess falsehoods.
0matabele
Interesting that you should prefer 'they', referring to the plural data; some versions of the aphorism also use this form - in retrospect, I prefer this form. Torturing data is a common problem in my field (geophysics). With large but sparse datasets, data can be manipulated to mean almost anything. Normal procedure: first make a reasonable model for the given context; then make a measurable prediction based upon your model; then collect an appropriate dataset by 'tuning' your measuring apparatus to the model; then process your data in a standard way. In the case that your model is not necessarily wrong, make another measurable prediction based upon your model; collect another dataset by an independent experimental method; then ... Even when following this procedure, models are often later found to be wildly erroneous; in other words, all of the experimental support for your model was dreamt up.
0A1987dM
What I was thinking about when typing that was indeed a model by some geophysicists. They had found some kind of correlation between some function of solar activity and some function of seismic activity, but those functions were so unnatural-looking that I couldn't help thinking they tweaked the crap out of everything before getting a strong-enough result.
0matabele
You were likely referring to some of the recent work of Vincent Courtillot. A video summarizing some of his work here. The most interesting aspect of this work, is that Courtillot did not start out with any intention of finding correlations with climate; his field is geomagnetism. Only after noticing certain correlations between geomagnetic cycles and sun spot cycles, did suspected correlations with natural climate cycles become evident.
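A minimal simulation of "torturing the data", in the spirit of the discussion above: search enough arbitrary candidate series (here, pure noise standing in for tweaked functions of solar activity) and one of them will "confess" a respectable correlation with the target. The series names and parameters are made up for illustration.

```python
import random
import statistics

# Sketch: test enough arbitrary series against a target and something "confesses".
random.seed(1)

def corr(xs, ys):
    # Pearson correlation coefficient
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) * sx * sy)

target = [random.gauss(0, 1) for _ in range(30)]       # e.g. "seismic activity"
best = max(
    abs(corr([random.gauss(0, 1) for _ in range(30)], target))
    for _ in range(1000)                               # 1000 tortured candidates
)
print(f"best |r| found among pure-noise candidates: {best:.2f}")  # typically ~0.6
```

None of the candidate series carries any real signal; the "strong" correlation is produced entirely by the search.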

After a while, Kit noticed that a large part of the pattern that made a bridge or a tower was built entirely out of people.

Kij Johnson, "The Man Who Bridged the Mist"

nominated for this year's Hugo

It may be expecting too much to expect most intellectuals to have common sense, when their whole life is based on their being uncommon -- that is, saying things that are different from what everyone else is saying. There is only so much genuine originality in anyone. After that, being uncommon means indulging in pointless eccentricities or clever attempts to mock or shock.

-- Thomas Sowell

Retracted, because it's a duplicate.

[This comment is no longer endorsed by its author]
8gwern
--Dr. Samuel Johnson; "The Life of Cowley", Lives of the English Poets (1781)
0Nominull
duplicate
4Oscar_Cunningham
Link to the original.

Men show their characters in nothing more clearly than in what they think laughable.

-- Johann Wolfgang von Goethe

(re-posted on request.)

Most powerful is he who has himself in his own power.

--Seneca

2wedrifid
Don't know about that. He who has everyone else in his power sounds rather powerful too.
0MBlume
Ey who has everyone else in eir power has everyone else in the power of someone ey doesn't have control over.

Too many not-words in one sentence for me, I'm afraid.

Reframed with more standard pronouns: if I have everyone else in my power, but not myself, then everyone else is in the power of someone I don't control.

1Fyrius
In that case, most powerful is she who has herself in her own power, plus the greatest number of other people. (I opt for Eliezer's coin flip method of gender-neutral pronoun usage, by the way.)
5TheOtherDave
I'm reminded of a propositional logic class that spent some time discussing "Everybody loves my baby, but my baby don't love nobody but me."
-3mwengler
Rephrased using an honest coin.
1Fyrius
(I rolled my die just once because the latter two pronouns are anaphors that refer back to the first, and this statement doesn't only apply to genderqueer people. :) )

Not everything that is faced can be changed. But nothing can be changed until it is faced.

– James Baldwin

The obscure language was likely due to the political context of the original; try substituting 'identified' for 'faced'.

0tut
Or acknowledged, or accepted. I don't see facing an issue as obscure language, but this is a good aphorism. Upvoted.

As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.

Albert Einstein

(Quoted here but not in any LW quotes thread.)

Posted to wrong month, moved.

An excerpt from Wise Man's Fear, by Patrick Rothfuss. Boxing is not safe.

The innkeeper looked up. "I have to admit I don't see the trouble," he said apologetically. "I've seen monsters, Bast. The Cthaeh falls short of that."

"That was the wrong word for me to use, Reshi," Bast admitted. "But I can't think of a better one. If there was a word that meant poisonous and hateful and contagious, I'd use that."

Bast drew a deep breath and leaned forward in his chair. "Reshi, the Cthaeh can

... (read more)
[This comment is no longer endorsed by its author]

Another Goethe quote, whilst on that tack; seems appropriate for disciples of GS.

Love is an ideal thing, marriage a real thing; a confusion of the real with the ideal never goes unpunished.

-- Johann Wolfgang von Goethe

4DaFranker
There's one (okay, more like 1.6) major problem with that quote, everything else being otherwise good: The implicitly absolute categorization of "love" as "ideal", and the likewise-implicit (sneaky?) connotation that love is not as real as it is ideal or marriage as ideal as it is real. Love is a very real thing. There are very real, natural, empirically-observable and testable things happening for whatever someone identifies as "love". However, further discussion is problematic, as "love" has become such a wide-reaching symbol that it becomes almost essential to specify just what interpretation, definition or sub-element of "love" we're talking about in most contexts if ambiguity is to be avoided.
0gwern
Goethe is writing in a time influenced by German Romanticism (for which he was partly guilty); it would not be amiss if one were to capitalize love there as 'Love' - an abstraction, not some empirical neural correlates.
0DaFranker
I'm not quite sure what this abstraction would even correspond to. In fact, when I ask myself what abstract meaning 'Love' could possibly have, I find myself confused. It seems there might be some 'Love' somewhere that feels like it is the ideal, abstract 'Love', but no matter where I search I cannot find it on my map. I'd like it if you could help me map this "abstract ideal" in my conceptspace map, if that's possible.
1gwern
It's not worth trying to understand beyond Goethe having fun at some idealists' expense. I took a course on Romanticism, and came out with little better understanding than you have now.
-1matabele
When mapping labels (symbols) to their underlying concepts, look for the distinction, not the concept. Distinctions divide a particular perspective of the map; each side of the distinction being marked with a label. In early Greek philosophy the opposites were: love and strife (see empedocles.) (An abstraction corresponds to a class of distinctions, where each particular distinction of the class, corresponds to another abstraction.)
0DaFranker
Oh! That makes a lot more sense. It doesn't seem like the most reliable technique, but this particular term is now a lot clearer. Thanks! Of course, this seems to me like 'Love' is then merely a general "Interface Method", to be implemented depending on the Class in whatever manner, in context, will go against strife and/or promote well-being of cared-for others. Which is indeed not something real, but a simple part of a larger utility function, in a sense.
-2matabele
A good resource on distinctions (if you are not yet aware of it), is George Spencer-Brown's Laws of Form. These ideas are being further explored (Bricken, Awbrey), and various resources on boundary logic and differential logic, are now available on the web.
6gwern
I'm not really sure Laws of Form is a good resource, and I'm not sure it's good at all. A crazy philosophy acquaintance of mine recommended it, so I read it, and couldn't make very much of it (although I was disturbed that the author apparently thought he had proved the four-color theorem?). Searching, I got the impression that one could say of the book 'what was good in it was not original, and what was original was not good'; later I came across a post by a Haskeller/mathematician I respect implementing it in Haskell which concluded much the same thing:
-22matabele
[-][anonymous]00

Wise men store up knowledge, but the mouth of a fool invites ruin.

-Proverbs 10:14

[This comment is no longer endorsed by its author]

If you want to learn why you think whatever it is you think, strip away existing context and force it into a new one and see what happens.

The Last Psychiatrist, at http://thelastpsychiatrist.com/2011/11/judge_beats_his_daughter_for_b.html

"'Whereof one cannot speak thereof be silent,' the seventh and final proposition of Wittgenstein’s Tractatus, is to me the most beautiful but also the most errant. 'Whereof one cannot speak thereof write books, and music, and invent new and better terminology through mathematics and science,' something like that, is how I would put it. Or, if one is not predisposed to some such productivity, '. . . thereof look steadfastly and directly into it forever.'"

-- Daniel Kolak, comment on a post by Gordon Cornwall.

5ChristianKl
This misses the point that Wittgenstein made. Inventing better terminology doesn't help you if you don't have any information in the first place. Something might have happened before the big bang. The big bang erased all information about what happened before the big bang. Therefore we shouldn't speak about what happened before the big bang. Gods might exist or might not exist. We don't have any evidence to decide whether they exist. Therefore we should stop speaking about gods. To come to a question that is more central to this community: We have no way to decide through the scientific method whether the Many Worlds Hypothesis is true. According to Wittgenstein we should therefore be silent. Inventing new terminology doesn't help with those issues.
4Danfly
I'm by no means an expert on this, but I was under the impression that Wittgenstein meant that language was an insufficient tool to express the "things we must pass over in silence", e.g. metaphysics, religion, ethics etc., but that he nevertheless believed that these were the only things worth talking about. My understanding was that he believed that language is only good for dealing with the world of hard facts and the natural sciences and, while we cannot use it to express certain things, some of these things might be "shown" by different means, in line with his comment that the unwritten part of the Tractatus was the most important part. This conclusion from one of his lectures largely sums up how I would understand his view of many of the "things we must pass over in silence". This is largely the way I have been led to interpret it through reading other people's interpretations and it is probably wrong, but I thought that I'd try and express it here, because I do have a strong desire to expand my knowledge of Wittgensteinian philosophy. One thing which I do think is quite likely though, is that Wittgenstein would consider any written "interpretation" of his work to ultimately be "nonsense" insofar as any written part of it is concerned.
1TheOtherDave
IIRC, in "On Certainty" in particular, Wittgenstein had a lot to say about the role of language and how it is not primarily a mechanism for evaluating the truth-value of propositions but rather a mechanism for getting people to do things. In particular, I think he dismisses the entire enterprise of Cartesian doubt as just a game we play with language; arguing that statements like "There exists an external reality" and "There exists no external reality" simply don't mean anything. So I'd be surprised if he were on board with language as a particularly useful tool for hard facts or natural sciences, either. Admittedly, it's been like 20 years since I read it, and it's a decidedly gnomic book to begin with, and I'm no kind of expert on Wittgenstein. So take it with a pound of salt.
1Alejandro1
The Tractatus is a product of what is called the early or first Wittgenstein, while "On Certainty" belongs to his latter stage. By that time he had repudiated the emphasis of the Tractatus on logical correspondence with facts and switched to speaking of language games and practical uses. In both phases his position on "unspeakable" things like ethics and metaphysics was similar (roughly the one Danfly summarizes at the beginning of the parent quote).
0Danfly
I just noticed how poorly written part of my above comment was. I think I've fixed it now. I'm glad to see a positive response to it at least, since it shows that people care more about substance than the clarity of writing, which seems more than a little apt when talking about Wittgenstein. It also indicates that I haven't been entirely misled in my interpretation of a notoriously difficult philosopher. As much as it might be fun to pretend that my strange writing style was intended as a way of reaching people with "similar thoughts" in a truly Wittgensteinian sense, it was not. It was a boring old mistype. I am nowhere near smart enough to pull that off.

I predict that if we were to poll professional economists a century from now about who is the intellectual founder of the discipline [economics], I say we'd get a majority responding by naming Charles Darwin, not Adam Smith.

Robert H. Frank, 2011 September 12, speaking on Russ Roberts' EconTalk podcast. The rest of the quote can be found near 14:11 in the transcript. Robert H Frank was talking a lot about his book The Darwin Economy: Liberty, Competition, and the Common Good.

8Swimmy
Interesting, and I would happily bet against that prediction.

Duplicate.

[This comment is no longer endorsed by its author]
2[anonymous]
You double posted this quote and, while this came first, the other has a meaningful reply on it.
1RobertLumley
Whoops. You're right. I meant to grab another one. I'll delete this, thanks.

If you wish to advance into the infinite, explore the finite in all directions.

-- Johann Wolfgang von Goethe

If you wish to advance into the infinite, explore the finite in all directions.

That sounds incredibly deep. (By which I mean it is bullshit.)

For some reason, this thread reminds me of this Simpsons quote:

"The following tale of alien encounters is true. And by true, I mean false. It's all lies. But they're entertaining lies, and in the end, isn't that the real truth?"

8TheOtherDave
Upvoted for correct usage of a technical term. :-)
2wedrifid
My favourite technical term out of all the technical terms!
1Incorrect
I think it is intended to mean "If you want to accomplish impractical things, work on practical subtasks." I don't see what's wrong with that.
6wedrifid
That's an excellent quote. Let's find an impressive external source who says that and quote them!
7TheOtherDave
Or, failing that, pick an impressive external source and ask them to write back to you saying that, so you can subsequently quote it attributed to "Impressive Source (private communication)"
2wedrifid
Excellent idea. I used to do this on certain assignments at times.
0DaFranker
As a variant: Introduce some freeloader code in Watson to have it randomly blurt out quotes from a list of quotations sent to a specific email address each time it appears in public. This gives you both the Impressive Source criterion and a public statement of the quote.
-4matabele
Not necessarily deep; a couple of concrete interpretations: There is often much hidden wisdom in interpretation of aphorisms, which perhaps explains my preference for the poetic turn of phrase.
8wedrifid
No, there are intentionally vague deep sounding comments to which wisdom can be associated. You've just given multiple meanings to the same words. Those other meanings may be useful but the words themselves are nonsense.
-4matabele
That pretty much describes any proposition. If you wish, substitute the word 'noise' for the word 'symbol'; then the paragraph describes an utterance. There is a good resource on semiotics here.
2wedrifid
No it doesn't. Not all propositions are intentionally vague and deep sounding. Were I inclined to substitute in 'noise' it would be as a contrast to 'signal'.
-1matabele
-- Johann Wolfgang von Goethe
1wedrifid
This is an excellent quote and belongs at the top level. (I downvoted it here because the point you are trying to make by replying with it is approximately backwards. An intended insult which would make more sense as a compliment.)
-5matabele

Bad luck isn't brought by broken mirrors, but by broken minds.

Dr Frank Mandel from Suspiria by Dario Argento

[-]Fyrius-20

Minor spoiler alert. (I think you know the drill.)

Nsgre Oebaa jvaf n qhry:

Ynql Neela: "Lbh qba'g svtug jvgu ubabe!"

Oebaa: "Ab."

Oebaa fzvyrf naq cbvagf gb gur zna ur whfg qrsrngrq.

"Ur qvq."

Game of Thrones (TV series), episode S01E06

(Rational agents should WIN.)

6MinibearRex
I like the quote, though really there's no particular reason to put it in rot13. Minor point: The character's name is spelled Oebaa
1Fyrius
...huh. Well wow. I'm going to remember that trick, that's clever. I had no idea you could do that here. Also, noted, and fixed.
1Fyrius
If those four people who downvoted this would enlighten me as to why this is a bad quote, that would be much appreciated.
7Grognor
I have a general policy of downvoting anything in rot13. No, I'm not going to work to read your comment! Instead, put your spoiler text in the hover text of a fake url, like this Syntax: [like this](http://notareal.url/ "See? See how much better this is?")
-1Fyrius
Ah. I just picked up that technique from MinibearRex up there. I see you said it first, so kudos to you, then. It's a useful trick. I'll remember it. ...incidentally, if it's too much work to click the link, copy-paste the text and click the button, then you might save yourself even more time and effort by just scrolling on without bothering to click the thumbs-down button either. There are friendlier ways to express disapproval, too. But thanks for the advice, I'll try to be less of a bother next time.
1MinibearRex
This is kind of funny. I learned this trick from Grognor's comment when I saw it in the recent comments section. And then I decided to try it out when I noticed the misspelling, not realizing it was on the same post.
6tut
First, it is an appeal to consequences against honor. Worse, it is an appeal to fictional consequences. Second, honor is not the opposite of rationality. Just making an argument against honor would not automatically be a rationality quote even if it were a good argument. Third, it was encrypted, which made me waste more than three times the amount of time reading it that I would have if it had been in plain text. When it turned out to be bad this made the disappointment much worse.
4Fyrius
Jeez, you guys. You miss the point. ("Rational agents should WIN." -- Eliezer Yudkowsky) The point isn't that honour is bad, the point is (much more generally) that rational agents shouldn't follow the Rules and lose anyway, they should WIN. Whether the Rules are the rules of honour, of mainstream science or of traditional rationalism, or whatever, if they don't get you to win, find a way that does. And it's futile to complain about unfairness after you lost, or the guy you were rooting for did. The only part that appeals to fictional consequences is the additional implication that oftentimes, an ounce of down-to-earth pragmatism beats any amount of lofty ideals if you need to actually achieve concrete goals. I thought adding that "rational agents should win" reference would make the intended idea clear enough. But I'll take my own advice and just make a mental note to be clearer next time.
4Mass_Driver
I dunno, I think all of that is overstated. I mean, sure, perfectly rational agents will always win, where "win" is defined as "achieving the best possible outcome under the circumstances." But aspiring rationalists will sometimes lose, and therefore be forced to choose the lesser of two evils, and, in making that choice, may very rationally decide that the pain of not achieving your (stated, proactive) goal is easier to bear than the pain of transgressing your (implicit, background) code of morality. And if by "win" you mean not "achieve the best possible outcome under the circumstances," but "achieve your stated, proactive goal," then no, rationalists won't and shouldn't always win. Sometimes rationalists will correctly note that the best possible outcome under the circumstances is to suffer a negative consequence in order to uphold an ideal. Sometimes your competitors are significantly more talented and better-equipped than you, and only a little less rational than you, such that you can't outwit your way to an honorable upset victory. If you value winning more than honor, fine, and if you value honor more than winning, fine, but don't prod yourself to cheat simply because you have some misguided sense that rationalists never lose. EDIT: Anyone care to comment on the downvotes?
0Fyrius
P.S.: Regarding your third point, is there a less bothersome way to handle spoilers? I've only seen rot13 being used for that purpose here. I'd gladly make it less cumbersome to read if I could do so without risking diminishing the fun of other people who watch or intend to watch this series. (Or maybe the annoyance caused by the encryption is worse than the risk of spoiling just one scene in case there's anyone reading this who watches the series and is a season and a half behind... I dunno. Neither course of action should be a big deal.)
0[anonymous]
I have a general policy of downvoting anything in rot13. No, I'm not going to work to read your comment! Do this instead: put your spoiler text in the hover text of a fake url, like this Syntax: [like this](http://notareal.url/ "See? See how much better this is?")
0Never_Seen_Belgrade
It could be more than four. Someone might have upvoted you.
-1Eugine_Nier
To the extent honor encodes valid ethical injunctions, ignoring it will cause you to lose in the long run.
1NancyLebovitz
Exactly-- compare Protected from Myself to "rationalists should win!".
-2Eugine_Nier
Would your opinion of the quote change if "fighting dishonorably" were replaced by "violating the Geneva convention"?
0Fyrius
Perhaps. I'd say that should depend on the price for failure and how that compares to the violation. But point taken.
-1Jay_Schweikert
Upvoted. It's maybe not obvious from the quote alone, but in context, "honor" doesn't mean abstaining from deceit or manipulation -- it means following the largely impractical "rules" of dueling, when the bottom line is just who kills the other man.

There is no mutually exclusive 'is – ought' distinction. The only mutually exclusive alternative to 'is' is 'is not'. This means that 'ought' either needs to find a comfortable home in the realm of 'is', or needs to be tossed into the realm of 'is not'.

A person comes to me and says, "Alonzo, you ought to do X."

I answer, "Prove it."

That person then says, "Well, as you know, an 'ought' statement cannot be derived from any set of 'is' statements . . . .”

"You can stop right there," I say. "We're done. You have just told

... (read more)
2Desrtopa
It doesn't seem very sensible to call a claim that someone "ought" to do something "false" if you're denying that an "ought" claim could ever be meaningful in the first place. Anyway, it's a very annoying argument. It seems an awful lot like saying "You can't prove there's a such thing as value, therefore I refuse to take your money." I'd be tempted to respond by hitting him with a stick until he conceded that stopping getting hit by a stick was a sufficient motivation to do X.
1CronoDAS
I think you misinterpreted the quote; Alonzo Fyfe is criticizing ethical non-naturalism (the claim that moral facts are not reducible to facts about the world), not endorsing it.
0Desrtopa
You're right that I misinterpreted it, but from reading the essay, it seems less like a substantive argument to me than dicking around with semantics. The whole point could have been made much more succinctly with a "taboo 'ought.'" Any argument that entails responding to "you ought to do X" with "prove it" is awfully unlikely to convince your interlocutor; it's rude and will only set them on edge.
3CronoDAS
"Taboo X" is a LessWrong-ism...
2Desrtopa
It is, but Less Wrong didn't invent the idea of recognizing arguments as conflicts of semantics.
9Nominull
If you're trying to win points for succinctness, including by reference the Sequences is probably not a good plan. That's the sin of hidden complexity.
-1Desrtopa
Assuming that your audience isn't familiar with the sequences and proceeds to go read the article, yes, that's not succinct. But the audience probably already has a cached idea of disagreements being semantic conflicts, so while he's not literally in a position to get the same idea across in two words, it could probably be compressed down at least as far as "When I say that I 'ought' to do something, I mean that it's in accordance with my own innate desires and values as a human. My values and desires are real 'is' facts about the universe with a physical basis, and so 'ought' facts can be neatly derived from 'is' facts. This is as useful a definition of 'ought' as you're likely to get, and a definition that divorces normative facts from positive ones (saying that you cannot derive an 'ought' from an 'is') doesn't offer any practical advantage."
1TimS
Quote from later in the post. It seems like Fyfe is saying "'Everyone else is doing it' is a reason for me to do the same." Does that seem right?
1TheOtherDave
Just looking at what you quote, it seems to me rather that he's saying that once I can demonstrate that others have sufficient reason for doing X, I have consequently demonstrated that sufficient reasons for doing X exist, and that was all I needed to do as far as ethics was concerned. In other words, that ethics is about determining reasons for action, full stop. Which I agree with, though I find Fyfe's presentation style here tendentious. (Edit: I would also say that I do think going further is useful. Specifically, "...further, those reasons apply to me just as well as those other people, and therefore I ought to do X" or "...however, those reasons don't apply to me, and therefore it is not the case that I ought to do X")

If our society seems more nihilistic than that of previous eras, perhaps this is simply a sign of our maturity as a sentient species. As our collective consciousness expands beyond a crucial point, we are at last ready to accept life's fundamental truth: that life's only purpose is life itself.

– Chairman Sheng-Ji Yang in Alpha Centauri

2DanArmak
I don't understand the quote. Under what definition of "nihilistic" does it make sense? Wikipedia says: Often true and valid. Agrees with the quote in that life has no purpose beyond itself - e.g. no supernatural gods. Doesn't follow, and is false in any case. Unless one argues that all existing or even possible things are senseless and useless. Which would render these two words quite senseless and useless, in my view. What is meant by 'nihilism' anyway?
4RobertLumley
I think "more nihilistic" is only meant to imply the progression of philosophical thought away from the dogmas of what "the purpose of life" was, which was for awhile, very broadly generalized, a progression from religion to nihilism. I also think nihilistic was chosen because it is a trope that is is much more present in the cultural vernacular than other, more more philosophically precise words, like absurdist, which would be more accurate.
2DanArmak
If I look at the Wikipedia one-line definition again, that seems to match: ...a sensible move away from religious, traditional values... ...which is branded by religionists as leading to thinking "existence is senseless and useless", although that's both empirically and logically wrong. This part is the 'meaning' of 'nihilism' in the vernacular, as you say.
2mwengler
In that same wikipedia article, follow the link to Moral Nihilism to learn: if morality is not objective, then moral propositions do not have truth or falsity about them, and all the discussions about morality are vapid. What Yang means is that he gets to make it up as he goes along, because 1) it is not wrong to make it up as he goes along, since in nihilism nothing is "wrong," and 2) there isn't a "right" either. It's possible a slightly warm-and-fuzzier Yang would choose Moral Relativism, which is Moral Nihilism's more conventional 2nd cousin. But Nihilism makes for a much better story; it is stark, and even the word sounds ominous.
3DanArmak
It seems very obvious and uncontroversial to me that morality is not objective. (Yay typical mind fallacy!) Morality is, or arises from, a description of human actions, judgements and thoughts. Aliens who behaved completely differently should be said to have different morals. It's not clear to me why someone would even think to argue for objective (=universally correct and unique) morals unless motivated by religion or tradition. Of course, it's also clear to me that our subjective morals add up to normality. For instance murder is generally morally wrong. Of course that should be read to say it's wrong in our eyes! Of course things are not right or wrong in themselves; value judgements, including moral values, are passed by observers who have values/preferences/moral theories. It seems to me that nihilism, if it is commonly understood to mean this, should be accepted by pretty much any materialist. This doesn't seem to be the case. What am I missing? What are the reasons to think there's something in nature (or in logic, perhaps) that should be identified as "objective morals"?
1mwengler
If I said "Murder is NOT wrong for humans, it is just a matter of personal choice" and you said "no you are wrong, murder is wrong for humans" I would conclude you are a moral realist, not a nihilist. I made a moral statement and you told me I was wrong. You seem to believe that that moral statement is either true or false no matter who says it, that "I think I'll murder Dan" is not just a subjective choice like "I think I'll read a Neil Gaiman book tonight" might be. But you also characterize morality as a description of human actions. If I say "I notice that murder is said to be wrong by many people but is practiced by some non-trivial minority of humans, there fore, since I observe it is part of the human moral landscape, I will pick a kid at random in the mall and shoot him." and you say "no, you shouldn't" then you are probably a moral realist. You apparently think that the proposition I proposed has a truth or falseness to it that exists outside yourself, and you are expressing to me that this statement I made is false. My moral nihilism which I have abandoned perhaps a week ago arose from my comparing the quality of moral facts and fact finding to the quality of scientific facts and fact finding. Science seemed developed through an objective process: you had to test the world to see if statements about the world were true or false. Whereas morality seemed to come entirely from intuitions and introspection. "you shouldn't kill random kids in the mall." "You should recycle." Blah blah blah where is even the test? In my case I was a nihilist in that I thought there was no sensible way to declare a moral statement to be a "fact" rather than a choice, but I was totally willing to kill reflecting my choices (i.e., kill someone who threatened me or my friends or my family). So I had what I thought was a de facto morality that I thought could not be justified as "fact" in the same way that engineering and physics textbooks could be justified. Upon being reminded o
1DanArmak
This is a bad framing of the issue. Murder (for humans) is not, properly speaking, right or wrong. Saying that it is will do for casual conversation but let's make things precise. The term "murder" also presupposes wrong-ness, so I'll replace it with 'killing'. Moral judgments (right/wrong) are descriptions given by people to actions. Killing may be wrong in my eyes, and separately in your eyes; it is not wrong or right in itself. This is true whether 'killing' stands here for a very specific case we are discussing, or whether we are making a generalization over some or all cases of actual or possible killing. (In the latter case, we will be implying some generalization such as 'most/all/typical/... cases of killing are wrong in X's eyes'.) We can also generalize over the person doing the moral judgment. For instance, if most/all/typical/... people think a case of killing is morally wrong, I can simply say that "it is wrong" without making explicit who does or doesn't agree with this judgment. This, as I noted above, is what we typically do in conversation - and it's OK, but only as long as everyone understands and agrees on who is said to (dis)approve of the action in question! Finally, all that I have said isn't necessarily incorrect even if you believe in objective moral truth. In that case you can view it as a definition of the words 'morally right/wrong'. We can talk about people's moral opinions even if there is a separate Objective Moral Truth that not all people agree with. We should just be clear when we're talking about truth, and when about opinions. However, I believe there is no such thing as objective moral truth. This isn't just because there's no evidence for it (which is true); the very concept seems to me to be confused. You say: Science starts with assumptions, and fundamental observations, that are about the objective world it describes. Morals start with assumptions and observations about human moral judgments. These judgments are the funct
0mwengler
To make it unambiguous, let us consider the action "mwengler, a human, goes to a randomly chosen location, abducts the first child under 4' tall he sees there, then takes that child, and kills it with a chainsaw. When asked about it he says 'I've done things like this before, I do it because I like the way it makes me feel.' " You say: This is an assertion which is either true or false. You assert it as true. By my reading of the definition, this makes you a moral nihilist. This on my part is not an act of judgment, but rather of labeling in a way which is common enough among a community who thinks about stuff like this to have been spelled out rather clearly in a wikipedia article. There are plenty of people who do believe there is an objective moral truth. So many that there is a label for it: moral realism. You can read about it in wikipedia and in the Stanford Encyclopedia of Philosophy. The concept may be "confused," but it may be less "confused" after you read what some really clear philosophy writers have to say about it. By this I take it to mean you would like to define "it is wrong" and "it is right" to mean "most people think it is wrong" and "most people think it is right." I find a lot of problems with that definition. First, as a physicist I recognize a world of difference between 1) "electrons repel each other" and 2) "most physicists think electrons repel each other." They are probably both true, but negating the first would have vast implications for all of electronics, while negating the second would represent a remarkable, and possibly disastrous, social phenomenon. So I'd love to keep the semantic distinction between what is and what people think is. I guess my point about this would be, yes, it's natural to translate objective language into subjective rough equivalents if you think the subjectivity of morality is natural, inescapable, unavoidable. But you will misunderstand other people and be misunderstood by them if you do so and assume that they do
1DanArmak
Correct. I want to point out what this is an assertion about: it is about the meaning of the word 'morals'. I.e. a definition, not a statement of logical or physical fact. If you think that "there are objective morals" that is a different claim about the meaning of the word, but also (and much more importantly) a claim about the existence of something - and I'm asking you to define that something. Let's leave aside for now the issue of why you call this something "morals", let's taboo that word. Please describe this objectively existing something you are talking about. I don't even know if this is evidence for or against them being right. There are plenty of people who are very wrong about lots of things that are not part of their everyday lives. I have now read both articles. (You linked to Stanford twice, so I read the WP article "Moral Realism".) Wikipedia doesn't give a single argument for moral realism, it just says that if we accept it, that makes it convenient to reason about morals. Which is not evidence. The Stanford article lists many arguments against realism, but no arguments for it. It seems to conclude that because realism arises from "common sense and initial appearances" [I disagree strongly] and because they identify problems with some alternatives, realism should not be dismissed. Yet they identify no problems with my approach; and even if they did find problems with all known other approaches, as long as there is no problem to be found with the rejection of realism in itself, then there is no valid reason to accept realism. To sum up: moral realism claims truth-properties for moral statements, but it also claims they cannot be evaluated for truth on the basis of any observations of the physical objective universe. That reduces it to the statement "our common sense tells us so, you can't prove us wrong, we don't have to prove ourselves right". Not very great philosophy. No I don't want to define it so. It can and does mean different things i
0mwengler
I got lost in all the comments and accidentally replied to you in a reply to myself. That comment is here: http://lesswrong.com/lw/dei/rationality_quotes_july_2012/6z6h?context=3
-2Eugine_Nier
This is ultimately the case for all statements.
0DanArmak
I fail to see the relevance. Humans convince each other of many things all the time. If we couldn't, we wouldn't be here on this site! There are minds "out there" in mind-space whom we couldn't convince, but that doesn't mean there are such human minds, because humans are quite similar to one another. Are you seriously suggesting humanity is divided into moral realists and anti-realists, and no realist can possibly explain to me or convince me of their position and even talking about it is pointless?
-2Eugine_Nier
Yes, and those things include moral statements. No, because most if not all humans who call themselves moral non-realists are actually moral realists who believe themselves to be moral non-realists.
0DanArmak
Exactly. So I'm asking to be convinced - I'm asking for the evidence that convinced others to be moral realists. So far no such evidence has been given. Why do you think so? Where do I act as if I believed in moral realism? I am not aware of such.
-2Eugine_Nier
This is similar to the way people who claim to be physical non-realists still manage to avoid walking out of upper-story windows. If someone punched you or stole your stuff, I strongly suspect you'd object in moral terms.
0[anonymous]
To me, this is a point in favor of anti-realism. I hardly react at all when strangers get punched and worse (as we speak, probably). Tragedy is when I cut my finger.
-2Eugine_Nier
Ethical egoism is still a form of moral realism. (Disclaimer: I don't necessarily endorse full ethical egoism.)
0DanArmak
Of course I would, and that doesn't make me a moral realist. I would say: by the morals that I feel, and most other people also feel and agree on, the person who assaulted me acted immorally. Nothing to do with objective moral rules: just rules that I and most other people feel to be moral and agree on. More importantly, even if some people in my place would appeal to "objective/factual morals", that is not in itself evidence for the existence of such objective morals. Especially since, when I ask them (you) how they perceive these objective morals, or how they even know them to exist, I have so far received no answer.
-2Eugine_Nier
What if I said that by the morals I feel it's ok for me to hit you? You could answer that most people disagree with me, but I suspect you'd object to being punched even if, e.g., you belonged to a low status group that people thought it was ok to abuse.
0DanArmak
I live by my morals, whether or not others share them. This doesn't change the fact that they are my morals, a feature of my brain state, and not some sort of objective independently existing morals. It's exactly the same situation as saying that I feel that my girlfriend is beautiful whether or not others agree, but that doesn't mean there's an objective standard of beauty in the universe that doesn't depend on observers. If I belonged to a low status group that most people had no moral issues with abusing, then I would keep saying they behave immorally according to my views, and they would keep ignoring my words and abusing me. I fail to see what about this situation suggests that I behave as if I believe in realist morals.
-4Eugine_Nier
And you would really be ok with them living by their morals and abusing you?
1DanArmak
Of course I would not be OK. I would want them to change their behavior and I would try to change it. This would be because of my preferences as to how people should behave towards me. These preferences don't exist independently of me. Morals are a special kind of preference. Saying "there exist someone's morals but there do not exist morals by themselves" is exactly the same as saying "there exist someone's preferences but there do not exist preferences by themselves".
-2Eugine_Nier
So why should they act in accordance with your preferences?
1DanArmak
They wouldn't. Nobody ever acts other than by their own preferences. Me by mine, they by theirs. This is pretty much baked into the definition of 'preferences', although with non-utility-maximizers like humans the situation is more complex than we'd like. This is inherent in your own description of the scenario. You said they abuse me. So presumably their preferences (including their morals) are OK with that. I'm sure you understand all this. What made you think I believed anything different?
-2Eugine_Nier
Prediction: If you were forced to consider the situation in near mode, e.g., if you had something to protect that was being threatened, you wouldn't be arguing that preferences are relative to the individual, but that the other person was acting immorally.
1DanArmak
If I were in a crisis, I would be arguing whatever was most likely to convince the other person. If the other person was a moral realist - and most people instinctively are if they never really thought about the issue - then I would argue moral realism. And if the other person was religious - as again most people instinctively are - then I would argue about god. In neither case is that evidence that I believe in moral realism, or in gods; I would just be choosing the most effective argument. And even if I did believe in moral realism, or the fact that many others do - that is not strong evidence for moral realism itself, because it is explained by evolutionary reasons that made us feel this way. Valid evidence is not that people believe one way or another, but the reasons they can explicitly articulate for that belief. When you observe that most or even all people behave like moral realists under pressure, that is a fact about people, not about moral realism.
-1Eugine_Nier
So you admit that there are two different kinds of objective facts. Given that there are two different kinds, why can't there be more?
2DanArmak
These are two quite different things. We group them under one name, 'facts', but that is just a convention. That's why I wanted to find out which kind we were talking about. Saying that "there might be a third kind" is misleading: it is a matter of definitions of words. You propose there might be some undiscovered X. You also propose that if we discovered X, we would be willing to call it "a new kind of fact". But X itself is vastly more interesting than what words we might use. Therefore please taboo "fact" and tell me, what is it you think there may be more of?
-2Eugine_Nier
There's a reason we use the same word for both of them. They have a lot in common, for example being extremely objective in practice.
0DanArmak
Certainly, they have a lot in common, as well as a lot of differences. But this discussion doesn't seem profitable. We shouldn't be discussing the probability that "another kind of fact" exists. Either someone has a suggestion for a new kind of fact, which we can then evaluate, or else the subject is barren. The mere fact that "we've not ruled out that there might exist more things we would choose to apply the word 'fact' to" is very weak evidence. We've not ruled out china teacups in solar orbit, either, but we don't spend time discussing them.
-2Eugine_Nier
So if I understand your meta-theory correctly, anyone living before the scientific method, or who simply hasn't heard of it, should be a Cartesian skeptic.
0DanArmak
I'm sorry, I don't understand what you mean. By "Cartesian skeptic" do you mean a Cartesian dualist who is skeptical of pure materialism? Or a Cartesian skeptic who does not wish to rely on his senses, who is skeptical of scientific inquiry into objective reality? Or something else?
-2Eugine_Nier
Someone who doesn't believe his sense inputs necessarily reflect any reality.
0DanArmak
That's not physical anti-realism, but it's a sort of skepticism about physical realism. However, nothing can "prove" physical realism correct if you don't already accept it. If someone doesn't believe his sense inputs reflect something with independent existence, then any new information they receive via those very same sense inputs can't logically influence their belief. Learning about the scientific method would not matter. Living today or at Descartes' time or ten thousand years ago, there are still exactly the same reasons for being a physical realist: the world just seems that way, we act that way even if we proclaim we don't believe in it, we can't change or escape the world we perceive via our senses by wishing it, and we have a strong instinct not to die.
0TimS
There could be more. It just turns out that there aren't.
-2Eugine_Nier
Do you have any evidence for this besides not being able to think of a third meta-theory?
0DanArmak
Do you have any evidence against it? Are you able to think of a third?
-2Eugine_Nier
The zero-one-infinity heuristic.
0DanArmak
Interesting point. But that's very weak evidence (because as I said the two known instances have significant differences). Also, this is a heuristic and produces many false positives. At best it motivates me to remain open to arguments that there might be more kinds of 'truth', which I am. But the mere argument that there might be is not interesting, unless someone can provide an argument for a concrete example. Or even a suggestion of what a concrete example might be like.
-2Eugine_Nier
You should study more history of ideas; once you see several examples of seemingly-unsolvable philosophical problems that were later solved by intellectual paradigm shifts, you become much less willing to believe that a particular problem is unsolvable simply because we currently don't have any idea how to solve it.
0DanArmak
I don't believe a problem is unsolvable. I don't see a problem in the first place. I don't have any unsolved questions in my world model. You keep saying I should be more open to new ideas and unsure of my existing ideas. But you do not suggest any concrete new idea. You also do not point to the need for a new idea, such as an unsolved problem. You're not saying anything that isn't fully general and applicable to all of everyone's beliefs.
-2Eugine_Nier
The physical anti-realist doesn't see any problem in his world view either.
0DanArmak
I'm not interested in dialogue with physical anti-realists. Certain mutual assumptions are necessary to hold a meaningful conversation, and some kind of physical realism is one of them. Another example is the Past Hypothesis: we must assume the past had lower entropy, otherwise we would believe that we are Boltzmann brains and be unable to trust our memories or senses. A third example is induction: believing the universe is more likely to be lawful than not, that our past experience is at least in principle a guide to the future. If moral realists are on the same level as physical realists - if they have no meaningful arguments for their position based on shared assumptions, but rather say "we are moral realists first and everything else follows, and if it conflicts with any other epistemological principles so much the worse for them" - then I'm not interested in talking to them (about moral realism). And I expect a very large proportion of people who agree with LW norms on rational thinking would say the same.
0mwengler
I'd like to tie up some of the things you said in your first and second posts in this thread. You started with: and I responded with a link to the definition of moral nihilism in wikipedia, saying moral nihilism is the label for the belief that there are no objective moral truths or falsehoods. You responded with When you say something is "uncontroversial," that means that, barring some class of people too stupid or whacky to be bothered with, people competent to have an opinion agree with you that it is obvious that morality is not objective. In response to that I briefly summarized moral realism and linked to its wikipedia entry. By this I intended to show that the community of people who believed morality to be objective was competent and obvious enough to have a simple label on their belief, with that label cited and explained widely by (it seems) everybody who bothers to summarize moral philosophy. I did NOT mean to suggest that moral realism is obvious or that I believe it or that the negation of moral realism is obvious or that I believe that. I DID mean to suggest that it IS controversial, meaning literally that there are moral realists who controvert ("speak against") nihilists and nihilists who speak against moral realists. From there we go into the weeds, or at least I do. First, I muddied the waters by talking about a particular example which I thought clarified some issues but which seems to clarify nothing, so it is not worth even mentioning again. But the other sort of amazing thing to me is you keep asking me to define moral realism. What do you want me to do, copy the first few paragraphs from the wikipedia article? I'm not going to do a better job than they do. If you think the definition is dopey or meaningless or whatever, then oh well. I have nothing to add. A belief that morality is subjective is controversial by any straightforward meaning of that word; nothing else I have said is as relevant to anything else you have said as that.
1DanArmak
To quote the definition of moral realism from Wikipedia: This immediately raises three questions: 1. How are propositions made true by objective features of the world? 2. Do we find that these objectively true propositions match our moral intuitions? If they do, then whose? But most importantly: 3. Why do you think some answer to (1), this mapping of non-moral fact to moral fact, of 'is' to 'ought', is unique, objective, morally important? The knowledge or belief in moral realism is acquired. People may be born with moral realist intuitions, but they are not born with coherent arguments in favor of moral realism. And no-one has the right to just believe something without proof. So my question is: what is the evidence that convinced any moral realist to be a moral realist? This is essential; all else is secondary. I've not found such evidence anywhere. In everything that I've read about moral realism, people are just trying to justify intuitions they have about morals, to claim that if not their morals then at least some morals must be objective and universal. As far as I can tell right now, the sole cause of some people being moral realists is that it gives them pleasure to believe so. They have faith in moral realism, as it were. Then, assuming that belief is provisionally true, they look for models of the world that will allow it to be true. But such reasoning is wrong. They must show evidence for moral realism in order to have the right to believe in it. Beliefs in gods, fairies, and p-zombies are also controversial. That doesn't make them worthy of discussion. In my phrasing in previous posts I may have assumed you yourself were at least uncertain about the truth of moral realism, and therefore knew of some valid argument for it. I talked of things being controversial or not on LW, not among all humanity. I'm sorry that that was unclear and confused the conversation.
1Eugine_Nier
You can replace the phrase "moral realist" with "physical realist" in the above statement and your subsequent argument and it remains equally valid.
1DanArmak
What exactly do you mean by 'physical realism'? At first I thought it was something like the simple claim that "the physical world objectively exists independently of us", or maybe like positivism. But googling 'physical realism' brings up mostly pseudoscientific nonsense, so it may not be a commonly used term, and there are no wikipedia/Stanford/etc. entries. So I wanted to make sure what you meant by it.
-2Eugine_Nier
More or less this.
0DanArmak
OK. Then your point is that people believe in a physical reality that exists independently of them only because of intuition - the way their minds are shaped. This is correct as a description of why people in fact believe in it. The rejection of physical realism is solipsism. It is not a fruitful position, however, in the sense that people who say they don't believe in physical reality still act as though they believe in it. They don't get to ignore pain, or retreat into an imaginary world inside their heads. I believe this is known as the "I refute it thus!" kicking-a-stone argument. My argument against moral realism does not work against physical realism. My argument is basically "show me the evidence", and physical anti-realism rejects the very concept of evidence. Physical realism is a requirement for my argument and for every other argument about the physical world, too. As for the more general point that we only believe in physical realism because of intuitions, and that we have similar intuitions for moral realism: once we understand why a certain intuition exists, evolutionarily speaking, that accounts for the entirety of the evidence given by the intuition. For instance we have a strong intuition that physics is Aristotelian in nature, and not relativistic or quantum. We understand why: because it is a good model of the physical world we deal with at our scale; relativistic and quantum phenomena do not happen much at our scale, so evolution didn't build us to intuit them. Similarly, we have moral intuitions, which both say things about morals and also say that morals are objective. From an evolutionary perspective, we understand why humans who believed their morals to be objective tended to win out over those who publicly proclaimed they were subjective and malleable. And that's a complete explanation of that intuition; it doesn't provide evidence that morals are really objective.
1nshepperd
At this point I might ask you what you both think you mean by morals being "really objective". Does it mean that all minds must be persuaded by it? But that is of course false, since there is always a mind that does the opposite. Does it mean that it's written on a stone tablet in space somewhere? But that seems irrelevant, because who would want to follow random stone-commandments found in space anyway, and what if someone modified the stone tablet? Does it mean something else? The definition of prime numbers isn't found on a stone tablet anywhere, or written in the fabric of space-time. Only the pebblesorters would be persuaded by an argument that a heap of 21 pebbles is composite. Yet would you say that the number 21 is "objectively" composite? Is the "existence" of anything necessary to make 21 composite?
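A minimal worked example of the kind of observer-independent mathematical fact being gestured at above: the compositeness of 21 follows from its factorization alone, with no stone tablet and no universal assent required.

```latex
% Sketch: 21 is composite because it has a nontrivial factorization.
% Nothing about any observer enters into the statement or its proof.
\[
  21 = 3 \times 7, \qquad 1 < 3 < 21, \qquad 1 < 7 < 21,
\]
\[
  \text{hence } \exists\, a, b \in \mathbb{N} \text{ with } 1 < a, b < 21 \text{ and } a \cdot b = 21,
\]
% which is exactly the definition of "21 is composite".
```

Whether anything analogous can be said for moral claims is, of course, the very point under dispute in this thread.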
2mwengler
I'm a fan of using other people's definitions of words, what with the purpose of words being to communicate with other people and all. Wikipedia does a nice job (see the section on Objectivity in ethics). This article gives very concise descriptions of different types of subjective and objective ethical theories. The basic meaning, my summary of an already very summary wikipedia article, is this. Subjective ethical theories say that moral statements are LIMITED TO ones on which fully informed well-functioning rational minds could (or do?) disagree, while objective ethical theories hypothesize AT LEAST SOME moral statements which are "mind independent," on which fully informed well-functioning rational minds would agree because the truth is "out there in the world" and not a creation of the mind. Dan made an interesting point early on that 'what was right and wrong for humans could be very different from what is right and wrong for an alien intelligence.' On its face, I would measure this statement as an objective and moral statement, and therefore if true, this statement would be part of an objective moral theory. A slightly different statement that I would judge as objective, but not moral, would be 'what a human believes is right and wrong may be very different from what an alien intelligence believes is right and wrong.' In the first version, we are actually making a statement about what IS right and wrong. Saying that ANYTHING is right or wrong is a moral statement. The fact that we say what IS right and wrong for humans and aliens might be different doesn't make these statements any less objective, any more than saying "it is wrong to drive on the right side of the road in Britain, but it is wrong to drive on the left side of the road in France" is subjective. Any fully qualified moral statement will need to have the conditions under which the moral statement applies or not. If those qualifications include facts of location, genetics, rank or office, this does not make these statements
1DanArmak
I'm not sure what it means. I hear people say the words, "morals are or may be objective", and I ask them what they mean. And they only answer very vaguely and talk about things like "how can you be sure nothing is objective besides physics and logic" and "there exist undiscovered things that if we knew about we'd describe using the word 'objective'" and so on. At this point I don't want to assign meaning to "morals are objective". I want to taboo the word and hear some actual statements from someone who came into this discussion assigning concrete meaning to that statement (whether or not they believed it).
-2Eugine_Nier
Belonging to the same similarity cluster in thing space as mathematics and statements about the world.
0DanArmak
In your comment that you link to, you give a more narrow definition, specifying "the scientific method". I agree there might be things outside of that (which will undoubtedly be absorbed into accepted science over time, mutating the concepts of the scientific method to suit new knowledge). But here you specify all "statements about the world". In that case I can say outright that in no meaningful sense does there "exist" something not in the world which cannot interact with the world. By the generalized p-zombie principle: if it cannot interact with us, then it is not causally involved with your reason for speaking about it. Nothing you will ever think or do or say or believe in, or perceive with your senses, will be causally related to something outside "the world". So there is no reason to ever discuss such a thing. Further, math (logic) is in the world. It does not have some Platonic independent "existence" because existence is a predicate of things in the physical world; it makes as much sense for a pure circle to exist as to not exist. The reason we talk about math is that it is lawfully embodied in the physical world. Our brains are so built as to be able to think about math. When we think about math we find that we enjoy it, and also that we can use it for useful purposes of applied science. So we keep talking more about math. That is a complete explanation of where math comes from. No additional postulate of math "objectively existing" is required or indeed meaningful.
-2Eugine_Nier
I don't find the generalized p-zombie principle particularly convincing, in part because it's not clear what "interact" means. I think you're using the word "exists" to mean something different from what I mean by it. This may be one source of confusion.
0DanArmak
It means 'causally influence in at least one direction'. Two systems are said to interact if knowing something about one of them gives you information about the other. I know two meanings of the word 'exist'. First, predicate about states of the physical world (and by extension of other counterfactual or hypothetical worlds that may be discussed). There exists the chair I am sitting on. There does not exist in this room a sofa. Second, 'exists' may be a statement about a mathematical structure. There exist irrational numbers. There exists a solution to a certain problem, but not to another. What do you mean by 'exists'?
-2Eugine_Nier
Well, when you start dealing with mathematical systems, causality becomes a very tricky concept. Well, knowing mathematics certainly helps with studying the physical world. Belong to the same cluster in thing space as your two examples.
0DanArmak
IIUC this unpacks to "things such that if we talked about them, we would decide to use the same words as we do for the two examples". Applying this to "objective morals", I don't feel that the statement tells me much. If this is all you meant, that's a valid position, but not very interesting in my view. Could you more explicitly describe some property of objective morals, assuming they "exist" by your definition? Something that is not a description of humans (what word we would use to describe something) but of the thing itself?
0mwengler
It would seem that you believe that. So what is your proof? And that is a moral statement to boot. From the more physicsy side, I'd guess you believe that the sun will rise tomorrow, and that electrons at the center of the earth have the same rest mass and charge as the insanely small number of electrons that have actually had their mass and charge carefully measured. You probably believe A or not-A and that 2+2=4. Are you familiar with Gödel's theorem? My recollection of it is that he proved that any consistent formal system above a certain trivial level of complexity has true statements in it that are unprovable within the system. I can tell you the easy way out is to claim you don't believe anything, that it is all in your head, not just the moral stuff. Of course then we need to invent new language to describe the difference between statements like "no-one has the right to just believe something without proof" and "no-one has the right to just believe something with proof." I guess you could say that just because you believe something doesn't mean you believe it is true, but that starts to sound more like a non-standard definition of the word "believe" than anything with useful content. As to my explaining to you the details of why anyone should be a moral realist, I'm not interested in attempting that. I'm not a committed moral realist myself, and I'd just have to do a lot of reading myself to find a good description of what you ask for. Sorry.
0DanArmak
Proof is only meaningful in a system of shared assumptions (physics) or axioms (logic). The statement that "it's wrong to believe without proof", equivalently, that "a single correct set of beliefs is mandated by your proof (=evidence) and assumptions (=prior)", is a logical consequence of the rules of Bayesian inference. If moral realists, or anyone else, don't agree to Bayesianism or another commonly understood framework of proof, logic, and common assumptions, then I'm not interested in talking to them (about moral realism). I believe all those things (with very high probability, to be pedantic). I know Gödel's incompleteness theorems. Saying that some true things are formally unprovable in a logic system is not significant evidence that any specific unproven statement (e.g. moral realism) is in fact true just because it's not disproven. And the theorem doesn't apply to probabilistic belief systems modeling the physical universe: I can have an arbitrary degree of confidence in a belief, as long as it's not probability 1, without requiring logical proof. However, the situation for moral realism isn't "unproven conjecture", it's more like "unformalized conjecture whose proponents refuse to specify what it actually means". At least that's the state of debate in this thread. Yet you assign sufficient probability to moral realism to think it's worth discussing or reading about. Otherwise you'd have said from the start, "I agree with you that moral realism has no evidence for it, let's just drop the subject". To have such a high prior requires evidence. If you don't have such evidence, you are wrong.
-2mwengler
I'm glad you recognize that. Then you should also recognize that for reasons having nothing to do with logical necessity you have accepted some things as true which are unprovable, in your case a particular interpretation of how to do Bayesian reasoning. All you have offered so far is assertion, and the appearance that you don't even realize that you are making assumptions until after it is pointed out. When I found myself in that position, it humbled me a bit. In Bayesian terms, it moved all of my estimates further from 0 and 1 than they had been. In any case, whatever program you have used to decide what you could assume, what if your assumptions are incomplete? What if you simply haven't tried hard enough to have something "more than zero" on the moral side? So you have picked your church and your doctrine and you wish to preserve your orthodoxy by avoiding intelligent apostates. This is not a new position to take, but it has always seemed to me to be a very human bias, so I am surprised to see it stated so baldly on a website devoted to avoiding human biases in the twenty-first century. Which is to say, are you SURE you want to treat your assumptions as if they were the one true religion? Why limit yourself to this one thread, populated as it isn't by anyone who claims any real expertise? Your particular rejection of moral realism doesn't seem to reflect much knowledge. For a Bayesian, knowing that other intelligent minds have looked at something, gathered LOTS of evidence and done lots of analysis, and reached a different conclusion than your prior should LOWER your certainty in your prior. Finding one guy who can't or won't spoonfeed you concentrated moral realism, and claiming on that basis that your prior of essentially zero must stand, is not at all how I interpret the Bayesian program. In my interpretation, it is when I am ignorant that my mind is most open, that my estimates are furthest from 0 and 1. I wish someone like Eliezer, or who knows his morality wel
0DanArmak
Apologies for replying late. You seem to misunderstand my comments. Normally one tries to assume as little as necessary. To argue in favour of new assumptions, one might show that they are necessary (or even sufficient) for some useful, desirable conclusions. Are there any such here? If not, why assume e.g. moral realism when one could just as well assume any of infinitely many alternatives? NO. This is completely wrong. You have not understood my position. I said: Note emphasis. I am not demanding dialogue within a specific worldview. I'm asking that the rules of the worldview being discussed be stated clearly. I'm asking for rigorous definitions instead of words like "morality objectively exists", which everyone may understand differently. This thread is on LW. When people here said they gave a high prior to moral realism (i.e. did not dismiss it as I did), I assumed they were rational about it: that they had some evidence to support such a prior. By now it's pretty clear that this is not the case, so after these last few posts of clarification I think the thread should end. As for looking elsewhere, I did when referred - as with the Stanford encyclopedia of philosophy - and the presumably high quality summaries there confirmed me in my belief that there's nothing to moral realism, it's not a defensible or even well defined position, and it is not worth investigating. I lowered it. That's why I was willing to spend time on this conversation. Then I examined the evidence those other minds could offer and raised my certainty way back up. That is just wrong. A prior of zero knowledge does not mean assigning 0.5 probability to every proposition. Propositions are entangled, so that would be inconsistent. Besides, you have evidence-based priors about other propositions entangled with this one, so your prior isn't naive anyway. No he's not. See this post which was recently on Sequence Reruns and the other posts linked from it. See also the entire Metaethics Seq
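To spell out the point above that a uniform 0.5 prior is inconsistent, here is a minimal sketch (the propositions A and B are arbitrary placeholders, not anything specific from the thread):

```latex
% Sketch: assigning probability 0.5 to every proposition violates the
% sum rule, because propositions are logically entangled with one another.
\[
  P(A) \;=\; P(A \wedge B) \;+\; P(A \wedge \neg B)
\]
% If A, (A and B), and (A and not-B) were each assigned probability 0.5:
\[
  0.5 \;\neq\; 0.5 + 0.5 \;=\; 1.0
\]
```

So a coherent prior cannot give every proposition the same probability; even "zero knowledge" has to respect the logical structure relating propositions.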
3nshepperd
Perhaps they misunderstood what was referred to by "moral realism". The phrase certainly doesn't seem to be very well defined. For example Eliezer does say that there are things that are actually right, and actually wrong. mwengler seems to think this is sufficient to make him a moral realist. You don't. Classic recipe for confusion.
0duckduckMOO
Nihilist means moral anti-realist here, I assume. This was how I always used the term originally.
1[anonymous]
I've found it useful to taboo and reduce "nihilist" because there are so many different definitions and connotations. I think Richard Joyce authored a paper on a similar point.
0amcknight
I wouldn't use wikipedia to get the gist of a philosophical view. In my experience it's way off a lot of the time, this time included. Sorry I don't have a clear definition for you right now though.
0mwengler
Because for certain concepts, lesswrong is an echo chamber. Unfortunately, the idea that lesswrong is NOT an echo chamber is another one of those concepts. So I will retract this comment.
-1[anonymous]
Don't quite see why this is so downvoted.
0RobertLumley
Me either. This is one of my favorites. But that's why I posted it. :-)