Rationality Quotes September 2013

5 Post author: Vaniver 04 September 2013 05:02AM

Another month has passed and here is a new rationality quotes thread. The usual rules are:

  • Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
  • No more than 5 quotes per person per monthly thread, please.

Comments (456)

Comment author: [deleted] 29 September 2013 02:42:26PM 2 points [-]

Here’s the bigger point: Americans (and maybe all humans, I’m not sure) are more obsessed with words than with their meanings. I will never understand this as long as I live. Under FCC rules, in broadcast TV you can talk about any kind of depraved sex act you wish, as long as you do not use the word “fuck.” And the word itself is so mysteriously magical that it cannot be used in any way whether the topic is sex or not. “What the fuck?” is a crime that carries a stiff fine -- “I’m going to rape your 8-year-old daughter with a trained monkey,” is completely legal. In my opinion, today’s “gluten-free” cartoon is far more suggestive in an unsavory way than the vampire cartoon, but it doesn’t have a “naughty” word so it’s okay.

Are we a nation permanently locked in preschool? The answer, in the case of language, is yes.

Bizarro Blog

Comment author: Jiro 30 September 2013 12:41:50AM 16 points [-]

Sorry, this is nonsense. It's not hard to Google up a copy of the FCC rules. http://www.fcc.gov/guides/obscenity-indecency-and-profanity :

"The FCC has defined broadcast indecency as “language or material that, in context, depicts or describes, in terms patently offensive as measured by contemporary community standards for the broadcast medium, sexual or excretory organs or activities.”

I am fairly sure that "I’m going to rape your 8-year-old daughter with a trained monkey" would count as describing sexual activities in patently offensive terms, and would not be allowed when direct use of swear words would not be allowed. Just because you don't use a list of words doesn't mean that what you say will be automatically allowed.

Furthermore, the Wikipedia page on the seven words ( http://en.wikipedia.org/wiki/Seven_dirty_words ) points out that " The FCC has never maintained a specific list of words prohibited from the airwaves during the time period from 6 a.m. to 10 p.m., but it has alleged that its own internal guidelines are sufficient to determine what it considers obscene." It points out cases where the words were used in context and permitted.

In other words, this quote is based on a sound-bite distortion of actual FCC behavior and, as inaccurate research, is automatically ineligible to be a good rationality quote.

Comment author: Lumifer 30 September 2013 05:09:17PM 5 points [-]

I am fairly sure that "I’m going to rape your 8-year-old daughter with a trained monkey" would count as describing sexual activities in patently offensive terms, and would not be allowed when direct use of swear words would not be allowed.

What is the basis for you being sure?

Howard Stern, a well-known "shock jock," spent many years on airwaves regulated by the FCC. He more or less specialized in "describing sexual activities in patently offensive terms," and while he had periodic run-ins with the FCC, he, again, spent many years doing this.

The FCC rule is deliberately written in a vague manner to give the FCC discretionary power. As a practical matter, the seven dirty words are effectively prohibited by FCC and other offensive expressions may or may not be prohibited. Broadcasters occasionally test the boundaries and either get away with it or get slapped down.

Comment author: hairyfigment 30 September 2013 05:18:45PM 0 points [-]

Yes, and this illustrates another problem: if we agreed on what to ban, it would make more sense to use discretionary human judgment than rules which might be manipulated or Munchkin-ed. We don't agree.

I do think it would make sense in the abstract to ban speech if we had scientific reason to think it harmed people, the way we had reason to think leaded gasoline harmed people in the 1920s. But I only know one class of speech where that might apply, and it'll never get on TV anyway. ^_^

Comment author: Eugine_Nier 29 September 2013 06:31:24PM *  -1 points [-]

The reason is that banning certain words works much better as a Schelling point.

Comment author: Nornagest 29 September 2013 09:29:46PM 6 points [-]

I don't buy it. Even if we accept for the sake of argument that limiting sexual references on broadcast TV is a good plan (a point that I don't consider settled, by the way), using dirty words as a proxy runs straight into Goodhart's law: the broadcast rules are known in advance, and innuendo's bread and butter to TV writers. A good Schelling point has to be hard to work around, even if you can't draw a strict line; this doesn't qualify.

Comment author: Desrtopa 29 September 2013 07:44:23PM *  6 points [-]

Better for what, and better than what alternatives?

Comment author: Eugine_Nier 02 October 2013 04:23:04AM -1 points [-]

You wind up in endless arguments about whether this particular show is beyond the pale.

Comment author: Desrtopa 02 October 2013 02:43:51PM 2 points [-]

That doesn't seem like it answers my question.

What's the goal in this case? This sounds like it's only attempting to address effectiveness at avoiding disputes over standards, but that could more easily be achieved by not having any restrictions at all.

Comment author: arundelo 26 September 2013 05:17:23PM 15 points [-]

We have this shared concept that there's some baseline level of effort, at which point you've absolved yourself of finger-pointing for things going badly. [.... But t]here are exceptional situations where the outcome is more important than what you feel is reasonable to do.

-- Tim Evans-Ariyeh

Comment author: Nectanebo 27 September 2013 02:29:44PM 2 points [-]

Ideally, it would be nice if the world could move towards caring about the full outcome, rather than factors like the satisfaction of baseline levels of effort, in more and more situations, not just exceptional ones.

Comment author: Kawoomba 25 September 2013 07:56:08PM 8 points [-]

I believe that the final words man utters on this Earth will be "It worked!", it'll be an experiment that isn't misused, but will be a rolling catastrophe. (...) Curiosity killed the cat, and the cat never saw it coming.

Jon Stewart, talking to Richard Dawkins (S18, E156)

Comment author: snafoo 01 October 2013 12:27:16AM 4 points [-]

Let's get one thing straight: ignorance killed the cat.

Curiosity was framed.

Comment author: Polina 25 September 2013 04:02:57PM 7 points [-]

Satisfy the need to belong in balance with two other human needs—to feel autonomy and competence—and the typical result is a deep sense of well-being.

Myers, D. G. (2012). Exploring social psychology (6th ed.). New York: McGraw-Hill, P.334.

Comment author: simplicio 27 September 2013 02:16:20PM 2 points [-]

So basically: be close to friends and family, save some money, find a job you're good at.

Comment author: Polina 27 September 2013 04:30:43PM 1 point [-]

That's close to my understanding of the quote. I suppose "autonomy" means not just financial independence, but a sense of inner self, something beyond social roles.

Comment author: lavalamp 23 September 2013 09:42:20PM 15 points [-]

You asked us to make them safe, not happy!

--"Adventure Time" episode "The Businessmen": the zombie businessmen are explaining why they are imprisoning soft furry creatures in a glass bowl.

Comment author: Jay_Schweikert 22 September 2013 05:45:50PM 6 points [-]

Zortran, do you ever wonder if it's all just meaningless?

What's "meaningless?"

It's like... wait, really? You don't have that word? It's a big deal over here.

No. Is it a good word? What does it do?

It's sort of like... what if you aren't important? You know... to the universe.

Wait... so humans have a word to refer to the idea that it'd be really sad if all of reality weren't focused on them individually?

Kinda, yeah.

We call that "megalomania."

Well, you don't have to be a jerk about it.

Saturday Morning Breakfast Cereal

Comment author: XerxesPraelor 20 September 2013 05:07:43PM *  27 points [-]

There is one very valid test by which we may separate genuine, if perverse and unbalanced, originality and revolt from mere impudent innovation and bluff. The man who really thinks he has an idea will always try to explain that idea. The charlatan who has no idea will always confine himself to explaining that it is much too subtle to be explained. The first idea may be really outré or specialist; it may be really difficult to express to ordinary people. But because the man is trying to express it, it is most probable that there is something in it, after all. The honest man is he who is always trying to utter the unutterable, to describe the indescribable; but the quack lives not by plunging into mystery, but by refusing to come out of it.

G K Chesterton

Comment author: ChristianKl 22 September 2013 11:34:30AM *  6 points [-]

The man who really thinks he has an idea will always try to explain that idea.

I don't think that's the case. There are plenty of shy intellectuals who don't push their ideas on other people. Darwin sat on his big idea for more than a decade.

There are ideas that are about qualia. It doesn't make much sense to try to explain to a blind person what red looks like, and the same goes for other ideas that rest on observed qualia instead of resting on theory. If I believe in a certain idea because I experienced certain qualia, and I have no way of giving you the experience of the same qualia, I can't explain the idea to you. In some instances I might still try to explain to the blind what red looks like, but there are also instances where I see it as futile.

One way of teaching certain lessons in Buddhism is to give a student a koan that illustrates the lesson and let him meditate over the koan for hours. I don't see anything dishonest about teaching certain ideas that way.

If someone thinks about a topic in terms of black and white, it just takes time to teach him to see various shades of grey.

Comment author: shminux 16 September 2013 04:54:16PM 13 points [-]

Rationality wakes up last:

In those first seconds, I'm always thinking some version of this: "Oh, no!!! This time is different. Now my arm is dead and it's never getting better. I'm a one-armed guy now. I'll have to start drawing left-handed. I wonder if anyone will notice my dead arm. Should I keep it in a sling so people know it doesn't work or should I ask my doctor to lop it off? If only I had rolled over even once during the night. But nooo, I have to sleep on my arm until it dies. That is so like me. What happens if I sleep on the other one tomorrow night? Can I learn to use a fork with my feet?"

Then at about the fifth second, some feeling returns to my arm and I experience hope. I also realize that if people could lose their arms after sleeping on them there wouldn't be many people left on earth with two good arms. Apparently the rational part of my mind wakes up last.

Scott Adams on waking up with a numb arm.

Comment author: ViEtArmis 17 September 2013 09:16:38PM 5 points [-]

I woke up one time with both arms completely numb. I tried to turn the light on and instead fell out of bed. I felt certain that I was going to die right then.

Comment author: MugaSofer 17 September 2013 08:10:43AM 3 points [-]

I've never had this exact experience - I don't sleep on my arm - but waking up stupid? Definitely.

Comment author: Desrtopa 16 September 2013 11:36:55PM 1 point [-]

Odd, this has never happened to me. Not the experience of waking up with a numb arm, but the experience of being at all worried about it.

I was quite worried the first time I experienced a numb arm that was both completely dead to sensation and totally immobile for multiple minutes, but once it had happened before, successive occurrences were no longer particularly worrying.

Comment author: bbleeker 17 September 2013 07:59:46AM *  1 point [-]

I've experienced 'pins and needles' many times, but a totally 'dead' arm only once. I didn't have any control over it, and when I tried to move it I hit myself in the nose. Quite hard, too!

Comment author: Desrtopa 17 September 2013 02:12:33PM 4 points [-]

When I experienced a "totally dead arm," I didn't just not have control over it, I couldn't even wiggle my fingers. It was pretty frightening, since as far as I knew the arm might have experienced extensive cell death from blood deprivation; after all, I had no sign of it being operational at all. My circulation was poor enough that I couldn't even tell if it was still warm, beyond residual heat from my lying on it.

It's happened twice again since then though, and the successive occasions were not particularly distressing.

Comment author: ShardPhoenix 05 October 2013 04:33:18AM *  1 point [-]

IIRC the numbness is caused by nerve compression, not blood-flow cutoff.

edit: Apparently it can be either way: http://www.wisegeek.org/what-are-the-most-common-causes-of-numbness-while-sleeping.htm

edit2: And another source claims it's due to nerves, so I dunno. I do find the nerve explanation more plausible than the blood-flow one.

Comment author: bbleeker 18 September 2013 07:49:33AM -1 points [-]

I couldn't do anything with the arm either, it just felt as if it wasn't there. It was decades ago, but I think I used my shoulder muscles to try and move it. I was probably scared too, but that part of the memory is quite vague.

Comment author: jsbennett86 15 September 2013 10:47:47PM *  9 points [-]

A term that means almost anything means almost nothing. Such a term is a convenient device for those who have almost nothing to say.

Richard Mitchell - Less Than Words Can Say

Comment author: Mestroyer 16 September 2013 04:30:36PM 0 points [-]

Counterexample: "it".

Comment author: Vaniver 16 September 2013 04:34:29PM *  4 points [-]

That seems to support the quote, actually: "it" typically has a single antecedent, or a small enough set that the correct antecedent can be easily identified by context. When it cannot be identified by context, this is seen as a writing error (such as here, here, or here).

Comment author: Document 16 September 2013 05:59:03PM *  0 points [-]

Comment author: LM7805 23 September 2013 09:45:46PM 1 point [-]

All of these are dummy subjects. English does not allow a null anaphor in subject position; there are other languages that do. ("There", in that last clause, was also a dummy pronoun.)

Comment author: Vaniver 16 September 2013 07:02:29PM 3 points [-]

For each of those, the meaning of "it" is clear from context. If it weren't, then it would be uncommunicative writing.

Comment author: Rain 15 September 2013 01:39:50PM *  0 points [-]

Comment author: wedrifid 15 September 2013 02:00:30PM 4 points [-]

What is the intended lesson or rationality insight here?

Comment author: pragmatist 15 September 2013 09:58:02AM *  1 point [-]

The failure mode of clever is "asshole."

-- John Scalzi

Comment author: DanArmak 15 September 2013 10:36:47AM *  6 points [-]

So is the failure mode of many people who are not, and don't hold themselves to be, clever. I fail to see the correlation.

ETA: Scalzi addresses a very specific topic, and even then he really seems to address some specific anecdote that he doesn't share. I don't think it's a rationality quote.

Comment author: jsbennett86 14 September 2013 09:41:19PM *  12 points [-]

Reality is one honey badger. It don’t care. About you, about your thoughts, about your needs, about your beliefs. You can reject reality and substitute your own, but reality will roll on, eventually crushing you even as you refuse to dodge it. The best you can hope for is to play by reality’s rules and use them to your benefit.

Mark Crislip - Science-Based Medicine

Comment author: ChristianKl 20 September 2013 07:43:57PM *  4 points [-]

Reality is one honey badger. It don’t care. About you, about your thoughts, about your needs, about your beliefs.

Reality cares about your beliefs.

People who don't believe in ego depletion don't get as ego-depleted as people who do believe in it.
People who believe that stress is unhealthy have a higher mortality when they have high stress than people who don't hold that belief.

Comment author: DanielLC 23 September 2013 01:32:36AM *  4 points [-]

I would expect that if you have more ego depletion than other people it would result in you being more likely to believe in ego depletion. Similarly, if you're suffering health problems due to stress, it would make you think stress is unhealthy.

Your point still stands. Reality does care about your beliefs when the relevant part of reality is you.

Comment author: [deleted] 24 September 2013 02:11:47PM 1 point [-]

I'd guess that there is causation in both directions to some extent, leading to a positive feedback loop.

Comment author: ChristianKl 23 September 2013 10:53:07AM 1 point [-]

I would expect that if you have more ego depletion than other people it would result in you being more likely to believe in ego depletion.

How high is your confidence that the effect can be completely explained that way?

Comment author: DanielLC 24 September 2013 12:11:01AM 1 point [-]

Not that high, but it does throw into question any studies showing a correlation, and it seems strange to cite an example there's no evidence for.

Comment author: ChristianKl 24 September 2013 01:30:50PM -1 points [-]

Not that high

What does that mean in numbers?

Comment author: Benito 21 September 2013 10:00:44AM 3 points [-]

The placebo effect has little relevant effect. People who believe they can fly don't fare better when pushed off cliffs. A world where you believe x is different from a world where you believe not-x, and that has slight physical effects given that we are embodied, but to say 'Reality cares about your beliefs' sounds far too much like a defence of idealism, or the idea that 'everyone has their own truths'.

Comment author: ChristianKl 21 September 2013 12:30:55PM 2 points [-]

The placebo effect has little relevant effect.

I'm not sure whether that's true; the last time I investigated that claim I didn't find the evidence compelling. Placebos are also a relatively clumsy way of changing beliefs intentionally.

People who believe they can fly don't fare better when pushed off cliffs.

How do you know? If you pick a height that kills 50% of the people who don't believe that they can fly, I'm not sure that the number of people killed is the same for those who hold that belief. The belief is likely to make people more relaxed when they are pushed over the cliff, which is helpful for surviving the experience.

I doubt that you'll find many people who hold that belief with the same certainty that they believe the sun will rise tomorrow. If you don't like idealism, argue based on the beliefs that people actually hold in reality instead of escaping into thought experiments.

A world where you believe x is different from a world where you believe not-x, and that has slight physical effects given that we are embodied,

I would call 20,000 dead Americans per year from the belief that stress is unhealthy more than a slight physical effect.

'Reality cares about your beliefs' sounds far too much like a defence of idealism

I don't think that the fact that you pattern-match it that way speaks against the idea. I think the original quote comes from a place of Descartes-inspired mind-body dualism. We are embodied, and the content of our minds has effects.

Comment author: [deleted] 22 September 2013 10:17:19AM *  1 point [-]

I doubt that you'll find many people who hold that belief with the same certainty that they believe the sun will rise tomorrow. If you don't like idealism, argue based on the beliefs that people actually hold in reality instead of escaping into thought experiments.

The original quote is taken from an article about the vaccine controversy. People who don't vaccinate because they believe that God will protect them or whatever actually exist, and they may be slightly less likely to fall ill than people who don't vaccinate and don't hold that belief, but a lot more likely to fall ill than people who do vaccinate.

Comment author: ChristianKl 22 September 2013 10:41:25AM *  0 points [-]

I think that there are few Christians who believe that no Christian who doesn't vaccinate will get measles.

Many Christians do believe that there's evil in the world. They believe that, for some complicated reason they don't understand, God will sometimes allow evil to exist. God is supposed to be an agent whose actions a mere human can't predict with certainty.

According to that frame, if someone gets measles it's because God wanted that to happen. If, on the other hand, a child dies because of an adverse reaction to a vaccine that the doctor gave the child, then the parent shares responsibility for that harm because he allowed the vaccination.

I also don't know how the example of Japan is supposed to convince any Christian that his supposed belief in God preventing measles among believing Christians is wrong.

While we are on the topic of the effect of beliefs, I don't think there's good research about how the beliefs that people hold affect whether they get illnesses. Part of the reason is that most doctors who do studies about the immune system don't think that beliefs are in their domain, because they study the body and not the mind.

Comment author: Estarlio 13 September 2013 02:54:44PM 5 points [-]

"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

-- Nietzsche, Morgenröte. Gedanken über die moralischen Vorurteile

Comment author: NancyLebovitz 13 September 2013 12:49:38PM 15 points [-]

Personally, a huge breakthrough for me was realizing I could view social situations as information-gathering opportunities (as opposed to pass-fail tests). If something didn't work - that wasn't a fail, it was DATA. If something did work... also data. I could experiment! People's reactions weren't eternal judgments about my worth, but interesting feedback on the approach I had chosen that day.

parodie

Comment author: jsbennett86 11 September 2013 06:03:45AM 21 points [-]

If you cannot examine your thoughts, you have no choice but to think them, however silly they may be.

Richard Mitchell - Less Than Words Can Say

Comment author: gwern 10 September 2013 04:18:02PM 14 points [-]

"...By the end of August, I was mentally drained, more drained, I think, than I had ever been. The creative potential, the capacity to solve problems, changes in a man in ebbs and flows, and over this he has little control. I had learned to apply a kind of test. I would read my own articles, those I considered the best. If I noticed in them lapses, gaps, if I saw that the thing could have been done better, my experiment was successful. If, however, I found myself reading with admiration, that meant I was in trouble."

His Master's Voice, Stanislaw Lem; p. 106 from the Northwestern University Press 3rd edition, 1999

Comment author: Kawoomba 10 September 2013 04:35:07PM 1 point [-]

over this he has little control

I like the self-test idea, but this sort of defeatism is kind of, well, self-defeating.

Comment author: gwern 02 October 2013 09:17:21PM 2 points [-]

I think it's true. Short of crude measures like stimulants, it does seem to ebb and flow for no obvious reasons. And it's useful to know if you're currently in a doldrum - you can give up forcing yourself to try to work on creative material, and turn to all the usual chores and small tasks that build up.

Comment deleted 10 September 2013 02:45:48PM [-]

Comment author: Estarlio 10 September 2013 03:57:27PM *  1 point [-]

Why are extremism and fanaticism correlated? In a world of Bayesians, there'd be a negative correlation. People would hold extreme views lightly, for at least three reasons. [...]

For fairness' sake.

Comment author: Gunnar_Zarncke 10 September 2013 12:11:54PM *  0 points [-]

From http://metamodern.com/2009/05/17/how-to-understand-everything-and-why/

To avoid blunders and absurdities, to recognize cross-disciplinary opportunities, and to make sense of new ideas, requires knowledge of at least the outlines of every field that might be relevant to the topics of interest. By knowing the outlines of a field, I mean knowing the answers, to some reasonable approximation, to questions like these:

What are the physical phenomena? What causes them? What are their magnitudes? When might they be important? How well are they understood? How well can they be modeled? What do they make possible? What do they forbid?

And even more fundamental than these are questions of knowledge about knowledge:

What is known today? What are the gaps in what I know? When would I need to know more to solve a problem? How could I find what I need?

Comment author: ChristianKl 10 September 2013 06:11:27PM 1 point [-]

I don't think that's true. I can learn that I feel better when I'm exposed to sunlight without knowing the ins and outs of vitamin D biochemistry.

The thing that matters is to accurately measure whether I feel better and to measure when I'm exposed to sunlight.

Comment author: johnlawrenceaspden 08 September 2013 04:53:03PM 19 points [-]

When you know a thing, to hold that you know it, and when you do not know a thing, to allow that you do not know it. This is knowledge.

Confucius, Analects

Comment author: philh 08 September 2013 01:53:01AM *  34 points [-]

Fran: A million billion pounds says you’ll have nothing to show me.

Bernard: Oh, the old million billion. Why don’t we make it interesting, why don’t we say 50?

Black Books, Elephants and Hens. H/t /u/mrjack2 on /r/hpmor.

Comment author: aausch 07 September 2013 07:13:02PM 10 points [-]

“The first magical step you can do after a flood,” he said, “is get a pump and try to redirect water.”

-- Richard James, founding priest of a Toronto based Wicca church, quoted in a thegridto article

Comment author: Benito 07 September 2013 09:57:19AM 5 points [-]

Secondly, you might have the nagging feeling that not much has happened, really. We wanted an answer to the question "What is truth?", and all we got was trivial truth-equivalences, and a definition of truth for sentences with certain expressions, that showed up again on the right-hand side of that very definition. If that is on your mind, then you should go back to the beginning of this lecture and ask yourself what kind of answer you expected to our initial question. Reconsider, "What is 'grandfather-hood'?". Well, define it in familiar terms. What is 'truth'? Well, define it in familiar terms. That's what we did. If that's not good enough, why?

by Hannes Leitgeb, from his joint teaching course with Stephan Hartmann (author of Bayesian Epistemology) on Coursera, entitled 'An Introduction to Mathematical Philosophy'.

The course topics are "Infinity, Truth, Rational Belief, If-Then, Confirmation, Decision, Voting, and Quantum Logic and Probability". In many ways, a very LW-friendly course, with many mentions and discussions of people like Tarski, Gödel etc.

Comment author: anandjeyahar 06 September 2013 08:28:10PM 0 points [-]

The biggest problem in the world is too many words. We should be able to communicate distribution graphs of past experiences directly from one human brain to another. ~Aang Jie

Comment author: RichardKennaway 06 September 2013 04:48:15PM *  8 points [-]

Do not deceive yourself with idle hopes
That in the world to come you will find life
If you have not tried to find it in this present world.

Theophanis the Monk, "The Ladder of Divine Grace"

Comment author: Eugine_Nier 06 September 2013 07:32:37AM 7 points [-]

Furthermore, to achieve justice -- to deter, to exact retribution, to make whole the victim, or to heal the sick criminal, whichever one or more of these we take to be the goal of justice -- we must almost always respond to force with force. Taken in isolation that response will itself look like an initiation of force. Furthermore, to gather the evidence we need in most cases to achieve sufficiently high levels of confidence -- whether balance of the probabilities, clear and convincing evidence, or beyond a reasonable doubt -- we often have to initiate force with third parties -- to compel them to hand over goods, to let us search their property, or to testify. If politics could be deduced this might be called the Central Theorem of Politics -- we can't properly respond to a global initiation of force without local initiations of force.

Nick Szabo

Comment author: Torello 06 September 2013 01:23:53PM 3 points [-]

Is this a similar message to Penn Jillette saying:

"If you don’t pay your taxes and you don’t answer the warrant and you don’t go to court, eventually someone will pull a gun. Eventually someone with a gun will show up. "

or did I miss the boat?

Comment author: Manfred 06 September 2013 06:27:44PM 1 point [-]

Well, it's similar, but for two differences:

1) It uses a different and wider category of examples. Viz. "initiate force [...] to compel them to hand over goods, to let us search their property, or to testify."

2) It makes a consequentialist claim about forcing people to e.g. let us search their property for evidence: "we can't properly respond to a global initiation of force without local initiations of force."

The second difference here is important because it directly contradicts the typical libertarian claim of "if we force people to do things much less than we currently do, that will lead to good consequences." The first difference is rhetorically important because it is a place where people's gut reaction is more likely to endorse the use of force, and people have been less exposed to memes about forcibly searching people's property (compared to the ubiquity of people disliking taxes) that would cause them to automatically respond rather than thinking.

Comment author: Eugine_Nier 09 September 2013 01:27:38AM 1 point [-]

The second difference here is important because it directly contradicts the typical libertarian claim of "if we force people to do things much less than we currently do, that will lead to good consequences."

Actually that isn't what Szabo is saying. His point is to contradict the claim of the anarcho-capitalists that "if we never force people to do things, that will lead to good consequences."

Comment author: amitpamin 05 September 2013 07:39:35PM 8 points [-]

Professor Zueblin is right when he says that thinking is the hardest work many people ever have to do, and they don't like to do any more of it than they can help. They look for a royal road through some short cut in the form of a clever scheme or stunt, which they call the obvious thing to do; but calling it doesn't make it so. They don't gather all the facts and then analyze them before deciding what really is the obvious thing.

From Obvious Adam, a business book published in 1916.

Comment author: shminux 05 September 2013 05:44:37PM *  4 points [-]

I'm avoiding the term "free will" here because experience shows that using that term turns into a debate about the definition. I prefer to say we're all just particles bumping around. Personally, I don't see how any of those particles, no matter how they are arranged, can sometimes choose to ignore the laws of physics and go their own way.

For purely practical reasons, the legal system assigns "fault" to some actions and excuses others. We don't have a good alternative to that system. But since we are all a bunch of particles bumping around according to the laws of physics (or perhaps the laws of our programmers) there is no sense of "fault" that is natural to the universe.

Slightly edited from Scott Adams' blog.

And a similar sentiment from SMBC comics.

Comment author: simplicio 06 September 2013 08:20:12PM 8 points [-]

I prefer to say we're all just particles bumping around. Personally, I don't see how any of those particles, no matter how they are arranged, can sometimes choose to ignore the laws of physics and go their own way.

I personally can't see how a monkey turns into a human. But that's irrelevant because that is not the claim of natural selection. This makes a strawman of most positions that endorse something approximately like free will. Also:

For purely practical reasons, the legal system assigns "fault" to some actions and excuses others.

Just the legal system? Gah. Everybody on earth does this about 200 times a day.

Comment author: ChristianKl 10 September 2013 06:19:37PM -1 points [-]

I personally can't see how a monkey turns into a human. But that's irrelevant because that is not the claim of natural selection. This makes a strawman of most positions that endorse something approximately like free will.

I don't think that most positions that endorse free will deny that evolution happens.

When it comes to contemporary philosophers, I think only a minority of those who advocate for the existence of free will deny evolution.

Comment author: simplicio 10 September 2013 09:21:34PM 2 points [-]

I know. I was making an analogy between a strawman of NS and a strawman of free will. Please read the "this" in "This makes a strawman" as referring to the OP.

Comment author: brainoil 05 September 2013 11:41:37AM 8 points [-]

I was instructed long ago by a wise editor, "If you understand something you can explain it so that almost anyone can understand it. If you don't, you won't be able to understand your own explanation." That is why 90% of academic film theory is bullshit. Jargon is the last refuge of the scoundrel.

Roger Ebert

Comment author: Manfred 05 September 2013 09:25:25PM 11 points [-]

Would be nice if this were true.

Comment author: brainoil 06 September 2013 08:10:56AM *  2 points [-]

It's probably true for academic film theory. I mean how hard could it really be?

Comment author: Estarlio 04 September 2013 11:10:25PM 15 points [-]

Foundations matter. Always and forever. Regardless of domain. Even if you meticulously plug all abstraction leaks, the lowest-level concepts on which a system is built will mercilessly limit the heights to which its high-level “payload” can rise. For it is the bedrock abstractions of a system which create its overall flavor. They are the ultimate constraints on the range of thinkable thoughts for designer and user alike. Ideas which flow naturally out of the bedrock abstractions will be thought of as trivial, and will be deemed useful and necessary. Those which do not will be dismissed as impractical frills — or will vanish from the intellectual landscape entirely. Line by line, the electronic shanty town grows. Mere difficulties harden into hard limits. The merely arduous turns into the impossible, and then finally into the unthinkable.

[...]

The ancient Romans could not know that their number system got in the way of developing reasonably efficient methods of arithmetic calculation, and they knew nothing of the kind of technological paths (i.e. deep-water navigation) which were thus closed to them.

Comment author: Anatoly_Vorobey 04 September 2013 11:03:05PM 1 point [-]

Isherwood was evidently anxious to convince the youth that the relationship he desired was that of lovers and friends rather than hustler and client; he felt possessive and was jealous of Bubi's professional contacts with other men, and the next day set off to resume his attempt to transform the rent boy into the Ideal Friend. Coached by Auden, whose conversational German was a good deal better than his own at this stage, he delivered a carefully prepared speech; he had, however, overlooked the Great Phrase-book Fallacy, and was quite unable to understand Bubi's reply.

-- Norman Page, Auden and Isherwood: The Berlin Years

Comment author: RolfAndreassen 06 September 2013 04:28:23AM 3 points [-]

Not quite seeing this as a rationality quote. What's your reasoning?

Comment author: Anatoly_Vorobey 06 September 2013 01:56:52PM 3 points [-]

"The Great Phrase-book Fallacy" is both amusing and instructive. I laughed when I read it because I remembered I'd been a victim of it too once, in less seedy circumstances.

Comment author: Torello 04 September 2013 09:50:49PM 4 points [-]

But it's not who you are underneath, it's what you do that defines you.

-Rachel Dawes, Batman Begins

Comment author: ITakeBets 04 September 2013 09:00:31PM 7 points [-]

Q: Why are Unitarians lousy singers? A: They keep reading ahead in the hymnal to see if they agree with it.

Comment author: Yahooey 04 September 2013 07:55:15PM 14 points [-]

There are no absolute certainties in this universe. A man must try to whip order into a yelping pack of probabilities, and uniform success is impossible.

— Jack Vance, The Languages of Pao

Comment author: wedrifid 12 September 2013 11:59:53PM 10 points [-]

There are no absolute certainties in this universe [...] is impossible.

Improbable would seem more appropriate.

Comment author: Moss_Piglet 04 September 2013 04:38:15PM 0 points [-]

For to translate man back into nature; to master the many vain and fanciful interpretations and secondary meanings which have been hitherto scribbled and daubed over that eternal basic text homo natura; to confront man henceforth with man in the way in which, hardened by the discipline of science, man today confronts the rest of nature, with dauntless Oedipus eyes and stopped-up Odysseus ears, deaf to the siren songs of old metaphysical bird-catchers who have all too long been piping to him 'you are more! you are higher! you are of a different origin!' - that may be a strange and extravagant task but it is a task - who would deny that? Why did we choose it, this extravagant task? Or, to ask the question differently; 'why knowledge at all?' - Everyone will ask us about that. And we, thus pressed, we who have asked ourselves that same question a hundred times, we have found and can find no better answer...

Friedrich Nietzsche

Comment author: Estarlio 10 September 2013 01:13:41PM 1 point [-]

Because it's really really useful?

Comment author: Moss_Piglet 10 September 2013 02:37:14PM 1 point [-]

Jeez, people really don't appreciate poetic language around here, huh?

(That would probably be close to my answer too, I'm just a little stunned by all the downvotes.)

Comment author: RolfAndreassen 04 September 2013 04:30:32PM 3 points [-]

Trouble rather the tiger in his lair than the sage among his books. For to you kingdoms and their armies are things mighty and enduring, but to him they are but toys of the moment, to be overturned with the flick of a finger.

-- Gordon R. Dickson, "The Tactics of Mistake".

Comment author: Dentin 04 September 2013 04:09:34PM *  32 points [-]

There is no glory, no beauty in death. Only loss. It does not have meaning. I will never see my loved ones again. They are permanently lost to the void. If this is the natural order of things, then I reject that order. I burn here my hopelessness, I burn here my constraints. By my hand, death shall fall. And if I fail, another shall take my place ... and another, and another, until this wound in the world is healed at last.

Anonymous, found written in the Temple at 2013 Burning Man

Comment author: Pavitra 04 September 2013 05:57:17PM 16 points [-]

Part of that seems to be from HPMOR. I'm not sure where the rest comes from.

Comment author: Dentin 04 September 2013 06:42:50PM 6 points [-]

Yeah, almost certainly HPMOR inspired. Eliezer's work has spread far.

Comment author: JonMcGuire 04 September 2013 04:03:52PM 27 points [-]

But, of course, the usual response to any new perspective is "That can't be right, because I don't already believe it."

Eugene McCarthy, Human Origins: Are We Hybrids?

Comment author: [deleted] 11 September 2013 02:45:52PM 5 points [-]

As a non-biologist, I kind-of suspect that article is supposed to be some kind of elaborate joke. It sounds convincing to me, but then again, so did Sokal (1996) to non-physicists; my gut feelings' prior probability for that claim is tiny (but probably tinier than rationally warranted; possibly, because it kind-of sounds like a parody of ancient astronaut hypotheses); and I can't find any mention of any mammal inter-order hybrids on Wikipedia.

Comment author: Ishaan 14 September 2013 11:27:59PM *  6 points [-]

This is a blatant parody. The probability that pig-chimp hybrids were involved in human origins is at Pascal-low levels.

It sounds convincing to me

This is worthy of notice. It really shouldn't have been remotely convincing.

Can you identify the factors which caused you to give the statements in this article more credibility than you would have given to any random internet source of an unlikely-sounding claim? Information about what went wrong here might be useful from a rationality-increasing perspective.

Comment author: [deleted] 21 September 2013 09:46:48AM 4 points [-]

Can you identify the factors which caused you to give the statements in this article more credibility than you would have given to any random internet source of an unlikely-sounding claim?

Mostly, the fact that I don't know shit about biology, and the writer uses full, grammatical sentences, cites a few references, anticipates possible counterarguments and responds to them, and more generally doesn't show many of the obvious signs of crackpottery.

Comment author: BIbster 24 September 2013 09:48:22AM 2 points [-]

This is exactly why I (amongst many?) find it so hard to separate the good stuff from the bad stuff. It's the way the matter is brought to you, not the matter itself. A very thoughtful way of presenting it, as Army1987 says: references, anticipation of counterarguments, etc.

Comment author: ChristianKl 13 September 2013 07:34:16PM 6 points [-]

I would also be very wary of McCarthy's argument. Having studied bioinformatics myself, I would say:

Show me the human genes that you think come from pigs. If you name specific genes, we can run our algorithms. Don't talk about stuff like the form of the vertebrae when we have sequenced the genomes.

Comment author: gattsuru 11 September 2013 04:46:13PM *  11 points [-]

Sokal's paper brought up the possibility of a morphogenetic field affecting quantum mechanics, which sounds slightly less rigorous than a Discworld joke -- Sir Pratchett can at least get the general aspects of quantum physics right. Likewise, Mrs. Jenna Moran's RPGs have more meaningful statements on set theory than Sokal's joking conflation of the axiom of equality and feminist/racial equality. I'd expect a lot of non-physicists to consider it unconvincing, especially if you allow them the answer "this paper makes no sense".

((I'd honestly expect false positives, more than false negatives, when asking average persons to /skeptically/ test papers on quantum mechanics for fraud. Thirty pages of math showing a subatomic particle to be charming has language barrier problems.))

The greater concern here is that the evidence Mr. McCarthy uses to support his assertions is incredibly weak. The vast majority of his list of interspecies hybrids, for example, are either intra-familiae or completely untrustworthy (some are simply appeals to legend or internet hoaxes, like the cabbit or dog-bear hybrids). The only remotely trustworthy example of variation comparable to a chimpanzee-pig hybrid is an alleged rabbit-rat cross, but chasing the citation shows that the claimed evidence likely had a different (and at the time of the original experiment, unknown) cause and that the fertilization never occurred. Other cases conflate mating behavior and fertility, by which definition humans should be capable of hybridizing with rubber and glass. The sheer number of untrustworthy citations -- and, more importantly, the fact that they're mixed together with the verifiable and known-good ones -- is a huge red flag.

The quote's interesting -- and correct! as anyone who's been shown the double-slit experiment can attest -- but there are probably better ways to say it and theories to associate it with.

Comment author: ChristianKl 13 September 2013 07:19:31PM 4 points [-]

Sokal's paper brought up the possibility of a morphogenetic field affecting quantum mechanics, which sounds slightly less rigorous than a Discworld joke

The concept doesn't come from Sokal but from Rupert Sheldrake who used the term in his 1995 book (http://www.co-intelligence.org/P-morphogeneticfields.html).

There are plenty of New Age people who seriously believe that the world works that way.

Comment author: BIbster 24 September 2013 09:53:11AM *  3 points [-]

There are plenty of New Age people who seriously believe that the world works that way.

Or find it a reasonable/plausible theory... I'm married to one who evolved into reading that pseudo-science instead of the Stephen Hawking she used to read 20 years ago...

Comment author: Manfred 11 September 2013 03:17:09PM 6 points [-]

Yeah, it's a good quote promoting open-mindedness, but of course that's because crackpots spend a lot of time trying to hide their theories from any criticism in the name of open-mindedness.

Comment author: JQuinton 04 September 2013 03:47:11PM *  15 points [-]

Somebody could give me this glass of water and tell me that it’s water. But there’s a lot of clear liquids out there and I might actually have a real case that this might not be water. Now most cases when something like a liquid is in a cup it’s water.

A good way to find out if it’s water is to test if it has two hydrogens per oxygen in each molecule in the glass and you can test that. If it evaporates like water, if it tastes like water, freezes like water… the more tests we apply, the more sure we can be that it’s water.

However, if it were some kind of acid and we started to test and we found that the hydrogen count is off, the oxygen count is off, it doesn’t taste like water, it doesn’t behave like water, it doesn’t freeze like water, it just looks like water. If we start to do these tests, the more we will know the true nature of the liquid in this glass. That is how we find truth. We can test it any number of ways; the more we test it, the more we know the truth of what it is that we’re dealing with.

  • An ex-Mormon implicitly describing Bayesian updates
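The repeated-testing procedure the quote describes is just iterated Bayes' theorem: each passed test shifts probability toward "this is water." A minimal Python sketch; the likelihood numbers are made-up illustrations, not measurements:

```python
# Each passed test (evaporation, taste, freezing, ...) is evidence.
# update() applies Bayes' theorem once for a test that came out "pass".
def update(prior, p_pass_if_water, p_pass_if_not_water):
    p_pass = p_pass_if_water * prior + p_pass_if_not_water * (1 - prior)
    return p_pass_if_water * prior / p_pass

p = 0.5  # start undecided about "this clear liquid is water"
tests = [
    (0.99, 0.5),  # evaporates like water: common among clear liquids
    (0.99, 0.2),  # tastes like water: rarer among non-water liquids
    (0.99, 0.1),  # freezes like water: rarer still
]
for p_if_water, p_if_not in tests:
    p = update(p, p_if_water, p_if_not)

print(round(p, 3))  # the more tests passed, the closer to certainty
```

Each individual test is weak on its own; it is the accumulation of passed tests that drives the posterior up, which is the quote's point.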

Comment author: arborealhominid 05 September 2013 12:08:47AM *  18 points [-]

Another good one from the same source:

Truth can be sliced and analyzed in 100 different ways and it will always remain true.

Falsehood on the other hand can only be sliced a few different ways before it becomes increasingly obvious that it is false.

Comment author: NancyLebovitz 04 September 2013 02:43:55PM *  10 points [-]

The merit of The Spy Who Came in from the Cold, then – or its offence, depending where you stood – was not that it was authentic, but that it was credible.

John le Carré, explaining that he didn't have insider information about the intelligence community, and that if he had, he would not have been allowed to publish The Spy Who Came in from the Cold, but that a great many people who thought James Bond was too implausible wanted to believe that le Carré's book was the real deal.

Comment author: SatvikBeri 03 September 2013 09:45:33PM 26 points [-]

I discovered as a child that the user interface for reprogramming my own brain is my imagination. For example, if I want to reprogram myself to be in a happy mood, I imagine succeeding at a difficult challenge, or flying under my own power, or perhaps being able to levitate objects with my mind. If I want to perform better at a specific task, such as tennis, I imagine the perfect strokes before going on court. If I want to fall asleep, I imagine myself in pleasant situations that are unrelated to whatever is going on with my real life.

My most useful mental trick involves imagining myself to be far more capable than I am. I do this to reduce the risk that I turn down an opportunity just because I am clearly unqualified[...] As my career with Dilbert took off, reporters asked me if I ever imagined I would reach this level of success. The question embarrasses me because the truth is that I imagined a far greater level of success. That's my process. I imagine big.

Scott Adams

Comment author: philh 03 September 2013 07:46:55PM 48 points [-]

"However, there is something they value more than a man's life: a trowel."

"Why a trowel?"

"If a bricklayer drops his trowel, he can do no more work until a new one is brought up. For months he cannot earn the food that he eats, so he must go into debt. The loss of a trowel is cause for much wailing. But if a man falls, and his trowel remains, men are secretly relieved. The next one to drop his trowel can pick up the extra one and continue working, without incurring debt."

Hillalum was appalled, and for a frantic moment he tried to count how many picks the miners had brought. Then he realized. "That cannot be true. Why not have spare trowels brought up? Their weight would be nothing against all the bricks that go up there. And surely the loss of a man means a serious delay, unless they have an extra man at the top who is skilled at bricklaying. Without such a man, they must wait for another one to climb from the bottom."

All the pullers roared with laughter. "We cannot fool this one," Lugatum said with much amusement.

Ted Chiang, Tower of Babylon

Comment author: Salemicus 03 September 2013 07:20:38PM *  15 points [-]

[This claim] is like the thirteenth stroke of a crazy clock, which not only is itself discredited but casts a shade of doubt over all previous assertions.

A. P. Herbert, Uncommon Law.

Comment author: Eliezer_Yudkowsky 04 September 2013 05:03:45AM 22 points [-]

Caution in applying such a principle seems appropriate. I say this because I've long since lost track of how often I've seen on the Internet, "I lost all respect for X when they said [perfectly correct thing]."

Comment author: wedrifid 13 September 2013 12:07:39AM 4 points [-]

Caution in applying such a principle seems appropriate. I say this because I've long since lost track of how often I've seen on the Internet, "I lost all respect for X when they said [perfectly correct thing]."

I don't lose all respect for X based on one thing they say, but I do increase my respect for them if the controversial or difficult things they say are correct, and I conserve expected evidence.
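"Conserving expected evidence" refers to the rule that, before you observe anything, the probability-weighted average of your possible posteriors must equal your prior; you cannot expect in advance to update in a predetermined direction. A quick numeric check in Python (all numbers are illustrative assumptions):

```python
# Prior and likelihoods for a hypothesis H and evidence E (made-up numbers).
p_h = 0.30              # prior P(H)
p_e_given_h = 0.80      # P(E | H)
p_e_given_not_h = 0.20  # P(E | not-H)

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # P(E)
post_if_e = p_e_given_h * p_h / p_e                    # P(H | E)
post_if_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)    # P(H | not-E)

# Expected posterior, weighting each possible outcome by its probability:
expected = post_if_e * p_e + post_if_not_e * (1 - p_e)
print(round(expected, 10))  # 0.3 -- exactly the prior
```

Seeing E raises the posterior and seeing not-E lowers it, but the two shifts cancel exactly in expectation.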

Comment author: Salemicus 04 September 2013 01:36:55PM 15 points [-]

I agree. It strengthens your point to note that, although the quote is normally used seriously, the author intended it mischievously. In context, the "thirteenth stroke" is a defendant, who has successfully rebutted all the charges against him, making the additional claim that "this [is] a free country and a man can do what he likes if he does nobody any harm."

This "crazy" claim convinces the judge to convict him anyway.

Comment author: Zvi 04 September 2013 11:57:17AM 7 points [-]

For most people, is it necessarily wrong to lose all respect for someone in response to a true statement? Most people are respecting things other than truth, and the point "anyone respectable would have known not to say that" can remain perfectly valid.

Comment author: Salemicus 03 September 2013 07:11:37PM *  15 points [-]

A man who has made up his mind on a given subject twenty-five years ago and continues to hold his political opinions after he has been proved to be wrong is a man of principle; while he who from time to time adapts his opinions to the changing circumstances of life is an opportunist.

A. P. Herbert, Uncommon Law.

Comment author: Nomad 03 September 2013 05:28:00PM *  24 points [-]

The “I blundered and lost, but the refutation was lovely!” scenario is something lovers of truth and beauty can appreciate.

Jeremy Silman

Comment author: Cthulhoo 03 September 2013 10:49:52AM 53 points [-]

In some species of Anglerfish, the male is much smaller than the female and incapable of feeding independently. To survive he must smell out a female as soon as he hatches. He bites into her, releasing an enzyme which fuses him to her permanently. He lives off her blood for the rest of his life, providing her with sperm whenever she needs it. Females can have multiple males attached. The moral is simple: males are parasites, women are sluts. Ha! Just kidding! The moral is: don't treat actual animal behavior like a fable. Generally speaking, animals have no interest in teaching you anything.

Oglaf (Original comic NSFW)

Comment author: sixes_and_sevens 04 September 2013 12:39:28AM 17 points [-]

How have I been reading Oglaf for so long without knowing about the epilogues?

Comment author: Fronken 12 September 2013 02:28:04PM 1 point [-]

... the what.

Ahh I just finished that.

Comment author: FiftyTwo 07 September 2013 08:10:51PM 6 points [-]

For anyone unaware, SMBC has an additional joke panel when you mouse over the red button at the bottom

Comment author: MugaSofer 07 September 2013 10:14:35PM 1 point [-]

Actually, you have to click it now. Just a heads up to anyone reading this and trying to find them.

Comment author: PhilGoetz 07 September 2013 08:55:17PM 0 points [-]

AAAARGH!!! Why do they keep it secret?

That's almost as annoying as that you have to know the name of Zach's wife to create an account and comment, when for a long time the name of Zach's wife was not findable either on the website or via Google.

(I don't remember her name.)

Thank you very much.

Comment author: MugaSofer 07 September 2013 10:17:42PM 2 points [-]

I ... was not aware it was even possible to comment on SMBC.

AAAARGH!!! Why do they keep it secret?

It's an example of failing to update traditions after their original purpose has eroded, for the record. It was originally a reward for voting, which is why SMBC fans still refer to it as a "votey". The voting atrophied, while creating the reward became part of his routine.

Comment author: Cyan 07 September 2013 09:34:11PM 0 points [-]

Kelly Weinersmith.

Comment author: Kawoomba 07 September 2013 09:07:13PM 2 points [-]

It's usually the funnest panel, too.

Comment author: Eliezer_Yudkowsky 04 September 2013 04:58:31AM 16 points [-]

...oh crap, I'm going to have to reread the whole thing, aren't I.

Comment author: David_Gerard 12 September 2013 10:58:29AM 1 point [-]

And the mouseovers. And the alt text, which is different again.

Comment author: accolade 26 September 2013 04:53:43AM *  2 points [-]

And the mock ads at the bottom.

ETA: Explanation: Sometimes the banner at the bottom will contain an actual (randomized) ad, but many of the comics have their own funny mock ad associated. (When I noticed this, I went through all the ones I had already read again, to not miss out on that content.)

(I thought I'd clarify this, because this comment got downvoted - possibly because the downvoter misunderstood it as sarcasm?)

Comment author: NancyLebovitz 13 September 2013 12:43:20PM 0 points [-]

What's the difference between a mouseover and an alt text?

Comment author: tut 13 September 2013 02:50:40PM *  0 points [-]

Mouseover is javascript EDIT: or CSS and shows up when you hover your pointer over some trigger area. Alt text is plain HTML and shows up when the image (or whatever it is alt text for) doesn't load.

Comment author: wedrifid 14 September 2013 09:50:00AM 1 point [-]

Mouseover is javascript and shows up when you hover your pointer over some trigger area.

Javascript is not actually required. CSS handles it.

Comment author: David_Gerard 13 September 2013 07:51:27PM 2 points [-]

No, mouseover is TITLE= and alt text is ALT=. Mouseover doesn't rely on Javascript. Alt text is specifically for putting in place of an image; it used to be used for mouseovers as well, but then TITLE= came in for that.

Comment author: NancyLebovitz 13 September 2013 05:06:25PM 5 points [-]

How do you get alt text to appear if the image loads? Read source?

Comment author: linkhyrule5 13 September 2013 06:54:41PM 2 points [-]

Yup.

Comment author: Dreaded_Anomaly 13 September 2013 04:15:04PM 2 points [-]

There's also title text (often called a tool tip) which appears when you hover the mouse over an image, but is a plain HTML feature.
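To summarize the subthread: these are two different HTML attributes that can sit on the same tag, ALT= (fallback content shown if the image doesn't load) and TITLE= (the mouseover tooltip). A small sketch using Python's standard html.parser to read both from a hypothetical <img> tag:

```python
from html.parser import HTMLParser

# A hypothetical image tag carrying both attributes: alt is the fallback
# content shown when the image fails to load; title is what browsers
# display as the mouseover tooltip.
SNIPPET = '<img src="comic.png" alt="fallback text" title="tooltip joke">'

class ImgAttrs(HTMLParser):
    """Collects the attributes of the first <img> tag encountered."""
    def __init__(self):
        super().__init__()
        self.attrs = {}

    def handle_starttag(self, tag, attrs):
        if tag == "img" and not self.attrs:
            self.attrs = dict(attrs)

parser = ImgAttrs()
parser.feed(SNIPPET)
print(parser.attrs["alt"])    # fallback text
print(parser.attrs["title"])  # tooltip joke
```

A browser only falls back to the alt text when the image is missing, which is why reading the alt text of a loaded comic usually means viewing the page source.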

Comment author: RobbBB 04 September 2013 05:52:32AM 2 points [-]

bahahahaha

Comment author: Wes_W 04 September 2013 05:20:52AM 15 points [-]

Nah, the wiki makes it much easier.

Comment author: jimmy 03 September 2013 01:47:02AM 16 points [-]

"To know thoroughly what has caused a man to say something is to understand the significance of what he has said in its very deepest sense." -Willard F. Day

Comment author: Pavitra 04 September 2013 05:41:30PM -1 points [-]

On the other hand, one should consider not only what was said, but also what should have been said.

Comment author: shminux 02 September 2013 11:54:59PM 5 points [-]

The idea that God would have an inadequate computer strikes me as somewhat blasphemous

Peter Shor replying in the comment section of Scott Aaronson's blog post Firewalls.

Comment author: Stabilizer 02 September 2013 08:57:21PM *  34 points [-]

Don't ask what they think. Ask what they do.

My rule has to do with paradigm shifts—yes, I do believe in them. I've been through a few myself. It is useful if you want to be the first on your block to know that the shift has taken place. I formulated the rule in 1974. I was visiting the Stanford Linear Accelerator Center (SLAC) for a week to give a couple of seminars on particle physics. The subject was QCD. It doesn't matter what this stands for. The point is that it was a new theory of sub-nuclear particles and it was absolutely clear that it was the right theory. There was no critical experiment but the place was littered with smoking guns. Anyway, at the end of my first lecture I took a poll of the audience. "What probability would you assign to the proposition 'QCD is the right theory of hadrons.'?" My socks were knocked off by the answers. They ranged from .01 percent to 5 percent. As I said, by this time it was a clear no-brainer. The answer should have been close to 100 percent. The next day I gave my second seminar and took another poll. "What are you working on?" was the question. Answers: QCD, QCD, QCD, QCD, QCD,........ Everyone was working on QCD. That's when I learned to ask "What are you doing?" instead of "what do you think?"

I saw exactly the same phenomenon more recently when I was working on black holes. This time it was after a string theory seminar, I think in Santa Barbara. I asked the audience to vote whether they agreed with me and Gerard 't Hooft or if they thought Hawking’s ideas were correct. This time I got a 50-50 response. By this time I knew what was going on so I wasn't so surprised. Anyway I later asked if anyone was working on Hawking's theory of information loss. Not a single hand went up. Don't ask what they think. Ask what they do.

-Leonard Susskind, Susskind's Rule of Thumb

Comment author: lukeprog 05 September 2013 09:01:49PM 7 points [-]

Great quote.

Unfortunately, we find ourselves in a world where the world's policy-makers don't just profess that AGI safety isn't a pressing issue, they also aren't taking any action on AGI safety. Even generally sharp people like Bryan Caplan give disappointingly lame reasons for not caring. :(

Comment author: private_messaging 14 September 2013 08:41:28AM *  3 points [-]

Why won't you update towards the possibility that they're right and you're wrong?

This model should rise up much sooner than some very low prior complex model where you're a better truth finder about this topic but not any topic where truth-finding can be tested reliably*, and they're better truth finders about topics where truth finding can be tested (which is what happens when they do their work), but not this particular topic.

(*because if you expect that, then you should end up actually trying to do at least something that can be checked because it's the only indicator that you might possibly be right about the matters that can't be checked in any way)

Why are the updates always in one direction only? When they disagree, the reasons are "lame" according to yourself, which makes you more sure everyone's wrong. When they agree, they agree and that makes you more sure you are right.

Comment author: lukeprog 14 September 2013 03:19:49PM 7 points [-]

This model should rise up much sooner than some very low prior complex model where you're a better truth finder about this topic...

It's not so much that I'm a better truth finder, it's that I've had the privilege of thinking through the issues as a core component of my full time job for the past two years, and people like Caplan only raise points that have been accounted for in my model for a long time. Also, I think the most productive way to resolve these debates is not to argue the meta-level issues about social epistemology, but to have the object-level debates about the facts at issue. So if Caplan replies to Carl's comment and my own, then we can continue the object-level debate, otherwise... the ball's in his court.

Why are the updates always in one direction only? When they disagree, the reasons are "lame" according to yourself, which makes you more sure everyone's wrong. When they agree, they agree and that makes you more sure you are right.

This doesn't appear to be accurate. E.g. Carl & Paul changed my mind about the probability of hard takeoff. And when have I said that some public figure agreeing with me made me more sure I'm right? See also my comments here.

If I mention a public figure agreeing with me, it's generally not because this plays a significant role in my own estimates, it's because other people think there's a stronger correlation between social status and correctness than I do.

Comment author: private_messaging 14 September 2013 04:33:31PM *  2 points [-]

It's not so much that I'm a better truth finder, it's that I've had the privilege of thinking through the issues as a core component of my full time job for the past two years, and people like Caplan only raise points that have been accounted for in my model for a long time.

Yes, but why did Caplan not see fit to think about the issue for a significant time, while you did?

There are also the AI researchers who have had the privilege of thinking about relevant subjects for a very long time, with education and accomplishments which verify that their thinking adds up over time - and who are largely the actual source for the opinions held by the policy makers.

By the way, note that the usual method of rejection of wrong ideas, is not even coming up with wrong ideas in the first place, and general non-engagement of wrong ideas. This is because the space of wrong ideas is much larger than the space of correct ideas.

What I expect to see in the counter-factual world where the AI risk is a big problem, is that the proponents of the AI risk in that hypothetical world have far more impressive and far more relevant accomplishments and credentials.

but to have the object-level debates about the facts at issue.

The first problem with highly speculative topics is that a great many arguments exist in favour of either opinion on a speculative topic. The second problem is that each such argument relies on a huge number of implicit or explicit assumptions that are likely to be violated due to their origin as random guesses. The third problem is that there is no expectation that the available arguments will be a representative sample of the arguments in general.

This doesn't appear to be accurate. E.g. Carl & Paul changed my mind about the probability of hard takeoff.

Hmm, I was under the impression that you weren't a big supporter of the hard takeoff to begin with.

If I mention a public figure agreeing with me, it's generally not because this plays a significant role in my own estimates, it's because other people think there's a stronger correlation between social status and correctness than I do.

Well, your confidence should be increased by the agreement; there's nothing wrong with that. The problem is when it is not balanced by the expected decrease by disagreement.

Comment author: lukeprog 14 September 2013 05:01:19PM *  1 point [-]

What I expect to see in the counterfactual world where AI risk is a big problem is that the proponents of AI risk in that hypothetical world have far more impressive and far more relevant accomplishments and credentials.

There are a great many differences in our world model, and I can't talk through them all with you.

Maybe we could just make some predictions? E.g. do you expect Stephen Hawking to hook up with FHI/CSER, or not? I think... oops, we can't use that one: he just did. (Note that this has negligible impact on my own estimates, despite him being perhaps the most famous and prestigious scientist in the world.)

Okay, well... If somebody takes a decent survey of mainstream AI people (not AGI people) about AGI timelines, do you expect the median estimate to be earlier or later than 2100? (Just kidding; I have inside information about some forthcoming surveys of this type... the median is significantly sooner than 2100.)

Okay, so... do you expect more or fewer prestigious scientists to take AI risk seriously 10 years from now? Do you expect Scott Aaronson and Peter Norvig, within 25 years, to change their minds about AI timelines, and concede that AI is fairly likely within 100 years (from now) rather than thinking that it's probably centuries or millennia away? Or maybe you can think of other predictions to make. Though coming up with crisp predictions is time-consuming.

Comment author: private_messaging 14 September 2013 05:25:14PM *  0 points [-]

Well, I too expect some form of what we would call "AI" before 2100. I can even buy into some form of accelerating progress, though the progress would be accelerating before the "AI" due to the tools using the relevant technologies, and would not have that sharp a break. I even agree that there is a certain level of risk involved in all future progress, including progress in software.

I have a sense you misunderstood me. I picture a parallel world where legitimate, rational inferences about AI risk exist, where this risk is worth working on in 2013 and stands out among the other risks, and where any other prerequisites for making MIRI worthwhile hold. And in this imaginary world, I expect massively larger support than "Stephen Hawking hooked up with FHI" or whatever you are outlining here.

You do frequently lament that AI risk is underfunded, under-supported, and that there is too little awareness of it. In the hypothetical world, this is not the case, and you can only lament that the rational spending should be 2 billion rather than 1 billion.

edit: and of course, my true rejection is that I do not actually see rational inferences leading there. The imaginary world stuff is just a side-note to explain how non-experts generally look at it.

edit2: and I have nothing against FHI's existence and their work. I don't think they are very useful, or that they address any actual safety issues which may arise, but I am fairly certain they aren't doing any harm either (or at least, the possible harm would be very small). Promoting the idea that AI is possible within 100 years, however, is something that increases funding for AI all across the board.

Comment author: lukeprog 14 September 2013 05:58:49PM *  8 points [-]

I have a sense you misunderstood me. I picture a parallel world where legitimate, rational inferences about AI risk exist, where this risk is worth working on in 2013 and stands out among the other risks, and where any other prerequisites for making MIRI worthwhile hold. And in this imaginary world, I expect massively larger support than "Stephen Hawking hooked up with FHI" or whatever you are outlining here.

Right, this just goes back to the same disagreement in our models I was trying to address earlier by making predictions. Let me try something else, then. Here are some relevant parts of my model:

  1. I expect most highly credentialed people to not be EAs in the first place.
  2. I expect most highly credentialed people to not be familiar with the arguments for caring about the far future.
  3. I expect most highly credentialed people to be mostly just aware of risks they happen to have heard about (e.g. climate change, asteroids, nuclear war), rather than attempting a systematic review of risks (e.g. by reading the GCR volume).
  4. I expect most highly credentialed people to respond fairly well when actuarial risk is easily calculated (e.g. asteroid risk), and not-so-well when it's more difficult to calculate (e.g. many insurance companies went bankrupt after 9/11).
  5. I expect most highly credentialed people to have spent little time on explicit calibration training.
  6. I expect most highly credentialed people to not systematically practice debiasing like some people practice piano.
  7. I expect most highly credentialed people to know very little about AI, and very little about AI risk.
  8. I expect that in general, even those highly credentialed people who intuitively think AI risk is a big deal will not even contact the people who think about AI risk for a living in order to ask about their views and their reasons for them, due to a basic value-of-information (VoI) failure.
  9. I expect most highly credentialed people to have fairly reasonable views within their own field, but to often have crazy views "outside the laboratory."
  10. I expect most highly credentialed people to not have a good understanding of Bayesian epistemology.
  11. I expect most highly credentialed people to continue working on, and caring about, whatever their career has been up to that point, rather than suddenly switching career paths on the basis of new information and an EV calculation.
  12. I expect most highly credentialed people to not understand lots of pieces of "black swan epistemology" like this one and this one.
  13. etc.
Comment author: ciphergoth 15 September 2013 08:43:02AM 9 points [-]

Luke, why are you arguing with Dmytry?

Comment author: private_messaging 14 September 2013 06:47:41PM 1 point [-]

The question should not be about "highly credentialed" people alone, but about how they fare compared to people with very low credentials.

In particular, on your list, I expect people with fairly low credentials to fare much worse, especially at identification of the important issues as well as on rational thinking. Those combine multiplicatively, making it exceedingly unlikely - despite the greater numbers of the credential-less masses - that people who lead the work on an important issue would have low credentials.

I expect most highly credentialed people to not be EAs in the first place.

What's EA? Effective altruism? If it's an existential risk, it kills everyone; selfishness suffices just fine.

e.g. many insurance companies went bankrupt after 9/11

Ohh, come on. That is in no way a demonstration that insurance companies in general follow faulty strategies, and especially is not a demonstration that you could do better.

I expect most highly credentialed people to not systematically practice debiasing like some people practice piano.

Indeed.

Comment author: [deleted] 14 September 2013 10:26:15PM *  3 points [-]

If it's an existential risk, it kills everyone, selfishness suffices just fine.

A selfish person protecting against existential risk builds a bunker and stocks it with sixty years of foodstuffs. That doesn't exactly help much.

Comment author: lukeprog 14 September 2013 06:54:18PM *  1 point [-]

In particular, on your list, I expect people with fairly low credentials to fare much worse

No doubt! I wasn't comparing highly credentialed people to low-credentialed people in general. I was comparing highly credentialed people to Bostrom, Yudkowsky, Shulman, etc.

Comment author: Stabilizer 09 September 2013 07:05:10AM 0 points [-]

After reading Robin's exposition of Bryan's thesis, I would disagree that his reasons are disappointingly lame.

Comment author: wedrifid 14 September 2013 09:41:53AM *  0 points [-]

After reading Robin's exposition of Bryan's thesis, I would disagree that his reasons are disappointingly lame.

Which could either indicate that the reasons are good or that your standards are lower than Luke's and so trigger no disappointment.

Comment author: lukeprog 09 September 2013 04:28:57PM 1 point [-]

Bryan is expressing a "standard economic intuition" but... did you see Carl's comment reply on Caplan's post, and also mine?

Comment author: private_messaging 13 September 2013 02:27:30PM *  -1 points [-]

I did see Eelco Hoogendoorn's, and it is absolutely spot on.

I'm hardly a fan of Caplan, but he has some Bayesianism right:

  1. Based on how things like this asymptote or fail altogether, he has a low prior for foom.

  2. He has a low expectation of being able to identify in advance (without work equivalent to the creation of the AI) the exact mechanisms by which it is going to asymptote or fail, irrespective of whether it does or does not asymptote or fail, so not knowing such mechanisms does not bother him a whole lot.

  3. Even assuming he is correct, he expects plenty of possible arguments against this position (which are reliant on speculation), as well as expects to see some arguers, because the space of speculative arguments is very large. So such arguments are not going to move him anywhere.

People don't do that explicitly, any more than someone playing football is doing Newtonian mechanics explicitly. Bayes' theorem is no less fundamental than the laws of motion of the football.

Likewise for things like non-testability: nobody does anything explicitly; it is just the case that, due to something you guys call "conservation of expected evidence", when there is no possibility of evidence against a proposition, a possibility of evidence in favour of the proposition would violate Bayes' theorem.

Comment author: Estarlio 13 September 2013 02:40:53PM 0 points [-]

when there is no possibility of evidence against a proposition, a possibility of evidence in favour of the proposition would violate Bayes' theorem.

I'm not sure how you could have such a situation, given that absence of expected evidence is evidence of absence. Do you have an example?

Comment author: private_messaging 13 September 2013 03:06:49PM *  0 points [-]

Well, the probabilities wouldn't be literally zero. What I mean is that the lack of a possibility of strong evidence against something, with only a possibility of very weak evidence against it (via absence of evidence), implies that strong evidence in favour of it must be highly unlikely. Worse, such evidence just gets lost among the more probable 'evidence that looks strong but is not'.
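The constraint described here follows directly from conservation of expected evidence: the prior must equal the expectation of the posterior. A minimal numerical sketch (all probabilities below are made-up for illustration, not taken from the thread):

```python
# Conservation of expected evidence: P(H) = P(H|E)P(E) + P(H|~E)P(~E).
# If the absence of evidence (~E) can only move belief down weakly,
# then strong confirming evidence (E) must itself be improbable.
p_h = 0.5               # prior on the proposition
p_e = 0.1               # probability of observing the favourable evidence
p_h_given_not_e = 0.48  # absence of evidence barely lowers belief

# Solve the identity above for the posterior after seeing E:
p_h_given_e = (p_h - p_h_given_not_e * (1 - p_e)) / p_e
print(round(p_h_given_e, 2))  # 0.68
```

Raising `p_e` forces `p_h_given_e` back toward the prior: given a weak downward move from absent evidence, the only way E could be strongly confirming is for E to be rare.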

Comment author: Estarlio 13 September 2013 04:39:30PM 3 points [-]

Ah, I think I follow you.

Absence of evidence isn't necessarily a weak kind of evidence.

If I tell you there's a dragon sitting on my head, and you don't see a dragon sitting on my head, then you can be fairly sure there's not a dragon on my head.

On the other hand, if I tell you I've buried a coin somewhere in my magical 1cm-deep garden, and you dig a random hole and don't find it, not finding the coin isn't strong evidence that I've not buried one. However, there's so much potential weak evidence against it. If you've dug up all but a 1cm square of my garden, the coin's either in that square or I'm telling porkies - and what are the odds that, digging randomly, you wouldn't have come across it by then? You can be fairly sure, even before digging up that last square, that I'm fibbing.

Was what you meant analogous to one of those scenarios?
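The garden scenario above can be simulated as a sequence of Bayesian updates; each empty dig is only weak evidence, but 99 of them all but refute the claim. (The 100-square grid and the 50% prior are illustrative assumptions, not from the comment.)

```python
# Dig all but one square of a 100-square garden without finding the coin,
# updating P(coin was buried) after each empty dig.
squares = 100
p_coin = 0.5            # prior that a coin was buried at all

remaining = squares
for _ in range(squares - 1):
    # An empty dig is certain if there's no coin; if there is one,
    # its probability is (remaining - 1) / remaining.
    p_empty_given_coin = (remaining - 1) / remaining
    p_empty = p_coin * p_empty_given_coin + (1 - p_coin)
    p_coin = p_coin * p_empty_given_coin / p_empty
    remaining -= 1

print(round(p_coin, 2))  # 0.01: with one square left, the claim is nearly dead
```

No single update is decisive (the first empty dig only moves the posterior from 0.50 to about 0.497), but the updates compound, which is exactly the "lots of weak evidence" point in the exchange above.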