Rationality Quotes: April 2011

6 Post author: benelliott 04 April 2011 09:55AM

You all know the rules:

  • Please post all quotes separately, so that they can be voted up/down separately.  (If they are strongly related, reply to your own comments.  If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote comments/posts on LW/OB.
  • No more than 5 quotes per person per monthly thread, please.

Comments (384)

Comment author: RichardKennaway 04 April 2011 10:45:00AM 24 points [-]

I recently posted these in another thread, but I think they're worth putting here to stand on their own:

"Magic is just a way of saying 'I don't know.'"

Terry Pratchett, "Nation"

The essence of magic is to do away with underlying mechanisms. ... What makes the elephant disappear is the movement of the wand and the intent of the magician, directly. If there were any intervening processes, it would not be magic but just engineering. As soon as you know how the magician made the elephant disappear, the magic disappears and -- if you started by believing in magic -- the disappointment sets in.

William T. Powers (CSGNET mailing list, April 2005)

Comment author: soreff 04 April 2011 10:10:52PM 18 points [-]

Does that mean one can answer "Do you believe in magic?" with "No, but I believe in the existence of opaque proprietary APIs"?

Comment author: RichardKennaway 04 April 2011 11:08:19PM 1 point [-]

APIs made by the superintelligent creators of this universe? Personally, no.

Comment author: soreff 05 April 2011 12:41:34AM *  4 points [-]

Actually, what I had in mind was Microsoft - though their products don't pass the "any sufficiently advanced technology is indistinguishable from magic" test. Opacity and incomprehensibility (the spell checker did what?) are within their grasp...

Comment author: David_Gerard 05 April 2011 09:43:05AM 6 points [-]

Worse: APIs grown by evolution. Evolution makes the worst BASIC spaghetti coder you ever heard of look like Don Knuth by comparison.

Comment author: cousin_it 04 April 2011 12:11:00PM 33 points [-]

People commonly use the word "procrastination" to describe what they do on the Internet. It seems to me too mild to describe what's happening as merely not-doing-work. We don't call it procrastination when someone gets drunk instead of working.

-- Paul Graham

Comment author: wedrifid 04 April 2011 01:03:35PM 18 points [-]

People commonly use the word "procrastination" to describe what they do on the Internet. It seems to me too mild to describe what's happening as merely not-doing-work. We don't call it procrastination when someone gets drunk instead of working.

What exactly would Paul Graham call reading Paul Graham essays online when I should be working?

Comment author: Gray 04 April 2011 03:51:04PM 0 points [-]

I'm thinking either "lazy" or "irresponsible".

Comment author: wiresnips 04 April 2011 05:35:44PM 0 points [-]

The question of which is kind of still there, though. Procrastination is lazy, but getting drunk at work is irresponsible.

Comment author: NickiH 04 April 2011 08:07:03PM 3 points [-]

It depends what your work is. If you're doing data entry then surfing the net is lazy. If you're driving a train and surfing the net on your phone then that's irresponsible.

Comment author: sketerpot 04 April 2011 05:43:35PM 8 points [-]

Perhaps the answer to that question lies in one or more of the following Paul Graham essays:

Disconnecting Distraction

Good and Bad Procrastination

P.S.: Bwahahahaha!

Comment author: SilasBarta 04 April 2011 03:34:46PM 4 points [-]

When it comes to learning on the internet (including, as wedrifid mentions, reading Graham's essays, but excluding e.g. porn and celebrity gossip), I'd say it's a lot less harmful and risky than being drunk, and probably helpful in a lot of ways. It's certainly not making huge strides toward accomplishing your life's goals, but it seems like a stretch to compare it to getting drunk.

Comment author: cousin_it 04 April 2011 04:03:09PM *  6 points [-]

I think PG's analogy referred to addictiveness, not harmfulness.

Comment author: childofbaud 04 April 2011 08:35:37PM 3 points [-]

Is it bad if you're addicted to good things?

Comment author: cousin_it 04 April 2011 08:41:17PM 2 points [-]

No, but in this case the addiction makes you worse off because surfing the net is worse than doing productive work.

Comment author: taryneast 05 April 2011 08:59:10AM *  4 points [-]

If it's getting in the way of other stuff you want/need to do, then yes. Otherwise probably no.

Comment author: Costanza 04 April 2011 08:11:01PM 7 points [-]

Okay, that quote has me upvoting and closing my LessWrong browser.

Comment author: David_Gerard 05 April 2011 09:40:37AM *  3 points [-]

And this just reminded me to check the time and realise I was 40 minutes late for logging into work (cough). LessWrong as memetic hazard!

Comment author: MBlume 05 April 2011 05:21:38PM 0 points [-]

PG has added specific hacks to HN to help people who don't want it to become a memetic hazard. Is it possible we should do the same to LW?

Comment author: David_Gerard 05 April 2011 08:40:36PM -1 points [-]

I find HN to be a stream of excessively tasty brain candy. What particular hacks are you thinking of? Is there a list?

Comment author: Zack_M_Davis 05 April 2011 09:19:34PM *  10 points [-]

MBlume may be referring to the "noprocrast" feature:

the latest version of Hacker News has a feature to let you limit your use of the site. There are three new fields in your profile, noprocrast, maxvisit, and minaway. (You can edit your profile by clicking on your username.) Noprocrast is turned off by default. If you turn it on by setting it to "yes," you'll only be allowed to visit the site for maxvisit minutes at a time, with gaps of minaway minutes in between. The defaults are 20 and 180, which would let you view the site for 20 minutes at a time, and then not allow you back in for 3 hours. You can override noprocrast if you want, in which case your visit clock starts over at zero.

Best wishes, the Less Wrong Reference Desk.
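
The mechanism described above is essentially a visit-budget rate limiter. A minimal sketch of that logic (illustrative only: the names `maxvisit` and `minaway` mirror the HN settings, but this implementation is my own reading, not HN's actual code):

```python
import time

class NoProcrastGuard:
    """Toy noprocrast-style limiter: allow up to `maxvisit` minutes of
    browsing at a time, then lock the user out for `minaway` minutes.
    Illustrative sketch, not Hacker News's implementation."""

    def __init__(self, maxvisit=20, minaway=180, clock=time.time):
        self.maxvisit = maxvisit * 60   # allowed visit length, in seconds
        self.minaway = minaway * 60     # enforced absence, in seconds
        self.clock = clock              # injectable clock for testing
        self.visit_start = None         # when the current visit began
        self.lockout_until = 0          # timestamp before which access is denied

    def allowed(self):
        now = self.clock()
        if now < self.lockout_until:
            return False                # still in the "away" window
        if self.visit_start is None:
            self.visit_start = now      # a new visit begins
        if now - self.visit_start >= self.maxvisit:
            # Visit budget spent: start the away period.
            self.lockout_until = now + self.minaway
            self.visit_start = None
            return False
        return True

    def override(self):
        # Per the quoted description, overriding restarts the visit clock.
        self.lockout_until = 0
        self.visit_start = self.clock()
```

With the defaults this allows 20 minutes of access, then denies it for the next 180 minutes, matching the quoted behavior.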

Comment author: Risto_Saarelma 04 April 2011 01:03:01PM 30 points [-]

My friend, Tony, does prop work in Hollywood. Before he was big and famous, he would sell jewelry and such at Ren Faires and the like. One day I'm there, shooting the shit with him, when a guy comes up and looks at some of the crystals that Tony is selling. He finally zeroes in on one and gets all gaga over the bit of quartz. He informs Tony that he's never seen such a strong power crystal. Tony tells him it's a piece of quartz. The buyer maintains it is an amazing power crystal and demands to know the price. Tony looks him over for a second, then says "If it's just a piece of quartz, it's $15. If it's a power crystal, it's $150. Which is it?" The buyer actually looked a bit sheepish as he said quietly "quartz", gave Tony his money and wandered off. I wonder if he thought he got the better of Tony.

-- genesplicer on Something Awful Forums, via

Comment author: Desrtopa 04 April 2011 01:43:12PM 9 points [-]

Part of me wants to say that it was foolish of Tony to take so much less money than he could have gotten simply for getting the guy to profess that it was a piece of quartz rather than a power crystal, but I'm not sure I would feel comfortable exploiting a guy's delusions to that degree either.

Comment author: zaph 04 April 2011 02:27:26PM *  10 points [-]

I thank Tony for not taking the immediately self-benefiting path of profit and instead doing his small part to raise the sanity waterline.

Comment author: Giles 04 April 2011 03:10:37PM *  13 points [-]

Was the buyer sane enough to realise that it probably wasn't a power crystal, or just sane enough to realise that if he pretended it wasn't a power crystal he'd save $135?

Is that amount of raising-the-sanity waterline worth $135 to Tony?

I would guess it's guilt-avoidance at work here.

(EDIT: your thanks to Tony are still valid though!)

Comment author: childofbaud 04 April 2011 08:55:09PM *  7 points [-]

And with that in mind, how would it have affected the sanity waterline if Tony had donated that $135 to an institution that's pursuing the improvement of human rationality?

Comment author: Eliezer_Yudkowsky 05 April 2011 04:35:44AM 40 points [-]

Look, sometimes you've just got to do things because they're awesome.

Comment author: DanielLC 05 April 2011 12:25:47AM 5 points [-]

I think he would have been better off taking the money and donating it to a good charity.

Comment author: benelliott 04 April 2011 03:57:44PM 4 points [-]

There's no guarantee the guy would have bought it at all for $150. The impression I get is that this was ultimately a case of belief in belief, Tony knew he couldn't get much more than $15 and just wanted to win the argument.

Comment author: Desrtopa 04 April 2011 04:04:28PM 2 points [-]

I doubt he would have bought it for $150, but after making a big deal of its properties as a power crystal, he'd be limited in his leverage to haggle it down; he'd probably have taken it for three times the asking price if not ten.

Comment author: NancyLebovitz 04 April 2011 03:58:53PM 35 points [-]

I wonder if the default price was more like $10.

Comment author: Giles 04 April 2011 06:10:57PM 19 points [-]

Wow, anchoring! That one didn't even occur to me!

Comment author: NihilCredo 05 April 2011 09:49:45PM 13 points [-]

Note to self: do not buy stuff from Nancy Lebovitz.

Comment author: Tiiba 06 April 2011 02:27:01AM *  4 points [-]

Better yet, don't go gaga. And use anchoring to your advantage - before haggling, talk about something you got for free.

Comment author: Yvain 05 April 2011 11:36:38PM *  16 points [-]

Story kind of bothers me. Yeah, you can get someone to pretend not to believe something by offering a fiscal reward, but that doesn't prove anything.

If I were a geologist and correctly identified the crystal as the rare and valuable mineral unobtainite which I had been desperately seeking samples of, but Tony stubbornly insisted it was quartz - and if Tony then told me it was $150 if it was unobtainite but $15 if it was quartz - I'd call it quartz too if it meant I could get my sample for cheaper. So what?

Comment author: Alicorn 05 April 2011 11:42:31PM 11 points [-]

I think the interesting part of the story is that it caused the power crystal dude to shut up about power crystals when he'd previously evinced interest in telling everyone about them. I don't think you could get the same effect for $135 from a lot of, say, missionaries.

Comment author: Dorikka 06 April 2011 03:29:18PM 4 points [-]

And then the guy walks away trying to prevent himself from bursting out with laughter at the fact that he just managed to get an incredibly good deal on a strong power crystal that Tony, who had clearly not been educated in such things, mistakenly believed was simple quartz.

Comment author: Nominull 04 April 2011 01:35:51PM *  43 points [-]

On the plus side, bad things happening to you does not mean you are a bad person. On the minus side, bad things will happen to you even if you are a good person. In the end you are just another victim of the motivationless malice of directed acyclic causal graphs.

-Nobilis RPG 3rd edition

Comment author: Eliezer_Yudkowsky 04 April 2011 04:22:09PM 7 points [-]

...that was written by a Less Wrong reader. Or if not, someone who independently reinvented things to well past the point where I want to talk to them. Do you know the author?

Comment author: JoshuaZ 04 April 2011 04:29:21PM 9 points [-]

The author of most of the Nobilis work is Jenna K. Moran. I'm unsure if this remark is independent of LW or not. The Third Edition (where that quote is from) was published this year, so it is possible that LW influenced it.

Comment author: HonoreDB 04 April 2011 05:36:14PM *  4 points [-]

Heh, I clicked the link to see when she took over Nobilis from Rebecca Borgstrom, only to find that she took over more than that from her.

Edit: Also, serious memetic hazard warning with regard to her fiction blog, which is linked from the article.

Comment author: novalis 04 April 2011 08:57:02PM *  22 points [-]

I'm not sure it's a memetic hazard, but this post is one of the most Hofstadterian things outside of Hofstadter.

Until this moment, I had always assumed that Eliezer had read 100% of all fiction.

Comment author: David_Gerard 04 April 2011 04:35:35PM *  2 points [-]

Or if not, someone who independently reinvented things to well past the point where I want to talk to them.

The memes are getting out there! (Hopefully.)

Comment author: Larks 05 April 2011 01:19:13PM 7 points [-]

No, hopefully they were re-discovered. We can improve our publicity skills, but we can't make ideas easier to independently re-invent.

Comment author: David_Gerard 05 April 2011 04:53:18PM *  -1 points [-]

I think them surviving as spreading memes is pretty good, if the information is transmitted without important errors creeping in. Though yes, reinventability is good (and implies the successful spread of prerequisite memes).

Comment author: Larks 05 April 2011 05:10:34PM 4 points [-]

Oh yeah, both are good, but like good evidential decision theorists we should hope for re-invention.

Comment author: Vaniver 05 April 2011 05:00:02PM 2 points [-]

No, hopefully they were re-discovered. We can improve our publicity skills, but we can't make ideas easier to independently re-invent.

Really? If meme Z is the result of meme X and Y colliding, then it seems like spreading X and Y makes it easier to independently re-invent Z.

Comment author: Larks 05 April 2011 05:11:14PM 1 point [-]

Yes - by 'independently' I mean 'unaffected by any publicity work we might do'.

Comment author: Sniffnoy 05 April 2011 11:28:59PM 8 points [-]

Or just someone else who read Pearl, no?

Comment author: Tyrrell_McAllister 06 April 2011 03:32:43AM *  5 points [-]

...that was written by a Less Wrong reader. Or if not, someone who independently reinvented things to well past the point where I want to talk to them. Do you know the author?

Hasn't using DAGs to talk about causality long been a staple of the philosophy and computer science of causation? The logical positivist philosopher Hans Reichenbach used directed acyclic graphs to depict causal relationships between events in his book The Direction of Time (1956). (See, e.g., p. 37.)

A little searching online also turned up this 1977 article in Proc Annu Symp Comput Appl Med Care. From p. 72:

When a set of cause and effect relationships between states is specified, the resulting structure is a network, or directed acyclic graph of states.

That article came out around the time of Pearl's first papers, and it doesn't cite him. Had his ideas already reached that level of saturation?

ETA: I've looked a little more closely at the 1977 paper, which is entitled "Problems in the Design of Knowledge Bases for Medical Consultation". It appears to completely lack the idea of performing surgery on the DAGs, though I may have missed something. Here is a longer quote from the paper (p. 72):

Many states may occur simultaneously in any disease process. A state thus defined may be viewed as a qualitative restriction on a state variable as used in control systems theory. It does not correspond to one of the mutually exclusive states that could be used to describe a probabilistic system.

[...]

When a set of cause and effect relationships between states is specified, the resulting structure is a network, or directed acyclic graph of states.

The mappings between nodes n_i of the causal net are of the form n_i -- a_{ij} --> n_j, where a_{ij} is the strength of causation (interpreted in terms of its frequency of occurrence) and n_i and n_j are states which are summarized by English language statements. This rule is interpreted as: state n_i causes state n_j, independent of other events, with frequency a_{ij}. Starting states are also assigned a frequency measure indicating a prior or starting frequency. The levels of causation are represented by numerical values, fractions between zero and one, which correspond to qualitative ranges such as: sometimes, often, usually, or always.

So, when it comes to demystifying causation, there is still a long distance from merely using DAGs to using DAGs in the particularly insightful way that Pearl does.
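
The edge-labeling scheme quoted above lends itself to a small sketch: a DAG stored as a dict of weighted edges, with starting frequencies propagated forward. The propagation rule below (a noisy-OR-style combination of causes, taking "independent of other events" literally) is my own illustrative reading, not something the 1977 paper specifies:

```python
def state_frequencies(edges, priors):
    """edges: {cause: {effect: causal_strength a_ij}} describing a DAG of
    states; priors: {starting_state: prior frequency}.
    Returns an estimated frequency for every state, combining independent
    causes noisy-OR style: P(state) = 1 - prod(1 - P(cause) * a_ij)."""
    nodes = set(priors) | set(edges)
    for effects in edges.values():
        nodes |= set(effects)
    # Invert the edge map so we can look up each state's causes.
    causes = {n: [] for n in nodes}
    for c, effects in edges.items():
        for e, a in effects.items():
            causes[e].append((c, a))
    freq = dict(priors)
    remaining = {n for n in nodes if n not in freq}
    while remaining:  # the graph is acyclic by assumption, so this terminates
        for n in list(remaining):
            if all(c in freq for c, _ in causes[n]):
                p_no_cause_fires = 1.0
                for c, a in causes[n]:
                    p_no_cause_fires *= 1.0 - freq[c] * a
                freq[n] = 1.0 - p_no_cause_fires
                remaining.discard(n)
    return freq
```

Note that this is exactly the kind of "merely using DAGs" computation the comment describes: there is no notion of intervention or graph surgery anywhere in it, which is the part Pearl added.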

Comment author: Mycroft65536 04 April 2011 02:03:38PM 45 points [-]

Luck is statistics taken personally.

Penn Jellete

Comment author: HonoreDB 04 April 2011 05:19:36PM 3 points [-]

Upvoted. Also, Jillette.

Comment author: Mycroft65536 05 April 2011 03:55:22AM 0 points [-]

Damn! I googled for spelling and everything =)

Comment author: Apprentice 04 April 2011 03:17:38PM 14 points [-]

Virtually everything in science is ultimately circular, so the main thing is just to make the circles as big as possible.

Richard D. Janda and Brian D. Joseph, 2003, The Handbook of Historical Linguistics, p. 111.

Comment author: Davidmanheim 04 April 2011 05:17:56PM 5 points [-]

"Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames, for it can contain nothing but sophistry and illusion."

-Hume, An Inquiry Concerning Human Understanding

Comment author: Johnicholas 04 April 2011 05:26:09PM 5 points [-]

Doesn't that mean "An Inquiry Concerning Human Understanding" should be committed to the flames? I didn't notice much numerical or experimental reasoning in it.

Comment author: [deleted] 04 April 2011 07:41:38PM *  4 points [-]

The quote is somewhat experimental, but we'd have to ignore its advice to find out if it was correct.

Comment author: benelliott 04 April 2011 10:29:07PM *  1 point [-]

I would say that advice from an experienced practitioner in a given field falls into a broad definition of "experimental reasoning", since at some stage they probably tried several approaches and found out the hard way which one worked.

Comment author: wedrifid 05 April 2011 09:04:43AM 2 points [-]

Personally I enjoy illusions - some of them look pretty. I'm keeping them.

Comment author: HonoreDB 04 April 2011 05:26:20PM 27 points [-]

Part of the potential of things is how they break.

Vi Hart, How To Snakes

Comment author: Manfred 04 April 2011 06:25:55PM 10 points [-]

Vi Hart is so dang awesome.

Comment author: sixes_and_sevens 04 April 2011 07:14:22PM 4 points [-]

"Man, it seems like everyone has a triangle these days..."

Comment author: Emile 04 April 2011 08:19:24PM 16 points [-]

"But these two snakes can't talk because this one speaks in parseltongue and that one speaks in Python"

Damn, why didn't I discover those before ...

Comment author: Kutta 04 April 2011 05:29:14PM *  18 points [-]

The correct question to ask about functions is not "What is a rule?" or "What is an association?" but "What does one have to know about a function in order to know all about it?" The answer to the last question is easy - for each number x one needs to know the number f(x) (...)

– M. Spivak: Calculus

Comment author: Kutta 04 April 2011 05:30:05PM 9 points [-]

Theology is the effort to explain the unknowable in terms of the not worth knowing.

– Mencken, quoted in Pinker: How the Mind Works

Comment author: Kutta 04 April 2011 05:33:04PM *  7 points [-]

Wisdom is easy: just find someone who trusts someone who trusts someone who trusts someone who knows the truth.

– Steven Kaas

Comment author: Jonathan_Graehl 06 April 2011 01:51:24AM 1 point [-]

I really don't see the point. All I'm getting out of this is: "knowing the truth is hard".

Comment author: Kutta 06 April 2011 10:24:55AM 2 points [-]

Plus the notion that in the current world when you know the truth with some satisfactory accuracy, most of the time you get to know it not firsthand but via a chain of people. Therefore it might be said that evaluating people's trustworthiness is in the same league of importance as interpreting and analysing data yet untouched by people.

Also, to nitpick, if you find a chain of people full of very trustworthy people, knowing the truth could be relatively easy.

Comment author: endoself 04 April 2011 06:44:35PM 22 points [-]

Most people would rather die than think; many do.

– Bertrand Russell

Comment author: AndrewM 04 April 2011 07:20:13PM 25 points [-]

We are built to be effective animals, not happy ones.

-Robert Wright, The Moral Animal

Comment author: taserian 04 April 2011 07:47:05PM 16 points [-]

On perseverance:

It's a little like wrestling a gorilla. You don't quit when you're tired, you quit when the gorilla is tired.

-- Robert Strauss

(Although the reference I found doesn't say which Robert Strauss it was)

I think it goes well with the article Make an Extraordinary Effort.

Comment author: Desrtopa 04 April 2011 08:13:24PM *  15 points [-]

I kind of feel like a scenario is not a great starting point for talking about perseverance when it's likely to result in your immediately getting your arms ripped off.

There are times when it's important to persevere, and times when it's important to know what not to try in the first place.

Comment author: benelliott 04 April 2011 10:21:55PM 28 points [-]

And there are times when you don't get to choose whether or not you wrestle the gorilla.

Comment author: dares 04 April 2011 07:52:14PM 11 points [-]

“In life as in poker, the occasional coup does not necessarily demonstrate skill and superlative performance is not the ability to eliminate chance, but the capacity to deliver good outcomes over and over again. That is how we know Warren Buffett is a skilled investor and Johnny Chan a skilled poker player.” — John Kay, Financial Times

Comment author: RichardKennaway 04 April 2011 08:16:37PM *  4 points [-]

He who pours out thanks for a favourable verdict runs the risk of seeming to betray not only a bad conscience, but also a poor idea of the judge's office.

Francis Paget, preface to the 2nd ed. of "The Spirit of Discipline", 1906
http://www.archive.org/details/thespiritofdisc00pageuoft

The book also contains material on accidie (the Introductory Essay and the preface to the seventh edition), which is probably how I came across it.

Comment author: Confringus 04 April 2011 08:39:08PM *  13 points [-]

"Isn't it enough to see that a garden is beautiful without having to believe that there are fairies at the bottom of it too?"

Douglas Adams

This quote defines my approach to science and philosophy; a phenomenon can be wondrous on its own merit, it need not be magical or extraordinary to have value.

Comment author: Raemon 05 April 2011 03:02:51AM 2 points [-]

Is this from a particular book, or something he said randomly?

Comment author: Confringus 05 April 2011 03:24:56AM 2 points [-]

I imagine it is from one of his books but I came across it in the introduction to The God Delusion by Richard Dawkins. Oddly enough the Hitchhiker series is absolutely full of satirical quotes which can be applied to rationality.

Comment author: ata 05 April 2011 05:33:01AM 3 points [-]

It's from the first Hitchhiker's Guide to the Galaxy book.

Comment author: Raemon 05 April 2011 05:47:49AM 2 points [-]

Really? What's the context?

Comment author: HonoreDB 05 April 2011 08:08:04AM 8 points [-]

Zaphod thinks they're on a mythic quest to find the lost planet Magrathea. They've found a lost planet alright, orbiting twin stars, but Ford still doesn't believe.

As Ford gazed at the spectacle of light before them excitement burnt inside him, but only the excitement of seeing a strange new planet; it was enough for him to see it as it was. It faintly irritated him that Zaphod had to impose some ludicrous fantasy onto the scene to make it work for him. All this Magrathea nonsense seemed juvenile. Isn't it enough to see that a garden is beautiful without having to believe that there are fairies at the bottom of it too?

Comment author: Raemon 05 April 2011 11:13:08AM 1 point [-]

Thanks.

Comment author: MBlume 05 April 2011 05:23:10PM 5 points [-]

Of course, in context, they are in fact orbiting the lost planet of Magrathea.

Comment author: James_K 07 April 2011 05:56:43AM 0 points [-]

Still, Ford's position was entirely reasonable ex ante.

Comment author: DanielVarga 04 April 2011 09:06:57PM 63 points [-]

It is not really a quote, but a good quip from an otherwise lame recent internet discussion:

Matt: Ok, for all of the people responding above who admit to not having a soul, I think this means that it is morally ok for me to do anything I want to you, just as it is morally ok for me to turn off my computer at the end of the day. Some of us do have souls, though.

Igor: Matt - I agree that people who need a belief in souls to understand the difference between killing a person and turning off a computer should just continue to believe in souls.

Comment author: David_Gerard 05 April 2011 09:33:24AM 8 points [-]

This is, of course, pretty much the right answer to anyone who asserts that without God, they could just kill anyone they wanted.

Comment author: matt1 05 April 2011 06:31:38PM *  -2 points [-]

Of course, my original comment had nothing to do with god. It had to do with "souls", for lack of a better term as that was the term that was used in the original discussion (suggest reading the original post if you want to know more---basically, as I understand the intent it simply referred to some hypothetical quality that is associated with consciousness that lies outside the realm of what is simulable on a Turing machine). If you think that humans are nothing but Turing machines, why is it morally wrong to kill a person but not morally wrong to turn off a computer? Please give a real answer...either provide an answer that admits that humans cannot be simulated by Turing machines, or else give your answer using only concepts relevant to Turing machines (don't talk about consciousness, qualia, hopes, whatever, unless you can precisely quantify those concepts in the language of Turing machines). And in the second case, your answer should allow me to determine where the moral balance between human and computers lies....would it be morally bad to turn off a primitive AI, for example, with intelligence at the level of a mouse?

Comment author: [deleted] 05 April 2011 07:18:41PM 68 points [-]

If you think that humans are nothing but Turing machines, why is it morally wrong to kill a person but not morally wrong to turn off a computer?

Your question has the form:

If A is nothing but B, then why is it X to do Y to A but not to do Y to C which is also nothing but B?

This following question also has this form:

If apple pie is nothing but atoms, why is it safe to eat apple pie but not to eat napalm which is also nothing but atoms?

And here's the general answer to that question: the molecules which make up apple pie are safe to eat, and the molecules which make up napalm are unsafe to eat. This is possible because these are not the same molecules.

Now let's turn to your own question and give a general answer to it: it is morally wrong to shut off the program which makes up a human, but not morally wrong to shut off the programs which are found in an actual computer today. This is possible because these are not the same programs.

At this point I'm sure you will want to ask: what is so special about the program which makes up a human, that it would be morally wrong to shut off the program? And I have no answer for that. Similarly, I couldn't answer you if you asked me why the molecules of apple pie are safe to eat and the those of napalm are not.

As it happens, chemistry and biology have probably advanced to the point at which the question about apple pie can be answered. However, the study of mind/brain is still in its infancy, and as far as I know, we have not advanced to the equivalent point. But this doesn't mean that there isn't an answer.

Comment author: Alicorn 05 April 2011 07:37:52PM 3 points [-]

I love this comment. Have a cookie.

Comment author: cousin_it 05 April 2011 07:41:38PM 3 points [-]

Agreed. Constant, have another one on me. Alicorn, it's ironic that the first time I saw this reply pattern was in Yvain's comment to one of your posts.

Comment author: Clippy 05 April 2011 07:43:55PM 1 point [-]

Why not napalm?

Comment author: NickiH 05 April 2011 08:10:20PM 16 points [-]

what is so special about the program which makes up a human, that it would be morally wrong to shut off the program?

We haven't figured out how to turn it back on again. Once we do, maybe it will become morally ok to turn people off.

Comment author: Laoch 05 April 2011 11:11:28PM 4 points [-]

Doesn't general anesthetic count? I thought that was the turning off of the brain. I was completely "out" when I had it administered to me.

Comment author: David_Gerard 05 April 2011 11:14:56PM 0 points [-]

And people don't worry about that because it's one people are used to the idea of coming back from, which fits the expressed theory.

Comment author: Desrtopa 05 April 2011 11:17:56PM *  4 points [-]

It certainly doesn't put a halt to brain activity. You might not be aware of anything that's going on while you're under, or remember anything afterwards (although some people do,) but that doesn't mean that your brain isn't doing anything. If you put someone under general anesthetic under an electroencephalogram, you'd register plenty of activity.

Comment author: Laoch 06 April 2011 08:24:54AM 1 point [-]

Ah yes, didn't think of that. Even while I'm conscious my brain is doing things I'm/it's not aware of.

Comment author: NancyLebovitz 06 April 2011 11:34:22AM 5 points [-]

Because people are really annoying, but we need to be able to live with each other.

We need strong inhibitions against killing each other-- there are exceptions (self-defense, war), but it's a big win if we can pretty much trust each other not to be deadly.

We'd be a lot more cautious about turning off computers if they could turn us off in response.

None of this is to deny that turning off a computer is temporary and turning off a human isn't. Note that people are more inhibited about destroying computers (though much less so than about killing people) than they are about turning computers off.

Comment author: matt1 05 April 2011 08:35:49PM *  5 points [-]

This is a fair answer. I disagree with it, but it is fair in the sense that it admits ignorance. The two distinct points of view are that (mine) there is something about human consciousness that cannot be explained within the language of Turing machines and (yours) there is something about human consciousness that we are not currently able to explain in terms of Turing machines. Both people at least admit that consciousness has no explanation currently, and absent future discoveries I don't think there is a sure way to tell which one is right.

I find it hard to fully develop a theory of morality consistent with your point of view. For example, would it be wrong to (given a computer simulation of a human mind) run that simulation through a given painful experience over and over again? Let us assume that the painful experience has happened once...I just ask whether it would be wrong to rerun that experience. After all, it is just repeating the same deterministic actions on the computer, so nothing seems to be wrong about this. Or, for example, if I make a backup copy of such a program, and then allow that backup to run for a short period of time under slightly different stimuli, at which point does that copy acquire an existence of its own, that would make it wrong to delete that copy in favor of the original? I could give many other similar questions, and my point is not that your point of view denies a morality, but rather that I find it hard to develop a full theory of morality that is internally consistent and that matches your assumptions (not that developing a full theory of morality under my assumptions is that much easier).

Among professional scientists and mathematicians, I have encountered both viewpoints: those who hold it obvious to anyone with even the simplest knowledge that Turing machines cannot be conscious, and those who hold that the opposite is true. Mathematicians seem to lean a little more toward the first viewpoint than other disciplines, but it is a mistake to think that a professional, world-class research level, knowledge of physics, neuroscience, mathematics, or computer science necessarily inclines one towards the soulless viewpoint.

Comment author: matt1 05 April 2011 10:14:11PM *  1 point [-]

btw, I'm fully aware that I'm not asking original questions or having any truly new thoughts about this problem. I just hoped maybe someone would try to answer these old questions given that they had such confidence in their beliefs.

Comment author: pjeby 06 April 2011 04:02:20PM 10 points [-]

I just hoped maybe someone would try to answer these old questions given that they had such confidence in their beliefs.

This website has an entire two-year course of daily readings that precisely identifies which parts are open questions, and which ones are resolved, as well as how to understand why certain of your questions aren't even coherent questions in the first place.

This is why you're in the same position as a creationist who hasn't studied any biology - you need to actually study this, and I don't mean, "skim through looking for stuff to argue with", either.

Because otherwise, you're just going to sit there mocking the answers you get, and asking silly questions like why are there still apes if we evolved from apes... before you move on to arguments about why you shouldn't have to study anything, and that if you can't get a simple answer about evolution then it must be wrong.

However, just as in the evolutionary case, just as in the earth-being-flat case, just as in the sun-going-round-the-world case, the default human intuitions about consciousness and identity are just plain wrong...

And every one of the subjects and questions you're bringing up, has premises rooted in those false intuitions. Until you learn where those intuitions come from, why our particular neural architecture and evolutionary psychology generates them, and how utterly unfounded in physical terms they are, you'll continue to think about consciousness and identity "magically", without even noticing that you're doing it.

This is why, in the world at large, these questions are considered by so many to be open questions -- because to actually grasp the answers requires that you be able to fully reject certain categories of intuition and bias that are hard-wired into human brains.

(And which, incidentally, have a large overlap with the categories of intuition that make other supernatural notions so intuitively appealing to most human beings.)

Comment author: novalis 06 April 2011 12:34:08AM 0 points [-]

What's wrong with Dennett's explanation of consciousness?

Comment author: matt1 06 April 2011 12:55:21AM 1 point [-]

sorry, not familiar with that. can it be summarized?

Comment author: novalis 06 April 2011 01:45:33AM *  0 points [-]
Comment author: RobinZ 06 April 2011 12:48:47PM 0 points [-]

There is a Wikipedia page, for what it's worth.

Comment author: scav 06 April 2011 12:40:00PM 6 points [-]

I find it hard to fully develop a theory of morality consistent with your point of view.

I am sceptical of your having a rigorous theory of morality. If you do have one, I am sceptical that it would be undone by accepting the proposition that human consciousness is computable.

I don't have one either, but I also don't have any reason to believe in the human meat-computer performing non-computable operations. I actually believe in God more than I believe in that :)

Comment author: sark 05 April 2011 09:44:29PM 4 points [-]

Hmm, I don't happen to find your argument very convincing. I mean, what it does is to pay attention to some aspect of the original mistaken statement, then find another instance sharing that aspect which is transparently ridiculous.

But is this sufficient? You can model the statement "apples and oranges are good fruits" in predicate logic as "for all x, Apple(x) or Orange(x) implies Good(x)" or in propositional logic as "A and O" or even just "Z". But it should really depend on what aspect of the original statement you want to get at. You want a model which captures precisely those aspects you want to work with.

So your various variables actually confused the hell outta me there. I was trying to match them up with the original statement and your reductio example. All the while not really understanding which was relevant to the confusion. It wasn't a pleasant experience :(

It seems to me much simpler to simply answer: "Turing machine-ness has no bearing on moral worth". This I think gets straight to the heart of the matter, and isolates clearly the confusion in the original statement.

Or further guess at the source of the confusion, the person was trying to think along the lines of: "Turing machines, hmm, they look like machines to me, so all Turing machines are just machines, like a sewing machine, or my watch. Hmm, so humans are Turing machines, but by my previous reasoning this implies humans are machines. And hmm, furthermore, machines don't have moral worth... So humans don't have moral worth! OH NOES!!!"

Your argument seems like one of those long math proofs which I can follow step by step but cannot grasp its overall structure or strategy. Needless to say, such proofs aren't usually very intuitively convincing.

(but I could be generalizing from one example here)

Comment author: matt1 05 April 2011 10:06:51PM *  -1 points [-]

No, I was not trying to think along those lines. I must say, I worried in advance that discussing philosophy with people here would be fruitless, but I was lured over by a link, and it seems worse than I feared. In case it isn't clear, I'm perfectly aware what a Turing machine is; incidentally, while I'm not a computer scientist, I am a professional mathematical physicist with a strong interest in computation, so I'm not sitting around saying "OH NOES" while being ignorant of the terms I'm using. I'm trying to highlight one aspect of an issue that appears in many cases: if consciousness (meaning whatever we mean when we say that humans have consciousness) is possible for Turing machines, what are the implications if we do any of the obvious things? (replaying, turning off, etc...) I haven't yet seen any reasonable answer, other than 1) this is too hard for us to work out, but someday perhaps we will understand it (the original answer, and I think a good one in its acknowledgment of ignorance, always a valid answer and a good guide that someone might have thought about things) and 2) some pointless and wrong mocking (your answer, and I think a bad one). edit to add: forgot, of course, to put my current guess as to most likely answer, 3) that consciousness isn't possible for Turing machines.

Comment author: matt1 05 April 2011 10:19:38PM 1 point [-]

btw, I'm fully aware that I'm not asking original questions or having any truly new thoughts about this problem. I just hoped maybe someone would try to answer these old questions given that they had such confidence in their beliefs.

Comment author: pjeby 06 April 2011 12:04:48AM *  8 points [-]

if consciousness (meaning whatever we mean when we say that humans have consciousness) is possible for Turing machines,

This is the part where you're going astray, actually. We have no reason to think that human beings are NOT Turing-computable. In other words, human beings almost certainly are Turing machines.

Therefore, consciousness -- whatever we mean when we say that -- is indeed possible for Turing machines.

To refute this proposition, you'd need to present evidence of a human being performing an operation that can't be done by a Turing machine.

Understanding this will help "dissolve" or "un-ask" your question, by removing the incorrect premise (that humans are not Turing machines) that leads you to ask your question.

That is, if you already know that humans are a subset of Turing machines, then it makes no sense to ask what morally justifies treating them differently than the superset, or to try to use this question as a way to justify taking them out of the larger set.

IOW, (the set of humans) is a subset of (the set of turing machines implementing consciousness), which in turn is a proper subset of (the set of turing machines). Obviously, there's a moral issue where the first two subsets are concerned, but not for (the set of turing machines not implementing consciousness).

In addition, there may be some issues as to when and how you're doing the turning off, whether they'll be turned back on, whether consent is involved, etc... but the larger set of "turing machines" is obviously not relevant.

I hope that you actually wanted an answer to your question; if so, this is it.

(In the event you wish to argue for another answer being likely, you'll need to start with some hard evidence that human behavior is NOT Turing-computable... and that is a tough road to climb. Essentially, you're going to end up in zombie country.)
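
A side note on what "an operation that can't be done by a Turing machine" would even look like: the textbook example is deciding halting. A sketch of the diagonal argument in Python (the `halts` oracle is hypothetical by construction; the whole point is that it cannot be implemented):

```python
# Sketch of Turing's diagonal argument. Suppose, for contradiction,
# that some function could decide whether any given program halts:

def halts(f):
    """Hypothetical halting oracle: True iff f() eventually returns.
    The construction below shows no correct implementation can exist."""
    raise NotImplementedError("no such decider exists")

def contrary():
    # Do the opposite of whatever the oracle predicts about us.
    if halts(contrary):
        while True:      # oracle said we halt, so loop forever
            pass
    return "halted"      # oracle said we loop, so halt
```

Whatever answer `halts(contrary)` returned would be wrong, which is the contradiction. So a human who could reliably decide halting really would be doing something no Turing machine can.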

Comment author: ArisKatsaris 06 April 2011 12:48:55AM 0 points [-]

To refute this proposition, you'd need to present evidence of a human being performing an operation that can't be done by a Turing machine.

That's quite easy: I can lift a rock, a Turing machine can't. A Turing machine can only manipulate symbols on a strip of tape, it can't do anything else that's physical.

Your claim that consciousness (whatever we mean when we say that) is possible for Turing machines, rests on the assumption that consciousness is about computation alone, not about computation+some unidentified physical reaction that's absent to pure Turing machines resting in a box on a table.

That consciousness is about computation alone may indeed end up true, but it's as yet unproven.
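
The "strip of tape" picture the two sides are arguing over can be pinned down in a few lines. A minimal sketch of a Turing machine simulator, with a made-up example machine that flips bits (nothing here comes from the thread itself):

```python
# A Turing machine is nothing but: a tape of symbols, a head position,
# a current state, and a finite transition table.

def run_tm(tape, transitions, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Transition table: (state, read) -> (next state, write, head move).
# This machine walks right, flipping each bit, and halts at the blank.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_tm("1011", flip_bits))  # -> 0100
```

Nothing in the formal object lifts rocks; the computationalist claim is only that the *transitions* of a physical system (rock-lifting arms included) can be mirrored by such a table.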

Comment author: pjeby 06 April 2011 12:55:09AM *  0 points [-]

I can lift a rock, a Turing machine can't. A Turing machine can only manipulate symbols on a strip of tape, it can't do anything else that's physical.

So... you support euthanasia for quadriplegics, then, or anyone else who can't pick up a rock? Or people who are so crippled they can only communicate by reading and writing braille on a tape, and rely on other human beings to feed them and take care of them?

Your claim that consciousness (whatever we mean when we say that) is possible for Turing machines, rests on the assumption that consciousness is about computation alone, not about computation+some unidentified physical reaction that's absent to pure Turing machines resting in a box on a table.

This "unidentified physical reaction" would also need to not be turing-computable to have any relevance. Otherwise, you're just putting forth another zombie-world argument.

At this point, we have no empirical reason to think that this unidentified mysterious something has any existence at all, outside of a mere intuitive feeling that it "must" be so.

And so, all we have are thought experiments that rest on using slippery word definitions to hide where the questions are being begged, presented as intellectual justification for these vague intuitions... like arguments for why the world must be flat or the sun must go around the earth, because it so strongly looks and feels that way.

(IOW, people try to prove that their intuitions or opinions must have some sort of physical form, because those intuitions "feel real". The error arises from concluding that the physical manifestation must therefore exist "out there" in the world, rather than in their own brains.)

Comment author: ArisKatsaris 06 April 2011 01:12:22AM *  0 points [-]

This "unidentified physical reaction" would also need to not be turing-computable to have any relevance. Otherwise, you're just putting forth another zombie-world argument.

A zombie-world seems extremely improbable to have evolved naturally, (evolved creatures coincidentally speaking about their consciousness without actually being conscious), but I don't see why a zombie-world couldn't be simulated by a programmer who studied how to compute the effects of consciousness, without actually needing to have the phenomenon of consciousness itself.

The same way you don't need to have an actual solar system inside your computer, in order to compute the orbits of the planets -- but it'd be very unlikely to have accidentally computed them correctly if you hadn't studied the actual solar system.

At this point, we have no empirical reason to think that this unidentified mysterious something has any existence at all, outside of a mere intuitive feeling that it "must" be so.

Do you have any empirical reason to think that consciousness is about computation alone? To claim Occam's razor on this is far from obvious, as the only examples of consciousness (or talking about consciousness) currently concern a certain species of evolved primate with a complex brain, and some trillions of neurons, all of which have chemical and electrical effects; they aren't just doing computations on an abstract mathematical universe sans context.

Unless you assume the whole universe is pure mathematics, so there's no difference between the simulation of a thing and the thing itself. Which means there's no difference between the mathematical model of a thing and the thing itself. Which means the map is the territory. Which means Tegmark IV.

And Tegmark IV is likewise just a possibility, not a proven thing.

Comment author: matt1 06 April 2011 01:04:16AM 0 points [-]

thanks. my point exactly.

Comment author: Gray 06 April 2011 04:09:12PM 2 points [-]

That's quite easy: I can lift a rock, a Turing machine can't. A Turing machine can only manipulate symbols on a strip of tape, it can't do anything else that's physical.

I think you're trivializing the issue. A Turing machine is an abstraction, it isn't a real thing. The claim that a human being is a Turing machine means that, in the abstract, a certain aspect of human beings can be modeled as a Turing machine. Conceptually, it might be the case, for instance, that the universe itself can be modeled as a Turing machine, in which case it is true that a Turing machine can lift a rock.

Comment author: AlephNeil 06 April 2011 07:26:04PM 7 points [-]

That's quite easy: I can lift a rock, a Turing machine can't.

That sounds like a parody of bad anti-computationalist arguments. To see what's wrong with it, consider the response: "Actually you can't lift a rock either! All you can do is send signals down your spinal column."

That consciousness is about computation alone may indeed end up true, but it's as yet unproven.

What sort of evidence would persuade you one way or the other?

Comment author: Vladimir_Nesov 06 April 2011 09:18:12PM 2 points [-]

Read the first part of ch.2 of "Good and Real".

Comment author: matt1 06 April 2011 12:59:04AM 0 points [-]

You wrote: "This is the part where you're going astray, actually. We have no reason to think that human beings are NOT Turing-computable. In other words, human beings almost certainly are Turing machines."

at this stage, you've just assumed the conclusion. you've just assumed what you want to prove.

"Therefore, consciousness -- whatever we mean when we say that -- is indeed possible for Turing machines."

having assumed that A is true, it is easy to prove that A is true. You haven't given an argument.

"To refute this proposition, you'd need to present evidence of a human being performing an operation that can't be done by a Turing machine."

It's not my job to refute the proposition. Currently, as far as I can tell, the question is open. If I did refute it, then my (and several other people's) conjecture would be proven. But if I don't refute it, that doesn't mean your proposition is true, it just means that it hasn't yet been proven false. Those are quite different things, you know.

Comment author: pjeby 06 April 2011 01:32:04AM 0 points [-]

at this stage, you've just assumed the conclusion. you've just assumed what you want to prove.

No - what I'm pointing out is that the question "what are the ethical implications for turing machines" is the same question as "what are the ethical implications for human beings" in that case.

It's not my job to refute the proposition. Currently, as far as I can tell, the question is open.

Not on Less Wrong, it isn't. But I think I may have misunderstood your situation as being one of somebody coming to Less Wrong to learn about rationality of the "Extreme Bayesian" variety; if you just dropped in here to debate the consciousness question, you probably won't find the experience much fun. ;-)

If I did refute it, then my (and several other people's) conjecture would be proven. But if I don't refute it, that doesn't mean your proposition is true, it just means that it hasn't yet been proven false. Those are quite different things, you know.

Less Wrong has different -- and far stricter -- rules of evidence than just about any other venue for such a discussion.

In particular, to meaningfully partake in this discussion, the minimum requirement is to understand the Mind Projection Fallacy at an intuitive level, or else you'll just be arguing about your own intuitions... and everybody will just tune you out.

Without that understanding, you're in exactly the same place as a creationist wandering into an evolutionary biology forum, without understanding what "theory" and "evidence" mean, and expecting everyone to disprove creationism without making you read any introductory material on the subject.

In this case, the introductory material is the Sequences -- especially the ones that debunk supernaturalism, zombies, definitional arguments, and the mind projection fallacy.

When you've absorbed those concepts, you'll understand why the things you're saying are open questions are not even real questions to begin with, let alone propositions to be proved or disproved! (They're actually on a par with creationists' notions of "missing links" -- a confusion about language and categories, rather than an argument about reality.)

I only replied to you because I thought perhaps you had read the Sequences (or some portion thereof) and had overlooked their application in this context (something many people do for a while until it clicks that, oh yeah, rationality applies to everything).

So, at this point I'll bow out, as there is little to be gained by discussing something when we can't even be sure we agree on the proper usage of words.

Comment author: nshepperd 06 April 2011 02:38:19AM 6 points [-]

Well, how about this: physics as we know it can be approximated arbitrarily closely by a computable algorithm (and possibly computed directly as well, although I'm less sure about that. Certainly all calculations we can do involving manipulation of symbols are computable). Physics as we know it also seems to be correct to extremely precise degrees anywhere apart from inside a black hole.

Brains are physical things. Now when we consider that thermal noise should have more of an influence than the slight inaccuracy in any computation, what are the chances a brain does anything non-computable that could have any relevance to consciousness? I don't expect to see black holes inside brains, at least.
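
The "approximated arbitrarily closely" claim is the everyday observation behind numerical simulation: shrink the step size and the discrete computation closes in on the continuous prediction. A toy illustration with free fall (the scenario and numbers are arbitrary):

```python
# Euler integration of a falling body: shrinking the time step drives
# the computed position toward the exact solution x(t) = g*t^2/2.

def simulate_fall(t_total, dt, g=9.8):
    x, v, t = 0.0, 0.0, 0.0
    while t < t_total - 1e-12:
        x += v * dt   # advance position using current velocity
        v += g * dt   # advance velocity using constant gravity
        t += dt
    return x

exact = 9.8 * 2.0**2 / 2  # 19.6 m after 2 seconds
for dt in (0.1, 0.01, 0.001):
    err = abs(simulate_fall(2.0, dt) - exact)
    print(f"dt={dt}: error={err:.4f}")
```

For this integrator the error shrinks in proportion to the step size, which is the sense in which a computable procedure can track the continuous physics as closely as you care to pay for.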

In any case, your original question was about the moral worth of turing machines, was it not? We can't use "turing machines can't be conscious" as an excuse not to worry about those moral questions, because we aren't sure whether turing machines can be conscious. "It doesn't feel like they should be" isn't really a strong enough argument to justify doing something that would result in, for example, the torture of conscious entities if we were incorrect.

So here's my actual answer to your question: as a rule of thumb, act as if any simulation of "sufficient fidelity" is as real as you or I (well, multiplied by your probability that such a simulation would be conscious, maybe 0.5, for expected utilities). This means no killing, no torture, etc.

'Course, this shouldn't be a practical problem for a while yet, and we may have learned more by the time we're creating simulations of "sufficient fidelity".

Comment author: Nominull 06 April 2011 12:23:14AM 1 point [-]

If you think 1 is the correct answer, you should be aware that this website is for people who do not wait patiently for a someday where we might have an understanding. One of the key teachings of this website is to reach out and grab an understanding with your own two hands. And you might add a 4 to that list, "death threats", which does not strike me as the play either.

Comment author: matt1 06 April 2011 01:02:17AM *  4 points [-]

You should be aware that in many cases, the sensible way to proceed is to be aware of the limits of your knowledge. Since the website preaches rationality, it's worth not assigning probabilities of 0% or 100% to things which you really don't know to be true or false. (btw, I didn't say 1) is the right answer, I think it's reasonable, but I think it's 3) )

And sometimes you do have to wait for an answer. For a lesson from math, consider that Fermat had flat out no hope of proving his "last theorem", and it required a couple hundred years of apparently unrelated developments to get there... one could easily give a few hundred examples of that sort of thing in any hard science which has a long enough history.

Comment author: Nominull 06 April 2011 03:31:01AM 6 points [-]

Uh I believe you will find that Fermat in fact had a truly marvelous proof of his last theorem? The only thing he was waiting on was the invention of a wider margin.

Comment author: [deleted] 06 April 2011 03:45:07AM 0 points [-]

I wonder how much the fame of Fermat's Last Theorem is due to the fact that, (a) he claimed to have found a proof, and (b) nobody was able to prove it. Had he merely stated it as a conjecture without claiming that he had proven it, would anywhere near the same effort have been put into proving it?

Comment author: TheOtherDave 06 April 2011 02:04:33PM 8 points [-]

Little-known non-fact: there were wider margins available at the time, but it was not considered socially acceptable to use them for accurate proofs, or more generally for true statements at all; they were merely wide margins for error.

Comment author: Kyre 06 April 2011 06:18:23AM 4 points [-]

Can you expand on why you expect human moral intuition to give reasonably clear answers when applied to situations involving conscious machines?

Comment author: KrisC 06 April 2011 06:43:48AM 3 points [-]

what is so special about the program which makes up a human, that it would be morally wrong to shut off the program?

Is it sufficient to say that humans are able to consider the question? That humans possess an ability to abstract patterns from experience so as to predict upcoming events, and that exercise of this ability leads to a concept of self as a future agent.

Is it necessary that this model of identity incorporate relationships with peers? I think so but am not sure. Perhaps it is only necessary that the ability to abstract be recursive.

Comment author: HonoreDB 06 April 2011 06:45:13AM 4 points [-]

I like Constant's reply, but it's also worth emphasizing that we can't solve scientific problems by interrogating our moral intuitions. The categories we instinctively sort things into are not perfectly aligned with reality.

Suppose we'd evolved in an environment with sophisticated 2011-era artificially intelligent Turing-computable robots--ones that could communicate their needs to humans, remember and reward those who cooperated, and attack those who betrayed them. I think it's likely we'd evolve to instinctively think of them as made of different stuff than anything we could possibly make ourselves, because that would be true for millions of years. We'd evolve to feel moral obligations toward them, to a point, because that would be evolutionarily advantageous, to a point. Once we developed philosophy, we might take this moral feeling as evidence that they're not Turing-computable--after all, we don't have any moral obligations to a mere mass of tape.

Comment author: DanielVarga 06 April 2011 10:09:59AM 2 points [-]

Hi Matt, thanks for dropping by. Here is an older comment of mine that tries to directly address what I consider the hardest of your questions: How to distinguish from the outside between two computational processes, one conscious, the other not. I'll copy it here for convenience. Most of the replies to you here can be safely considered Less Wrong consensus opinion, but I am definitely not claiming that about my reply.

I start my answer with a Minsky quote:

"Consciousness is overrated. What we call consciousness now is a very imperfect summary in one part of the brain of what the rest is doing." - Marvin Minsky

I believe with Minsky that consciousness is a very anthropocentric concept, inheriting much of the complexity of its originators. I actually have no problem with an anthropocentric approach to consciousness, so I like the following intuitive "definition": X is conscious if it is not silly to ask "what is it like to be X?". The subtle source of anthropocentrism here, of course, is that it is humans who do the asking. As materialists, we just can't formalize this intuitive definition without mapping specific human brain functions to processes of X. In short, we inherently need human neuroscience. So it is not too surprising that we will not find a nice, clean decision procedure to distinguish between two computational processes, one conscious the other not.

Most probably you are not happy with this anthropocentric approach. Then you will have to distill some clean, mathematically tractable concept from the messy concept of consciousness. If you agree with Hofstadter and Minsky, then you will probably reach something related to self-reflection. This may or may not work, but I believe that you will lose the spirit of the original concept during such a formalization. Your decision procedure will probably give unexpected results for many things: various simple, very unintelligent computer programs, hive minds, and maybe even rooms full of people.

This ends my old comment, and I will just add a footnote related to ethical implications. With HonoreDB, I can in principle imagine a world with cooperating and competing agents, some conscious, others not, but otherwise having similar negotiating power. I believe that the ethical norms emerging in this imagined world would not even mention consciousness. If you want to build an ethical system for humans, you can "arbitrarily" decide that protecting consciousness is a terminal value. Why not? But if you want to build a non-anthropocentric ethical system, you will see that the question of consciousness is orthogonal to its issues.

Comment author: David_Gerard 06 April 2011 11:46:35AM 0 points [-]

Of course, my original comment had nothing to do with god.

No indeed. However, the similarity in assuming a supernatural explanation is required for morality to hold struck me.

Comment author: David_Gerard 05 April 2011 04:49:35PM 6 points [-]
Comment author: mtraven 04 April 2011 10:26:26PM *  1 point [-]

The best education consists in immunizing people against systematic attempts at education.

-- Paul Feyerabend

Comment author: David_Gerard 05 April 2011 09:28:12AM *  2 points [-]

This one could do with expansion and/or contextualisation. A quick Google only turns up several pages of just the bare quote (including on a National Institutes of Health .gov page!) - what was the original source? Anyone?

Comment author: mtraven 06 April 2011 02:44:57AM 2 points [-]

Well, I deliberately left out the source because I didn't think it would play well in this Peoria of thought -- it's from his book of essays Farewell to Reason. Link to gbooks with some context.

Comment author: [deleted] 04 April 2011 10:49:38PM *  0 points [-]

We ought to identify and empathize with the physical and moral order of the universe, whatever that may be, and we should help others do the same.

--William T. Vollmann

Comment author: cousin_it 05 April 2011 09:19:18AM 9 points [-]

moral order of the universe

There's no such thing.

Comment author: RichardKennaway 05 April 2011 11:32:33AM 1 point [-]

moral order of the universe

The moral order is within us.

Comment author: moshez 05 April 2011 12:55:49PM 3 points [-]

And we are within the universe! So that all works out nicely.

Comment author: RichardKennaway 05 April 2011 01:58:10PM 1 point [-]

We're only a small part of it, though. The rest is "the motivationless malice of directed acyclic causal graphs".

Comment author: moshez 05 April 2011 02:04:44PM 3 points [-]

How do you measure "small"? We humans have had a disproportionate effect on our immediate surroundings, and that effect is going to continue throughout our lightcone if everything goes according to plan.

Comment author: Mycroft65536 05 April 2011 02:41:41PM 8 points [-]

... if everything goes according to plan.

I think you're supposed to laugh evilly there.

Mwahahahaha

Comment author: khafra 05 April 2011 12:18:46PM 4 points [-]

The 3 downvotes this had when I entered the thread seem rather harsh, considering it could be rephrased as "think like reality." The questionable part is that the universe has a moral order, but a charitable reading of the quote will not demand that it means "a moral order independent of human minds."

Comment author: Jonathan_Graehl 06 April 2011 01:40:58AM 1 point [-]

We should all agree to say the same words, without too much concern for what they mean?

Comment author: CronoDAS 04 April 2011 11:29:10PM 34 points [-]

From a forum signature:

The fool says in his heart, "There is no God." --Psalm 14:1

It is a fool's prerogative to utter truths that no one else will speak. --Neil Gaiman, Sandman 3:3:6

Comment author: David_Gerard 05 April 2011 09:27:30AM 3 points [-]

Even my theist girlfriend laughed out loud at that one :-)

Comment author: childofbaud 05 April 2011 12:07:54AM *  8 points [-]

This one's for you, Clippy:

The specialist makes no small mistakes while moving toward the grand fallacy.

—Marshall McLuhan

Comment author: Matt_Duing 05 April 2011 02:13:41AM *  16 points [-]

The most important relic of early humans is the modern mind.

-Steven Pinker

Comment author: mispy 05 April 2011 03:08:38AM 15 points [-]

Our imagination is stretched to the utmost, not, as in fiction, to imagine things which are not really there, but just to comprehend those things which are there.

-- Richard Feynman

(I don't think he originally meant this in the context of overcoming cognitive bias, but it seems to apply well to that too.)

Comment author: Normal_Anomaly 06 April 2011 12:22:37AM 7 points [-]

I think it was originally meant in the context of joy in the merely real.

Comment author: Risto_Saarelma 05 April 2011 05:48:11AM 32 points [-]

But, there's another problem, and that is the fact that statistical and probabilistic thinking is a real damper on "intellectual" conversation. By this, I mean that there are many individuals who wish to make inferences about the world based on data which they observe, or offer up general typologies to frame a subsequent analysis. These individuals tend to be intelligent and have college degrees. Their discussion ranges over topics such as politics, culture and philosophy. But, introduction of questions about the moments about the distribution, or skepticism as to the representativeness of their sample, and so on, tends to have a chilling effect on the regular flow of discussion. While the average human being engages mostly in gossip and interpersonal conversation of some sort, the self-consciously intellectual interject a bit of data and abstraction (usually in the form of jargon or pithy quotations) into the mix. But the raison d'etre of the intellectual discussion is basically signaling and cuing; in other words, social display. No one really cares about the details and attempting to generate a rigorous model is really beside the point. Trying to push the N much beyond 2 or 3 (what you would see in a college essay format) will only elicit eye-rolling and irritation.

-- Razib Khan

Comment author: wedrifid 05 April 2011 05:54:23AM *  2 points [-]

But, there's another problem, and that is the fact that statistical and probabilistic thinking is a real damper on "intellectual" conversation.

It would also be fair to say that being intellectual can often be a dampener of conversation. I say this to emphasize that the problem isn't statistics or probabilistic thinking - but rather forcing rigour in general, particularly when in the form of challenging what other people say.

Comment author: Nisan 05 April 2011 05:47:00PM 2 points [-]

I usually use the word "intellectual" to refer to someone who talks about ideas, not necessarily in an intelligent way.

Comment author: djcb 05 April 2011 10:30:05AM *  3 points [-]

Make no mistake about it: Computers process numbers - not symbols. We measure our understanding (and control) by the extent to which we can arithmetize an activity.

-- Alan Perlis

Since I discovered them through SICP, I always liked the 'Perlisms' -- many of his Epigrams in Programming are pretty good. There's a hint of Searle/Chinese Room in this particular quote, but he turns it around by implying that in the end, the symbols are numbers (or that's how I read it).

Comment author: KenChen 05 April 2011 01:58:17PM *  21 points [-]

Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law.

– Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid

Comment author: DSimon 06 April 2011 09:22:39PM 1 point [-]

Doesn't that spiral out to infinity?

Comment author: Manfred 06 April 2011 09:40:09PM 10 points [-]

It can just asymptotically approach the right value. It's probably more metaphorical, though.
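A toy model (entirely hypothetical numbers, not anything Hofstadter specifies) shows how repeated applications of the Law can converge rather than spiral to infinity: if each application closes half of the remaining gap between your estimate and the true duration, the revised estimates approach the true value asymptotically without ever reaching it.

```python
def apply_law(estimate, true_duration):
    """One application of a toy model of Hofstadter's Law:
    the revised estimate closes half of the remaining gap."""
    return estimate + (true_duration - estimate) / 2

estimate, true_duration = 10.0, 30.0
for _ in range(50):
    estimate = apply_law(estimate, true_duration)

# The estimate approaches, but never reaches, the true duration.
print(estimate)
```

Under this model the sequence of estimates is bounded above by the true duration, so "taking the Law into account" infinitely many times still leaves you (infinitesimally) short, which is one way to read the joke.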

Comment author: HonoreDB 06 April 2011 09:51:43PM 13 points [-]

It always takes longer than you expect, even when you take into account the limit of infinite applications of Hofstadter's Law.

Comment author: ata 06 April 2011 10:02:17PM 8 points [-]

Even further:

Hofstadter's Law+: It always takes longer than you expect, even when you take into account the limit of infinite applications of Hofstadter's Law+.

Comment author: [deleted] 07 April 2011 01:01:16AM 8 points [-]

For all ordinal numbers n, define Hofstadter's n-law as "It always takes longer than you expect, even when you take into account Hofstadter's m-law for all m < n."

Comment author: ata 07 April 2011 01:13:46AM *  7 points [-]

For all natural numbers n, define L_n as the nth variation of Hofstadter's Law that has been or will be posted in this thread. Theorem: As n approaches infinity, L_n converges to "Everything ever takes an infinite amount of time."

Comment author: Eliezer_Yudkowsky 07 April 2011 05:56:38AM 17 points [-]

Actually it takes longer than that.

Comment author: RobinZ 05 April 2011 05:04:14PM 33 points [-]

Should we then call the original replicator molecules 'living'? Who cares? I might say to you 'Darwin was the greatest man who has ever lived', and you might say 'No, Newton was', but I hope we would not prolong the argument. The point is that no conclusion of substance would be affected whichever way our argument was resolved. The facts of the lives and achievements of Newton and Darwin remain totally unchanged whether we label them 'great' or not. Similarly, the story of the replicator molecules probably happened something like the way I am telling it, regardless of whether we choose to call them 'living'. Human suffering has been caused because too many of us cannot grasp that words are only tools for our use, and that the mere presence in the dictionary of a word like 'living' does not mean it necessarily has to refer to something definite in the real world. Whether we call the early replicators living or not, they were the ancestors of life; they were our founding fathers.

Richard Dawkins, The Selfish Gene.

(cf. Disguised Queries.)

Comment author: ewang 05 April 2011 05:57:27PM *  6 points [-]

Clevinger exclaimed to Yossarian in a voice rising and falling in protest and wonder. "It's a complete reversion to primitive superstition. They're confusing cause and effect. It makes as much sense as knocking on wood or crossing your fingers. They really believe that we wouldn't have to fly that mission tomorrow if someone would only tiptoe up to the map in the middle of the night and move the bomb line over Bologna. Can you imagine? You and I must be the only rational ones left." In the middle of the night Yossarian knocked on wood, crossed his fingers, and tiptoed out of his tent to move the bomb line up over Bologna.

Joseph Heller (Catch-22)

Comment author: wnoise 05 April 2011 09:38:36PM 1 point [-]

A bit more context for those who haven't read Catch-22 would probably help.

Comment author: ewang 06 April 2011 07:02:05AM *  2 points [-]

I don't think anything else could be added that deepens the understanding of the quote, besides the fact that moving the bomb line actually works because Corporal Kolodny (who is obviously a corporal named Kolodny) can't distinguish between cause and effect either.

Comment author: CronoDAS 05 April 2011 06:25:31PM *  25 points [-]

A fable:

In Persia many centuries ago, the Sufi mullah or holy man Nasruddin was arrested after preaching in the great square in front of the Shah's palace. The local clerics had objected to Mullah Nasruddin's unorthodox teachings, and had demanded his arrest and execution as a heretic. Dragged by palace guards to the Shah's throne room, he was sentenced immediately to death.

As he was being taken away, however, Nasruddin cried out to the Shah: "O great Shah, if you spare me, I promise that within a year I will teach your favourite horse to sing!"

The Shah knew that Sufis often told the most outrageous fables, which sounded blasphemous to many Muslims but which were nevertheless intended as lessons to those who would learn. Thus he had been tempted to be merciful, anyway, despite the demands of his own religious advisors. Now, admiring the audacity of the old man, and being a gambler at heart, he accepted his proposal.

The next morning, Nasruddin was in the royal stable, singing hymns to the Shah's horse, a magnificent white stallion. The animal, however, was more interested in his oats and hay, and ignored him. The grooms and stablehands all shook their heads and laughed at him. "You old fool", said one. "What have you accomplished by promising to teach the Shah's horse to sing? You are bound to fail, and when you do, the Shah will not only have you killed - you'll be tortured as well, for mocking him!"

Nasruddin turned to the groom and replied: "On the contrary, I have indeed accomplished much. Remember, I have been granted another year of life, which is precious in itself. Furthermore, in that time, many things can happen. I might escape. Or I might die anyway. Or the Shah might die, and his successor will likely release all prisoners to celebrate his accession to the throne".

"Or...". Suddenly, Nasruddin smiled. "Or, perhaps, the horse will learn to sing".

The original source of this fable seems to be lost to time. This version was written by Idries Shah.

Comment author: Sniffnoy 05 April 2011 10:58:55PM 0 points [-]

Huh, and here I had assumed Niven and Pournelle made that up since it wasn't in Herodotus like they claimed.

Comment author: RobinZ 06 April 2011 12:56:02PM 0 points [-]

Where was it in Niven and Pournelle? I first saw it in The Cross Time Engineer.

Comment author: Tripitaka 06 April 2011 01:04:05PM 2 points [-]

In "The Gripping Hand" it is used as an example of a Crazy Eddie plan that could actually work.

Comment author: nhamann 05 April 2011 09:22:48PM 24 points [-]

True heroism is minutes, hours, weeks, year upon year of the quiet, precise, judicious exercise of probity and care—with no one there to see or cheer.

— David Foster Wallace, The Pale King

Comment author: TylerJay 05 April 2011 09:40:03PM 11 points [-]

The north went on forever. Tyrion Lannister knew the maps as well as anyone, but a fortnight on the wild track that passed for the kingsroad up here had brought home the lesson that the map was one thing and the land quite another.

--George R. R. Martin, A Game of Thrones

Comment author: dares 06 April 2011 12:19:53AM 2 points [-]

Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.

—Antoine de Saint-Exupéry

Comment author: bcoburn 06 April 2011 05:03:58AM 2 points [-]

This one really needs to have been applied to itself; "short is good" is way better.

(also this was one of EY's quotes in the original rationality quotes set, http://lesswrong.com/lw/mx/rationality_quotes_3/ )

Comment author: CronoDAS 06 April 2011 06:24:09AM 1 point [-]

Maybe it's shorter in French?

Comment author: komponisto 06 April 2011 06:35:47AM 5 points [-]

Compare:

Il semble que la perfection soit atteinte non quand il n'y a plus rien à ajouter, mais quand il n'y a plus rien à retrancher.

So, no.

Comment author: dares 06 April 2011 12:35:43PM 0 points [-]

New here, sorry for the redundancy. I probably should have guessed that such a popular quote had been used.

Comment author: dares 06 April 2011 12:37:19PM 3 points [-]

Also, "short is good" would narrow this quote's focus considerably.

Comment author: [deleted] 07 April 2011 01:06:47AM 3 points [-]

Perfection is lack of excess.

Comment author: childofbaud 07 April 2011 03:46:10AM 6 points [-]

A domain-specific interpretation of the same concept:

"The real hero of programming is the one who writes negative code."

—Douglas McIlroy

Comment author: Dreaded_Anomaly 06 April 2011 03:27:01AM *  27 points [-]

Complex problems have simple, easy to understand wrong answers.

— Grossman's Law

Comment author: Confringus 07 April 2011 02:55:11AM 2 points [-]

Is there a law that states that all simple problems have complex, hard to understand answers? Moravec's paradox sort of covers it but it seems that principle should have its own label.

Comment author: Nominull 06 April 2011 03:40:18AM 24 points [-]

using the word “science” in the same way you’d use the word “alakazam” doesn’t count as being smarter

-Kris Straub, Chainsawsuit artist commentary

Comment author: bisserlis 06 April 2011 05:14:07AM 0 points [-]

Son, you’re a body, son. That quick little scientific-prodigy’s mind she’s so proud of and won’t quit twittering about: son, it’s just neural spasms, those thoughts in your mind are just the sound of your head revving, and head is still just body, Jim. Commit this to memory. Head is body. Jim, brace yourself against my shoulders here for this hard news, at ten: you’re a machine a body an object, Jim, no less than this rutilant Montclair, this coil of hose here or that rake there for the front yard’s gravel or sweet Jesus this nasty fat spider flexing in its web over there up next to the rake-handle, see it?

Infinite Jest, page 159

Comment author: Tiiba 06 April 2011 05:26:59AM 6 points [-]

I will repost a quote that I posted many moons ago on OB, if you don't mind. I don't THINK this breaks the rules too badly, since that post didn't get its fair share of karma. Here's the first time: http://lesswrong.com/lw/uj/rationality_quotes_18/nrt

"He knew well that fate and chance never come to the aid of those who replace action with pleas and laments. He who walks conquers the road. Let his legs grow tired and weak on the way - he must crawl on his hands and knees, and then surely, he will see in the night a distant light of hot campfires, and upon approaching, will see a merchants' caravan; and this caravan will surely happen to be going the right way, and there will be a free camel, upon which the traveler will reach his destination. Meanwhile, he who sits on the road and wallows in despair - no matter how much he cries and complains - will evoke no compassion in the soulless rocks. He will die in the desert, his corpse will become meat for foul hyenas, his bones will be buried in hot sand. How many people died prematurely, and only because they didn't love life strongly enough! Hodja Nasreddin considered such a death humiliating for a human being.

"No" - said he to himself and, gritting his teeth, repeated wrathfully: "No! I won't die today! I don't want to die!""

Comment author: atucker 06 April 2011 07:17:13AM 14 points [-]

There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says "Morning, boys. How's the water?" And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes "What the hell is water?"

~ Story, used most famously in David Foster Wallace's Commencement Address at Kenyon College

Comment author: newerspeak 06 April 2011 12:25:05PM *  9 points [-]

Bertrand Russell, in his Autobiography records that his rather fearsome Puritan grandmother:

gave me a Bible with her favorite texts written on the fly-leaf. Among these was "Thou shalt not follow a multitude to do evil." Her emphasis upon this text led me in later life to be not afraid of belonging to small minorities.

It's rather affecting to find the future hammer of the Christians being "confirmed" in this way. It also proves that sound maxims can appear in the least probable places.

-- Christopher Hitchens, Letters to a Young Contrarian

Comment author: Alicorn 07 April 2011 03:08:53AM *  73 points [-]

When confronting something which may be either a windmill or an evil giant, what question should you be asking?

There are some who ask, "If we do nothing, and that is an evil giant, can we afford to be wrong?" These people consider themselves to be brave and vigilant.

Some ask "If we attack it wrongly, can we afford to pay to replace a windmill?" These people consider themselves cautious and pragmatic.

Still others ask, "With the cost of being wrong so high in either case, shouldn't we always definitively answer the 'windmill vs. giant' question before we act?" And those people consider themselves objective and wise.

But only a tiny few will ask, "Isn't the fact that we're giving equal consideration to the existence of evil giants and windmills a warning sign of insanity in ourselves?"

It's hard to find out what these people consider themselves, because they never get invited to parties.

-- PartiallyClips, "Windmill"

Comment author: JGWeissman 07 April 2011 03:13:04AM 16 points [-]

But only a tiny few will ask, "Isn't the fact that we're giving equal consideration to the existence of evil giants and windmills a warning sign of insanity in ourselves?"

And then there's the fact that we are giving much more consideration to the existence of evil giants than to the existence of good giants.

Comment author: wedrifid 07 April 2011 04:16:09AM 1 point [-]

Best quote I've seen in a long time!

Comment author: James_K 07 April 2011 05:24:48AM 2 points [-]

That is truly incredible, I regret only that I have but one upvote to give.

Comment author: [deleted] 07 April 2011 05:36:49AM 7 points [-]

Nancy Lebovitz came across this too.