Rationality Quotes January 2013

Post author: katydee 02 January 2013 05:23PM

Happy New Year! Here's the latest and greatest installment of rationality quotes. Remember:

  • Please post all quotes separately, so that they can be voted up/down separately.  (If they are strongly related, reply to your own comments.  If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself
  • Do not quote comments/posts on LessWrong or Overcoming Bias
  • No more than 5 quotes per person per monthly thread, please

Comments (604)

Comment author: Mycroft65536 05 March 2013 02:29:55AM 1 point [-]

If you're committed to rationality, then you're putting your belief system at risk every day. Any day you might acquire more information and be forced to change your belief system, and it could be very unpleasant and very disturbing.

--Michael Huemer

Comment author: deathpigeon 01 February 2013 02:48:08AM 1 point [-]

A straight line may be the shortest distance between two points, but it is by no means the most interesting.

The Third Doctor

Comment author: Eliezer_Yudkowsky 01 February 2013 12:59:43AM 3 points [-]

Where there's smoke, there's fire... unless someone has a smoke machine.

-- thedaveoflife

Comment author: hairyfigment 02 February 2013 02:41:05AM -1 points [-]

Where there's smoke, there's a chemical reaction of some kind. Unless it's really someone blowing off steam.

Comment author: Manfred 08 May 2013 05:44:48AM 0 points [-]

Or someone with an aerosolizer.

Comment author: pragmatist 31 January 2013 10:05:56AM *  0 points [-]

Perhaps the day will come when philosophy can be discussed in terms of investigation rather than controversies, and philosophers, like scientists, be known by the topics they study rather than by the views they hold.

Nelson Goodman

Comment author: lukeprog 30 January 2013 10:38:17PM 9 points [-]

Mendel’s concept of the laws of genetics was lost to the world for a generation because his publication did not reach the few who were capable of grasping and extending it; and this sort of catastrophe is undoubtedly being repeated all about us, as truly significant attainments become lost in the mass of the inconsequential.

Vannevar Bush

Comment author: [deleted] 30 January 2013 12:10:07PM 11 points [-]

Whenever you can, count.

--Sir Francis Galton

Comment author: Qiaochu_Yuan 30 January 2013 12:10:51AM *  1 point [-]

-

Comment author: Vaniver 30 January 2013 12:27:58AM 2 points [-]
Comment author: Qiaochu_Yuan 30 January 2013 12:36:16AM *  0 points [-]

Oy. I did not go down far enough to check whether this had been posted already. Thanks.

Comment author: GLaDOS 29 January 2013 07:34:17PM 10 points [-]

I notice with some amusement, both in America and English literature, the rise of a new kind of bigotry. Bigotry does not consist in a man being convinced he is right; that is not bigotry, but sanity. Bigotry consists in a man being convinced that another man must be wrong in everything, because he is wrong in a particular belief; that he must be wrong, even in thinking that he honestly believes he is right.

-G. K. Chesterton

Comment author: shminux 29 January 2013 04:51:56PM 3 points [-]

When they realized they were in a desert, they built a religion to worship thirstiness.

SMBC comics: a metaphor for deathism.

Comment author: IlyaShpitser 29 January 2013 06:46:53PM *  6 points [-]

While I am a fan of SMBC, in this case he's not doing existentialism justice (or not understanding existentialism). Existentialism is not the same thing as deathism. Existentialism is about finding meaning and responsibility in an absurd existence. While mortality is certainly absurd, biological immortality will not make existential issues go away. In fact, I suspect it will make them stronger.


edit: on the other hand, "existentialist hokey-pokey" is both funny and right on the mark!

Comment author: shminux 29 January 2013 07:37:22PM *  0 points [-]

I don't see how this strip can be considered to be about existentialism.

EDIT: Actually, I'm no longer sure what the strip is about. It obviously starts with Camus' absurdism, but then switches from his anti-nihilist argument against suicide in an absurd world to a potential critique of... what? nihilism? absurdism? as a means of resolving the cognitive dissonance of having a finite lifespan while wanting to live forever... Or does it? Zach Weiner can be convoluted at times.

Comment author: DaFranker 29 January 2013 08:12:10PM *  0 points [-]

[Meta]

I don't see why the parent was downvoted.

Is it seriously being downvoted just because it called to attention an inference that was not obvious, but seemed obvious to some who had studied a certain topic X?

Comment author: TimS 29 January 2013 08:29:11PM *  4 points [-]

Not my downvote. But if you don't know enough about existentialism to recognize Camus is a central early figure, then you don't know enough about existentialism to comment about whether a particular philosophical point invokes existentialism accurately.

If we replaced "Camus" with "J.S. Mill" and "existentialism" with "consequentialism," the error might be clearer.

In short, it isn't an error to miss the reference, but it is an error to challenge someone who explains the reference. (And currently, the karma for the two posts by shminux correctly reflects this difference - with the challenge voted much lower.)

Comment author: DaFranker 29 January 2013 09:11:20PM *  1 point [-]

But if you don't know enough about existentialism to recognize Camus is a central early figure, then you don't know enough about existentialism to comment about whether a particular philosophical point invokes existentialism accurately.

Errh... does not follow.

I care about the central early figures of any topic about as much as I care about the size of the computer monitor used by the person who contributed the most to the reddit codebase.

(edit: To throw in an example, I spent several months in the dark a while back doing Bayesian inference while completely missing references to / quotes from Thomas Bayes. Yes, literally, that bad. So forgive me if I wouldn't have caught your reference to consequentialism if you hadn't explicitly stated that as what "J.S. Mill" was linked to.)

In short, it isn't an error to miss the reference, but it is an error to challenge someone who explains the reference.

The later explanation (in response to said "challenge") was necessary for me to understand why someone was talking about existentialism at all in the first place, so the first comment definitely did not make the reference any more obvious or explained (to me, two-place) than it was beforehand.

The "challenge" is actually not obvious to me either. When I re-read the comment, I see someone mentioning that they're missing the information that says "This strip is about existentialism".

If any statement of the form "X is not obvious to me" is considered a challenge to those for whom it is obvious, then I would argue that the agents doing this considering have missed the point of the inferential distance articles. To go meta, this previous sentence is what I would consider a challenge.

Comment author: IlyaShpitser 29 January 2013 09:17:57PM *  7 points [-]

I care about the central early figures of any topic about as much as I care about the size of the computer monitor used by the person who contributed the most to the reddit codebase.

I think this is a mistake, and a missed chance to practice the virtue of scholarship. Lesswrong could use much more scholarship, not less, in my opinion. The history of the field often gives more to think about than the modern state of the field.

Progress does not obey the Markov property.

Comment author: shminux 29 January 2013 09:44:23PM *  4 points [-]

The history of the field often gives more to think about than the modern state of the field.

Maybe more to think about, but less value for mastering the field, at least in the natural sciences (philosophy isn't one). You can safely delay learning about the history of the discovery of electromagnetism, or linear algebra, or the periodic table until after you master the concepts. Apparently in philosophy it's somehow the other way around: you have to learn the whole history first. What a drag.

Comment author: DaFranker 29 January 2013 09:39:53PM *  2 points [-]

[Obligatory disclaimer: This is not a challenge.]

I think this is a mistake, and a missed chance to practice the virtue of scholarship.

I honestly don't see how or why.

I already have a rather huge list of things I want to do scholarship with, and I don't see any use I could have for knowledge about the persons behind these things I want to study. Knowing a name for the purposes of searching for more articles written under this name is useful, knowing a name to know the rate of accuracy of predictions made by this name is useful, and often the "central early figures" in a field will coincide with at least one of these or some other criteria for scholarly interest.

I hear Galileo is also a central early figure for something related to stars or stellar motion or heliocentrism or something. Something about stellar bodies, probably. This seems entirely screened off (so as to make knowledge about Galileo useless to me) by other knowledge I have from other sources about other things, like Newtonian physics and relativity and other cool things.

Studying history is interesting, studying the history of some things is also interesting, but the central early figures of some field are only nodes in a history, and relevant to me proportionally to their relevance to the parts of said history that carry information useful for me to remember after having already propagated the effects of this through my belief network.

Once I've done updates on my model based on what happened historically, I usually prefer forgetting the specifics of the history, as I tend to remember that I already learned about this history anyway (which means I won't learn it again, count it again, and break my mind even more later on).

So... I don't see where knowledge about the people comes in, or why it's a good opportunity to learn more. Am I cheating by already having a list of things to study and a large collection of papers to read?

To rephrase, if the information gained by knowing the history of something can be screened off by a more compact or abstract model, I prefer the latter.

Comment author: IlyaShpitser 29 January 2013 10:32:09PM *  2 points [-]

To rephrase, if the information gained by knowing the history of something can be screened off by a more compact or abstract model, I prefer the latter.

That's fine if you are trying to do economics with your time. But it sounded to me from your comment that you didn't care, either. Actually the economics is nontrivial here, because different bits of the brain engage with the formal material vs. the historical context.

I think an argument for learning a field (even a formal/mathematical field) as a living process evolving through time, rather than as a current snapshot, really deserves a separate top-level post, not a thread reply.

My personal experience trying to learn math the historic way and the snapshot way is that I vastly prefer the former. Perhaps I don't have a young, mathematically inclined brain. History provides context for notational and conceptual choices, good examples, standard motivating problems that propelled the field forward, lessons about dead ends and stubborn old men, and suggests a theory of concepts as organically evolving and dying, rather than static. Knowledge rooted in historic context is much less brittle.

For example, I wrote a paper with someone about what a "confounder" is. * People have been using that word probably for 70 years without a clear idea of what it means, and the concept behind it for maybe 250 more (http://jech.bmj.com/content/65/4/297.full.pdf+html). In the course of writing the paper we went through maybe half a dozen historic definitions people actually put forth (in textbooks and such), all but one of them "wrong." Probably our paper is not the last word on this. Actually "confounder" as a concept is mostly dying, to be replaced by "confounding" (much clearer, oddly). Even if we agree that our paper happens to be the latest on the subject, how much would you gain by reading it, and ignoring the rest? What if you read one of the earlier "wrong" definitions and nothing else?

You can't screen off, because history does not obey the Markov property.

  • This is "analytic philosophy," I suppose, and in danger of running afoul of Luke's wrath!
Comment author: BerryPick6 29 January 2013 09:50:08PM 0 points [-]

Am I cheating by already having a list of things to study and a large collection of papers to read?

Not really, but only because the example you gave was astronomy. If we're talking specifically about Existentialism (although I guess the conversation has progressed a bit past that) I'm not entirely sure how one would come up with a list of readings and concepts without turning to the writings of the Central Figures. (I'm not even sure it's legitimate to call Camus an 'early' thinker, since the Golden Age of Existentialism was definitely when he and Sartre were publishing.)

I would very much agree with your assessment for many if not most scientific fields, but in this particular instance, I happen to disagree that disregarding the Central Figures won't hurt your knowledge and understanding of the topic.

Comment author: IlyaShpitser 29 January 2013 07:41:32PM *  3 points [-]

It quotes Camus, the father of existentialism. It quotes from "The Myth of Sisyphus," one of the founding texts of existentialism. The invitation to live and create in the desert (e.g. invitation to find your own meaning, responsibility, and personal integrity without a God or without objective meaning in the world) is the existential answer to the desert of nihilism. Frankly, I am not sure how you can think the strip is about anything else. What do you think existentialism is?


A more accurate pithy summary of existentialism is this: "When they realized they were in a desert, they built water condensators out of sand."


"Beyond the reach of God" is existential.

Comment author: BerryPick6 29 January 2013 07:51:24PM 1 point [-]

SMBC has also featured a bunch of other strips about existentialism, leading me to suspect he has studied it in some capacity. Notably, here, here, here, here and here.

Comment author: IlyaShpitser 29 January 2013 07:54:20PM 2 points [-]

http://www.smbc-comics.com/index.php?db=comics&id=1595#comic

That's relativism, not existentialism. I mean he's trying to entertain, not be a reliable source about anything. Like wikipedia :).

Comment author: BerryPick6 29 January 2013 08:15:19PM 1 point [-]

Yeah, the third one I linked to isn't really existentialism either, now that I think about it...

Comment author: OrphanWilde 29 January 2013 07:02:54PM 0 points [-]

Existentialism is just one branch of nihilistic philosophy, one which specifically attempts to address the issues inherent in nihilism.

Comment author: IlyaShpitser 29 January 2013 07:15:18PM *  2 points [-]

I think it is more accurate to describe existentialism as a reaction to nihilism, not a branch of nihilism. Camus opposed nihilism. It is true that he (and other existentialists) took nihilism very seriously indeed.

Comment author: OrphanWilde 29 January 2013 10:08:06PM 0 points [-]

Retracted, sorry - figured out where the disconnect was coming from after reading your other comments, which led to confusion, which led me to try to identify the source. I was interpreting nihilism itself to be the theology of the desert, so your comment didn't make any sense; rereading the comic I realized I had missed the connection between the "Take that!" and the "And yet". It felt to me like an Objectivist complaining that a critic of free market philosophy didn't understand Ayn Rand; taking a generalized point and interpreting it very specifically.

I don't think Camus opposed nihilism, though, I think he opposed the commonly-held philosophic ramifications of nihilism. Existentialism isn't a rejection of nihilism, it's a development of it, or at least that's what it looks like to me, as somebody who finds nihilism to be similar to an argument about what angels look like (given that I'm also an atheist). "What's our purpose?" "What's purpose?" - which is to say, I find the philosophy to be an answer, "Nothing!", in searching for a question. Existentialism replaces the answer with "What you make of it" (broadly speaking, as it's hard to actually pin down any concretes in existentialism, which is an umbrella term for a bunch of loosely-related concepts), but never really identifies the question.

Trivially, you could say the question is "What's the meaning of life?", or something deep-sounding like that, but what is the question really asking? The only meaningful question to my mind is "What should I do with my life?", which doesn't really require deep philosophy.

I'm a lifelong atheist. To me the "Purpose of life" question, as it pertains to atheists, is a concept imported from religion - that we can have a purpose - which lacks the referent which made that concept meaningful - a god or gods, being an entity or entities which can assign such purpose. Nihilism just seems confused, to me, and existentialism is an attempt to address a confused question. Which may or may not make me existentialist, depending on exactly which existentialism you call existentialism.

Comment author: FiftyTwo 29 January 2013 01:18:52PM 4 points [-]

But I've never seen the Icarus story as a lesson about the limitations of humans. I see it as a lesson about the limitations of wax as an adhesive.

Randall Munroe

Comment author: Oscar_Cunningham 30 January 2013 11:53:04PM 3 points [-]
Comment author: Jay_Schweikert 29 January 2013 06:30:59PM 2 points [-]

And to think, I was just getting on to post this quote myself!

Comment author: Carwajalca 29 January 2013 11:21:28AM 26 points [-]

"I've never seen the Icarus story as a lesson about the limitations of humans. I see it as a lesson about the limitations of wax as an adhesive."

-- Randall Munroe, in http://what-if.xkcd.com/30/ (What-if xkcd, Interplanetary Cessna)

Comment author: Eliezer_Yudkowsky 29 January 2013 09:29:29AM 16 points [-]

"A stupid person can make only certain, limited types of errors. The mistakes open to a clever fellow are far broader. But to the one who knows how smart he is compared to everyone else, the possibilities for true idiocy are boundless."

-- Steven Brust, spoken by Vlad, in Iorich

Comment author: shminux 31 January 2013 12:11:36AM *  3 points [-]

to the one who knows how smart he is compared to everyone else

Seems to describe well the founder of this forum. I wonder if this quote resonates with a certain personal experience of yours.

Comment author: NancyLebovitz 28 January 2013 02:50:42PM 2 points [-]

I don't think change can be planned. It can only be recognized.

jad abumrad, a video about the development of Radio Lab and the amount of fear involved in doing original work

Comment author: shminux 27 January 2013 10:02:32PM 4 points [-]

even though you can’t see or hear them at all, a person’s a person, no matter how small.

Dr. Seuss

Comment author: Mass_Driver 25 January 2013 10:00:41PM 16 points [-]

I once heard a story about the original writer of the Superman Radio Series. He wanted a pay rise, his employers didn't want to give him one. He decided to end the series with Superman trapped at the bottom of a well, tied down with kryptonite and surrounded by a hundred thousand tanks (or something along these lines). It was a cliffhanger. He then made his salary demands. His employers refused and went round every writer in America, but nobody could work out how the original writer was planning to have Superman escape. Eventually the radio guys had to go back to him and meet his wage demands. The first show of the next series began "Having escaped from the well, Superman hurried to..." There's a lesson in there somewhere, but I've no idea what it is.

-http://writebadlywell.blogspot.com/2010/05/write-yourself-into-corner.html

I would argue that the lesson is that when something valuable is at stake, we should focus on the simplest available solutions to the puzzles we face, rather than on ways to demonstrate our intelligence to ourselves or others.

Comment author: Fronken 29 January 2013 03:31:45PM 4 points [-]

Story ... too awesome ... not to upvote ...

Not sure why it's rational, though.

Comment author: RichardKennaway 26 January 2013 07:39:06PM 1 point [-]

I think this is an updating of the cliché from serial adventure stories for boys, where an instalment would end with a cliffhanger, the hero facing certain death. The following instalment would resolve the matter by saying "With one bound, Jack was free." Whether those exact words were ever written is unclear from Google, but it's a well-known form of lazy plotting. If it isn't already on TVTropes, now's your chance.

Comment author: CCC 29 January 2013 08:30:59AM 1 point [-]

Wouldn't that fall under "Cliffhanger Copout"?

Comment author: Desrtopa 29 January 2013 03:22:34AM *  3 points [-]

Did you just create that redlink? That's not the standard procedure for introducing new tropes, and if someone did do a writeup on it, it would probably end up getting deleted. New tropes are supposed to be introduced as proposals on the YKTTW (You Know That Thing Where) in order to build consensus that they're legitimate tropes that aren't already covered, and gather enough examples for a proper launch. You could add it as a proposal there, but the title is unlikely to fly under the current naming policy.

Pages launched from cold starts occasionally stick around (my first page contribution from back when I was a newcomer and hadn't learned the ropes is still around despite my own attempts to get it cutlisted,) but bypassing the YKTTW is frowned upon if not actually forbidden.

Comment author: RichardKennaway 29 January 2013 07:30:56AM *  2 points [-]

I didn't make any edits to TVTropes -- the page that it looks like I'm linking to doesn't actually exist. But I wasn't aware of YKTTW.

ETA: Neither is their 404 handler, which turns URLs for nonexistent pages into invitations to create them. As a troper yourself, maybe you could suggest to TVTropes that they change it?

Comment author: Desrtopa 29 January 2013 03:05:14PM 2 points [-]

If you're referring to what I think you are, that's more of a feature than a bug, since works pages don't need to go through the YKTTW. We get a lot more new works pages than new trope pages, so as long as the mechanics for creating either are the same, it helps to keep the process streamlined to avoid too much inconvenience.

Comment author: Nornagest 29 January 2013 04:06:38AM 0 points [-]

To be fair, that kind of flies in the face of standard wiki practice. Not Invented Here isn't defined in the main namespace, but the entire site probably counts as self-demonstration.

Comment author: Kindly 26 January 2013 07:58:39PM 2 points [-]

I believe that Cliffhanger Copout refers to the same thing. The Harlan Ellison example in particular is worth reading.

Comment author: CronoDAS 26 January 2013 07:16:17PM 5 points [-]

Speaking of writing yourself into a corner...

According to TV Tropes, there was one show, "Sledge Hammer", which ended its first season with the main character setting off a nuclear bomb while trying to defuse it. They didn't expect to be renewed for a second season, so when they were, they had a problem. This is what they did:

Previously on Sledge Hammer:
[scene of nuclear explosion]
Tonight's episode takes place five years before that fateful explosion.

Comment author: Eliezer_Yudkowsky 26 January 2013 06:59:02PM 1 point [-]

There's so many different ways that story couldn't possibly be true...

(EDIT: Ooh, turns out that the Superman Radio program was the one that pulled off the "Clan of the Fiery Cross" punch against the KKK.)

Comment author: GLaDOS 24 January 2013 09:34:00PM 3 points [-]

The dissident temperament has been present in all times and places, though only ever among a small minority of citizens. Its characteristic, speaking broadly, is a cast of mind that, presented with a proposition about the world, has little interest in where that proposition originated, or how popular it is, or how many powerful and credentialed persons have assented to it, or what might be lost in the way of property, status, or even life, in denying it. To the dissident, the only thing worth pondering about the proposition is, is it true? If it is, then no king’s command can falsify it; and if it is not, then not even the assent of a hundred million will make it true.

--John Derbyshire

Comment author: ygert 24 January 2013 09:50:16PM 8 points [-]

While this is all very inspiring, is it true? Yes, truth in and of itself is something that many people value, but what this quote is claiming is that there is a class of people (whom he calls "dissidents") who specifically value this above and beyond anything else. It seems a lot more likely to me that truth is something that all or most people value to one extent or another, and as such, sometimes if the conditions are right people will sacrifice stuff to achieve it, just like for any other thing they value.

Comment author: [deleted] 21 January 2013 05:20:59PM 11 points [-]

Person 1: "I don't understand how my brain works. But my brain is what I rely on to understand how things work." Person 2: "Is that a problem?" Person 1: "I'm not sure how to tell."

-Today's xkcd

Comment author: [deleted] 18 January 2013 04:56:34AM 0 points [-]

Always do the right thing.

-- The Mayor, in "Do the Right Thing"

Comment author: DanArmak 19 January 2013 11:32:18AM 1 point [-]

I think the bigger problem is that people mostly disagree on what the right thing to do is.

Comment author: [deleted] 19 January 2013 06:13:06PM 2 points [-]

I still find it useful to play it back in my head ("nyan, always do the right thing") to remind myself to actually think about whether what I'm doing is right.

I think that we agree on enough that if people "did the right thing" it would be better than the current situation, if not perfect.

Comment author: MugaSofer 25 January 2013 11:09:43AM 0 points [-]

In fairness, people aren't great at deciding what the right thing is, but I still agree with you; most people are not wrong about most things. For example, boycotts would work. So well.

OTOH, every abortion clinic would be bombed before the week was out; terrorist attacks would probably go up generally, as would revenge killings. You could argue those would have positive net impacts (since terrorists would presumably stop once their demands are met? I think?) but it's certainly not one-sided.

Comment author: Eugine_Nier 25 January 2013 01:49:12AM 2 points [-]

I think that we agree on enough that if people "did the right thing" it would be better than the current situation, if not perfect.

That's not at all clear.

Comment author: Qiaochu_Yuan 24 January 2013 11:23:53PM 1 point [-]

I think that we agree on enough that if people "did the right thing" it would be better than the current situation, if not perfect.

Unclear. Some people have very bad ideas about what constitutes the right thing and their impact might not be canceled out.

Comment author: HalMorris 18 January 2013 03:56:21PM *  1 point [-]

Funny, I was thinking for the last few days or weeks of "Do the right thing!" as a sort of summary of deontology. It's all very well if you know what the right thing is. Another classic expression is "Let justice be done though the heavens may fall" (see http://en.wikipedia.org/wiki/Fiat_justitia_ruat_caelum), apparently most famously said by the English jurist Lord Mansfield when reversing the conviction of John Wilkes for libel while, it seems, riots and demonstrations were going on in the streets. (My very brief research indicates he did not say it in the case that outlawed slavery in the British homeland, long before the British abolished it elsewhere -- though a book on that case is titled "Though the Heavens May Fall" -- but the fact that he made that remark and that decision just made it too tempting to conflate them.)

Some examples in the Bible pointedly illustrate "do the right thing" (in the sense of whatever God says is right -- though in this case, "right" clearly isn't in any conflict with "the Heavens"). E.g. Abraham: "Sacrifice your son to me" (ha ha, just kidding/testing you), or Joshua: "Run around the walls of Jericho blowing horns and the walls will fall down." These are extreme cases of "right is right, never mind how you'd imagine it would turn out with your tiny human mind."

Personally, since I am not an Objectivist, or a fundamentalist, or one who talks with God, I don't fully trust any set of rules I may currently have as to what "is right", though I trust them enough to get through most days. Nor am I a perfect consequentialist since I don't perfectly trust my ability to predict outcomes.

An awful lot of examples given to justify consequentialism are extremely contrived, like "ticking bomb" scenarios to justify torture. Unfortunately many of us have seen these scenarios all too often in fiction (e.g. "24"), where they are quite common because they furnish such exciting plot points. Then they are on a battlefield in the real world which does not follow scriptwriter logic, and they imagine they are living such a heroic moment, which gets them to do something wrong and stupid.

In my opinion the best course is some of both. If I find myself, say, as a policeman, thinking that shooting this guy really isn't self-defence but I can sell it as such, and that it will rid the world of a bad actor who'd probably kill two people, then I suspect the best course is to fall back on the manual, which says I'm not justified in shooting him in this situation. Similarly if I think that by this or that unethical action I'll increase the chance of the right person being elected to some important office. On the other hand, if on some occasion I believe that by lying I will prevent some calamity, then I might lie. There is no guarantee that we'll get it right, and we'll have to face the consequences if we're wrong.

The worst thing, I think, is to think we've figured it all out and know exactly how to get it right all the time.

Comment author: NancyLebovitz 17 January 2013 03:43:04AM 17 points [-]

I keep coming back to the essential problem that in our increasingly complex society, we are actually required to hold very firm opinions about highly complex matters that require analysis from multiple fields of expertise (economics, law, political science, engineering, others) in hugely complex systems where we must use our imperfect data to choose among possible outcomes that involve significant trade offs. This would be OK if we did not regard everyone who disagreed with us as an ignorant pinhead or vile evildoer whose sole motivation for disagreeing is their intrinsic idiocy, greed, or hatred for our essential freedoms/people not like themselves. Except that there actually are LOTS of ignorant pinheads and vile evildoers whose sole motivation etc., or whose self-interest is obvious to everyone but themselves.

osewalrus

Comment author: simplicio 22 January 2013 04:09:07PM -2 points [-]

I keep coming back to the essential problem that in our increasingly complex society, we are actually required to hold very firm opinions about highly complex matters that require analysis from multiple fields of expertise (economics, law, political science, engineering, others) in hugely complex systems where we must use our imperfect data to choose among possible outcomes that involve significant trade offs.

A possible partial solution to this problem.

Comment author: Nornagest 17 January 2013 04:01:49AM *  9 points [-]

I try to get around this by assuming that self-interest and malice, outside of a few exceptional cases, are evenly distributed across tribes, organizations, and political entities, and that when I find a particularly self-interested or malicious person that's evidence about their own personality rather than about tribal characteristics. This is almost certainly false and indeed requires not only bad priors but bad Bayesian inference, but I haven't yet found a way to use all but the narrowest and most obvious negative-valence concepts to predict group behavior without inviting more bias than I'd be preventing.

Comment author: Jay_Schweikert 16 January 2013 06:25:32PM *  0 points [-]

[After analyzing the hypothetical of an extra, random person dying every second.] All in all, the losses would be dramatic, but not devastating to our species as a whole. And really, in the end, the global death rate is 100%—everyone dies.

. . . or do they? Strictly speaking, the observed death rate for the human condition is something like 93%—that is, around 93% of all humans have died. This means the death rate among humans who were not members of The Beatles is significantly higher than the 50% death rate among humans who were.

--Randall Munroe, "Death Rates"

Comment author: Alicorn 16 January 2013 06:13:25PM *  5 points [-]

"My baby is dead. Six months old and she's dead."
"Take solace in the knowledge that this is all part of the Corn God's plan."
"Your god's plan involves dead babies?"
"If you're gonna make an omelette, you're gonna have to break a few children."
"I'm not entirely sure I want to eat that omelette."

-- Scenes From A Multiverse

Comment author: Eugine_Nier 17 January 2013 12:55:41AM 2 points [-]

This works equally well as an argument against utilitarianism, which I'm guessing may be your intent.

Comment author: MugaSofer 20 January 2013 03:36:56PM *  -1 points [-]

Nah, it's just a cheap shot at the theists.

EDIT: not sure about the source, but the way it's edited ...

Comment author: Qiaochu_Yuan 18 January 2013 05:04:03AM 2 points [-]

I have no idea what people mean when they say they are against utilitarianism. My current interpretation is that they don't think people should be VNM-rational, and I haven't seen a cogent argument supporting this. Why isn't this quote just establishing that the utility of babies is high?

Comment author: [deleted] 19 January 2013 04:11:57PM 2 points [-]

I have no idea what people mean when they say they are against utilitarianism.

I find these criticisms by Vladimir_M to be really superb.

Comment author: Qiaochu_Yuan 19 January 2013 07:23:14PM 0 points [-]

Okay. So none of that is an argument against VNM-rationality, it's an argument against a bunch of other ideas that have historically been attached to the label "utilitarian," right? The main thing I got out of that post is that utilitarianism is hard, not that it's wrong.

Comment author: [deleted] 19 January 2013 07:56:07PM 1 point [-]

I don't know what you have in mind by your allusion to Morgenstern-von Neumann. The theorem is descriptive, right? It says you can model a certain broad class of decision-making entities as maximizing a utility function. What is VNM-rationality, and what does it mean to argue for it or against it?

If your goal is "to do the greatest good for the greatest number," or a similar utilitarian goal, I am not sure how the VNM theorem helps you.

What do you think of the "interpersonal utility comparison" problem? Vladimir_M regards it as something close to a defeater of utilitarianism.

Comment author: Qiaochu_Yuan 19 January 2013 08:35:48PM *  1 point [-]

I don't know what you have in mind by your allusion to Morgenstern-von Neumann. The theorem is descriptive, right? It says you can model a certain broad class of decision-making entities as maximizing a utility function. What is VNM-rationality, and what does it mean to argue for it or against it?

"People should aim to be VNM-rational." I think of this as a weak claim, which is why I didn't understand why people appeared to be arguing against it. I concluded that they probably weren't, and instead meant something else by utilitarianism, which is why I switched to a different term.

If your goal is "to do the greatest good for the greatest number," or a similar utilitarian goal, I am not sure how the VNM theorem helps you.

Yes, that's why I think of "people should aim to be VNM-rational" as a weak claim and didn't understand why people appeared to be against it.

What do you think of the "interpersonal utility comparison" problem? Vladimir_M regards it as something close to a defeater of utilitarianism.

It seems like a very hard problem, but nobody claimed that ethics was easy. What does Vladimir_M think we should be doing instead?

Comment author: Eugine_Nier 21 January 2013 12:35:31AM *  1 point [-]

"People should aim to be VNM-rational."

What definition of "should" are you using here? Do you mean that people deontologically should aim to be VNM-rational? Or do you mean that people should be VNM-rational in order to maximize some (which?) utility function?

Comment author: [deleted] 19 January 2013 09:24:23PM 1 point [-]

"People should aim to be VNM-rational."

Can you spell this out a little more?

What does Vladimir_M think we should be doing instead?

I don't know. I think this comment reveals a lot of respect for what you might call "folk ethics," i.e. the way normal people do it.

Comment author: Qiaochu_Yuan 19 January 2013 09:39:22PM 1 point [-]

Can you spell this out a little more?

"People should aim for their behavior to satisfy the VNM axioms." I'm not sure how to get more precise than this.

Comment author: [deleted] 19 January 2013 10:10:16PM 1 point [-]

"People should aim for their behavior to satisfy the VNM axioms."

OK. But this seems funny to me as a moral prescription. In fact a standard premise of economics is that people's behavior does satisfy the VNM axioms, or at least that deviations from them are random and cancel each other out at large scales. That's sort of the point of the VNM theorem: you can model people's behavior as though they were maximizing something, even if that's not the way an individual understands his own behavior.

Even if you don't buy that premise, it's hard for me to see why famous utilitarians like Bentham or Singer would be pleased if people hewed more closely to the VNM axioms. Couldn't they do so, and still make the world worse by valuing bad things?

If your goal is "to do the greatest good for the greatest number," or a similar utilitarian goal, I am not sure how the VNM theorem helps you.

Yes, that's why I think of "people should aim to be VNM-rational" as a weak claim and didn't understand why people appeared to be against it.

Is "people should aim for their behavior to satisfy the VNM axioms" all that you meant originally by utilitarianism? From what you've written elsewhere in this thread it sounds like you might mean something more, but I could be misunderstanding.

Comment author: CarlShulman 18 January 2013 05:56:34AM 1 point [-]

A bounded utility function that places a lot of value on signaling/being "a good person" and desirable associate, getting some "warm glow" and "mostly doing the (deontologically) right thing" seems like a pretty good approximation.

Comment author: Eugine_Nier 18 January 2013 05:30:57AM *  1 point [-]

Well, Alicorn is a deontologist.

In any case, as an ultrafinitist you should know the problems with the VNM theorem.

Comment author: Qiaochu_Yuan 18 January 2013 05:58:20AM *  4 points [-]

I also have no idea what people mean when they say they are deontologists. I've read Alicorn's Deontology for Consequentialists and I still really have no idea. My current interpretation is that a deontologist will make a decision that makes everything worse if it upholds some moral principle, which just seems like obviously a bad idea to me. I think it's reasonable to argue that deontology and virtue ethics describe heuristics for carrying out moral decisions in practice, but heuristics are heuristics because they break down, and I don't see a reasonable way to judge which heuristics to use that isn't consequentialist / utilitarian.

Then again, it's quite likely that my understanding of these terms doesn't agree with their colloquial use, in which case I need to find a better word for what I mean by consequentialist / utilitarian. Maybe I should stick to "VNM-rational."

I also didn't claim to be an ultrafinitist, although I have ultrafinitist sympathies. I haven't worked through the proof of the VNM theorem yet in enough detail to understand how infinitary it is (although I intend to).

Comment author: Eugine_Nier 18 January 2013 07:26:51AM 1 point [-]

My current interpretation is that a deontologist will make a decision that makes everything worse if it upholds some moral principle, which just seems like obviously a bad idea to me.

Taboo "make everything worse".

At the very least I find it interesting how rarely an analogous objection is raised against VNM-utilitarians with different utility functions. It's almost as if many of the "VNM-utilitarians" around here don't care what it means to "make everything worse" as long as one avoids doing it, and avoids doing it by following the mathematically correct decision theory.

I also didn't claim to be an ultrafinitist, although I have ultrafinitist sympathies. I haven't worked through the proof of the VNM theorem yet in enough detail to understand how infinitary it is (although I intend to).

Well, the continuity axiom in the statement certainly seems dubious from an ultrafinitist point of view.

Comment author: Qiaochu_Yuan 18 January 2013 08:08:05AM 1 point [-]

Taboo "make everything worse".

Have worse consequences for everybody, where "everybody" means present and future agents to which we assign moral value. For example, a sufficiently crazy deontologist might want to kill all such agents in the name of some sacred moral principle.

At the very least I find it interesting how rarely an analogous objection is raised against VNM-utilitarians with different utility functions. It's almost as if many of the "VNM-utilitarians" around here don't care what it means to "make everything worse" as long as one avoids doing it, and avoids doing it by following the mathematically correct decision theory.

Rarely? Isn't this exactly what we're talking about when we talk about paperclip maximizers?

Comment author: Eugine_Nier 19 January 2013 09:16:46AM 1 point [-]

Have worse consequences for everybody, where "everybody" means present and future agents to which we assign moral value.

When I asked you to taboo "makes everything worse", I meant taboo "worse" not taboo "everything".

Comment author: Qiaochu_Yuan 19 January 2013 09:54:28AM *  1 point [-]

You want me to say something like "worse with respect to some utility function" and you want to respond with something like "a VNM-rational agent with a different utility function has the same property." I didn't claim that I reject deontologists but accept VNM-rational agents even if they have different utility functions from me. I'm just trying to explain that my current understanding of deontology makes it seem like a bad idea to me, which is why I don't think it's accurate. Are you trying to correct my understanding of deontology or are you agreeing with it but disagreeing that it's a bad idea?

Comment author: Eugine_Nier 21 January 2013 12:28:41AM 1 point [-]

You want me to say something like "worse with respect to some utility function" and you want to respond with something like "a VNM-rational agent with a different utility function has the same property."

No, I'm going to respond by asking you "with respect to which utility function?" and "why should I care about that utility function?"

Comment author: [deleted] 18 January 2013 07:26:59PM 0 points [-]

Have worse consequences for everybody, where "everybody" means present and future agents to which we assign moral value.

You've assumed vague-utilitarianism here, which weakens your point. I would taboo "make everything worse" as "less freedom, health, fun, awesomeness, happiness, truth, etc", where the list refers to all the good things, as argued in the metaethics sequence.

Comment author: Eugine_Nier 19 January 2013 09:21:11AM -2 points [-]

You've assumed vague-utilitarianism here, which weakens your point. I would taboo "make everything worse" as "less freedom, health, fun, awesomeness, happiness, truth, etc"

Nice try. The problem with your definition is that freedom, for example, is fundamentally a deontological concept. If you don't agree, I challenge you to give a non-deontological definition.

Comment author: Qiaochu_Yuan 19 January 2013 09:56:13AM 1 point [-]

What is a deontological concept and what is a non-deontological concept?

Comment author: Kindly 18 January 2013 02:07:31PM 0 points [-]

For example, a sufficiently crazy deontologist might want to kill all such agents in the name of some sacred moral principle.

A sufficiently crazy consequentialist might want to kill all such agents because he's scared of what the voices in his head might otherwise do. Your argument is not an argument at all.

And if the sacred moral principle leads to the deontologist killing everyone, that is a pretty terrible moral principle. Usually they're not like that. Usually the "don't kill people if you can help it" moral principle tends to be ranked pretty high up there to prevent things like this from happening.

Comment author: Qiaochu_Yuan 18 January 2013 07:34:10PM 1 point [-]

to prevent things like this from happening.

Smells like consequentialist reasoning. Look, if I had a better example I would give it, but I am genuinely not sure what deontologists think they're doing if they don't think they're just using heuristics that approximate consequentialist reasoning.

Comment author: [deleted] 18 January 2013 05:16:29AM 8 points [-]

I aspire to be VNM rational, but not a utilitarian.

It's all very confusing because they both use the word "utility", but they seem to be different concepts. "Utilitarianism" is a particular moral theory that (depending on the speaker) assumes consequentialism, linear-ish aggregation of "utility" between people, independence and linearity of utility function components, utility proportional to "happiness" or "well-being" or preference fulfillment, etc. I'm sure any given utilitarian will disagree with something in that list, but I've seen all of them claimed.

VNM utility only assumes that you assign utilities to possibilities consistently, and that your utilities aggregate by expectation. It also assumes consequentialism in some sense, but it's not hard to make utility assignments that aren't really usefully described as consequentialist.

I reject "utilitarianism" because it is very vague, and because I disagree with many of its interpretations.

Comment author: Qiaochu_Yuan 18 January 2013 06:11:21AM 0 points [-]

Thanks for the explanation. Reading through the Wikipedia article on utilitarianism, it seems like this is one of those words that has been muddled by the presence of too many authors using it. In the future I guess I should refer to the concept I had in mind as VNM-utilitarianism.

Comment author: Sniffnoy 18 January 2013 06:43:05AM 2 points [-]

Probably best not to refer to it with the word "utilitarianism", since it isn't a form of that. Calling it "consequentialism" is arguably enough, since (making appropriate assumptions about what a rational agent must do) a rational consequentialist must use a VNM utility function. But I guess not everyone does in fact agree with those assumptions, so perhaps "utility-function based consequentialism". Or perhaps "VNM-consequentialism".

Comment author: TsviBT 17 January 2013 03:03:43AM 1 point [-]

Huh? How so?

Comment author: Eugine_Nier 19 January 2013 09:22:49AM 1 point [-]

Replace the "corn god" in the quote with a sufficiently rational utilitarian agent.

Comment author: Wei_Dai 20 January 2013 03:33:53AM 0 points [-]

To make sure I understand, do you mean that a sufficiently rational utilitarian agent may decide to kill a 6 month old baby if it decides that would serve its goal of maximizing aggregate utility, and if I'm pretty sure that no 6 month old baby should ever be intentionally killed, I would conclude that utilitarianism is probably wrong?

Comment author: Alicorn 17 January 2013 02:39:36AM 1 point [-]

I hadn't actually thought of that, but that could be part of why I liked the quote.

Comment author: [deleted] 16 January 2013 03:57:24PM 3 points [-]

Well, we have a pretty good test for who was stronger. Who won? In the real story, overdogs win.

--Mencius Moldbug, here

I can't overemphasise how much I agree with this quote as a heuristic.

Comment author: shminux 16 January 2013 05:53:32PM *  4 points [-]

As I noted in my other comment, he redefined the terms underdog/overdog to be based on posteriors, not priors, effectively rendering them redundant (and useless as a heuristic).

Comment author: Kindly 16 January 2013 11:11:26PM 2 points [-]

Most of the time, priors and posteriors match. If you expect the posterior to differ from your prior in a specific direction, then change your prior.

And thus, you should expect 99% of underdogs to lose and 99% of overdogs to win. If all you know is that a dog won, you should be 99% confident the dog was an overdog. If the standard narrative reports the underdog winning, that doesn't make the narrative impossible, but puts a burden of implausibility on it.

Comment author: gwern 17 January 2013 01:58:35AM 1 point [-]

If you expect the posterior to differ from your prior in a specific direction, then change your prior.

How should I change my prior if I expect it to change in one of two specific directions - either up or down - but not stay the same?

Comment author: khafra 17 January 2013 07:09:28PM 3 points [-]

Fat tailed distributions make the rockin' world go round.

Comment author: gwern 17 January 2013 07:21:13PM 2 points [-]

They don't even have to be fat-tailed; in very simple examples you can know that on the next observation, your posterior will either be greater or lesser but not the same.

Here's an example: flipping a coin of unknown bias, modeled by a beta distribution with a uniform prior, and trying to infer the bias/frequency. Obviously, when I flip the coin, I will either get heads or tails, so I know that after my first flip my posterior will favor either heads or tails, but not remain unchanged! There is no landing-on-its-edge intermediate 0.5 coin. Indeed, I know in advance that I will be able to rule out 1 of 2 hypotheses: 100% heads and 100% tails.

But this isn't just true of the first observation. Suppose I flip twice and get heads then tails; the single most likely frequency is 1/2, since that's what I have observed to date. But now we're back to the same situation as in the beginning: we've managed to accumulate evidence against the most extreme biases like 99% heads, so we have learned something from the 2 flips, but we again expect the posterior to differ from the prior in 2 specific directions and still cannot update the prior: after the next flip I will have observed either 2/3 or 1/3 heads. Hence, I can tell you - even before flipping - that 1/2 must be dethroned in favor of 1/3 or 2/3!
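gwern's coin example can be checked numerically. A minimal sketch (the Beta(1, 1) uniform prior and the helper name are my own; exact fractions avoid any rounding questions):

```python
from fractions import Fraction

def posterior_mean(heads, tails):
    """Posterior mean of the coin's heads-frequency under a uniform
    Beta(1, 1) prior: the posterior is Beta(1 + heads, 1 + tails),
    whose mean is (1 + heads) / (2 + heads + tails)."""
    return Fraction(1 + heads, 2 + heads + tails)

# Before any flips, the estimate is 1/2.
prior = posterior_mean(0, 0)       # 1/2

# After one flip the estimate must move: up on heads, down on tails.
after_h = posterior_mean(1, 0)     # 2/3
after_t = posterior_mean(0, 1)     # 1/3

# After heads-then-tails we are back at 1/2, yet the *next* flip
# must again push the estimate off 1/2, in one of two directions.
after_ht = posterior_mean(1, 1)    # 1/2
after_hth = posterior_mean(2, 1)   # 3/5
after_htt = posterior_mean(1, 2)   # 2/5
```

In no case does a flip leave the estimate where it was, even though before flipping we cannot say which way it will go.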

Comment author: shminux 17 January 2013 08:56:19PM *  -1 points [-]

For the coin-bias estimate, as for most other things, the self-consistent updating procedure follows maximum likelihood.

Comment author: [deleted] 17 January 2013 09:10:07PM 2 points [-]

Maximum likelihood tells you which hypothesis is most likely, which is mostly meaningless without further assumptions. For example, if you wanted to bet on what the next flip would be, a maximum-likelihood method won't give you the right probability.

Comment author: [deleted] 18 January 2013 04:13:38PM *  1 point [-]

Yes.

OTOH, the expected value of the beta distribution with parameters a and b happens to equal the mode of the beta distribution with parameters a + 1 and b + 1, so maximum likelihood does give the right answer (i.e. the expected value of the posterior) if you start from the improper prior B(0, 0).

(IIRC, the same thing happens with other types of distributions, if you pick the ‘right’ improper prior (i.e. the one Jaynes argues for in conditions of total ignorance for totally unrelated reasons) for each. I wonder if this has some particular relevance.)
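The improper-prior claim is easy to check numerically. A minimal sketch (the data counts and helper name are my own; the posterior mean is computed by brute-force integration rather than the closed form, so the agreement is not built in):

```python
def beta_mean(a, b, n=100_000):
    # Mean of Beta(a, b), computed by midpoint-rule integration of
    # x * x**(a-1) * (1-x)**(b-1) over (0, 1), then normalised.
    xs = [(i + 0.5) / n for i in range(n)]
    weights = [x ** (a - 1) * (1 - x) ** (b - 1) for x in xs]
    return sum(x * w for x, w in zip(xs, weights)) / sum(weights)

h, t = 7, 3  # arbitrary example data: 7 heads, 3 tails

mle = h / (h + t)  # maximum-likelihood estimate: 0.7

# Improper prior B(0, 0): the posterior is Beta(h, t), whose mean
# agrees with the maximum-likelihood estimate.
improper_mean = beta_mean(h, t)

# Proper uniform prior B(1, 1): the posterior is Beta(h + 1, t + 1),
# whose mean (h + 1) / (h + t + 2) does NOT match the MLE.
uniform_mean = beta_mean(h + 1, t + 1)
```

The contrast with the proper uniform prior shows the coincidence is specific to B(0, 0).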

Comment author: [deleted] 17 January 2013 08:30:45PM *  2 points [-]

And yet if you add those two posterior distributions, weighted by your current probability of ending up with each, you get your prior back. Magic!

(Witch burners don't get their prior back when they do this because they expect to update in the direction of "she's a witch" in either case, so when they sum over probable posteriors, they get back their real prior which says "I already know that she's a witch", the implication being "the trial has low value of information, let's just burn her now".)
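The "sum over probable posteriors" check above can be verified directly. A minimal sketch (the discrete bias grid is my own simplification of the continuous case): average the two possible posterior distributions, weighted by the current predictive probability of each outcome, and the prior comes back bin by bin.

```python
# Discrete grid of candidate coin biases, with a uniform prior.
biases = [i / 10 for i in range(11)]
prior = [1 / len(biases)] * len(biases)

# Predictive probabilities for the next flip under the current prior.
p_heads = sum(p * b for p, b in zip(prior, biases))
p_tails = 1 - p_heads

# Posterior over biases for each possible outcome (Bayes' rule).
post_heads = [p * b / p_heads for p, b in zip(prior, biases)]
post_tails = [p * (1 - b) / p_tails for p, b in zip(prior, biases)]

# Mix the posteriors, weighted by how likely each outcome is now.
mixture = [p_heads * ph + p_tails * pt
           for ph, pt in zip(post_heads, post_tails)]
# mixture equals prior, bin by bin: no net update is expected.
```

A witch burner's distributions would fail this check: if both possible outcomes shifted probability toward "witch", the mixture would exceed the stated prior, revealing that the real prior was already "witch".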

Comment author: gwern 17 January 2013 08:34:53PM 1 point [-]

Yup, sure does. Which is a step toward the right idea Kindly was gesturing at.

Comment author: Qiaochu_Yuan 16 January 2013 11:19:48PM *  6 points [-]

And thus, you should expect 99% of underdogs to lose and 99% of overdogs to win. If all you know is that a dog won, you should be 99% confident the dog was an overdog.

Second statement assumes that the base rate of underdogs and overdogs is the same. In practice I would expect there to be far more underdogs than overdogs.

Comment author: Kindly 16 January 2013 11:54:12PM 1 point [-]

Good point. I was thinking of underdog and overdog as relative, binary terms -- in any contest, one of two dogs is the underdog, and the other is the overdog. If that's not the case, we can expect to see underdogs beating other underdogs, for instance, or an overdog being up against ten underdogs and losing to one of them.

Comment author: GLaDOS 16 January 2013 06:36:09PM *  2 points [-]

I consider this an uncharitable reading, I've read the article twice and I still understood him much as Konkvistador and Athrelon have.

Comment author: Oligopsony 16 January 2013 05:40:46PM *  4 points [-]

I suppose this is a hilariously obvious thing to say, but I wonder how much leftism Marcion Mugwump has actually read. We're completely honest about the whole power-seizing thing. It's not some secret truth.

(Okay, some non-Marxist traditions like anarchism have that whole "people vs. power" thing. But they're confused.)

Comment author: [deleted] 16 January 2013 07:11:47PM 1 point [-]

but I wonder how much leftism Marcion Mugwump has actually read.

Ehm... what?

I suppose this is a hilariously obvious thing to say

Yes but as a friend reminded me recently, saying obvious things can be necessary.

Comment author: [deleted] 16 January 2013 05:35:38PM 4 points [-]

The heuristic is great, but that article is horrible, even for Moldbug.

Comment author: TimS 16 January 2013 06:47:28PM 8 points [-]

I agree. For example:

"Civil disobedience" is no more than a way for the overdog to say to the underdog: I am so strong that you cannot enforce your "laws" upon me.

This statement is obviously true. But it sure would be useful to have a theory that predicted (or even explained) when a putative civil disobedience would and wouldn't work that way.

Obviously, willingness to use overwhelming violence usually defeats civil disobedience. But not every protest wins, and it is worth trying to figure out why - if for no other reason than figuring out whether we could win if we protested something.

Comment author: hairyfigment 29 January 2013 06:13:09PM -1 points [-]

is no more than

This statement is obviously true.

I see no way to interpret it that would make it true. Civil disobedience serves to provoke a response that will - alone among crises that we know about - decrease people's attitudes of obedience or submission to "traditional" authority. In the obvious Just-So Story, leaders who will use violence against people who pose no threat might also kill you.

We would expect this Gandhi trick to fail if the authorities get little-to-none of their power from the attitude in question. The nature of their response must matter as well. (Meanwhile, as you imply, I don't know how Moldbug wants us to detect strength. My first guess would be that he wants his chosen 'enemies' to appear strong so that he can play underdog.)

Comment author: TimS 29 January 2013 07:34:49PM 0 points [-]

I don't think we are disagreeing on substance. "Underdog" and similar labels are narrative labels, not predictive labels. I interpreted Moldbug as saying that treating narrative labels as predictive labels is likely to lead one to make mistaken predictions and / or engage in hindsight bias. This is a true statement, but not a particularly useful one - it's a good first step, but not a complete analysis.

Thus, to the extent that Moldbug treats the statement as a complete analysis, he is in error.

Comment author: shminux 16 January 2013 05:56:05PM 0 points [-]

How is it great? How would you use this "heuristic"?

Comment author: [deleted] 16 January 2013 06:02:37PM *  2 points [-]

I hadn't read your comment before I posted this. I assumed it meant what the terms usually mean, and lacked moldbuggerian semantics. In that sense, it would be a warning against rooting for the (usual) underdog, which is certainly a bias I've found myself wandering into in the past.

In retrospect I was somewhat silly for assuming Moldbug would use a word to mean what it actually means.

Comment author: [deleted] 16 January 2013 06:39:48PM 0 points [-]

I have read his comment and the article. Knowing Moldbug's style, I agree with GLaDOS on the interpretation. I may be wrong, in which case interpret the quote in line with my interpretation rather than the original meaning.

Comment author: RobertLumley 16 January 2013 12:53:23AM *  1 point [-]

“Our vision is inevitably contracted, and the whole horizon may contain much which will compose a very different picture.”

Cheney Bros v. Doris Silk Corporation, New York Circuit Court of Appeals, Second Circuit

Comment author: arborealhominid 16 January 2013 12:03:35AM 0 points [-]

Whenever I'm about to do something, I think, "Would an idiot do that?" And if they would, then I do not do that thing.

-- Dwight K. Schrute

Comment author: [deleted] 16 January 2013 12:56:00AM 4 points [-]

Reversed stupidity is not intelligence. Would an idiot drink when they're thirsty? Yes they would.

Comment author: arborealhominid 16 January 2013 02:47:29AM 1 point [-]

Extremely good point! I liked this quote because I thought it was a funny way to describe taking the outside view, but you're completely right that it advocates reversed stupidity (at least when taken completely literally).

Comment author: Desrtopa 16 January 2013 01:50:42AM 1 point [-]

Also a duplicate, about which I made roughly the same comment the first time it was posted.

Comment author: Baruta07 15 January 2013 03:06:25AM -2 points [-]

He shook his head. "No, for the purposes of this discussion, Asuka... only I have the power to decide humanity's fate. And I refuse that power to give it back to them. Humanity is made of neither heaven nor hell; that with freedom of choice and honor, as though the maker and molder of itself... that they may fashion themselves in whatever form they shall prefer. People, individuals, are not single things but always tip from order to chaos and back again. Those with order are needed for stability. Those who espouse chaos bring change. Only humanity may balance humanity. If a God should be needed, only that nothing from without should threaten that free choice. The maker should be the first servant, just as the mother would care without hesitation for her child."

-- Shinji & Warhammer Xover

Comment author: MugaSofer 15 January 2013 10:17:22AM -2 points [-]

How is this a rationality quote?

Also, I haven't read the text in question, but I for one would be very wary about letting Warhammer!humanity "fashion themselves in whatever form they shall prefer."

Comment author: foret 14 January 2013 09:52:52PM *  7 points [-]

In reference to Occam's razor:

"Of course giving an inductive bias a name does not justify it."

--from Machine Learning by Tom M. Mitchell

Interesting how a concept seems more believable if it has a name...

Comment author: robertskmiles 25 April 2013 05:09:10PM 0 points [-]

Or less. Sometimes an assumption is believed implicitly, and it's not until it has a name that you can examine it at all.

Comment author: simplicio 14 January 2013 08:45:04PM 1 point [-]

lacanthropy, n. The transformation, under the influence of the full moon, of a dubious psychological theory into a dubious social theory via a dubious linguistic theory.

(Source: Dennettations)

Comment author: MixedNuts 14 January 2013 08:53:25PM 1 point [-]

Is there a reason you're quoting this, or are you just being humeorous?

Comment author: simplicio 14 January 2013 09:38:04PM 0 points [-]

I thought it was quite Witty.

Comment author: cousin_it 14 January 2013 09:12:06AM *  -2 points [-]

Treating life as a hackable problem seems to be a recipe for despair.

-- Tloewald on HN, commenting on Aaron Swartz's suicide.

Comment author: simplicio 14 January 2013 06:29:27PM 4 points [-]

Any reason whatsoever to think that this particular characteristic contributed wholly or partly to Swartz' suicide, other than its being a known & salient fact about Swartz?

Comment author: cousin_it 14 January 2013 07:43:18PM *  4 points [-]

One of my favorite threads on HN is an analysis of David Foster Wallace's suicide in light of his famous "fish and water" speech. It's hard to summarize, but do read it, especially Cushman's comment. Then juxtapose it with the list at the end of this post from Aaron's "Raw Nerve" series, and you'll understand what I was getting at with the above quote. In short, treating life as a hackable problem seems to make people be deliberately harsh and "realistic" with themselves in order to cause change. That can make you unhappy (it happened to me), or if you're already predisposed to depression, that can make it much worse.

Comment author: simplicio 14 January 2013 08:19:28PM 2 points [-]

That makes more sense, thanks for the explanation. Still not entirely sure I buy the causal link, in either case.

Comment author: John_Maxwell_IV 14 January 2013 05:10:40AM 13 points [-]

I guess my point here is that part of the reason I stayed in Mormonism so long was that the people arguing against Mormonism were using such ridiculously bad arguments. I tried to find the most rigorous reasoning and the strongest research that opposed LDS theology, but the best they could come up with was stuff like horses in the Book of Mormon. It's so easy for a Latter-Day Saint to simply write the horse references off as either a slight mistranslation or a gap in current scientific knowledge that that kind of "evidence" wasn't worth the time of day to me. And for every horse problem there was something like Hugh Nibley's "Two Shots in the Dark" or Eugene England's work on Lehi's alleged travels across Saudi Arabia, apologetic works that made Mormon historical and theological claims look vaguely plausible. There were bright, thoughtful people on both sides of the Mormon apologetics divide, but the average IQ was definitely a couple of dozen points higher in the Mormon camp.

http://www.exmormon.org/whylft18.htm

Comment author: Bakkot 15 January 2013 07:55:10PM 5 points [-]

This is part of why it's important to fight against all bad arguments everywhere, not just bad arguments on the other side.

Comment author: John_Maxwell_IV 16 January 2013 12:09:30PM *  2 points [-]

Another interpretation: Try to figure out which side has more intelligent defenders and control for that when evaluating arguments. (On the other hand, the fact that all the smart people seem to believe X should probably be seen as evidence too...)

Yes, argument screens off authority, but that assumes that you're in a universe where it's possible to know everything and think of everything, I suspect. If one side is much more creative about coming up with clever arguments in support of itself (much better than you), who should you believe if the clever side also has all the best arguments?

Comment author: peuddO 01 February 2013 09:51:06PM *  0 points [-]

That's just not very correct. There are no external errors in measuring probability, seeing as the unit and measure come from internal processes. Errors in perceptions of reality and errors in evaluating the strength of an argument will invariably come from oneself, or alternatively from ambiguity in the argument itself (which would make it a worse argument anyway).

Intelligent people do make bad ideas seem more believable and stupid people do make good ideas seem less believable, but you can still expect the intelligent people to be right more often. Otherwise, what you're describing as intelligence... ain't. That doesn't mean you should believe something just because a smart person said it - just that you shouldn't believe it less.

It's going back to the entire reverse stupidity thing. Trying to make yourself unbiased by compensating in the opposite direction doesn't remove the bias - you're still adjusting from the baseline it's established.

On a similar note, I may just have given you an uncharitable reading and assumed you meant something you didn't. Such a misunderstanding won't adjust the truth of what I'm saying about what I'd be reading into your words, and it won't adjust the truth of what you were actually trying to say. Even if there's a bias on my part, it skews perception rather than reality.

Comment author: Wei_Dai 16 January 2013 01:15:21PM 2 points [-]

Another interpretation: Try to figure out which side has more intelligent defenders and control for that when evaluating arguments.

Isn't the real problem here that the author of the quote was asking the wrong question, namely "Mormonism or non-Mormon Christianity?" when he should have been asking "Theism or atheism?" I don't see how controlling for which side had the more intelligent defenders in the former debate would have helped him better get to the truth. (I mean that may well be the right thing to do in general, but this doesn't seem to be a very good example for illustrating it.)

Comment author: blashimov 30 January 2013 01:41:01AM 0 points [-]

That may be too much to ask for. Besides, if the horse evidence had worked, you'd be forced to turn around and apply it to Jesus...it may not have worked for her, but it has worked on some theists.

Comment author: [deleted] 13 January 2013 11:40:58PM 2 points [-]

If I could offer one piece of advice to young people thinking about their future, it would be this: Don't preconceive. Find out what the opportunities are.

--Thomas Sowell

Comment author: [deleted] 18 January 2013 04:50:43AM 1 point [-]

That's hopelessly vague. Advice is hard enough to absorb even if you understand it.

Comment author: [deleted] 13 January 2013 11:39:37PM 0 points [-]

Intellectuals may like to think of themselves as people who "speak truth to power" but too often they are people who speak lies to gain power.

--Thomas Sowell

Comment author: hairyfigment 29 January 2013 06:52:26PM 1 point [-]
Comment author: [deleted] 29 January 2013 08:03:12PM 2 points [-]

Thank you!

Comment author: simplicio 14 January 2013 07:39:11PM 0 points [-]

Seems like an implausible view of the motivations of said intellectuals. Otherwise, agreed.

Comment author: [deleted] 18 January 2013 04:54:00AM 1 point [-]

an implausible view of the motivations

The first clause was a statement about what they think, not really about motivations, and quite plausible anyway.

The second statement was about what they do. Related to "adaptation executors not fitness maximizers".

Comment author: RichardKennaway 13 January 2013 08:11:24PM 20 points [-]

"Just because you no longer believe a lie, does not mean you now know the truth."

Mark Atwood

Comment author: blashimov 13 January 2013 07:53:31AM *  11 points [-]

I have always had an animal fear of death, a fate I rank second only to having to sit through a rock concert. My wife tries to be consoling about mortality and assures me that death is a natural part of life, and that we all die sooner or later. Oddly this news, whispered into my ear at 3 a.m., causes me to leap screaming from the bed, snap on every light in the house and play my recording of “The Stars and Stripes Forever” at top volume till the sun comes up.

-Woody Allen EDIT: Fixed formatting.

Comment author: DaFranker 14 January 2013 03:13:06PM 1 point [-]

FWIW, it seems like whatever is parsing the markdown in these comments, whenever it sees a ">" for a quote at the beginning of a paragraph it'll keep reading until the next paragraph break, i.e. double-whitespace at the end of a line or two linebreaks.
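That greedy behavior can be sketched in a few lines of Python. This is a simplified model of the rule being described, not the site's actual parser: once a paragraph opens with ">", every line up to the next blank-line paragraph break gets absorbed into the quote.

```python
# Sketch of Markdown-style "lazy continuation" for blockquotes:
# a ">" at the start of a paragraph captures all following lines
# until a paragraph break (a blank line).

def blockquote_spans(text):
    """Return a list of (is_quote, paragraph_text) pairs."""
    spans = []
    for para in text.split("\n\n"):  # blank line ends a paragraph
        lines = para.split("\n")
        if lines[0].lstrip().startswith(">"):
            # every line of the paragraph joins the quote,
            # whether or not it carries its own ">" marker
            body = " ".join(l.lstrip().lstrip(">").strip() for l in lines)
            spans.append((True, body))
        else:
            spans.append((False, para))
    return spans

sample = "> quoted start\nstill quoted (lazy)\n\nnot quoted"
print(blockquote_spans(sample))
# → [(True, 'quoted start still quoted (lazy)'), (False, 'not quoted')]
```

So a line that merely follows a ">" line, with no blank line in between, ends up inside the quote — which is exactly the formatting surprise being described.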

Comment author: MugaSofer 13 January 2013 01:51:55PM *  0 points [-]

Formatting is broken. Great quote, though.

Comment author: ygert 14 January 2013 01:49:47PM *  1 point [-]

The irony of it... (Although your formatting is less broken, as your only mistake was missing out a single period.)

Comment author: MugaSofer 14 January 2013 02:51:49PM -1 points [-]
Comment author: SPLH 13 January 2013 07:34:40AM 6 points [-]

"De notre naissance à notre mort, nous sommes un cortège d’autres qui sont reliés par un fil ténu."

Jean Cocteau

("From our birth to our death, we are a procession of others whom a fine thread connects.")

Comment author: simplicio 13 January 2013 08:51:43PM *  1 point [-]
Comment author: Dorikka 13 January 2013 05:25:00AM 6 points [-]

"We are living on borrowed time and abiding by the law of probability, which is the only law we carefully observe. Had we done otherwise, we would now be dead heroes instead of surviving experts."

--Devil's Guard

Comment author: [deleted] 12 January 2013 08:52:37PM *  9 points [-]

Unfortunately, this is how the brain works:

-- Sir! We are receiving information that conflicts with the core belief system!

-- Get rid of it.

Beatrice the Biologist

Comment author: Endovior 11 January 2013 09:39:26PM -2 points [-]

Machines aren't capable of evil. Humans make them that way.

-Lucca, Chrono Trigger

Comment author: earthwormchuck163 11 January 2013 10:03:38PM 4 points [-]

That line always bugged me, even when I was a little kid. It seems obviously false (especially in the in-game context).

I don't understand why this is a rationality quote at all; am I missing something, or is it just because of the superficial similarity to some of EY's quotes about apathetic uFAIs?

Comment author: [deleted] 12 January 2013 11:40:50PM 2 points [-]

I'm not familiar with Chrono Trigger, but when I hear that sentiment in real life I take it to be a rebuttal to an argument against technology based on confusion between terminal and instrumental values. (Guns aren't intrinsically evil (i.e. there's no negative term in our utility function for how many guns exist in the world) even though they can be used to do evil, &c.)

Comment author: Qiaochu_Yuan 12 January 2013 11:58:19PM 1 point [-]

In Chrono Trigger this line is about a robot.

Comment author: Desrtopa 13 January 2013 12:41:15AM 0 points [-]
Comment author: Endovior 13 January 2013 01:26:31AM 4 points [-]

Well, that gets right to the heart of the Friendliness problem, now doesn't it? Mother Brain is the machine that can program, and she reprogrammed all the machines that 'do evil'. It is likely, then, that the first machine that Mother Brain reprogrammed was herself. If a machine is given the ability to reprogram itself, and uses that ability to make itself decide to do things that are 'evil', is the machine itself evil? Or does the fault lie with the programmer, for failing to take into account the possibility that the machine might change its utility function? It's easy to blame Mother Brain; she's a major antagonist in her timeline. It's less easy to think back to some nameless programmer behind the scenes, considering the problem of coding an intelligent machine, and deciding how much freedom to give it in making its own decisions.

In my view, Lucca is taking personal responsibility with that line. 'Machines aren't capable of evil', (they can't choose to do anything outside their programming). 'Humans make them that way', (so the programmer has the responsibility of ensuring their actions are moral). There are other interpretations, but I'd be wary of any view that shifts moral responsibility to the machine. If you, as a programmer, give up any of your moral responsibility to your program, then you're basically trying to absolve yourself of the consequences if anything goes wrong. "I gave my creation the capacity to choose. Is it my fault if it chose evil?" Yes, yes it is.

Comment author: Qiaochu_Yuan 11 January 2013 09:56:03PM 10 points [-]

Eh. Would you say that "humans aren't capable of evil. Evolution makes them that way"?

Comment author: [deleted] 12 January 2013 11:24:47PM 0 points [-]

But humans are sapient and have rebelled against evolution, whereas machines aren't sapient and just do what humans tell them.

The problem is that this may change in the future, and a blog sponsored by the organization that's trying to prevent exactly that is probably not the right place to post such a quote.

Comment author: Endovior 13 January 2013 12:52:40AM 0 points [-]

My point in posting it was that UFAI isn't 'evil', it's badly programmed. If an AI proves itself unfriendly and does something bad, the fault lies with the programmer.

Comment author: Eliezer_Yudkowsky 12 January 2013 04:42:56PM 11 points [-]

I might, if I was a god talking to other gods. And if I was a gun talking to other guns, I'd tell them to shut up about humans and take responsibility for their own bullets.

Comment author: BerryPick6 12 January 2013 05:06:56PM *  3 points [-]

Would you say that "humans aren't capable of evil. Evolution makes them that way"?

-

I might, if I was a god talking to other gods.

I feel like a strange loop is now formed when humans say things like: "God isn't capable of evil. Our definition makes him that way."

Comment author: Kawoomba 11 January 2013 10:10:20PM 2 points [-]

"Evolution isn't capable of evil. Time made it that way."

Zugzwang: your turn!

Comment author: KnaveOfAllTrades 13 January 2013 08:41:18AM -1 points [-]
Comment author: [deleted] 12 January 2013 11:34:11PM *  0 points [-]

But evolution isn't sapient and... Or isn't it?

[Edited to remove potentially mind-killing example.]

Comment author: benelliott 12 January 2013 09:36:18AM *  3 points [-]

"Time isn't capable of evil, it's not even an optimization process."

Comment author: [deleted] 12 January 2013 07:15:45PM 2 points [-]

[Time i]s not even an optimization process.

Yes it is. Its optimizand is entropy.

Epistemic status: interesting idea I think I've heard somewhere. Don't dare to ask me if I believe it myself or I will ask you to taboo words and you don't want to do that.

Comment author: [deleted] 11 January 2013 10:58:07PM 5 points [-]

"DO NOT MESS WITH TIME"

Comment author: airandfingers 11 January 2013 09:04:10PM *  5 points [-]

Most things that we and the people around us do constantly... have come to seem so natural and inevitable that merely to pose the question, 'Why are we doing this?' can strike us as perplexing - and also, perhaps, a little unsettling. On general principle, it is a good idea to challenge ourselves in this way about anything we have come to take for granted; the more habitual, the more valuable this line of inquiry.

-Alfie Kohn, "Punished By Rewards"

Comment author: Alicorn 11 January 2013 03:08:53AM 21 points [-]

He tells her that the earth is flat -
He knows the facts, and that is that.
In altercations fierce and long
She tries her best to prove him wrong.
But he has learned to argue well.
He calls her arguments unsound
And often asks her not to yell.
She cannot win. He stands his ground.
The planet goes on being round.

--Wendy Cope, He Tells Her from the series ‘Differences of Opinion’

Comment author: Vaniver 11 January 2013 01:32:09AM 22 points [-]

Some may think these trifling matters not worth minding or relating; but when they consider that tho' dust blown into the eyes of a single person, or into a single shop on a windy day, is but of small importance, yet the great number of the instances in a populous city, and its frequent repetitions give it weight and consequence, perhaps they will not censure very severely those who bestow some attention to affairs of this seemingly low nature. Human felicity is produc'd not so much by great pieces of good fortune that seldom happen, as by little advantages that occur every day.

--Benjamin Franklin

Comment author: [deleted] 10 January 2013 08:57:13PM 6 points [-]

The generation of random numbers is too important to be left to chance.

--Robert R. Coveyou, Oak Ridge National Laboratory

Comment author: GLaDOS 10 January 2013 07:10:30PM *  12 points [-]

While truths last forever, taboos against them can last for centuries.

--"Sid" a commenter from HalfSigma's blog

Comment author: Jay_Schweikert 09 January 2013 04:39:07PM 8 points [-]

Suppose you've been surreptitiously doing me good deeds for months. If I "thank my lucky stars" when it is really you I should be thanking, it would misrepresent the situation to say that I believe in you and am grateful to you. Maybe I am a fool to say in my heart that it is only my lucky stars that I should thank—saying, in other words, that there is nobody to thank—but that is what I believe; there is no intentional object in this case to be identified as you.

Suppose instead that I was convinced that I did have a secret helper but that it wasn't you—it was Cameron Diaz. As I penned my thank-you notes to her, and thought lovingly about her, and marveled at her generosity to me, it would surely be misleading to say that you were the object of my gratitude, even though you were in fact the one who did the deeds that I am so grateful for. And then suppose I gradually began to suspect that I had been ignorant and mistaken, and eventually came to the correct realization that you were indeed the proper recipient of my gratitude. Wouldn't it be strange for me to put it this way: "Now I understand: you are Cameron Diaz!"

--Daniel Dennett, Breaking the Spell (discussing the differences between the "intentional object" of a belief and the thing-in-the-world inspiring that belief)

Comment author: MugaSofer 10 January 2013 08:47:51AM 0 points [-]

He's talking about God here, right?

Comment author: Jay_Schweikert 10 January 2013 03:04:06PM 0 points [-]

In large part, yes. This passage is in Dennett's chapter on "Belief in Belief," and he has an aside on the next page describing how to "turn an atheist into a theist by just fooling around with words" -- namely, that "if 'God' were just the name of whatever it is that produced all creatures great and small, then God might turn out to be the process of evolution by natural selection."

But I think there's also a more general rationality point about keeping track of the map-territory distinction when it comes to abstract concepts, and about ensuring that we're not confusing ourselves or others by how we use words.

Comment author: MugaSofer 11 January 2013 01:18:43PM -2 points [-]

That's what I thought. Thanks for explaining.

Comment author: DaFranker 10 January 2013 03:30:39PM 0 points [-]

Besides, none of it passes an ideological turing test with an overwhelming majority of God-believers. I tried it.

Comment author: Jay_Schweikert 10 January 2013 03:53:33PM 0 points [-]

Sorry, can you clarify what you mean here? None of what passes an ideological turing test? Are you saying something like "theists erroneously conclude that the proponents of evolution must believe in God because evolutionists believe that evolution is what produced all creatures great and small"? What exactly is the mistake that theists make on this point that would lead them to fail the ideological turing test?

Or, did I misunderstand you, and are you saying that people like Dennett fail the ideological turing test with theists?

Comment author: DaFranker 10 January 2013 04:44:36PM *  2 points [-]

Oh, sorry.

"if 'God' were just the name of whatever it is that produced all creatures great and small, then God might turn out to be the process of evolution by natural selection."

This, specifically, almost never passes an i-turing test IME. I've been called a "sick scientologist" (I assume they didn't know what "Scientology" really is) on account of the claim that if there is a "God", it's the process by which evolution or physics happens to work in our world.

Likewise, if I understand what Dennett is saying correctly, the things he's saying are not accepted by God-believers, namely that God could be any sort of metaphor or anthropomorphic representation of natural processes, or of the universe and its inner workings, or "fate" in the sense that "fate" and "free will" are generally understood (i.e. the dissolved explanation) by LWers, or some unknown abstract Great Arbiter of Chance and Probability.

(I piled in some of my own attempts in there, but all of the above was rejected time and time again in discussion with untrained theists, down to a single exception who converted to a theology-science hybrid later on and then, last I heard, doesn't really care about theological issues anymore because they seem to have realized that it makes no difference and intuitively dissolved their questions. Discussions with people who have thoroughly studied formal theology usually fare slightly better, but they also have a much larger castle of anti-epistemology to break down.)

Comment author: ikrase 11 January 2013 10:47:11PM 0 points [-]

Before I became a rationalist, I believed that there was no god, but there were souls, and they manifested through making quantum randomness nonrandom.

Comment author: [deleted] 11 January 2013 11:18:51PM 0 points [-]

Which part(s) of that set of beliefs did becoming a rationalist cause you to change?

Comment author: ikrase 12 January 2013 02:04:14AM 0 points [-]

This was long before Less Wrong.

I realized that lower-level discussions of free will were kind of pointless. I abandoned the eternal-springing hope that souls and psychic powers (Hey! Look! For some reason, all of the air molecules just happened to be moving upward at the same time! It seems like this guy is a magnet for one in 10^^^^^^^10 thermodynamic occurrences! And they always help him!) could exist. I fully accepted the physical universe.

Comment author: somervta 01 February 2013 05:17:57AM *  1 point [-]

I get the concept of hyperbole, but this:

It seems like this guy is a magnet for one in 10^^^^^^^10 thermodynamic occurrences!

is taking it ludicrously far.

Comment author: MugaSofer 11 January 2013 01:21:34PM 0 points [-]

I've been called a "sick scientologist" (I assume they didn't know what "Scientology" really is) on account of the claim that if there is a "God", it's the process by which evolution or physics happens to work in our world.

Holy cow, you've tried this? Were you dealing with creationists?

Comment author: DaFranker 11 January 2013 02:39:33PM *  4 points [-]

I assume a significant amount of them were. I also tried subtly using God/Fate, God/Freewill, God/Physics, God/Universe, God/ConsciousMultiverse, and God/Chance, as interchangeable "redefinitions" (of course, on different samples each time) and was similarly called on it.

Incidentally, I can't confirm if this suggests a pattern (it probably does), but in one church I tried, for fun, combining all of them and just conflating all the meanings of all the above into "God", and then sometimes using the specific terms and/or God interchangeably when discussing a specific subset or idea. The more confused I made it, the more people I convinced and got to engage in self-reinforcing positive-affect dialogue. So if this was the only evidence available, I'd be forced to tentatively conclude: The more confused your usage of "God" is, the more it matches the religious usage, and the more it passes ideological turing tests!

(Spoiler: It does. The rest of my evidence confirms this.)

Comment author: TheOtherDave 11 January 2013 06:02:56PM 5 points [-]

This ought not be surprising. The more confused a concept is, the more freedom my audience has to understand it to mean whatever suits their purposes. In some audiences, this means it gets criticized more. In others, it gets accepted more uncritically.

Comment author: BerryPick6 11 January 2013 03:17:19PM 0 points [-]

This reminds me a lot of Spinoza's proof of God in Ethics, although I recognize that is probably partially due to personal biases of mine.

Comment author: MugaSofer 11 January 2013 03:07:03PM -1 points [-]

Hmm. I'm pretty sure that if I renamed some other confused idea "God" it wouldn't work so well. Or do you mean confusing?

I assume a significant amount of them were.

On its own, that sounds like your assumption is based on the fact that they were religious, which is on the face of it absurd, so I'm guessing you have some evidence you declined to mention.

Incidentally, where does the term "ideological turing test" come from? I've never heard it before.

Comment author: Watercressed 13 January 2013 05:29:24AM *  2 points [-]

The term was coined by Bryan Caplan here.

Comment author: DaFranker 11 January 2013 03:22:58PM *  1 point [-]

Hmm. I'm pretty sure that if I renamed some other confused idea "God" it wouldn't work so well. Or do you mean confusing?

Yes, sorry. I was using the term "confused" in a slightly different manner from the one LWers are used to, and "confusing" fits better. Basically, "meaninglessly mysterious and deep-sounding" would be the more LW-friendly description, I think.

On its own, that sounds like your assumption is based on the fact that they were religious, which is on the face of it absurd, so I'm guessing you have some evidence you declined to mention.

Ah, yes. Mostly the conversations and responses I got themselves gave me very strong impressions of creationism, and also some (rather unreliable, but still sufficient Bayesian evidence) small-scale, local, privately-funded survey statistics about religion and beliefs.

To top that, most of the religious places and forums/websites I was visiting were found partially through the help of my at-the-time-girlfriend, whose family was very religious (and dogmatic) and creationist, so I suspect there probably was some effect there. I don't count this, though, because that would be double-counting (it's overridden by the "conversations with people" evidence).

Incidentally, where does the term "ideological turing test" come from? I've never heard it before.

No clue. I first saw it on LessWrong, and I think someone linked me to a wiki page about it when I asked what it meant, but I can't remember or find that instance.

Comment author: Jay_Schweikert 10 January 2013 05:16:22PM 0 points [-]

Ah, okay, thanks for clarifying. In case my initial reply to MugaSofer was misleading, Dennett doesn't really seem to be suggesting here that this is really what most theists believe, or that many theists would try to convert atheists with this tactic. It's more just a tongue-in-cheek example of what happens when you lose track of what concept a particular group of syllables is supposed to point at.

But I think there are a great many people who purport to believe in "God," whose concept of God really is quite close to something like the "anthropomorphic representation of natural processes, or of the universe and its inner workings." Probably not for those who identify with a particular religion, but most of the "spiritual but not religious" types seem to have something like this in mind. Indeed, I've had quite a few conversations where it became clear that someone couldn't tell me the difference between a universe where "God exists" and where "God doesn't exist."

Comment author: MugaSofer 11 January 2013 01:20:40PM -1 points [-]

In case my initial reply to MugaSofer was misleading, Dennett doesn't really seem to be suggesting here that this is really what most theists believe, or that many theists would try to convert atheists with this tactic.

For the record, I didn't interpret your comment that way.