Theism, Wednesday, and Not Being Adopted

56 Alicorn 27 April 2009 04:49PM

(Disclaimer: This post is sympathetic to a certain subset of theists.  I am not myself a theist, nor have I ever been one.  I do not intend to justify all varieties of theism, nor do I intend to justify much in the way of common theistic behavior.)

I'm not adopted.  You all believe me, right?  How do you think I came by this information, that you're confident in my statement?  The obvious and correct answer is that my parents told me so1.  Why do I believe them?  Well, they would be in a position to know the answer, and they have been generally honest and sincere in their statements to me.  A false belief on the subject could be hazardous to me, if I report inaccurate family history to physicians, and I believe that my parents have my safety in mind.  I know of the existence of adopted people; the possibility isn't completely absent from my mind - but I believe quite confidently that I am not among those people, because my parents say otherwise.
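
The reasoning here is an ordinary Bayesian update on testimony. Below is a minimal sketch in Python; every number in it is an illustrative assumption, not a statistic from the post:

    # Minimal Bayesian sketch of the adoption example.
    # All numbers are illustrative assumptions, not data from the post.

    prior_adopted = 0.02          # assumed base rate of being adopted
    p_deny_if_adopted = 0.05      # honest parents rarely deny a real adoption
    p_deny_if_not = 0.999         # non-adoptive parents almost always say "not adopted"

    # Total probability that the parents say "you're not adopted"
    p_deny = (p_deny_if_adopted * prior_adopted
              + p_deny_if_not * (1 - prior_adopted))

    # Posterior probability of being adopted, given that they deny it
    posterior = p_deny_if_adopted * prior_adopted / p_deny
    print(f"P(adopted | parents deny it) = {posterior:.4f}")  # about 0.001

Under these assumed numbers, the parents' say-so drives the posterior well below the base rate, which is why the confident belief is reasonable.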

continue reading »

Two-Tier Rationalism

40 Alicorn 17 April 2009 07:44PM

Related to: Bayesians vs. Barbarians

Consequentialism1 is a catchall term for a vast number of specific ethical theories, the common thread of which is that they take goodness (usually of a state of affairs) to be the determining factor of rightness (usually of an action).  One family of consequentialisms that came to mind when it was suggested that I post about my Weird Forms of Utilitarianism class is called "Two-Tier Consequentialism", which I think can be made to connect interestingly to our rationality goals on Less Wrong.  Here's a summary of two-tier consequentialism2.

(Some form of) consequentialism is correct and yields the right answer about what people ought to do.  But (this form of) consequentialism has many bad features:

  • It is unimplementable (because to use it correctly requires more calculation than anyone has time to do based on more information than anyone has time to gather and use).
  • It is "alienating" (because people trying to obey consequentialistic dictates find them very unlike the sorts of moral motivations they usually have, like "I want to be a nice person" or "so-and-so is my friend")3.
  • It is "integrity-busting" (because it can force you to consider alternatives that are unthinkably horrifying, if there is the possibility that they might lead to the "best" consequences).
  • It is "virtue-busting" (because it too often requires a deviation from a pattern of behavior that we consider to be an expression of good personal qualities that we would naturally hope and expect from good people).
  • It is prone to self-serving abuse (because it's easy, when calculating utilities, to "cook the books" and wind up with the outcome you already wanted being the "best" outcome).
  • It is "cooperation-busting" (because individuals don't tend to have an incentive to avoid free-riding when their own participation in a cooperative activity will neither make nor break the collective good).


To solve these problems, some consequentialist ethicists (my class focused on Railton and Hare) invented "two-tier consequentialism".  The basic idea is that because of all of these bad features of (pick your favorite kind of) consequentialism, being a consequentialist has bad consequences, and therefore you shouldn't do it.  Instead, you should layer on top of your consequentialist thinking a second tier of moral principles called your "Practically Ideal Moral Code", which ought to have the following more convenient properties:

continue reading »

Practical rationality questionnaire

15 AnnaSalamon 16 April 2009 11:21PM

EDIT, 4/18:  I'm closing the survey.  I'll post analysis and a better anonymized version of the raw data in a day or so.  236 people responded; thanks very much to all who did.

For survey participants curious about the calibration questions, the answers are:

Number of republics the USSR broke up into, following the end of the cold war: 15.

The year in which the global population reached 1 billion: 1804.

The average percentage of a watermelon's weight that comes from water: 92.


continue reading »

Extreme Rationality: It's Not That Great

140 Yvain 09 April 2009 02:44AM

Related to: Individual Rationality is a Matter of Life and Death, The Benefits of Rationality, Rationality is Systematized Winning
But I finally snapped after reading: Mandatory Secret Identities

Okay, the title was for shock value. Rationality is pretty great. Just not quite as great as everyone here seems to think it is.

For this post, I will be using "extreme rationality" or "x-rationality" in the sense of "techniques and theories from Overcoming Bias, Less Wrong, or similar deliberate formal rationality study programs, above and beyond the standard level of rationality possessed by an intelligent science-literate person without formal rationalist training." It seems pretty uncontroversial that there are massive benefits from going from a completely irrational moron to the average intelligent person's level. I'm coining this new term so there's no temptation to confuse x-rationality with normal, lower-level rationality.

And for this post, I use "benefits" or "practical benefits" to mean anything not relating to philosophy, truth, winning debates, or a sense of personal satisfaction from understanding things better. Money, status, popularity, and scientific discovery all count.

So, what are these "benefits" of "x-rationality"?

A while back, Vladimir Nesov asked exactly that, and made a thread for people to list all of the positive effects x-rationality had on their lives. Only a handful responded, and most responses weren't very practical. Anna Salamon, one of the few people to give a really impressive list of benefits, wrote:

I'm surprised there are so few apparent gains listed. Are most people who benefited just being silent? We should expect a certain number of headache-cures, etc., just by placebo effects or coincidences of timing.

There have since been a few more people claiming practical benefits from x-rationality, but we should generally expect more people to claim benefits than to actually experience them. Anna mentions the placebo effect, and to that I would add cognitive dissonance - people spent all this time learning x-rationality, so it MUST have helped them! - and the same sort of confirmation bias that makes Christians swear that their prayers really work.
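
A toy calculation makes the base-rate point concrete; every number below is an assumption for illustration, not data from Vladimir's thread:

    # Toy base-rate sketch: all numbers are illustrative assumptions.
    # Even if x-rationality had zero real effect, some readers would still
    # report practical benefits via placebo, timing coincidences, or
    # confirmation bias.

    n_readers = 1000            # assumed number of people who tried the techniques
    p_false_positive = 0.03     # assumed chance each one misattributes a benefit

    expected_spurious_reports = n_readers * p_false_positive
    print(expected_spurious_reports)  # 30 glowing reports, no real effect needed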

I find my personal experience in accord with the evidence from Vladimir's thread. I've gotten countless clarity-of-mind benefits from Overcoming Bias' x-rationality, but practical benefits? Aside from some peripheral disciplines1, I can't think of any.

Looking over history, I do not find any tendency for successful people to have made a formal study of x-rationality. This isn't entirely fair, because the discipline has expanded vastly over the past fifty years, but the basics - syllogisms, fallacies, and the like - have been around much longer. The few groups who made a concerted effort to study x-rationality didn't shoot off an unusual number of geniuses - the Korzybskians are a good example. In fact, as far as I know, the only follower of Korzybski to turn his ideas into a vast personal empire of fame and fortune was (ironically!) L. Ron Hubbard, who took the basic concept of techniques to purge confusions from the mind, replaced the substance with a bunch of attractive flim-flam, and founded Scientology. And like Hubbard's superstar followers, many of this century's most successful people have been notably irrational.

There seems to me to be approximately zero empirical evidence that x-rationality has a large effect on your practical success, and some anecdotal empirical evidence against it. The evidence in favor of the proposition right now seems to be its sheer obviousness. Rationality is the study of knowing the truth and making good decisions. How the heck could knowing more than everyone else and making better decisions than them not make you more successful?!?

This is a difficult question, but I think it has an answer. A complex, multifactorial answer, but an answer.

continue reading »

Whining-Based Communities

59 Eliezer_Yudkowsky 07 April 2009 08:31PM

Previously in series: Selecting Rationalist Groups
Followup to: Rationality is Systematized Winning, Extenuating Circumstances

Why emphasize the connection between rationality and winning?  Well... that is what decision theory is for.  But also to place a Go stone to block becoming a whining-based community.

Let's be fair to Ayn Rand:  There were legitimate messages in Atlas Shrugged that many readers had never heard before, and this lent the book a part of its compelling power over them.  The message that it's all right to excel—that it's okay to be, not just good, but better than others—of this the Competitive Conspiracy would approve.

But this is only part of Rand's message, and the other part is the poison pill, a deadlier appeal:  It's those looters who don't approve of excellence who are keeping you down.  Surely you would be rich and famous and high-status like you deserve if not for them, those unappreciative bastards and their conspiracy of mediocrity.

If you consider the reasonableness-based conception of rationality rather than the winning-based conception of rationality—well, you can easily imagine some community of people congratulating themselves on how reasonable they were, while blaming the surrounding unreasonable society for keeping them down.  Wrapping themselves up in their own bitterness for reality refusing to comply with the greatness they thought they should have.

But this is not how decision theory works—the "rational" strategy adapts to the other players' strategies, it does not depend on the other players being rational.  If a rational agent believes the other players are irrational then it takes that expectation into account in maximizing expected utility.  Van Vogt got this one right: his rationalist protagonists are formidable from accepting reality swiftly and adapting to it swiftly, without reluctance or attachment.
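
A minimal sketch of that idea, assuming a toy payoff matrix and a toy belief about the other player (none of these numbers come from the post):

    # Sketch: pick the expected-utility-maximizing action against a
    # *modeled* opponent, whether or not that opponent is rational.
    # The game and the belief are illustrative assumptions.

    payoffs = {                     # payoffs[my_action][their_action] -> my utility
        "cooperate": {"cooperate": 3, "defect": 0},
        "defect":    {"cooperate": 5, "defect": 1},
    }

    # Assumed belief: the other player irrationally cooperates 80% of the time
    belief = {"cooperate": 0.8, "defect": 0.2}

    def best_response(payoffs, belief):
        """Return the action maximizing expected utility under `belief`."""
        def expected_utility(action):
            return sum(p * payoffs[action][theirs] for theirs, p in belief.items())
        return max(payoffs, key=expected_utility)

    print(best_response(payoffs, belief))  # "defect" under these assumed numbers

The point of the sketch is that the strategy falls out of the belief about the opponent, not out of any assumption that the opponent is rational.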

continue reading »

Accuracy Versus Winning

12 John_Maxwell_IV 02 April 2009 04:47AM

Consider the problem of an agent who is offered a chance to improve their epistemic rationality for a price.  What is such an agent's optimal strategy?

A complete answer to this problem would involve a mathematical model to estimate the expected increase in utility associated with having more correct beliefs.  I don't have a complete answer, but I'm pretty sure about one thing: From an instrumental rationalist's point of view, to always accept or always refuse such offers is downright irrational.
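
One toy version of such a model, with an assumed betting setup, assumed probabilities, and an assumed price, purely for illustration:

    # Toy value-of-accuracy model: an agent repeatedly bets on a binary
    # event, and "buying accuracy" moves its subjective probability closer
    # to the true frequency. All numbers are illustrative assumptions.

    def expected_winnings(true_p, believed_p, stake=1.0, n_bets=100):
        """Expected profit from n even-money bets: bet 'yes' iff believed_p > 0.5."""
        bet_yes = believed_p > 0.5
        p_win = true_p if bet_yes else 1 - true_p
        return n_bets * stake * (2 * p_win - 1)

    true_p = 0.55           # assumed true frequency of the event
    current_belief = 0.45   # miscalibrated: bets the wrong side
    improved_belief = 0.55  # what the offered accuracy upgrade would give
    price = 5.0             # assumed price of the upgrade

    gain = (expected_winnings(true_p, improved_belief)
            - expected_winnings(true_p, current_belief))
    print(f"expected gain from accuracy: {gain:.1f}; accept iff it exceeds {price}")

Under these assumptions the offer is worth taking; shrink the stakes or the miscalibration and it stops being worth the price, which is the post's point that a blanket accept-or-refuse policy can't be optimal.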

And now for the kicker: You might be such an agent.

continue reading »
