Comment author: orthonormal 13 April 2009 09:46:19PM 1 point [-]

Exactly: the special case where the conditional probabilities are (practically) 0 or 1.

Comment author: robzahra 13 April 2009 10:07:16PM 0 points [-]

yes, exactly

Comment author: robzahra 13 April 2009 08:37:52PM *  0 points [-]

Seconding timtyler and GuySrinivasan: I think, but can't prove, that you need an induction principle to reach the anti-religion conclusion. See especially Occam's Razor and Inductive Bias. If someone wants to bullet-point the reasons to accept an induction principle, that would be useful; maybe I'll take a stab later. It ties into Solomonoff induction, among other things.

EDIT: I've put some bullet points below that state the case for induction to the best of my knowledge.

Comment author: robzahra 13 April 2009 09:01:50PM *  22 points [-]

Why to accept an inductive principle:

  1. Finite agents have to accept an "inductive-ish" principle: they can't even enumerate the infinitely many consistent theories whose descriptions are longer than the number of computation steps available to them, so most long theories are never directly considered at all. Zooming out to the macro view, this is extremely inductive-ish, though it doesn't decide between two fairly short theories, such as Christianity versus string theory.

  2. Probabilities over all your hypotheses have to sum to 1, and each extra bit of information lets you rule out approximately half of the remaining consistent theories; therefore, your probability of a theory one bit longer being true ought to drop by that ratio. If your language is binary, this has the nice property that you can assign a 1-bit hypothesis a probability of 1/2, a 2-bit hypothesis a probability of 1/4, ..., an n-bit hypothesis a probability of 1/(2^n), and you notice that 1/2 + 1/4 + 1/8 + ... = 1 in the limit. So the scheme fits pretty naturally.

  3. Under various assumptions, an agent using this induction assumption does only a constant factor worse than it would under any other method, which makes the assumption seem not merely non-arbitrary but arguably "universal".

  4. Ultimately, we could be wrong: our universe may not actually obey the Occam prior. It appears we don't, and can't even in principle, have a complete response to religionists who use solipsistic arguments. For example, a demon could be making these bullet points seem reasonable to your brain while they are in fact entirely untrue. However, this does not appear to be a good reason not to use Occam's razor.

  5. Related to (2): you can't assign each of the infinitely many theories consistent with your data an equal probability greater than 0 and still have the sum converge to 1 (because for any rational number R > 0, the sum of infinitely many R's diverges). So you have to discount some hypotheses relative to others, and induction looks to be the simplest way to do this (one could say of the previous sentence, "meta-Occam's razor supports Occam's razor"). The burden of proof is on the religionist to propose a plausible alternative mapping, since the Occam mapping appears to satisfy the fairly stringent desiderata.

  6. Further to (5): to get the probability sum to converge to 1 while also assigning each of the infinitely many consistent hypotheses a probability greater than 0, most hypotheses must receive smaller probability than any fixed rational number. In fact you need more than that: the probabilities must drop off quite fast, since even 1/2 + 1/3 + 1/4 + ... diverges. You COULD swap two theories' probability assignments in particular instances (for example, arbitrarily rank Christianity above string theory, even though Christianity is the longer theory), but for most theories you MUST drive the probability toward 0 relatively fast as length increases to maintain the desiderata at all. Swapping probabilities only for the particular theories you care about, while you still need and want to use the framework on every other problem (including the normal "common sense" intuitions it explains very well), and while you use it generally on this problem too except for a few counter-examples you explicitly hard-code, seems incredibly contrived. You're better off just going with Occam's razor, unless some better alternative can be proposed.
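The convergence claims in points (2), (5), and (6) can be checked numerically. A minimal sketch (the one-hypothesis-per-length counting and the cutoffs are illustrative simplifications, not part of the original argument):

```python
from fractions import Fraction

# Point (2): one hypothesis per length n, each with prior 2^-n.
# The partial sums 1/2 + 1/4 + 1/8 + ... approach 1, so this
# assignment normalizes over infinitely many hypotheses.
geometric = sum(Fraction(1, 2**n) for n in range(1, 51))
assert geometric == 1 - Fraction(1, 2**50)  # exactly 1 in the limit

# Points (5)/(6): even a slow 1/n falloff (let alone equal positive
# weights) diverges, so it can never be normalized to sum to 1.
harmonic = sum(1 / n for n in range(1, 10_001))
print(float(geometric), harmonic)  # geometric ~ 1.0; harmonic keeps growing
```

The harmonic partial sum grows like ln(n), so no matter how far you extend it, it passes any bound; only a falloff at least roughly geometric can serve as a normalizable prior over all hypotheses.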

Rob Zahra

Comment author: timtyler 13 April 2009 07:16:47PM -1 points [-]

Rationality is surely bigger than Bayes, since it includes deductive reasoning.

Comment author: robzahra 13 April 2009 08:46:43PM *  1 point [-]

This can be viewed the other way around: deductive reasoning is a special case of Bayes.
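A toy illustration of the point (the hypothesis and the numbers are invented for the example): when the likelihoods are exactly 0 or 1, a Bayesian update reproduces deductive elimination, e.g. modus tollens.

```python
# Hypothetical setup: H = "it rained last night". Deduction supplies the
# certain conditional: if H, then the grass is wet, i.e. P(wet | H) = 1,
# equivalently P(dry | H) = 0.

def bayes_update(prior_h, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) via Bayes' theorem for a binary hypothesis."""
    joint_h = prior_h * p_e_given_h
    joint_not_h = (1 - prior_h) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Observe E = "the grass is dry". The likelihood of 0 under H eliminates
# H outright, no matter how high the prior was -- exactly modus tollens.
posterior = bayes_update(prior_h=0.9, p_e_given_h=0.0, p_e_given_not_h=0.6)
print(posterior)  # -> 0.0
```

With likelihoods strictly between 0 and 1 the same update merely shifts probability instead of eliminating it, which is the sense in which deduction is the limiting special case.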

Comment author: GuySrinivasan 13 April 2009 08:11:49PM 0 points [-]

Related: rationality includes using Occam's Razor. Exactly which Razor we employ is in part determined empirically. If properties of your implied Razor are at odds with properties of empirically derived Razors, that may indicate a lack of rationality.

Metaphysical beliefs are still subject to the Razor. Right?

Comment author: AlexU 13 April 2009 02:36:18PM *  3 points [-]

I'm certainly not against using chunked concepts on here per se. But I think associating this community too closely with sci-fi/fantasy tropes could have deleterious consequences in the long run, in terms of attracting diverse viewpoints and selling the ideas to people who aren't already predisposed to buy them. If Eliezer really wanted to proselytize by poeticizing, he should turn LW into the most hyper-rational, successful PUA community on the Internet, rather than the Star Wars-esque roleplaying game it seems to want to become.

Comment author: robzahra 13 April 2009 02:41:47PM *  1 point [-]

Yes, what to call the chunk is a separate issue. I at least partially agree with you, but I'd want to hear what others have to say. The recent debate over the tone of the Twelve Virtues seems relevant.

Comment author: AlexU 13 April 2009 02:08:41PM *  3 points [-]

What the hell are the "dark arts"? Could we quit playing super-secret dress-up society around here for one day and just speak in plain English, using terms with known meanings?

Comment author: robzahra 13 April 2009 02:23:21PM *  3 points [-]

This is the Dark Side root link. In my opinion it's a useful chunked concept, though maybe people should hyperlink there when they use the term, to make it accessible to people who haven't read every post. At the very least, the FAQ builders should add this, if it's not there already.

Comment author: robzahra 12 April 2009 11:46:00PM *  5 points [-]

Some examples of what I think you're looking for:

  1. Vassar's proposed shift from saying "this is the best thing you can do" to "this is a cool thing you can do" because people's psychologies respond better to this
  2. Operant conditioning in general
  3. Generally, create a model of the other person, then use standard rationality to explore how to most efficiently change them. Obviously, the Less Wrong and Overcoming Bias knowledge base is very relevant for this.
Comment author: robzahra 12 April 2009 10:30:56PM *  7 points [-]

I mostly agree with your practical conclusion; however, I don't see purchasing fuzzies and utilons separately as an instance of irrationality per se. As a rationalist, you should model the inside of your brain accurately and admit that some things you would like to do may actually be beyond your control to carry out. Purchasing fuzzies would then be rational for agents with certain types of brains. "Oh well, nobody's perfect" is not the right reason to purchase fuzzies; rather, upon reflection, this appears to be the best way for you to maximize utilons long-term. Maybe this is only a language difference (you tell me), but I think it might be more than that.

Comment author: Eliezer_Yudkowsky 07 April 2009 03:03:23AM 1 point [-]

There's also the consideration of total time expenditures on my part. Since the main reason I don't respond at length to Goetz is his repeated behaviors that force me to expend large amounts of time or suffer penalties, elaborate time-consuming courtesies aren't a solution either.

Comment author: robzahra 07 April 2009 03:12:07PM 0 points [-]

Agreed

Comment author: PhilGoetz 06 April 2009 11:43:21PM 3 points [-]

You're probably right. But I'm still irritated that instead of EY saying, "I didn't say exactly what I meant", he is sticking to "Phil is stupid."

Comment author: robzahra 07 April 2009 02:19:16AM *  1 point [-]

If a gun were put to my head and I had to decide right now, I'd share your irritation. However, he did make an interesting point about public disrespect as a means of deterrence, which deserves more thought. If that method looks promising after further inspection, we'd probably want to reconsider its application to this situation, though it's still unclear to me to what extent it applies in this case.
