
Comment author: ImmortalRationalist 20 July 2017 10:39:25AM 1 point [-]

Eliezer Yudkowsky wrote this article about the two things that rationalists need faith to believe in: that the statement "Induction works" has a sufficiently large prior probability, and that some single large ordinal is well-ordered. Are there yet any ways to justify belief in either of these two things that do not require faith?

Comment author: hairyfigment 19 August 2017 10:41:39PM 0 points [-]

Not exactly. MIRI and others have research on logical uncertainty, which I would expect to eventually reduce the second premise to induction. I don't think we have a clear plan yet showing how we'll reach that level of practicality.

Justifying a not-super-exponentially-small prior probability for induction working feels like a category error. I guess we might get a kind of justification from better understanding Tegmark's Mathematical Macrocosm hypothesis - or, more likely, understanding why it fails. Such an argument will probably lack the intuitive force of 'Clearly the prior shouldn't be that low.'

Comment author: ImmortalRationalist 19 August 2017 02:11:10AM 0 points [-]

On a related question, if Unfriendly Artificial Intelligence is developed, how "unfriendly" is it expected to be? The most plausible-sounding outcome may be human extinction. The worst-case scenario would be a UAI that actively tortures humanity, but I can't think of many scenarios in which that would occur.

Comment author: hairyfigment 19 August 2017 10:31:06PM 0 points [-]

I would only expect the latter if we started with a human-like mind. A psychopath might care enough about humans to torture you; an uFAI not built to mimic us would just kill you, then use you for fuel and building material.

(Attempting to produce FAI should theoretically increase the probability of such an outcome, since it involves trying to make an AI care about humans. But this need not be a significant increase, and in fact MIRI seems well aware of the problem and keen to sniff out errors of this kind. In theory, an uFAI could decide to keep a few humans around for some reason - but not you. The chance of it wanting you in particular seems effectively nil.)

Comment author: SnowSage4444 18 March 2017 03:01:28PM 0 points [-]

No, really, what?

What "Different rules" could someone use to decide what to believe, besides "Because logic and science say so"? "Because my God said so"? "Because these tea leaves said so"?

Comment author: hairyfigment 20 March 2017 06:32:33PM 0 points [-]

Yes, but as it happens that kind of difference is unnecessary in the abstract. Besides the point I mentioned earlier, you could have a logical set of assumptions for "self-hating arithmetic" that proves arithmetic contradicts itself.

Completely unnecessary details here.

Comment author: gjm 13 March 2017 06:02:37PM 0 points [-]

Is there good reason to believe that any method exists that will reliably resolve epistemological disputes between parties with very different underlying assumptions?

Comment author: hairyfigment 14 March 2017 01:10:56AM 0 points [-]

Not if they're sufficiently different. Even within Bayesian probability (technically) we have an example in the hypothetical lemming race with a strong Gambler's Fallacy prior. ("Lemming" because you'd never meet a species like that unless someone had played games with them.)
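
For concreteness, here is a minimal Python sketch (the anti-correlation numbers are hypothetical) of why such a prior blocks agreement: both agents see the same fair-coin data, but the second agent's prior puts all of its weight on an anti-correlated model, so its next-flip prediction tracks only the most recent outcome and never converges toward the first agent's.

```python
# Minimal sketch, hypothetical numbers: two agents see identical coin-flip data
# but start from very different priors, and their predictions never converge.
import random

random.seed(0)
flips = [random.random() < 0.5 for _ in range(1000)]  # True = heads, fair coin

# Agent A: standard Beta(1,1)-Bernoulli learner (Laplace's rule of succession).
heads = sum(flips)
p_next_heads_a = (heads + 1) / (len(flips) + 2)

# Agent B: dogmatic "Gambler's Fallacy" prior -- all of its probability mass is
# on a model where each outcome makes the opposite outcome likelier next time,
# so no amount of data changes its one-step-ahead prediction rule.
P_HEADS_AFTER_HEADS = 0.3  # hypothetical anti-correlation strength
P_HEADS_AFTER_TAILS = 0.7
p_next_heads_b = P_HEADS_AFTER_HEADS if flips[-1] else P_HEADS_AFTER_TAILS

print(f"Agent A: P(next = heads) ~ {p_next_heads_a:.3f}")  # ~0.5
print(f"Agent B: P(next = heads) ~ {p_next_heads_b:.3f}")  # 0.3 or 0.7, forever
# B updates coherently within its own model, so there is no shared evidence or
# argument, internal to Bayes, that forces it toward A's conclusion.
```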

On the other hand, if an epistemological dispute actually stems from factual disagreements, we might approach the problem by looking for the reasons people adopted their differing beliefs before they had an explicit epistemology. Discussing a religious believer's faith in their parents may not be productive, but at least progress seems mathematically possible.

Comment author: Elo 28 February 2017 11:41:24AM 0 points [-]

I think this is a bad example; it seems like an instrumental one. Epistemic rationality alone would have you correct the grammar because that's good epistemics. Instrumental rationality would have you bend the rules for the sake of the other goals you have on the pathway to winning.

Comment author: hairyfigment 03 March 2017 01:29:27AM 0 points [-]

How could correcting grammar be good epistemics? The only question of fact there is a practical one - how various people will react to the grammar coming out of your word-hole.

Comment author: entirelyuseless 13 February 2017 02:35:33PM 1 point [-]

My reason for rejecting the BB (Boltzmann brain) claim is that the claim is useless -- and I am quite sure that is my reason. I would definitely reject it for that reason even if I had an argument that seemed extremely convincing to me that there is a 95% chance I am a BB.

A theory that says I am a BB cannot assign a probability to anything, not even by giving a uniform distribution. A BB theory is like a theory that says, "you are always wrong." You cannot get any probability assignments from that, since as soon as you bring them up, the theory will say your assignments are wrong. In a similar way, a BB theory implies that you have never learned or studied probability theory. So you do not know whether probabilities should sum to 100% (or to any similar normalized result) or anything else about probability theory.

As I said, BB theory is useless -- and part of its uselessness is that it cannot imply any conclusions, not even any kind of prior over your experiences.

Comment author: hairyfigment 13 February 2017 09:31:52PM 0 points [-]
  1. I'm using probability to represent personal uncertainty, and I am not a BB. So I think I can legitimately assign the theory a distribution to represent uncertainty, even if believing the theory would make me more uncertain than that. (Note that if we try to include radical logical uncertainty in the distribution, it's hard to argue the numbers would change. If a uniform distribution "is wrong," how would I know what I should be assigning high probability to?)

  2. I don't think you assign a 95% chance to being a BB, or even that you could do so without severe mental illness. Because for starters:

  3. Humans who really believe their actions mean nothing don't say, "I'll just pretend that isn't so." They stop functioning. Perhaps you meant the bar is literally 5% for meaningful action, and if you thought it was 0.1% you'd stop typing?

  4. I would agree if you'd said that evolution hardwired certain premises or approximate priors into us 'because it was useful' to evolution. I do not believe that humans can use the sort of Pascalian reasoning you claim to use here, not when the issue is BB or not BB. Nor do I believe it is in any way necessary. (Also, the link doesn't make this clear, but a true prior would need to include conditional probabilities under all theories being considered. Humans, too, start life with a sketch of conditional probabilities.)

Comment author: Douglas_Knight 12 February 2017 08:29:01PM 3 points [-]

538 put Trump winning popular vote at 20%. They put Trump winning EC while losing popular at 10%.

Comment author: hairyfigment 13 February 2017 08:41:27AM 0 points [-]

OK, they gave him a greater chance of winning the popular vote than I had thought. I can't tell whether that applies to the polls-plus model, which they actually seemed to believe, but that's not the point. The point is that they had a model with a lot of uncertainty, based on recognizing that the world is complicated; they explicitly assigned a disturbing probability to the actual outcome, and they praised Trump's state/Electoral College strategy for that reason.
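
For reference, and assuming (as a rough simplification, not 538's own decomposition) that a Trump popular-vote win essentially guaranteed an Electoral College win, the two quoted figures combine to an overall chance of roughly 30%:

```latex
% Rough combination of the two figures quoted above (hypothetical decomposition):
P(\text{Trump wins EC}) \approx P(\text{wins PV}) + P(\text{wins EC} \wedge \text{loses PV})
                        \approx 0.20 + 0.10 = 0.30
```

The 10% term is the scenario that actually occurred.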

Comment author: entirelyuseless 12 February 2017 03:45:47PM 0 points [-]

Which conclusion? I believe that a Boltzmann brain cannot validly believe or reason about anything, and I certainly believe that I am not a Boltzmann brain.

More importantly, I believe everything I said there.

Comment author: hairyfigment 13 February 2017 08:02:25AM 0 points [-]

Seems like you're using a confusing definition of "believe", but the point is that I disagree about our reasons for rejecting the claim that you're a BB.

Note that according to your reasoning, any theory which says you're a BB must give us a uniform distribution over all possible experiences. So rationally coming to assign high probability to that theory seems nearly impossible if your experience is not actually random, since such a theory assigns almost no likelihood to the coherent experience you actually have.
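
To put rough numbers on that (a minimal sketch with hypothetical figures, not a claim about the real sizes of these quantities): even starting from a deliberately generous prior on the BB theory, a single observation that the uniform-over-experiences hypothesis barely predicts drives its posterior to essentially zero.

```python
# Minimal Bayes sketch, hypothetical numbers: a hypothesis that spreads its
# probability uniformly over all possible experiences cannot keep a high
# posterior once the experience actually observed is highly non-random.
from math import log10

N_POSSIBLE_EXPERIENCES = 10 ** 30            # hypothetical size of the experience space
p_exp_given_bb = 1 / N_POSSIBLE_EXPERIENCES  # uniform: almost no likelihood anywhere
p_exp_given_world = 1e-6                     # a structured world still concentrates
                                             # likelihood on coherent experiences
prior_bb = 0.95                              # deliberately generous starting point

posterior_bb = (prior_bb * p_exp_given_bb) / (
    prior_bb * p_exp_given_bb + (1 - prior_bb) * p_exp_given_world
)
print(f"posterior P(BB) ~ 10^{log10(posterior_bb):.0f}")  # about 10^-23
```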

Comment author: entirelyuseless 09 February 2017 02:10:27PM 0 points [-]

A Boltzmann brain has no way to know anything, reason to any conclusion, or whatever. So it has no way to know whether its experience should seem coherent or not. So your claim that this needs explanation is an unjustified assumption, if you are a Boltzmann brain.

Comment author: hairyfigment 11 February 2017 10:51:10PM 0 points [-]

One man's modus ponens is another man's modus tollens. I don't even believe that you believe the conclusion.

Comment author: Erfeyah 09 February 2017 08:46:46PM *  2 points [-]

Shouldn't a lack of belief in god imply:

P(not("God exists")) = 0.5

P("God exists") = 0.5

(I am completely ignoring the very important part of defining God in the sentence, as I take the question to be asking for a way to express 'not knowing' in probabilistic terms. This can be applied to any subject, really.)

Comment author: hairyfigment 09 February 2017 11:06:10PM 0 points [-]

"I am completely ignoring the very important part of defining God"

That is indeed the chief problem here. I'm assuming you're talking about the prior probability which we have before looking at the evidence.
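
To spell out the role that prior plays (an illustrative identity only, with E standing for whatever the total evidence turns out to be once "God exists" is pinned down):

```latex
P(G \mid E) = \frac{P(E \mid G)\,P(G)}{P(E \mid G)\,P(G) + P(E \mid \neg G)\,P(\neg G)}
            = \frac{P(E \mid G)}{P(E \mid G) + P(E \mid \neg G)}
  \quad \text{when } P(G) = P(\neg G) = 0.5
```

With equal priors the 0.5s cancel, so the likelihoods do all the work - which is why pinning down what "God exists" means, and hence what would count as evidence for it, is the real problem.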
