gjm comments on Rationality Quotes Thread September 2015 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
No, as much as seven bits more complex. (More precisely, I think it's probably a lot more more-complex than that, but I'm quite uncertain about my estimates.)
Damn, you caught me. (Seriously: I'm pretty sure that being really good at faking intelligence requires intelligence. I'm not so sure about reasonable-ness.)
One bit is twice as likely.
Seven bits are two-to-the-seven times as likely, which is 128 times.
...surely?
I can think of a few ways to fake greater intelligence than you have. Most of them require a more intelligent accomplice, in one way or another. But yes, reasonableness is probably easier to fake.
128x more unlikely but not 128x more complex; for me, at least, complexity is measured in bits rather than in number-of-possibilities.
[EDITED to add: If anyone has a clue why this was downvoted, I'd be very interested. It seems so obviously innocuous that I suspect it's VoiceOfRa doing his thing again, but maybe I'm being stupid in some way I'm unable to see.]
...I thought that the ratio of likeliness due to the complexity argument would be the inverse of the ratio of complexity. Thus, something twice as complex would be half as likely. Is this somehow incorrect?
(I have no idea why it was downvoted)
All else being equal, something that takes n bits to specify has probability proportional to 2^-n. So if hypothesis A takes 110 bits and hypothesis B takes 100, then A is about 1000x less probable.
Exactly what "all else being equal" means is somewhat negotiable.
Anyway, the point is that the natural way to measure complexity is in bits, and probability varies exponentially, not linearly, with number of bits.
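The arithmetic in this exchange can be sketched in a few lines of code (the function names here are my own, purely for illustration — the thread itself contains no code):

```python
import math

def probability_ratio(extra_bits: int) -> float:
    """How many times less probable a hypothesis becomes when its minimal
    description is extra_bits longer, under a prior where an n-bit
    description has probability proportional to 2**-n."""
    return 2.0 ** extra_bits

def extra_bits(ratio: float) -> float:
    """Inverse direction: convert a probability ratio back into a bit count.
    This is the log-scale measure of complexity discussed in the thread."""
    return math.log2(ratio)

# One extra bit -> twice as unlikely; seven extra bits -> 128x.
print(probability_ratio(1))          # 2.0
print(probability_ratio(7))          # 128.0

# A 110-bit hypothesis vs. a 100-bit one: 10 extra bits, ~1000x less probable.
print(probability_ratio(110 - 100))  # 1024.0

# Going the other way: a 128x probability ratio is 7 bits of complexity.
print(extra_bits(128))               # 7.0
```

This also shows why the two commenters talk past each other at first: one is measuring complexity on the linear scale (the ratio, 128x) and the other on the log scale (the bit count, 7), and the two are related exponentially, not linearly.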
Yes, and hypothesis A is also 1024x as complex - since it takes ten more bits to specify.
...it seems that our disagreement here is in the measure of complexity, and not the measure of probability. My measure of complexity is pretty much the inverse of probability, while you're working on a log scale by measuring it in terms of a number of bits.
Yes, apparently we're using the word "complexity" differently.
So, getting back to what I said that apparently surprised you: Yes, I think it is very plausible that the best theistic explanation for everything we observe around us is what I call "7 bits more complex" and you call "128x more complex" than the best non-theistic explanation; just to be clear what that means, I mean that if we could somehow write down a minimal-length complete description of what we see (compressing it via computer programs / laws of physics / etc.) subject to the constraint "must not make essential use of gods", and another subject instead to the constraint "must make essential use of gods", then my guess at the length of the second description is >= 7 bits longer than my guess at the length of the first. Actually I think the second description would have to be much longer than that, but I'm discounting because this is confusing stuff and I'm far from certain that I'm right.
And you, if I'm understanding you correctly, are objecting not so much "no, the theistic description will be simpler" as "well, maybe you're right that the nontheistic description will be simpler, but we should expect it to be simpler by less than one random ASCII character's worth of description length".
Of course the real difficulty here is that we aren't in a position to say what a minimal-length theistic or nontheistic description of the universe would look like. We have a reasonable set of laws of physics that might form the core of the nontheistic description, but (1) we know the laws we have aren't quite right, and (2) it seems likely that the vast bulk of the complexity needed is not in the laws but in whatever arbitrary-so-far-as-we-know boundary conditions[1] need to be added to get our universe rather than a completely different one with the same laws, and we've no idea how much information that takes or even whether it's finite.

And on the theistic side we have at most a pious hope that something like "this is the best of all possible worlds" might suffice, but no clear idea of how to specify what notion of "best" is appropriate, and the world looks so much unlike the best of all possible worlds according to any reasonable notion that this fact is generally considered one of the major reasons for disbelieving in gods. So what hope have we of figuring out which description is shorter?
[1] On some ways of looking at the problem, what needs specifying is not so much boundary conditions as our location within a vast universe or multiverse. Similar problem.
It is confusing. I'm still not even convinced that the theist's description would be longer, but my estimation is so vague and has such massively large error bars that I can't say you're wrong, even if what you're saying is surprising to me.
More or less. I'm saying I would find it surprising if the existence of God made the universe significantly more complex. (In the absolutely minimal-length description, I expect it to work out shorter, but like I say above, there are massive error bars on my estimates).
While I've heard this argued before, I have yet to see an idea for a world that (a) is provably better, (b) cannot be created by sufficient sustained human effort (in an "if everyone works together" kind of way) and (c) cannot be taken apart by sustained human effort into a world vaguely resembling ours (in an "if there are as many criminals and greedy people as in this world" kind of way).
I'm not saying that there isn't nasty stuff in this world. I'm just not seeing a way that it can be removed without also removing things like free will.
Very little, really. There's a lot of unknowns.
If we get seriously into discussing arguments from evil we could be here all year :-), so I'll just make a few points and leave it.
(1) Many religious believers, including (I think) the great majority of Christians, anticipate a future state in which sin and suffering and death will be no more. I'm pretty sure they see this as a good thing, whether they anticipate losing their free will to get it or not.
(2) I don't know whether I can see any way to make a world with nothing nasty in it at all without losing other things we care about, but it doesn't seem difficult to envisage ways in which omnipotence in the service of perfect goodness could improve the world substantially. For instance, consider a world exactly like this one except that whenever any cell in any animal's body (human or other) gets into a state that would lead to a malignant tumour, God magically kills it. Boom, no more cancer. (And no effect at all on anyone who wouldn't otherwise be getting cancer.)

For an instance of a very different kind, imagine that one day people who pray actually start getting answers. Consistently. I don't mean obliging answers to petitionary prayers, I mean communication. Suddenly anyone who prays gets a response; the responses are consistent and, for some categories of public prayer, public. There is no longer any more scope for wars about whose vision of God is right than there is for wars about whose theory of gravity is right, and anyone who tries to recruit people to blow things up in the name of God gets contradicted by a message from God himself. There might still be scope for fights between people who think it's God doing this and people who think it's a super-powerful evil being, but I don't think it's credible that this wouldn't decrease religious strife. And if you think that being badly wrong about God is a serious problem (whether just because it's bad to be wrong about important things, or because it leads to worse actions, or because it puts one in danger of damnation) then I hope you'll agree that almost everyone on earth having basically correct beliefs about God would be a gain. And no, it wouldn't mean abolishing free will; do we lack free will because we find it difficult to believe the sky is green on account of seeing it ourselves?
(3) I think your conditions a,b,c are too strict, in that I see no reason why candidate better worlds need to satisfy them all in order to be evidence that our actual world isn't the best possible. Perhaps, e.g., a better world is possible that could be created by sustained human effort with everyone working together but won't because actually, in practice, in the world we've got, everyone doesn't work together. So, OK, you can blame humanity for the fact that we haven't created that world, and maybe doing so makes you feel better, but more than one agent can be rightly blamed for the same thing and the fact that it's (kinda) our fault doesn't mean it isn't God's. Do you suppose he couldn't have encouraged us more effectively to do better? If not, doesn't the fact that not even the most effective encouragement infinite wisdom could devise would lead us to do it suggest that saying we could is rather misleading?

And (this is a point I think is constantly missed) whyever should we treat human nature, as it now is, as a given? Could your god really not have arranged for humanity to be a little nicer and smarter? In terms of your condition (c), why on earth should we, when considering what better worlds there might be, only consider candidates in which "there are as many criminals and greedy people as in this world"?
I've heard arguments that we've already reached that state - imagine going back in time about two thousand years and describing modern medical technology and lifestyles to the people there. (I don't agree with those arguments, mind you, but I do think that such a future state is going to have to be something that we build, not that we are given.)
It's difficult to be certain.
Now I'm imagining a lot of scientists studying and trying to figure out why some cells just mysteriously vanish for no good reason - and this becoming the greatest unsolved question in medical science and taking all the attention of people who might otherwise be figuring out cures for TB or various types of flu. (In this hypothetical universe, they wouldn't know about malignant tumours, of course).
And if someone would otherwise develop a LOT of cancer, then Sudden Cell Vanishing Syndrome could, in itself, become a major problem...
Mind, I'm not saying it's certain that universe would be worse, or even that it's probable. It's just easy to see how that universe could be worse.
That would be interesting. And you raise a lot of good points - there would be a lot of positive effects. But, at the same time... I think HPMOR showed quite nicely that sometimes, having a list of instructions with regard to what to do is a good deal less valuable than being able to understand the situation, take responsibility, and do it yourself.
People would still have free will, yes. But how many people would voluntarily abdicate their decision-making processes to simply do what the voice in the sky tells them to do (except the bits where it says "THINK FOR YOURSELVES")?
...this is something which I think would probably be a net benefit. But I can't be certain.
...very probably.
That just means that a better world needs to be designed that can be created under the constraints of not everyone working together. It's a hard problem, but I don't think it's entirely insoluble.
That is a good question. I have no good answers for it.
...fewer criminals and greedy people would make things a lot easier, but I'm not quite sure how to arrange that without either (a) reducing free will or (b) mass executions, which could cause other problems.