gjm comments on Rationality Quotes Thread September 2015 - Less Wrong

Post author: elharo 02 September 2015 09:25AM


Comment author: gjm 08 October 2015 11:21:27AM 0 points

128x more unlikely but not 128x more complex; for me, at least, complexity is measured in bits rather than in number-of-possibilities.

[EDITED to add: If anyone has a clue why this was downvoted, I'd be very interested. It seems so obviously innocuous that I suspect it's VoiceOfRa doing his thing again, but maybe I'm being stupid in some way I'm unable to see.]

Comment author: CCC 12 October 2015 10:35:03AM 0 points

...I thought that the ratio of likeliness due to the complexity argument would be the inverse of the ratio of complexity. Thus, something twice as complex would be half as likely. Is this somehow incorrect?

(I have no idea why it was downvoted)

Comment author: gjm 12 October 2015 11:01:03AM 4 points

Is this somehow incorrect?

All else being equal, something that takes n bits to specify has probability proportional to 2^-n. So if hypothesis A takes 110 bits and hypothesis B takes 100, then A is about 1000x less probable.

Exactly what "all else being equal" means is somewhat negotiable.

  • If you are using a Solomonoff prior, it means: in advance of looking at any empirical evidence at all, the probability you assign to a hypothesis should be proportional to 2^-n where n is the number of bits in a minimal computer program that specifies the hypothesis, in a language satisfying some technical conditions. Exactly how this cashes out depends on the details of the language you use, and there's no way of actually computing the numbers n in general, and there's no law that says you have to use a Solomonoff prior anyway.
  • More generally, whatever prior you use, there are 2^n hypotheses of length n (and if you describe them in a language satisfying those technical conditions, then they are all genuinely different and as n varies you get every computable hypothesis) so (handwave handwave) on average for large n an n-bit hypothesis has to have probability something like 2^-n.

Anyway, the point is that the natural way to measure complexity is in bits, and probability varies exponentially, not linearly, with number of bits.
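The arithmetic in this exchange (10 extra bits → about 1000x less probable, 7 extra bits → the 128x figure from upthread) can be checked directly. A minimal sketch in Python; the function name is illustrative, not from any library:

```python
# Under a Solomonoff-style prior, a hypothesis that takes n bits to
# specify gets prior probability proportional to 2**-n.  So a hypothesis
# that is k bits longer than a rival is 2**k times less probable,
# all else being equal.

def odds_penalty(extra_bits: int) -> int:
    """How many times less probable a hypothesis becomes
    for each extra bit of description length."""
    return 2 ** extra_bits

# A 110-bit hypothesis vs. a 100-bit one: 10 extra bits.
print(odds_penalty(10))  # 1024, i.e. "about 1000x less probable"

# The "7 bits more complex" / "128x more unlikely" case.
print(odds_penalty(7))   # 128
```

This is why complexity measured in bits and the likeliness ratio differ: the ratio is the exponential of the bit difference, so adding one bit halves the probability rather than subtracting some fixed amount.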

Comment author: CCC 13 October 2015 10:26:03AM 0 points

So if hypothesis A takes 110 bits and hypothesis B takes 100, then A is about 1000x less probable.

Yes, and hypothesis A is also 1024x as complex - since it takes ten more bits to specify.

Anyway, the point is that the natural way to measure complexity is in bits, and probability varies exponentially, not linearly, with number of bits.

...it seems that our disagreement here is in the measure of complexity, and not the measure of probability. My measure of complexity is pretty much the inverse of probability, while you're working on a log scale by measuring it in terms of a number of bits.

Comment author: gjm 13 October 2015 01:05:21PM 0 points

Yes, apparently we're using the word "complexity" differently.

So, getting back to what I said that apparently surprised you: Yes, I think it is very plausible that the best theistic explanation for everything we observe around us is what I call "7 bits more complex" and you call "128x more complex" than the best non-theistic explanation; just to be clear what that means, I mean that if we could somehow write down a minimal-length complete description of what we see (compressing it via computer programs / laws of physics / etc.) subject to the constraint "must not make essential use of gods", and another subject instead to the constraint "must make essential use of gods", then my guess at the length of the second description is >= 7 bits longer than my guess at the length of the first. Actually I think the second description would have to be much longer than that, but I'm discounting because this is confusing stuff and I'm far from certain that I'm right.

And you, if I'm understanding you correctly, are objecting not so much "no, the theistic description will be simpler" as "well, maybe you're right that the nontheistic description will be simpler, but we should expect it to be simpler by less than one random ASCII character's worth of description length".

Of course the real difficulty here is that we aren't in a position to say what a minimal length theistic or nontheistic description of the universe would look like. We have a reasonable set of laws of physics that might form the core of the nontheistic description, but (1) we know the laws we have aren't quite right, and (2) it seems likely that the vast bulk of the complexity needed is not in the laws but in whatever arbitrary-so-far-as-we-know boundary conditions[1] need to be added to get our universe rather than a completely different one with the same laws, and we've no idea how much information that takes or even whether it's finite. And on the theistic side we have at most a pious hope that something like "this is the best of all possible worlds" might suffice, but no clear idea of how to specify what notion of "best" is appropriate, and the world looks so much unlike the best of all possible worlds according to any reasonable notion that this fact is generally considered one of the major reasons for disbelieving in gods. So what hope have we of figuring out which description is shorter?

[1] On some ways of looking at the problem, what needs specifying is not so much boundary conditions as our location within a vast universe or multiverse. Similar problem.

Comment author: CCC 14 October 2015 09:20:02AM 1 point

Actually I think the second description would have to be much longer than that, but I'm discounting because this is confusing stuff and I'm far from certain that I'm right.

It is confusing. I'm still not even convinced that the theist's description would be longer, but my estimation is so vague and has such massively large error bars that I can't say you're wrong, even if what you're saying is surprising to me.

And you, if I'm understanding you correctly, are objecting not so much "no, the theistic description will be simpler" as "well, maybe you're right that the nontheistic description will be simpler, but we should expect it to be simpler by less than one random ASCII character's worth of description length".

More or less. I'm saying I would find it surprising if the existence of God made the universe significantly more complex. (In the absolutely minimal-length description, I expect it to work out shorter, but like I say above, there are massive error bars on my estimates).

the world looks so much unlike the best of all possible worlds according to any reasonable notion

While I've heard this argued before, I have yet to see an idea for a world that (a) is provably better, (b) cannot be created by sufficient sustained human effort (in an "if everyone works together" kind of way) and (c) cannot be taken apart by sustained human effort into a world vaguely resembling ours (in an "if there are as many criminals and greedy people as in this world" kind of way).

I'm not saying that there isn't nasty stuff in this world. I'm just not seeing a way that it can be removed without also removing things like free will.

what hope have we of figuring out which description is shorter?

Very little, really. There are a lot of unknowns.

Comment author: gjm 14 October 2015 10:41:16AM 2 points

If we get seriously into discussing arguments from evil we could be here all year :-), so I'll just make a few points and leave it.

(1) Many religious believers, including (I think) the great majority of Christians, anticipate a future state in which sin and suffering and death will be no more. I'm pretty sure they see this as a good thing, whether they anticipate losing their free will to get it or not.

(2) I don't know whether I can see any way to make a world with nothing nasty in it at all without losing other things we care about, but it doesn't seem difficult to envisage ways in which omnipotence in the service of perfect goodness could improve the world substantially. For instance, consider a world exactly like this one except that whenever any cell in any animal's body (human or other) gets into a state that would lead to a malignant tumour, God magically kills it. Boom, no more cancer. (And no effect at all on anyone who wouldn't otherwise be getting cancer.) For an instance of a very different kind, imagine that one day people who pray actually start getting answers. Consistently. I don't mean obliging answers to petitionary prayers, I mean communication. Suddenly anyone who prays gets a response; the responses are consistent and, for some categories of public prayer, public. There is no longer any more scope for wars about whose vision of God is right than there is for wars about whose theory of gravity is right, and anyone who tries to recruit people to blow things up in the name of God gets contradicted by a message from God himself. There might still be scope for fights between people who think it's God doing this and people who think it's a super-powerful evil being, but I don't think it's credible that this wouldn't decrease religious strife. And if you think that being badly wrong about God is a serious problem (whether just because it's bad to be wrong about important things, or because it leads to worse actions, or because it puts one in danger of damnation) then I hope you'll agree that almost everyone on earth having basically correct beliefs about God would be a gain. And no, it wouldn't mean abolishing free will; do we lack free will because we find it difficult to believe the sky is green on account of seeing it ourselves?

(3) I think your conditions a,b,c are too strict, in that I see no reason why candidate better worlds need to satisfy them all in order to be evidence that our actual world isn't the best possible. Perhaps, e.g., a better world is possible that could be created by sustained human effort with everyone working together but won't because actually, in practice, in the world we've got, everyone doesn't work together. So, OK, you can blame humanity for the fact that we haven't created that world, and maybe doing so makes you feel better, but more than one agent can be rightly blamed for the same thing and the fact that it's (kinda) our fault doesn't mean it isn't God's. Do you suppose he couldn't have encouraged us more effectively to do better? If not, doesn't the fact that not even the most effective encouragement infinite wisdom could devise would lead us to do it suggest that saying we could is rather misleading? And (this is a point I think is constantly missed) whyever should we treat human nature, as it now is, as a given? Could your god really not have arranged for humanity to be a little nicer and smarter? In terms of your condition (c), why on earth should we, when considering what better worlds there might be, only consider candidates in which "there are as many criminals and greedy people as in this world"?

Comment author: CCC 15 October 2015 09:48:37AM 1 point

Many religious believers, including (I think) the great majority of Christians, anticipate a future state in which sin and suffering and death will be no more. I'm pretty sure they see this as a good thing, whether they anticipate losing their free will to get it or not.

I've heard arguments that we've already reached that state - think about what would happen if you went back in time about two thousand years and described modern medical technology and lifestyles. (I don't agree with those arguments, mind you, but I do think that such a future state is going to have to be something that we build, not that we are given.)

it doesn't seem difficult to envisage ways in which omnipotence in the service of perfect goodness could improve the world substantially.

It's difficult to be certain.

For instance, consider a world exactly like this one except that whenever any cell in any animal's body (human or other) gets into a state that would lead to a malignant tumour, God magically kills it. Boom, no more cancer.

Now I'm imagining a lot of scientists studying and trying to figure out why some cells just mysteriously vanish for no good reason - and this becoming the greatest unsolved question in medical science and taking all the attention of people who might otherwise be figuring out cures for TB or various types of flu. (In this hypothetical universe, they wouldn't know about malignant tumours, of course).

And if someone would otherwise develop a LOT of cancer, then Sudden Cell Vanishing Syndrome could, in itself, become a major problem...

Mind, I'm not saying it's certain that universe would be worse, or even that it's probable. It's just easy to see how that universe could be worse.

For an instance of a very different kind, imagine that one day people who pray actually start getting answers. Consistently. I don't mean obliging answers to petitionary prayers, I mean communication

That would be interesting. And you raise a lot of good points - there would be a lot of positive effects. But, at the same time... I think HPMOR showed quite nicely that sometimes, having a list of instructions with regard to what to do is a good deal less valuable than being able to understand the situation, take responsibility, and do it yourself.

People would still have free will, yes. But how many people would voluntarily abdicate their decision-making processes to simply do what the voice in the sky tells them to do (except the bits where it says "THINK FOR YOURSELVES")?

...this is something which I think would probably be a net benefit. But I can't be certain.

I think your conditions a,b,c are too strict

...very probably.

Perhaps, e.g., a better world is possible that could be created by sustained human effort with everyone working together but won't because actually, in practice, in the world we've got, everyone doesn't work together.

That just means that a better world needs to be designed that can be created under the constraints of not everyone working together. It's a hard problem, but I don't think it's entirely insoluble.

And (this is a point I think is constantly missed) whyever should we treat human nature, as it now is, as a given? Could your god really not have arranged for humanity to be a little nicer and smarter?

That is a good question. I have no good answers for it.

why on earth should we, when considering what better worlds there might be, only consider candidates in which "there are as many criminals and greedy people as in this world"?

...fewer criminals and greedy people would make things a lot easier, but I'm not quite sure how to arrange that without either (a) reducing free will or (b) mass executions, which could cause other problems.

Comment author: gjm 15 October 2015 12:07:02PM 1 point

I've heard arguments that we've already reached that state

Then I suggest that you classify the people making those arguments as Very Silly and don't listen to them in future.

I do think that such a future state is going to have to be something that we build, not that we are given.

You're welcome to think that; my point is simply that if such a thing is possible and desirable then either one can have a better world than this without abrogating free will, or else free will isn't as important as theists often claim it is when confronted with arguments from evil.

(Perhaps your position is that the world could indeed be much better, but that the only way to make such a better world without abrogating free will is to have us do it gradually starting with a really bad world. I hope I will be forgiven for saying that that doesn't seem like a position anyone would adopt for reasons other than a desperate attempt to avoid the conclusion of the argument from evil.)

I'm imagining a lot of scientists studying and trying to figure out why some cells just mysteriously vanish for no good reason

Again, you're welcome to imagine whatever you like, but if you're suggesting that this would be a likely consequence of the scenario I proposed then I think you're quite wrong (and again wonder whether it would occur to you to imagine that if you weren't attempting to justify the existence of cancer to defend your god's reputation). Under what circumstances would they notice this? Cells die all the time. We don't have the technology to monitor every cell -- or more than a tiny fraction of cells -- in a living animal and see if it dies. We don't have the technology or the medical understanding to be able to say "huh, that cell died and I don't know why; that's really unusual". Maybe some hypothetical super-advanced medical science would be flummoxed by this, but right now I'm pretty sure no one would come close to noticing.

(Also, you could combine this with my second proposal, and then what happens is that someone says "hey, God, would you mind telling us why these cells are dying?" and God says "oh, yeah, those are ones that were going wrong and would have turned into runaway growths that could kill you. I zap those just before they do. You're welcome.".)

And if someone would otherwise develop a LOT of cancer [...]

Please, think about that scenario for thirty seconds, and consider whether you can actually envisage a situation where having those cancerous cells self-destruct would be worse than having them turn into tumours.

sometimes, having a list of instructions with regard to what to do [...]

But that was no part of the scenario I described. In that scenario, it could be that when people ask God for advice he says "Sorry, it's going to be better for you to work this one out on your own."

without either (a) reducing free will [...]

So here's the thing. Apparently "reducing free will" is a terrible awful thing so bad that its spectre justifies the Holocaust and child sex abuse and all the other awful things that bad people do without being stopped by God. So ... how come we don't have more free will than we do? Why are we so readily manipulated by advertisements, so easily entrapped by habits, so easily overwhelmed by the desire for food or sex or whatever? It seems to me that if we take this sort of "free will defence" seriously enough for it to work, then we replace (or augment) the argument from evil with an equally fearsome argument from un-freedom.

Comment author: CCC 16 October 2015 10:58:54AM 0 points

Then I suggest that you classify the people making those arguments as Very Silly and don't listen to them in future.

...perhaps I have failed to properly convey that argument. I did not intend to say that our world now is in a state of perfection. I intended to point out that, if you were to go back in time a couple of thousand years and talk to a random person about our current society, then he would be likely to imagine it as a state of perfection. Similarly, if a random person in this era were to describe a state of perfection, then that might be a description of society a couple of thousand years from now - and the people of that time would still not consider their world in a state of perfection.

In short, "perfection" may be a state that can only be approached asymptotically. We can get closer to it, but never reach it; we can labour to reduce the gap, but never fully eliminate it.

my point is simply that if such a thing is possible and desirable then either one can have a better world than this without abrogating free will, or else free will isn't as important as theists often claim it is when confronted with arguments from evil.

You mean, just kind of starting up the universe at the point where all the major social problems have already been solved, with everyone having a full set of memories of how to keep the solutions working and what happens if you don't?

...I have little idea why the universe isn't like that (and the little idea I have is impractically speculative).

Perhaps your position is that the world could indeed be much better, but that the only way to make such a better world without abrogating free will is to have us do it gradually starting with a really bad world.

The only way? No. Starting a universe at the point where the answers to society's problems are known is a possible way to do that.

...the thing is, I don't know what the goal, the purpose of the universe is. Free will is clearly a very important part of those aims - either a goal in itself, or strictly necessary in order to achieve some other goal or goals - but I'm fairly sure it's not the only one. It may be that other ways of making a better world without abrogating free will all come at the cost of some other important thing that is somehow necessary for the universe.

Though this is all very speculative, and the argument is rather shaky.

Under what circumstances would they notice this? Cells die all the time. We don't have the technology to monitor every cell -- or more than a tiny fraction of cells -- in a living animal and see if it dies.

Okay, if the cells just die and don't vanish, then that makes it a whole lot less physics-breaking. (Alternatively, if they are simply replaced with healthy cells, then it becomes even harder to spot).

(Also, you could combine this with my second proposal, and then what happens is that someone says "hey, God, would you mind telling us why these cells are dying?" and God says "oh, yeah, those are ones that were going wrong and would have turned into runaway growths that could kill you. I zap those just before they do. You're welcome.".)

...you know, combining those would be interesting as well. (Then the next logical question asked would be "Why don't you zap all diseases?")

Please, think about that scenario for thirty seconds, and consider whether you can actually envisage a situation where having those cancerous cells self-destruct would be worse than having them turn into tumours.

No, I can't. This guy's in massive trouble either way.

But that was no part of the scenario I described. In that scenario, it could be that when people ask God for advice he says "Sorry, it's going to be better for you to work this one out on your own."

A fair point.

Some people would be discouraged by this, others would work harder...

So here's the thing. Apparently "reducing free will" is a terrible awful thing so bad that its spectre justifies the Holocaust and child sex abuse and all the other awful things that bad people do without being stopped by God.

Yes, and I'm not quite sure that I get the whole of the why either.

So ... how come we don't have more free will than we do? Why are we so readily manipulated by advertisements, so easily entrapped by habits, so easily overwhelmed by the desire for food or sex or whatever?

...huh. That's... that's a very good question, really.

Hmmm. It seems logical that it must be possible to talk someone into (or out of) a course of action. "Here is some information that shows why it is to your benefit to do X" has to be possible, or there is no point to communication and we might as well all be alone.

And given that that is possible, advertising is an inevitable consequence - tell a million people to buy Tasty Cheese Snax or whatever, and some of them will listen. (More complex use of advertising is merely a refinement of technique). I don't really see any logical alternative - either advertising, which is a special case of persuasion, has to work to some degree, or persuasion must be impossible. (If persuasion of a specific type proves impossible, advertisers will simply use a form of persuasion that is effective).

Habits... as far as I can tell, habits are a consequence of the structure of the human brain (we're pattern-recognition machines, and almost all biases and problems in human thought come from this). A habit is merely a pattern of action; something that we find ourselves doing by default. Avoiding habits would require a pretty much total rewrite of the human brain. Which may be a good or a bad thing, but is a completely unknown thing.

Desires for food and stuff? ...I have no idea. You could probably base an argument from unfreedom around that. (It's clear enough where the desires come from - people without those desires would have been outcompeted by people with them, so there's evolutionary pressure to have those biases. Is this an inevitable consequence of an evolutionary development?)