gjm comments on Rationality Quotes Thread September 2015 - Less Wrong

3 Post author: elharo 02 September 2015 09:25AM


Comment author: gjm 30 September 2015 11:09:08AM 3 points [-]

the probable aims and suspected intentions of such a being

The general opinion around here (which I share) is that the complexity of those is much higher than you probably think it is. "Human-level" concepts like "mercy" and "adultery" and "benevolence" and "cowardice" feel simple to us, which means that e.g. saying "God is a perfectly good being" feels like a low-complexity claim; but saying exactly what they mean is incredibly complicated, if it's possible at all. Whereas, e.g., saying "electrons obey the Dirac equation" feels really complicated to us but is actually much simpler.

Of course you're at liberty to say: "No! Actually, human-level concepts really are simple, because the underlying reality of the universe is the mind of God, which entertains such concepts as easily as it does the equations of quantum physics". And maybe the relative plausibility of that position and ours ultimately depends on one's existing beliefs about gods and naturalism and so forth. I suggest that (1) the startling success of reductionist mathematics-based science in understanding, explaining and predicting the universe and (2) the total failure of teleological purpose-based thinking in the same endeavour (see e.g., the problem of evil) give good reason to prefer our position to yours.

The laws of physics would then derive from this.

That sounds really optimistic.

Comment author: VoiceOfRa 01 October 2015 03:42:00AM -1 points [-]

"Human-level" concepts like "mercy" and "adultery" and "benevolence" and "cowardice" feel simple to us

They can be derived from simple game theory as applied to humans.
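A toy sketch of what such a derivation might look like: in an iterated prisoner's dilemma, a concept like "mercy" can be cashed out as forgiveness after a defection. (The payoff matrix, the strategies, and this reading of "mercy" are illustrative assumptions on my part, not anything specified in the comment.)

```python
# Standard iterated prisoner's dilemma payoffs: C = cooperate, D = defect.
PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strat_a, strat_b, rounds=6):
    """Run two strategies against each other; return their total payoffs."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

def grim(opp_history):          # "merciless": defect forever after one defection
    return "D" if "D" in opp_history else "C"

def tit_for_tat(opp_history):   # "merciful": punish once, then forgive
    return opp_history[-1] if opp_history else "C"

def defect_once(opp_history):   # defects on the first move, cooperates after
    return "D" if not opp_history else "C"

print(play(tit_for_tat, defect_once))  # (17, 17)
print(play(grim, defect_once))         # (25, 5)
```

Against a one-time defector, the forgiving strategy restores mutual cooperation (both end on 17, joint payoff 34), while the merciless one stays locked in punishment (25 vs 5, joint payoff 30) — which at least gestures at how "mercy" could fall out of game theory.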

Comment author: gjm 01 October 2015 12:23:30PM 1 point [-]

I'm not entirely convinced, but in any case even "human" is a really complicated concept.

Comment author: VoiceOfRa 02 October 2015 01:07:59AM *  -2 points [-]

I'm not entirely convinced, but in any case even "human" is a really complicated concept.

I guess that means humans don't exist. Oh, wait.

Comment author: gjm 02 October 2015 04:57:50PM 2 points [-]

No idea where you get that from. Theories don't get a complexity penalty for the complexity of things that appear in universes governed by the theories, but for the complexity of their assumptions. If you have an explanation of the universe that has "there is a good god" as a postulate, then whatever complexity is hidden in the words "good" and "god" counts against that explanation.

Comment author: VoiceOfRa 02 October 2015 11:04:42PM 0 points [-]

Yes, and God would care about game theory concepts and apply them to whatever beings exist.

Comment author: gjm 03 October 2015 01:39:02AM 1 point [-]

If I'm correctly understanding what you're claiming, it's something like this: "One can postulate a supremely good being without needing human-level concepts that turn out to be really high-complexity, by defining 'good' in very general game-theoretic terms". (And, I assume from the context in which you're making the claims: "... And this salvages the project, mentioned above by CCC, of postulating God as an explanation for the world we see, the idea being that ultimately the details of physical law follow from God's commitment to making the best possible world or something of the kind".)

I'm very pessimistic about the prospects for defining "good" in abstract game-theoretic terms with enough precision to carry out any project like this. You'd need your definition to pick out what parts of the world are to count as agents that can be involved in game-like interactions, and to identify what their preferences are, and to identify what counts as a move in each game, and so forth. That seems really difficult (and high-complexity) to me, whether you focus on identifying human agents or whether you try to do something much more general. Evidently you think otherwise. Could you explain why?

(I'll mention two specific difficulties I anticipate if you're aiming for simplicity through generality. First: how do you avoid identifying everything as an agent and everything that happens as an action? Second: if the notion of goodness that emerges from this is to resemble ours enough for the word "good" actually to be appropriate, it will have to give different weight to different agents' interests -- humans should matter more than ducks, etc. How will it do that?)

Comment author: TheAncientGeek 05 October 2015 09:54:38AM 1 point [-]

I'm very pessimistic about the prospects for defining "good" in abstract game-theoretic terms with enough precision to carry out any project like this. You'd need your definition to pick out what parts of the world are to count as agents that can be involved in game-like interactions, and to identify what their preferences are, and to identify what counts as a move in each game, and so forth. That seems really difficult (and high-complexity) to me, whether you focus on identifying human agents or whether you try to do something much more general. Evidently you think otherwise. Could you explain why?

So it would be difficult for a finite being that is figuring out some facts that it doesn't already know on the basis of other facts that it does know. Now... how about an omniscient being?

Comment author: gjm 05 October 2015 01:03:19PM 0 points [-]

I think you may be misunderstanding what the relevance of the "difficulty" is here.

The context is the following question:

  • If we are comparing explanations for the universe on the basis of hypothesis-complexity (e.g., because we are using something like a Solomonoff prior), what complexity should we estimate for notions like "good"?

If some notion like "perfectly benevolent being of unlimited power" turns out to have very low complexity, so much the better for theistic explanations of the universe. If it turns out to have very high complexity, so much the worse for such explanations.

(Of course that isn't the only relevant question. We also need to estimate how likely a universe like ours is on any given hypothesis. But right now it's the complexity we're looking at.)

In answering this question, it's completely irrelevant how good some hypothetical omniscient being might be at figuring out what parts of the world count as "agents" and what their preferences are and so on, even though ultimately hypothetical omniscient beings are what we're interested in. The atheistic argument here isn't "It's unlikely that the world was created by a god who wants to satisfy the preferences of agents in it, because identifying those agents and their preferences would be really difficult even for a god" (to which your question would be an entirely appropriate rejoinder). It's something quite different: "It's not a good explanation for the universe to say that it was created by a god who wants to satisfy the preferences of agents in it, because that's a very complex hypothesis, because the notions of 'agent' and 'preferences' don't correspond to simple computer programs".

(Of course this argument will only be convincing to someone who is on board with the general project of assessing hypotheses according to their complexity as defined in terms of computer programs or something roughly equivalent, and who agrees with the claim that human-level notions like 'agent' and 'preference' are much harder to write programs for than physics-level ones like 'electron'. Actually formalizing all this stuff seems like a very big challenge, but I remark that in principle -- if execution time and computer memory are no object -- we basically already know how to write a program that implements physics-so-far-as-we-understand-it, but we seem to be some way from writing one that implements anything much like morality-as-we-understand-it.)

Comment author: TheAncientGeek 06 October 2015 01:56:46PM 1 point [-]

It's not surprising that one particular parsimony principle can be used to overturn one particular form of theism. After all, most theists disagree with most theisms... and most believers in a Weird Science hypothesis (MUH, Matrix, etc.) don't believe in the others.

The question is: where is the slam dunk against theism... the one that works against all forms of theism, that works only against theism and not against similar scientific ideas like Matrix Lords, and works against the strongest arguments for theism, not just biblically literalist creationist Protestant Christianity, and doesn't rest on cherry-picking particular parsimony principles?

There are multiple principles of parsimony, multiple Occam's razors.

Some focus on ontology, on the multiplication of entities, as in the original razor; others on epistemology, on the multiplication of assumptions. The Kolmogorov complexity measure is more aligned with the latter.

Smaller universes are favoured by the ontological razor, but disfavoured by the epistemological razor, because they are more arbitrary. Maximally large universes can have low epistemic complexity (because you have to add information specifying what has been left out to arrive at smaller universes) and low Kolmogorov complexity (because short programmes can generate infinite bitstrings, e.g. an expansion of pi).
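The "short programme generating an infinite bitstring" point can be made concrete. The sketch below uses Gibbons' unbounded spigot algorithm (my choice of illustration, not something specified in the comment): a few lines that, given unlimited time and memory, stream out every decimal digit of pi — so the infinite expansion costs roughly the complexity of this short program, not of the digit string itself.

```python
from itertools import islice

def pi_digits():
    """Stream the decimal digits of pi one at a time, forever
    (Gibbons' unbounded spigot algorithm)."""
    q, r, t, j = 1, 180, 60, 2
    while True:
        u = 3 * (3 * j + 1) * (3 * j + 2)
        y = (q * (27 * j - 12) + 5 * r) // (5 * t)
        yield y
        q, r, t, j = (10 * q * j * (2 * j - 1),
                      10 * u * (q * (5 * j - 2) + r - y * t),
                      t * u,
                      j + 1)

print(list(islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```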

we basically already know how to write a program that implements physics-so-far-as-we-understand-it, but we seem to be some way from writing one that implements anything much like morality-as-we-understand-it.

Morality as we know it evolved from physics plus starting conditions. When you say that physics is soluble but morality isn't, I suppose you mean that the starting conditions are absent.

Comment author: VoiceOfRa 03 October 2015 05:24:38AM *  -2 points [-]

That seems really difficult (and high-complexity) to me, whether you focus on identifying human agents or whether you try to do something much more general. Evidently you think otherwise.

This is motivated stopping. You don't want to admit any evidence for theism so you declare the problem impossible instead of thinking about it for 10 seconds.

Here are some hints: If you were dropped into an alien planet or even an alien universe you would have no trouble identifying the most agenty things.

You'd need your definition to pick out what parts of the world are to count as agents that can be involved in game-like interactions,

Well there you go, agents are things that can be involved in game-like interactions.

Comment author: gjm 03 October 2015 09:18:56AM *  5 points [-]

[...] motivated [...] don't want [...] instead of thinking about it

This is the third time in the last few weeks that you have impugned my integrity on what seems to me to be zero evidence. I do wish you would at least justify such claims when you make them. (When I have asked you to do so in the past you have simply ignored the requests.)

Would it kill you to entertain some other hypotheses -- e.g., "the other guy is simply failing to notice something I have noticed" and "I am simply failing to notice something the other guy has noticed"? Perhaps it would; your consistent strategy of downvoting everyone who disagrees with you doesn't exactly suggest that you're here for a collaborative search for truth as opposed to fighting a war with arguments as soldiers.

[EDITED to add: I didn't, in fact, declare anything impossible; and before declaring it very difficult I did in fact think about it for more than ten seconds. I see little evidence that you've given as much thought to anything I've said in this discussion.]

you would have no trouble

I have agent-identifying hardware in my brain. It is, I think, quite complicated. I don't know how to make a computer identify agents, and so far as I know no one else does either. The best automated things I know of for tasks remotely resembling agent-identification are today's state-of-the-art image classifiers, which typically involve large mysterious piles of neural network weights, which surely count as high-complexity if anything does.

agents are things that can be involved in game-like interactions

Identifying game-like interactions is also (so far as I can tell) a problem no one has any inkling how to solve, especially if we don't have the prior ability to identify the agents.

Comment author: VoiceOfRa 04 October 2015 03:03:35AM -2 points [-]

Perhaps it would; your consistent strategy of downvoting everyone who disagrees with you

No, but I do downvote people who appear to be completely mind-killed.

Identifying game-like interactions is also (so far as I can tell) a problem no one has any inkling how to solve, especially if we don't have the prior ability to identify the agents.

Rather, identifying agents using algorithms with reasonable running time is a hard problem.

Also, consider the following relatively uncontroversial beliefs around here:

1) The universe has low Kolmogorov complexity.

2) An AGI is likely to be developed and when it does it'll take over the universe.

Now let's consider some implications of these beliefs:

3) An AGI has low Kolmogorov complexity since it can be specified as "run this low Kolmogorov complexity universe for a sufficiently long period of time".

Also, for the AGI to be successful, it is going to have to be good at detecting agents so it can dedicate sufficient resources to defeating/subverting them. Thus detecting agents must have low Kolmogorov complexity.

Comment author: Good_Burning_Plastic 03 October 2015 10:03:42AM *  2 points [-]

no trouble the most agenty things

I think there a word missing there. ("trouble believing"? "trouble with"? "trouble recognizing"?)

Comment author: VoiceOfRa 04 October 2015 02:47:46AM 1 point [-]

Thanks, fixed.

Comment author: tut 02 October 2015 11:10:18AM *  4 points [-]

No, but it does mean that if you want to argue that humans exist you must provide strong positive evidence, perhaps telling us an address where we can meet a real live human ;)

Comment author: Transfuturist 04 October 2015 05:24:54AM 2 points [-]

I could stand to meet a real-life human. I've heard they exist, but I've had such a hard time finding one!

Comment author: TheAncientGeek 30 September 2015 03:09:47PM *  0 points [-]

Note that infinite sets can have very low informational complexity-- that's why complexity isn't a slam-dunk against MUH.

Don't think of infinite entities as very large finite entities.

Comment author: gjm 30 September 2015 03:23:23PM 0 points [-]

I'm pretty sure I wasn't thinking of infinite entities as very large finite entities, nor was I claiming that infinite sets must have infinite complexity or anything of the kind. What I was claiming high complexity for is the concept of "good", not God or "perfectly good" as opposed to "merely very good".

Comment author: TheAncientGeek 30 September 2015 06:19:32PM 0 points [-]

Wouldn't "perfectly good" be the appropriate concept here?

Comment author: gjm 30 September 2015 10:06:03PM 1 point [-]

Yes, but the point is that the "perfectly" part (1) isn't what I'm blaming for the complexity and (2) doesn't appear to me to make the complexity go away by its presence.

Comment author: TheAncientGeek 01 October 2015 08:27:37AM *  2 points [-]

I don't see how you can be sure about that, when there is so much disagreement about the meaning of good. Human preferences are complex because they are idiosyncratic, but why would a deity, particularly a "philosopher's god", have idiosyncratic preferences? And an omniscient deity could easily be a 100% accurate consequentialist... the difficult part of consequentialism, having reliable knowledge of the consequences, has been granted... all you need to add to omniscience is a Good Will.

IOW, regarding both atheism and consequentialism as slam-dunks is a bit of a problem, because if you follow through the consequences of consequentialism, many of the arguments for atheism unravel: a consequentialist deity is fully entitled to destroy two cities to save 10; that would be his version of a trolley problem.

Comment author: Lumifer 01 October 2015 02:23:16PM 1 point [-]

a consequentialist deity is fully entitled to destroy two cities to save 10

Not if the deity is omnipotent.

Comment author: TheAncientGeek 02 October 2015 12:04:02PM 1 point [-]

That's debatable, at which point it is no longer a slam dunk.

Comment author: gjm 01 October 2015 12:27:27PM 2 points [-]

It seems to me that no set of preferences that can be specified very simply without appeal to human-level concepts is going to be close enough to what we call "good" to deserve that name.

a consequentialist deity is fully entitled to destroy two cities to save 10

I entirely agree, but I don't see how this makes a substantial fraction of the arguments for atheism unravel; in particular, most thoughtful statements of the argument from evil say not "bad things happen, therefore no god" but "bad things happen without any sign that they are necessary to enable outweighing gains, therefore probably no god".

Comment author: CCC 05 October 2015 08:27:25AM 0 points [-]

The general opinion around here (which I share) is that the complexity of those is much higher than you probably think it is.

That is possible. I have no idea how to specify such things in a minimum number of bits of information.

Whereas, e.g., saying "electrons obey the Dirac equation" feels really complicated to us but is actually much simpler.

This is true; yet there may be fewer human-level concepts and more laws of physics. I am still unconvinced which complexity is higher; mainly because I have absolutely no idea how to measure the complexity of either in the first place. (One can do a better job of estimating the complexity of the laws of physics because they are better known, but they are not completely known).


But let us consider what happens if you are right, and the complexity of my hypothesis is higher than the complexity of yours. Then that would form a piece of probabilistic evidence in favour of the atheist hypothesis, and the correct action to take would be to update - once - in that direction by an appropriate amount. I'm not sure what an appropriate amount is; that would depend on the ratio of the complexities (but is capped by the possibility of getting that ratio wrong).

This argument does not, and can not, in itself, give anywhere near the amount of certainty implied by this statement (quoted from here):

...would rather push a button that would destroy the world if God exists, than a button that had a known probability of one in a billion of destroying the world.


I should also add that the existence of God does not invalidate reductionist mathematics-based thinking in any way.

Comment author: gjm 05 October 2015 09:40:25AM 1 point [-]

there may be fewer human-level concepts and more laws of physics

Well, I suppose in principle there might. But would you really want to bet that way?

update - once - in that direction by an appropriate amount

Yes, I completely agree.

capped by the possibility of getting that ratio wrong

Almost, but not exactly. It makes a difference how wrong, and in which direction.

can not [...] give anywhere near the amount of certainty [...] one in a billion

One in a billion is only about 30 bits. I don't think it's at all impossible for the complexity-based calculation, if one could do it, to give a much bigger odds ratio than that. The question then is what to do about the possibility of having got the complexity-based calculation (or actually one's estimate of it) badly wrong. I'm inclined to agree that when one takes that into account it's not reasonable to use an odds ratio as large as 10^9:1.
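The bits-to-odds bookkeeping used here is just base-2 logarithms; a quick sanity check:

```python
import math

# Evidence measured in bits is log2 of the odds ratio it supports.
bits_for_a_billion = math.log2(1e9)
print(round(bits_for_a_billion, 1))  # 29.9 -- "one in a billion is only about 30 bits"

# Conversely, n bits of evidence corresponds to 2**n : 1 odds.
print(2 ** 7)  # 128 -- the seven-bit figure that comes up later in the thread
```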

But it's not as if this complexity argument is the only reason anyone has for not believing in God. (Some people consider it the strongest reason, but "strongest" is not the same as "only".)

Incidentally, I offer the following (not entirely serious) argument for pressing the boom-if-God button rather than the boom-with-small-probability button: the chances of the world being undestroyed afterwards are presumably better if God exists.

Comment author: CCC 06 October 2015 10:07:34AM 1 point [-]

Well, I suppose in principle there might. But would you really want to bet that way?

Insufficient information to bet either way.

The question then is what to do about the possibility of having got the complexity-based calculation (or actually one's estimate of it) badly wrong. I'm inclined to agree that when one takes that into account it's not reasonable to use an odds ratio as large as 10^9:1.

Yes, that's what I meant by "capped" - if I did that calculation (somehow working out the complexities) and it told me that there was a one-in-a-billion chance, then there would be a far, far better than a one-in-a-billion chance that the calculation was wrong.

But it's not as if this complexity argument is the only reason anyone has for not believing in God. (Some people consider it the strongest reason, but "strongest" is not the same as "only".)

Noted.

If I assume that the second-strongest reason is (say) 80% as strong as the strongest reason (by which I mean, 80% as many bits of persuasiveness), the third-strongest reason is 80% as strong as that, and so on; if the strength of all this (potentially infinite) series of reasons is added together, it would come to five times as strong as the strongest reason.

Thus, for a thirty-bit strength from all the reasons, the strongest reason would need a six-bit strength - it would need to be worth one in sixty-four (approximately).

Of course, there's a whole lot of vague assumptions and hand-waving in here (particularly that 80% figure, which I just pulled out of nowhere) but, well, I haven't seen any reason to think it at all likely that the complexity argument is worth even three bits, never mind six.
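The arithmetic behind this estimate is a geometric series; under the (admittedly hand-waved) 80% assumption:

```python
r = 0.8                      # each reason is 80% as strong as the previous one
total = 1 / (1 - r)          # sum of 1 + r + r**2 + ... = 5x the strongest reason
strongest_bits = 30 / total  # so 30 bits in total needs a 6-bit strongest reason

# 5.0 6.0 64.0 -- six bits is odds of about one in sixty-four
print(round(total, 9), round(strongest_bits, 9), round(2 ** strongest_bits, 6))
```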

(Mind you, I can see how a reasonable and intelligent person might disagree on me about that).

Incidentally, I offer the following (not entirely serious) argument for pressing the boom-if-God button rather than the boom-with-small-probability button: the chances of the world being undestroyed afterwards are presumably better if God exists.

...serious or not, that is a point worth considering. I'm not sure that it's true, but it could be interesting to debate.

Comment author: gjm 06 October 2015 04:29:35PM 1 point [-]

80% [...] 80% [...] 80%

I would expect heavier tails than that. (For other questions besides that of gods, too.) I'd expect that there might be dozens of reasons providing half a bit or so.

I haven't seen any reason to think it at all likely that the complexity argument is worth even three bits, never mind six.

For what it's worth, I might rate it at maybe 7 bits. Whether I'm a reasonable and intelligent person isn't for me to say :-).

Comment author: CCC 07 October 2015 11:24:16AM 0 points [-]

I would expect heavier tails than that. (For other questions besides that of gods, too.) I'd expect that there might be dozens of reasons providing half a bit or so.

Fair enough. That 80% figure was kindof pulled out of nowhere, really.

For what it's worth, I might rate it at maybe 7 bits. Whether I'm a reasonable and intelligent person isn't for me to say :-).

You think the theistic explanation might be as much as a hundred times more complex?

...there may be some element of my current position biasing my estimate, but that does seem a little excessive.

Whether I'm a reasonable and intelligent person isn't for me to say :-).

So far as this debate goes, my impression is that you either are both reasonable and intelligent or you're really good at faking it.

Comment author: gjm 07 October 2015 02:24:04PM 0 points [-]

as much as a hundred times more complex?

No, as much as seven bits more complex. (More precisely, I think it's probably a lot more more-complex than that, but I'm quite uncertain about my estimates.)

really good at faking it

Damn, you caught me. (Seriously: I'm pretty sure that being really good at faking intelligence requires intelligence. I'm not so sure about reasonable-ness.)

Comment author: CCC 08 October 2015 09:08:50AM 0 points [-]

No, as much as seven bits more complex.

One bit is twice as likely.

Seven bits are two-to-the-seven times as likely, which is 128 times.

...surely?

(Seriously: I'm pretty sure that being really good at faking intelligence requires intelligence. I'm not so sure about reasonable-ness.)

I can think of a few ways to fake greater intelligence than you have. Most of them require a more intelligent accomplice, in one way or another. But yes, reasonableness is probably easier to fake.

Comment author: gjm 08 October 2015 11:21:27AM *  0 points [-]

128x more unlikely but not 128x more complex; for me, at least, complexity is measured in bits rather than in number-of-possibilities.

[EDITED to add: If anyone has a clue why this was downvoted, I'd be very interested. It seems so obviously innocuous that I suspect it's VoiceOfRa doing his thing again, but maybe I'm being stupid in some way I'm unable to see.]

Comment author: CCC 12 October 2015 10:35:03AM 0 points [-]

...I thought that the ratio of likeliness due to the complexity argument would be the inverse of the ratio of complexity. Thus, something twice as complex would be half as likely. Is this somehow incorrect?

(I have no idea why it was downvoted)