VoiceOfRa comments on Rationality Quotes Thread September 2015 - Less Wrong

3 Post author: elharo 02 September 2015 09:25AM


Comment author: VoiceOfRa 02 October 2015 01:07:59AM *  -2 points [-]

I'm not entirely convinced, but in any case even "human" is a really complicated concept.

I guess that means humans don't exist. Oh, wait.

Comment author: tut 02 October 2015 11:10:18AM *  4 points [-]

No, but it does mean that if you want to argue that humans exist you must provide strong positive evidence, perhaps telling us an address where we can meet a real live human ;)

Comment author: Transfuturist 04 October 2015 05:24:54AM 2 points [-]

I could stand to meet a real-life human. I've heard they exist, but I've had such a hard time finding one!

Comment author: gjm 02 October 2015 04:57:50PM 2 points [-]

No idea where you get that from. Theories don't get a complexity penalty for the complexity of things that appear in universes governed by the theories, but for the complexity of their assumptions. If you have an explanation of the universe that has "there is a good god" as a postulate, then whatever complexity is hidden in the words "good" and "god" counts against that explanation.

Comment author: VoiceOfRa 02 October 2015 11:04:42PM 0 points [-]

Yes, and God would care about game theory concepts and apply them to whatever beings exist.

Comment author: gjm 03 October 2015 01:39:02AM 1 point [-]

If I'm correctly understanding what you're claiming, it's something like this: "One can postulate a supremely good being without needing human-level concepts that turn out to be really high-complexity, by defining 'good' in very general game-theoretic terms". (And, I assume from the context in which you're making the claims: "... And this salvages the project, mentioned above by CCC, of postulating God as an explanation for the world we see, the idea being that ultimately the details of physical law follow from God's commitment to making the best possible world or something of the kind".)

I'm very pessimistic about the prospects for defining "good" in abstract game-theoretic terms with enough precision to carry out any project like this. You'd need your definition to pick out what parts of the world are to count as agents that can be involved in game-like interactions, and to identify what their preferences are, and to identify what counts as a move in each game, and so forth. That seems really difficult (and high-complexity) to me, whether you focus on identifying human agents or whether you try to do something much more general. Evidently you think otherwise. Could you explain why?

(I'll mention two specific difficulties I anticipate if you're aiming for simplicity through generality. First: how do you avoid identifying everything as an agent and everything that happens as an action? Second: if the notion of goodness that emerges from this is to resemble ours enough for the word "good" actually to be appropriate, it will have to give different weight to different agents' interests -- humans should matter more than ducks, etc. How will it do that?)

Comment author: TheAncientGeek 05 October 2015 09:54:38AM 1 point [-]

I'm very pessimistic about the prospects for defining "good" in abstract game-theoretic terms with enough precision to carry out any project like this. You'd need your definition to pick out what parts of the world are to count as agents that can be involved in game-like interactions, and to identify what their preferences are, and to identify what counts as a move in each game, and so forth. That seems really difficult (and high-complexity) to me, whether you focus on identifying human agents or whether you try to do something much more general. Evidently you think otherwise. Could you explain why?

So it would be difficult for a finite being that is figuring out some facts that it doesn't already know on the basis of other facts that it does know. Now... how about an omniscient being?

Comment author: gjm 05 October 2015 01:03:19PM 0 points [-]

I think you may be misunderstanding what the relevance of the "difficulty" is here.

The context is the following question:

  • If we are comparing explanations for the universe on the basis of hypothesis-complexity (e.g., because we are using something like a Solomonoff prior), what complexity should we estimate for notions like "good"?

If some notion like "perfectly benevolent being of unlimited power" turns out to have very low complexity, so much the better for theistic explanations of the universe. If it turns out to have very high complexity, so much the worse for such explanations.

(Of course that isn't the only relevant question. We also need to estimate how likely a universe like ours is on any given hypothesis. But right now it's the complexity we're looking at.)

In answering this question, it's completely irrelevant how good some hypothetical omniscient being might be at figuring out what parts of the world count as "agents" and what their preferences are and so on, even though ultimately hypothetical omniscient beings are what we're interested in. The atheistic argument here isn't "It's unlikely that the world was created by a god who wants to satisfy the preferences of agents in it, because identifying those agents and their preferences would be really difficult even for a god" (to which your question would be an entirely appropriate rejoinder). It's something quite different: "It's not a good explanation for the universe to say that it was created by a god who wants to satisfy the preferences of agents in it, because that's a very complex hypothesis, because the notions of 'agent' and 'preferences' don't correspond to simple computer programs".

(Of course this argument will only be convincing to someone who is on board with the general project of assessing hypotheses according to their complexity as defined in terms of computer programs or something roughly equivalent, and who agrees with the claim that human-level notions like 'agent' and 'preference' are much harder to write programs for than physics-level ones like 'electron'. Actually formalizing all this stuff seems like a very big challenge, but I remark that in principle -- if execution time and computer memory are no object -- we basically already know how to write a program that implements physics-so-far-as-we-understand-it, but we seem to be some way from writing one that implements anything much like morality-as-we-understand-it.)

Comment author: TheAncientGeek 06 October 2015 01:56:46PM 1 point [-]

It's not surprising that one particular parsimony principle can be used to overturn one particular form of theism. After all, most theists disagree with most theisms... and most believers in a Weird Science hypothesis (MUH, Matrix, etc.) don't believe in the others.

The question is: where is the slam dunk against theism... the one that works against all forms of theism, that works only against theism, and not against similar scientific ideas like Matrix Lords, and works against the strongest arguments for theism, not just biblically literalist creationist protestant Christianity, and doesn't rest on cherry-picking particular parsimony principles?

There are multiple principles of parsimony, multiple Occam's razors.

Some focus on ontology, on the multiplication of entities, as in the original razor; others on epistemology, on the multiplication of assumptions. The Kolmogorov complexity measure is more aligned with the latter.

Smaller universes are favoured by the ontological razor, but disfavoured by the epistemological razor, because they are more arbitrary. Maximally large universes can have low epistemic complexity (because you have to add information specifying what has been left out to arrive at smaller universes), and low K-complexity (because short programmes can generate infinite bitstrings, e.g. an expansion of pi).
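[Editor's illustration, not part of the original comment.] The parenthetical claim that a short programme can generate an infinite bitstring is easy to demonstrate concretely. Gibbons's unbounded spigot algorithm emits the decimal digits of pi one at a time, forever, from a few lines of integer arithmetic -- so the infinite digit string has a very short description:

```python
def pi_digits():
    """Gibbons's unbounded spigot algorithm: yields the decimal
    digits of pi one at a time, forever, using only integer arithmetic."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # the next digit has stabilised; emit it and rescale
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            # fold one more term of the underlying series into the state
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

digits = pi_digits()
print([next(digits) for _ in range(10)])  # first ten digits of pi
```

The programme is a constant size, yet it can be run for as long as you like; that is exactly the sense in which an infinite output can have low K-complexity.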

we basically already know how to write a program that implements physics-so-far-as-we-understand-it, but we seem to be some way from writing one that implements anything much like morality-as-we-understand-it.

Morality as we know it evolved from physics plus starting conditions. When you say that physics is soluble but morality isn't, I suppose you mean that the starting conditions are absent.

Comment author: tailcalled 06 October 2015 05:33:44PM 1 point [-]

Morality as we know it evolved from physics plus starting conditions. When you say that physics is soluble but morality isn't, I suppose you mean that the starting conditions are absent.

You need to know not just the starting conditions, but also the position where morality evolves. That position can theoretically have huge complexity.

Comment author: gjm 06 October 2015 04:37:54PM 0 points [-]

multiple Occam's razors

Well, obviously we should pick the simplest one :-).

Seriously: I wouldn't particularly expect there to be a single all-purpose slam dunk against all varieties of theism. Different varieties of theism are, well, very different. (Even within, say, protestant Christianity, one has the fundamentalists and the super-fuzzy liberals, and since they agree on scarcely any point of fact I wouldn't expect any single argument to be effective against both positions.)

ontology [...] epistemology

I'm pretty sure that around these parts the "epistemological" sort (minimize description / program rather than size of what it describes / produces) is much, much more widely held than the "ontological" sort.

I suppose you mean that the starting conditions are absent.

That's one reasonable way of looking at it, but if the best way we can find to compute morality-as-we-understand-it is to run a complete physical simulation of our universe then the outlook doesn't look good for the project of finding a simpler-than-naturalism explanation of our universe based on the idea that it's the creation of a supremely good being.

Comment author: hairyfigment 06 October 2015 08:07:46PM 0 points [-]

I'm pretty sure that around these parts the "epistemological" sort (minimize description / program rather than size of what it describes / produces) is much, much more widely held than then "ontological" sort.

So you don't think we're mostly solipsists? :)

Comment author: VoiceOfRa 03 October 2015 05:24:38AM *  -2 points [-]

That seems really difficult (and high-complexity) to me, whether you focus on identifying human agents or whether you try to do something much more general. Evidently you think otherwise.

This is motivated stopping. You don't want to admit any evidence for theism so you declare the problem impossible instead of thinking about it for 10 seconds.

Here are some hints: If you were dropped onto an alien planet, or even into an alien universe, you would have no trouble identifying the most agenty things.

You'd need your definition to pick out what parts of the world are to count as agents that can be involved in game-like interactions,

Well there you go, agents are things that can be involved in game-like interactions.

Comment author: gjm 03 October 2015 09:18:56AM *  5 points [-]

[...] motivated [...] don't want [...] instead of thinking about it

This is the third time in the last few weeks that you have impugned my integrity on what seems to me to be zero evidence. I do wish you would at least justify such claims when you make them. (When I have asked you to do so in the past you have simply ignored the requests.)

Would it kill you to entertain some other hypotheses -- e.g., "the other guy is simply failing to notice something I have noticed" and "I am simply failing to notice something the other guy has noticed"? Perhaps it would; your consistent strategy of downvoting everyone who disagrees with you doesn't exactly suggest that you're here for a collaborative search for truth as opposed to fighting a war with arguments as soldiers.

[EDITED to add: I didn't, in fact, declare anything impossible; and before declaring it very difficult I did in fact think about it for more than ten seconds. I see little evidence that you've given as much thought to anything I've said in this discussion.]

you would have no trouble

I have agent-identifying hardware in my brain. It is, I think, quite complicated. I don't know how to make a computer identify agents, and so far as I know no one else does either. The best automated things I know of for tasks remotely resembling agent-identification are today's state-of-the-art image classifiers, which typically involve large mysterious piles of neural network weights, which surely count as high-complexity if anything does.

agents are things that can be involved in game-like interactions

Identifying game-like interactions is also (so far as I can tell) a problem no one has any inkling how to solve, especially if we don't have the prior ability to identify the agents.

Comment author: VoiceOfRa 04 October 2015 03:03:35AM -2 points [-]

Perhaps it would; your consistent strategy of downvoting everyone who disagrees with you

No, but I do downvote people who appear to be completely mind-killed.

Identifying game-like interactions is also (so far as I can tell) a problem no one has any inkling how to solve, especially if we don't have the prior ability to identify the agents.

Rather, identifying agents using algorithms with reasonable running time is a hard problem.

Also, consider the following relatively uncontroversial beliefs around here:

1) The universe has low Kolmogorov complexity.

2) An AGI is likely to be developed and when it does it'll take over the universe.

Now let's consider some implications of these beliefs:

3) An AGI has low Kolmogorov complexity since it can be specified as "run this low Kolmogorov complexity universe for a sufficiently long period of time".

Also, to be successful the AGI is going to have to be good at detecting agents so it can dedicate sufficient resources to defeating/subverting them. Thus detecting agents must have low Kolmogorov complexity.

Comment author: gjm 04 October 2015 11:19:28AM 2 points [-]

I do downvote people who appear to be completely mind-killed

I think your mindkill detection algorithms need some tuning; they have both false positives and false negatives.

Rather [...] with reasonable running time

I know of no credible way to do it with unreasonable running time either. (Unless you count saying "AIXI can solve any solvable problem, in principle, so use AIXI", but I see no reason to think that this leads you to a solution with low Kolmogorov complexity.)

I don't think your argument from superintelligent AI works; exactly where it fails depends on some details you haven't specified, but the trouble is some combination of the following.

  • For your first premise to be uncontroversial around here, I think you need to either take it as applying only to the form of the laws of physics and not to initial conditions, arbitrary constants, etc. (in which case you can't identify "this universe" and still have it be of low complexity) or adopt something like Tegmark's MUH that amounts to running every version of the universe (all boundary conditions, all values for the constants, etc.) in parallel (in which case what gets taken over by a superintelligent AI is no longer the whole thing but a possibly-tiny part, and specifying that part costs a lot of complexity).
  • You need to say where in the universe the AGI is, which imposes a large complexity cost -- unless ...
  • ... unless you are depending on it taking over the whole universe so that you can just point at the whole caboodle and say "that thing" -- but then presumably its agent-detection facilities are a tiny part of the whole (not necessarily a spatially localized part, of course), and singling those out so you can say "agents are things that that identifies as agents" again has a large complexity cost from locating them.

Comment author: VoiceOfRa 05 October 2015 02:44:10AM -2 points [-]

For your first premise to be uncontroversial around here, I think you need to either take it as applying only to the form of the laws of physics and not to initial conditions, arbitrary constants, etc. (in which case you can't identify "this universe" and still have it be of low complexity)

Doesn't that undermine the premise of the whole "a godless universe has low Kolmogorov complexity" argument that you're trying to make?

adopt something like Tegmark's MUH that amounts to running every version of the universe (all boundary conditions, all values for the constants, etc.) in parallel (in which case what gets taken over by a superintelligent AI is no longer the whole thing but a possibly-tiny part, and specifying that part costs a lot of complexity).

Well, all the universes that can support life are likely to wind up taken over by AGIs.

unless you are depending on it taking over the whole universe so that you can just point at the whole caboodle and say "that thing" -- but then presumably its agent-detection facilities are a tiny part of the whole (not necessarily a spatially localized part, of course), and singling those out so you can say "agents are things that that identifies as agents" again has a large complexity cost from locating them.

But, the AGI can. Agentiness is going to be a very important concept for it. Thus it's likely to have a short referent to it.

Comment author: Transfuturist 05 October 2015 04:18:39AM *  2 points [-]

Doesn't that undermine the premise of the whole "a godless universe has low Kolmogorov complexity" argument that you're trying to make?

Again, there is a difference between the complexity of the dynamics defining state transitions, and the complexity of the states themselves.

But, the AGI can. Agentiness is going to be a very important concept for it. Thus it's likely to have a short referent to it.

What do you mean by "short referent?" Yes, it will likely be an often-used concept, so the internal symbol signifying the concept is likely to be short, but that says absolutely nothing about the complexity of the concept itself. If you want to say that "agentiness" is a K-simple concept, perhaps you should demonstrate that by explicating a precise computational definition for an agent detector, and show that it doesn't fail on any conceivable edge-cases.

Saying that it's important doesn't mean it's simple. "For an AGI to be successful it is going to have to be good at reducing entropy globally. Thus reducing entropy globally must have low Kolmogorov complexity."

Comment author: gjm 05 October 2015 09:31:09AM 1 point [-]

the whole "a godless universe has low Kolmogorov complexity" argument that you're trying to make.

It might, perhaps, if I were actually trying to make that argument. But so far as I can see no one is claiming here that the universe has low komplexity. (All the atheistic argument needs is for the godless version of the universe to have lower komplexity than the godded one.)

all the universes that can support life are likely to wind up taken over by AGIs.

Even if so, you still have the locate-the-relevant-bit problem. (Even if you can just say "pick any universe", you have to find the relevant bit within that universe.) It's also not clear to me that locating universes suitable for life within something like the Tegmark multiverse is low-komplexity.

the AGI can. [...] it's likely to have a short referent to it.

An easy-to-use one, perhaps, but I see no guarantee that it'll be something easy to identify for others, which is what's relevant.

Consider humans; we're surely much simpler than a universe-spanning AGI (and also more likely to have a concept that nicely matches the human concept of "agent"; perhaps a universe-spanning AGI would instead have some elaborate range of "agent"-like concepts making fine distinctions we don't see or don't appreciate; but never mind that). Could you specify how to tell, using a human brain, whether something is an agent? (Recall that for komplexity-measuring purposes, if you do so by means of language or something then the komplexity of that language is part of the cost you pay. In fact, it's worse; you need to specify how to work out that language by looking at human brains. Similarly, if you want to say "look at the neurons located here", the thing you need to pay the komplexity-cost of is not just specifying "here" but specifying how to find "here" in a way that works for any possible human-like thing.)

Comment author: Transfuturist 04 October 2015 05:51:34AM *  1 point [-]

An AGI has low Kolmogorov complexity since it can be specified as "run this low Kolmogorov complexity universe for a sufficiently long period of time".

That's a fundamental misunderstanding of complexity. The laws of physics are simple, but the configurations of the universe that runs on them can be incredibly complex. The amount of information needed to specify the configuration of any single cubic centimeter of space is literally unfathomable to human minds. Running a simulation of the universe until intelligences develop inside of it is not the same as specifying those intelligences, or intelligence in general.
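[Editor's illustration, not part of the original comment.] The distinction between simple dynamics and complex states has a standard toy demonstration: Wolfram's Rule 30, a one-dimensional cellular automaton whose update rule fits in a single line, yet whose states quickly look irreducibly complex:

```python
def rule30_step(cells):
    """One synchronous step of Wolfram's Rule 30 on a ring of cells:
    new cell = left XOR (centre OR right)."""
    n = len(cells)
    return [cells[i - 1] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

# The "law" above is one line; the states it produces are not simple-looking.
state = [0] * 31
state[15] = 1  # start from a single live cell
for _ in range(12):
    state = rule30_step(state)
print("".join("#" if c else "." for c in state))
```

The description length of the rule plus the initial condition stays tiny no matter how long you run it; specifying an arbitrary intermediate state directly, with no reference to the rule, is what costs a lot of bits.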

Also the AGI to be successful is going to have to be good at detecting agents so it can dedicated sufficient resources to defeating/subverting them. Thus detecting agents must have low Kolmogorov complexity.

The convenience of some hypothetical property of intelligence does not act as a proof of that property. Please note that we are in a highly specific environment, where humans are the only sapients around, and animals are the only immediately recognizable agents. There are sci-fi stories about your "necessary" condition being exactly false; where humans do not recognize some intelligence because it is not structured in a way that humans are capable of recognizing.

Comment author: hairyfigment 04 October 2015 07:00:40AM 0 points [-]

Not sure anyone is dumb enough to think the visible universe has low Kolmogorov complexity. That's actually kind of the reason why we keep talking about a universal wavefunction, and even larger Big Worlds, none of which an AGI could plausibly control.

Comment author: [deleted] 04 October 2015 03:25:45PM -1 points [-]

The Second Law of Thermodynamics causes the Kolmogorov complexity of the universe to increase over time. What you've actually constructed is an argument against being able to simulate the universe in full fidelity.

Comment author: IlyaShpitser 05 October 2015 02:42:48AM *  2 points [-]

This is not right, K(.) is a function that applies to computable objects. It either does not apply to our Universe, or is a constant if it does (this constant would "price the temporal evolution in").

Comment author: [deleted] 05 October 2015 03:46:48AM 1 point [-]

I sincerely don't think it works that way. Consider the usual relationship between Shannon entropy and Kolmogorov complexity: H(X) ∝ E[K(X)]. We know that the Gibbs, and thus Shannon, entropy of the universe is nondecreasing, which means that the distribution over universe-states is getting more concentrated on more complex states over time. So the Kolmogorov complexity of the universe, viewed at a given instant in time but from a "god's eye view", is going up.

You could try to calculate the maximum possible entropy in the universe and "price that in" as a constant, but I think that dodges the point in the same way as AIXI_{tl} does by using an astronomically large "constant factor". You're just plain missing information if you try to simulate the universe from its birth to its death from within the universe. At some point, your simulation won't be identical to the real universe anymore, it'll diverge from reality because you're not updating it with additional empirical data (or rather, because you never updated it with any empirical data).

Hmmm... is there an extension of Kolmogorov complexity defined to describe the information content of probabilistic Turing machines (which make random choices) instead of deterministic ones? I think that would better help describe what we mean by "complexity of the universe".
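[Editor's illustration, not part of the original comment.] The claimed link between entropy and average complexity can be checked empirically in miniature, using compressed length under a general-purpose compressor as a crude, upper-bound stand-in for K: samples drawn from a higher-entropy source need longer descriptions on average than samples from a low-entropy source.

```python
import random
import zlib

random.seed(0)
N = 4096

# Low-entropy source: bits heavily biased towards 0.
biased = bytes(int(random.random() < 0.05) for _ in range(N))
# High-entropy source: fair coin flips.
fair = bytes(int(random.random() < 0.5) for _ in range(N))

len_biased = len(zlib.compress(biased, 9))
len_fair = len(zlib.compress(fair, 9))
print(len_biased, "<", len_fair)  # the fair-coin sample compresses worse
```

This is only a proxy -- zlib is far from an optimal code, and compressed length upper-bounds K rather than equalling it -- but it shows the direction of the H(X) ∝ E[K(X)] relationship.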

Comment author: 50lbsofstorkmeat 05 October 2015 04:22:24AM 0 points [-]

This is not and cannot be true. I mean, for one the universe doesn't have a Kolmogorov complexity*. But more importantly, a hypothesis is not penalized for having entropy increase over time as long as the increases in entropy arise from deterministic, entropy-increasing interactions specified in advance. Just as atomic theory isn't penalized for having lots of distinct objects, thermodynamics is not penalized for having seemingly random outputs which are secretly guided by underlying physical laws.

*If you do not see why this is true, consider that there can be multiple hypotheses which would output the same state in their resulting universes. An obvious example would be one which specifies our laws of physics and another which specifies the position of every atom without compression in the form of physical law.

Comment author: [deleted] 06 October 2015 11:05:43PM 0 points [-]

*If you do not see why this is true, consider that there can be multiple hypotheses which would output the same state in their resulting universes. An obvious example would be one which specifies our laws of physics and another which specifies the position of every atom without compression in the form of physical law.

This is exactly the sort of thing for which Kolmogorov complexity exists: to specify the length of the shortest hypothesis which outputs the correct result.

Just as atomic theory isn't penalized for having lots of distinct objects

Atomic theory isn't "penalized" because it has lots of distinct but repeated objects. It actually has very few things that don't repeat. Atomic theory, after all, deals with masses of atoms.

Comment author: VoiceOfRa 05 October 2015 02:34:22AM -1 points [-]

The Second Law of Thermodynamics causes the Kolmogorov complexity of the universe to increase over time. What you've actually constructed is an argument against being able to simulate the universe in full fidelity.

Um, you appear to be trying to argue that the universe has infinite Kolmogorov complexity. Well, if it does, it kind of undermines the whole "we must reject God because a godless universe has lower Kolmogorov complexity" argument.

Comment author: [deleted] 05 October 2015 03:35:39AM -1 points [-]

Um, you appear to be trying to argue that the universe has infinite Kolmogorov complexity.

Not infinite, just growing over time. This just means that it's impossible to simulate the universe with full fidelity from inside the universe, as you would need a bigger universe to do it in.

Comment author: Good_Burning_Plastic 03 October 2015 10:03:42AM *  2 points [-]

no trouble the most agenty things

I think there a word missing there. ("trouble believing"? "trouble with"? "trouble recognizing"?)

Comment author: VoiceOfRa 04 October 2015 02:47:46AM 1 point [-]

Thanks, fixed.