All of Prankster's Comments + Replies

Prankster460

Done.

Looking forward to the analysis and release of data!

It seems to me that, as you point out yourself, the concepts mean different things depending on whether you apply them to humans.

Abstract intelligence and abstract rationality are pretty much the same thing as far as I understand. The first is "ability to efficiently achieve goals in a wide range of domains", and the second one is a combination of instrumental rationality and epistemic rationality, which amounts to basically "solving problems given your information" and "acquiring information". When put together, the two types... (read more)

Something I can say about the world doesn't completely determine everything else I can say about the same green strand, but something that exists in the world does completely determine what else exists along the same blue line.

That seems true. The core reductionist tenet seems to be that you don't need the thing that exists explained/observed on every level of abstraction, but rather that you could deduce everything else about the object given only the most fundamental description. This seems to imply that there is some element of direction even in the ... (read more)

0[anonymous]
Well, what pushed me to write this post—in combination with the sequences here—was David Deutsch's books Fabric of Reality and Beginning of Infinity; I don't know that either is legally available online, I'm afraid.

Fair correction; I think "explanation" and "cause" got lumped together in my head under the general category of "words that mean 'X is so because of Y.'" Anyway, I can see the difference now, and the argument makes sense the way you put it in your response to shminux.

I still think the blue arrow might be directional, though. It seems to me that in many cases things on one level could be made out of several different things on the lower level (e.g. a "door" might be made out of wood or metal, it might or might not have a han... (read more)

1[anonymous]
I would say: "door" is an element of the map, and could be made from "wood" or "metal," and have or not have a "handle"; but this door beside me right now is an element of the territory, and is made from wood, and does have a handle. The green arrows are map, and directional; the blue line is territory, and not directional. Something I can say about the world doesn't completely determine everything else I can say about the same green strand, but something that exists in the world does completely determine what else exists along the same blue line. I tried to make what I was getting at clearer in my edit to the OP a few minutes ago.

The article seems to explicitly state that the blue line mentioned is a deliberate oversimplification for explaining, in general, how to think about knowledge in a reductionist sense. The bigger issue here seems to be that you might be able to build a brain out of something that isn't atoms, rather than being able to build different things with atoms.

I really like the imagery in your explanation, but I am not entirely clear on what the golden threads symbolize here. Would it be fair to say that the golden threads are the explanations of how a law or model on a lower level of abstraction causes the observations on a higher level?

Also, I don't really think you could deduce the entire structure of the blue line from any one point, as you seem to imply.

If you are given the physics of a universe, there might be several possible types of physiology, and for every such physiology there might be several di... (read more)

3[anonymous]
That's a good way of putting it, except that it would be "explains" rather than "causes." I definitely should make it more clear that—because there are actually many more columns than shown in the diagram—a golden thread connects an entire row to the entire row above it, not just one point to one point. I wasn't clear there; please see my reply to shminux, who had the same objection.

The value-loading problem is the problem of getting an AI to value certain things, that is, of writing its utility function. In solving this problem, you can either try to hard-code something into the function, like "paperclips good!" This is direct specification: writing a function by hand that values certain things. But when we want to make an AI value something like "doing the right thing," this becomes infeasible.

Instead, you could solve the problem by having the AI figure out what you want by itself. The idea is then that the AI can figure out ... (read more)
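To make the contrast a bit more concrete, here is a toy sketch of my own (not anything from the comment above): direct specification just hard-codes a utility function, while the indirect route has the agent fit a utility function to observed human choices. All of the function names, features, and numbers below are made up for illustration, and real value learning is vastly harder than a two-feature preference fit.

```python
import math
from typing import Callable, Dict, List, Tuple

State = Dict[str, float]

def paperclip_utility(world_state: State) -> float:
    """Direct specification: the programmer hard-codes what is valued."""
    return world_state.get("paperclips", 0.0)

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def fit_utility_from_choices(
    feature_names: List[str],
    choices: List[Tuple[State, State]],  # (preferred_state, rejected_state) pairs
    steps: int = 500,
    lr: float = 0.1,
) -> Callable[[State], float]:
    """Crude indirect approach: adjust linear feature weights so that the
    states a human was observed to prefer score higher than the rejected
    ones (a logistic, Bradley-Terry-style update)."""
    weights = {f: 0.0 for f in feature_names}
    for _ in range(steps):
        for preferred, rejected in choices:
            gaps = {f: preferred.get(f, 0.0) - rejected.get(f, 0.0) for f in feature_names}
            score_gap = sum(weights[f] * gaps[f] for f in feature_names)
            surprise = 1.0 - sigmoid(score_gap)  # how poorly current weights explain the choice
            for f in feature_names:
                weights[f] += lr * surprise * gaps[f]

    def learned_utility(world_state: State) -> float:
        return sum(weights[f] * world_state.get(f, 0.0) for f in feature_names)
    return learned_utility

# Toy usage: the "human" consistently prefers flourishing over paperclips.
observed_choices = [
    ({"flourishing": 0.9, "paperclips": 0.1}, {"flourishing": 0.2, "paperclips": 0.9}),
    ({"flourishing": 0.7, "paperclips": 0.0}, {"flourishing": 0.3, "paperclips": 0.5}),
]
learned = fit_utility_from_choices(["flourishing", "paperclips"], observed_choices)
print(learned({"flourishing": 1.0, "paperclips": 0.0}))  # should score higher...
print(learned({"flourishing": 0.0, "paperclips": 1.0}))  # ...than this
```

The point of the sketch is only the shape of the two approaches: in the first, everything the AI will ever value is fixed in advance by whoever wrote the function; in the second, the programmer specifies a learning procedure and the values themselves are filled in from evidence about what people actually want.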