It seems to me that, as you point out yourself, the concepts mean different things depending on whether you apply them to humans.
Abstract intelligence and abstract rationality are pretty much the same thing, as far as I understand them. The first is the "ability to efficiently achieve goals in a wide range of domains", and the second is a combination of instrumental rationality and epistemic rationality, which amount to, roughly, "solving problems given your information" and "acquiring information" respectively. When put together, the two types...
Something I can say about the world doesn't completely determine everything else I can say about the same green strand, but something that exists in the world does completely determine what else exists along the same blue line.
That seems true. The core reductionist tenet seems to be that you don't need the thing that exists to be explained or observed on every level of abstraction, but rather that you could deduce everything else about the object given only the most fundamental description. This seems to imply that there is some element of direction even in the ...
Fair correction; I think "explanation" and "cause" got lumped together in my head under the general category of "words that mean 'X is so because of Y'". Anyway, I can see the difference now, and the argument makes sense the way you put it in your response to shminux.
I still think the blue arrow might be directional, though. It seems to me that in many cases things on one level could be made out of several different things on the lower level (e.g. a "door" might be made out of wood or metal, it might or might not have a han...
The article seems to explicitly state that the blue line mentioned is a deliberate oversimplification for explaining, in general, how to think about knowledge in a reductionist sense. The bigger issue here seems to be that you might be able to build a brain out of something that isn't atoms, rather than being able to build different things with atoms.
I really like the imagery in your explanation, but I am not entirely clear on what the golden threads symbolize here. Would it be fair to say that the golden threads are the explanations of how a law or model on a lower level of abstraction causes the observations on a higher level?
Also, I don't really think you could deduce the entire structure of the blue line from any one point on it, as you seem to imply.
If you are given the physics of a universe, there might be several possible types of physiology, and for every such physiology there might be several di...
The value-loading problem is the problem of getting an AI to value certain things, that is, of writing its utility function. One way to attack it is to hard-code something into the function, like "paperclips are good!". This is direct specification: writing a function that directly values certain things. But when we want to make an AI value something like "doing the right thing", direct specification becomes infeasible.
Instead, you could solve the problem by having the AI figure out what you want by itself. The idea is then that the AI can figure out ...
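As a rough, purely illustrative sketch of the distinction (the world-state representation, the feature names, and the crude preference-counting rule below are my own assumptions, not anything from the post): a directly specified utility function is simply hard-coded by the programmer, whereas a value-learning agent has to estimate a utility function from evidence about what humans actually prefer.

```python
# Toy sketch of the two approaches to value loading (all names hypothetical).

def direct_utility(world_state):
    """Direct specification: the utility function is hard-coded by the programmer."""
    # "paperclips are good!" -- the agent values nothing but the paperclip count.
    return world_state.get("paperclips", 0)


def inferred_utility(world_state, observed_choices):
    """Indirect approach: estimate what humans value from their observed behaviour.

    observed_choices: list of (chosen_state, rejected_state) pairs the AI has seen.
    Each feature is crudely scored by how often humans picked states with more of it,
    and world_state is then valued by those learned weights.
    """
    weights = {}
    for chosen, rejected in observed_choices:
        for feature in set(chosen) | set(rejected):
            diff = chosen.get(feature, 0) - rejected.get(feature, 0)
            weights[feature] = weights.get(feature, 0) + (1 if diff > 0 else -1 if diff < 0 else 0)
    return sum(weights.get(f, 0) * v for f, v in world_state.items())


if __name__ == "__main__":
    state = {"paperclips": 10, "happy_humans": 2}
    print(direct_utility(state))  # 10: only the hard-coded feature counts

    # One observed human choice: fewer paperclips but more happy humans was preferred.
    choices = [({"happy_humans": 3}, {"happy_humans": 1, "paperclips": 50})]
    print(inferred_utility(state, choices))  # valued by weights learned from that choice
```

This is only meant to show where the two approaches put the work: in the first, the programmer writes the values down; in the second, the AI has to infer them, and everything hinges on how well that inference step works.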
Done.
Looking forward to the analysis and release of data!