Done.

Looking forward to the analysis and release of data!

It seems to me that, as you point out yourself, the concepts mean different things depending on whether you apply them to humans.

Abstract intelligence and abstract rationality are pretty much the same thing as far as I understand. The first is the "ability to efficiently achieve goals in a wide range of domains", and the second is a combination of instrumental rationality and epistemic rationality, which amount roughly to "solving problems given your information" and "acquiring information". Put together, the two types of rationality amount to "gather information about domains and achieve your goals within them", or, phrased another way, the "ability to efficiently achieve goals in a wide range of domains".

When applied to humans, these words mean slightly different things, and I think the analogies presented by the other commenters are accurate.

Something I can say about the world doesn't completely determine everything else I can say about the same green strand, but something that exists in the world does completely determine what else exists along the same blue line.

That seems true. The core reductionist tenet seems to be that you don't need the thing that exists to be explained or observed on every level of abstraction, but rather that you could deduce everything else about the object given only the most fundamental description. This seems to imply that there is some element of direction even in the blue arrow, since one model follows from another.

It's not clear to me why it would be an error within reductionism to say that the higher levels of abstraction approximate the lower ones, or something like that. Maybe I should read up on reductionism somewhere outside LW; can you recommend any specific articles that argue for directionless blue arrows?

Fair correction; I think "explanation" and "cause" got lumped together in my head under the general category of "words that mean 'X is so because of Y'". Anyway, I can see the difference now, and the argument makes sense the way you put it in your response to shminux.

I still think the blue arrow might be directional, though. It seems to me that in many cases a thing on one level could be made out of several different things on the lower level (e.g. a "door" might be made out of wood or metal, and it might or might not have a handle, but so long as your high-level abstraction recognizes it as a door, that doesn't matter). Given any point in the space of different things you could say about the world, granting it constrains what can be on the other levels but doesn't clearly define them (e.g. of all the Standard Model variations you could write out equations for, a subset larger than one could presumably be used to "explain" physiology; I can't prove this, but it seems true).

I might be misunderstanding what it would mean for the blue arrows to have directions in this scheme, though, so if that's the case this should be easily resolved.
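To make the many-to-one point concrete, here is a toy sketch (my own construction, not anything from the original post; the door setup and names are invented for illustration) of how fixing a high-level fact constrains the lower level without determining it:

```python
# Toy illustration: a single high-level fact can be realized by many
# distinct lower-level configurations.
from itertools import product

# Hypothetical low-level descriptions of an object: material and whether
# it has a handle.
materials = ["wood", "metal"]
has_handle = [True, False]
low_level_states = list(product(materials, has_handle))

def is_door(state):
    """Crude high-level predicate: here, every configuration counts as a door."""
    return True

# Fixing the high-level fact "this is a door" constrains the low level
# (it rules out non-door configurations) without determining it: several
# low-level states remain compatible.
compatible = [s for s in low_level_states if is_door(s)]
print(len(compatible))  # 4 distinct low-level realizations of "door"
```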

The article seems to explicitly state that the blue line mentioned is a deliberate oversimplification for explaining, in general, how to think about knowledge in a reductionist sense. The bigger issue here seems to be that you might be able to build a brain out of something that isn't atoms, rather than being able to build different things with atoms.

I really like the imagery in your explanation, but I am not entirely clear on what the golden threads symbolize here. Would it be fair to say that the golden threads are the explanations of how a law or model on a lower level of abstraction causes the observations on a higher level?

Also, I don't really think you could deduce the entire structure of the blue line from any one point, as you seem to imply.

If you are given the physics of a universe, there might be several possible types of physiology, and for every such physiology there might be several different types of neural circuitry. Similarly, there are still some degrees of freedom left over when some arbitrary psychology is given; it might be possible to have human-like cognition within the framework of purely Newtonian physics, or to have a mind with the same morality but vastly different circuitry.

Of course you gain information about the entire blue line when you are given a single point, but knowing a lot about, say, the moral values of humans or the mental states of earthworms does not seem sufficient for crafting a complete model.

The value-loading problem is the problem of getting an AI to value certain things, that is, of writing its utility function. One way to attack it is to try to hard-code something into the function, like "paperclips are good!". This is direct specification: writing a function that directly values certain things. But when we want an AI to value something like "doing the right thing", this becomes infeasible.

Instead, you could solve the problem by having the AI figure out what you want by itself. The idea is that the AI can work out the aggregate of human morality and act accordingly after simply being told to "do what I mean" or something similar. While this might require more cognitive work from the AI, it is almost certainly safer than trying to formalize morality ourselves. In theory, this way of solving the problem avoids an AI that suddenly breaks down on some edge case, for example a smile maximizer filling the galaxy with tiny smileys instead of happy humans having fun.
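As a rough sketch of the difference (the function names and setup are hypothetical, invented for illustration, not anything from EY's discussion), direct specification hard-codes the utility function, while the indirect approach leaves the utility to be inferred from evidence about what humans actually want:

```python
# Hypothetical, highly simplified sketch of the two approaches to value loading.

def directly_specified_utility(world_state):
    """Direct specification: the programmers hard-code what counts as good."""
    # "paperclips good!" -- brittle if our real values are more complicated.
    return world_state.count("paperclip")

def learned_utility(world_state, human_judgments):
    """Indirect approach: score a world state by how well it matches what the
    AI has inferred humans actually value (here, a list of judgment functions
    standing in for 'do what I mean')."""
    return sum(judge(world_state) for judge in human_judgments)

# A smile maximizer with a directly specified utility happily counts tiny
# smileys, while learned judgments could (in principle) penalize them.
world = ["paperclip", "tiny_smiley", "tiny_smiley", "happy_human"]
print(directly_specified_utility(world))                            # 1
print(learned_utility(world, [lambda w: w.count("happy_human"),
                              lambda w: -w.count("tiny_smiley")]))  # -1
```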

This is all a loose paraphrase from the last liveblogging event EY had in the FB group, where he discussed open problems in FAI.