Comment author: Prankster 26 October 2014 10:42:27PM 34 points [-]

Done.

Looking forward to the analysis and release of data!

Comment author: Prankster 13 August 2014 03:08:08PM 0 points [-]

It seems to me that, as you point out yourself, the concepts mean different things depending on whether you apply them to humans.

Abstract intelligence and abstract rationality are pretty much the same thing as far as I understand. The first is the "ability to efficiently achieve goals in a wide range of domains"; the second is a combination of instrumental rationality and epistemic rationality, which amount to "solving problems given your information" and "acquiring information" respectively. Put together, the two types of rationality come to "gather information about domains and achieve your goals within them", or, phrased another way, the "ability to efficiently achieve goals in a wide range of domains".

When applied to humans these words mean slightly different things and I think the analogies presented by the other commenters are accurate.

Comment author: Shane_Patt 27 April 2014 09:45:07PM *  1 point [-]

I would say: "door" is an element of the map, and could be made from "wood" or "metal," and have or not have a "handle"; but this door beside me right now is an element of the territory, and is made from wood, and does have a handle. The green arrows are map, and directional; the blue line is territory, and not directional. Something I can say about the world doesn't completely determine everything else I can say about the same green strand, but something that exists in the world does completely determine what else exists along the same blue line.

I tried to make what I was getting at clearer in my edit to the OP a few minutes ago.

Comment author: Prankster 28 April 2014 05:55:58AM 1 point [-]

Something I can say about the world doesn't completely determine everything else I can say about the same green strand, but something that exists in the world does completely determine what else exists along the same blue line.

That seems true. The core reductionist tenet seems to be that you don't need the thing that exists explained/observed on every level of abstraction, but rather that you could deduce everything else about the object given only the most fundamental description. This seems to imply that there is some element of direction even in the blue arrow, since one model follows from another.

It's not clear to me why it would be an error within reductionism to say that the higher levels of abstraction approximate the lower ones, or something like that. Maybe I should read up on reductionism somewhere outside LW; can you recommend any specific articles that argue for directionless blue arrows?

Comment author: Shane_Patt 27 April 2014 07:33:47PM 2 points [-]

golden threads are the explanations of how a law or model on a lower level of abstraction causes the observations on a higher level

That's a good way of putting it, except that it would be "explains" rather than "causes." I definitely should make it more clear that—because there are actually many more columns than shown in the diagram—a golden thread connects an entire row to the entire row above it, not just one point to one point.

don't really think you could deduce the entire structure of the blue line given by any one point

I wasn't clear there; please see my reply to shminux, who had the same objection.

Comment author: Prankster 27 April 2014 09:33:23PM 1 point [-]

Fair correction; I think "explanation" and "cause" got lumped together in my mind under the general file of "words that mean 'X is so because of Y'". Anyway, I can see the difference now, and the argument makes sense the way you put it in your response to shminux.

I still think the blue arrow might be directional, though. It seems to me that in many cases a thing on one level could be made out of several different things on the lower level (e.g. a "door" might be made out of wood or metal, and it might or might not have a handle, but as long as your high-level abstraction recognizes it as a door, that doesn't matter). Given any point in the space of different things you could say about the world, granting it seems to constrain what can be on the other levels without clearly defining them (e.g. of all the Standard Model variations you could write out equations for, a subset larger than one might be used to "explain" physiology. I can't prove this to you, but it seems true.)

I might be misunderstanding what it would mean for the blue arrows to have directions in this scheme though, so if that's the case this should be easily resolved.
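The multiple-realizability point above can be sketched with a loose programming analogy (mine, not anything from the OP): a high-level predicate can be satisfied by many distinct low-level configurations, so fixing the high-level description doesn't pin down the low-level one.

```python
# Loose analogy for multiple realizability: "is a door" is a high-level
# predicate satisfied by many distinct low-level configurations.
from dataclasses import dataclass

@dataclass
class Object:
    material: str        # lower-level detail
    has_handle: bool     # lower-level detail
    swings_on_hinges: bool

def is_door(obj: Object) -> bool:
    # The high-level abstraction only checks one coarse property.
    return obj.swings_on_hinges

wooden = Object("wood", True, True)
metal = Object("metal", False, True)

# Both satisfy the high-level predicate while differing underneath,
# so "it's a door" doesn't determine the lower-level description.
assert is_door(wooden) and is_door(metal)
```

Obviously the real question is about physical levels of description rather than class definitions, but it captures the one-to-many relation I mean.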

In response to Tapestries of Gold
Comment author: ShardPhoenix 27 April 2014 09:37:36AM 10 points [-]

Isn't there still an asymmetry, in that all brains are made of atoms, but few atoms are part of brains, etc? Basically in some sense the higher levels are more rare and fragile than the lower levels.

Comment author: Prankster 27 April 2014 12:33:55PM 1 point [-]

The article seems to explicitly state that the blue line mentioned is a deliberate oversimplification for explaining, in general, how to think about knowledge in a reductionist sense. The bigger issue here seems to be that you might be able to build a brain out of something that isn't atoms, rather than being able to build different things with atoms.

In response to Tapestries of Gold
Comment author: Prankster 27 April 2014 11:26:10AM 3 points [-]

I really like the imagery in your explanation, but I am not entirely clear on what the golden threads symbolize here. Would it be fair to say that the golden threads are the explanations of how a law or model on a lower level of abstraction causes the observations on a higher level?

Also, I don't really think you could deduce the entire structure of the blue line given by any one point as you seem to imply.

If you are given the physics of a universe, there might be several possible types of physiology, and for every such physiology there might be several different types of neural circuitry. Similarly, you could say that there are still some degrees of freedom left over when some arbitrary psychology is given; it might be possible to have human-like cognition within the framework of purely Newtonian physics, or we might be able to have a mind with the same morality but vastly different circuitry.

Of course you gain information about the entire blue line when you are given a single point, but it does not seem sufficient for crafting a complete model to know a lot about, say, the moral values of humans or the mental states of earthworms.

Comment author: mgin 23 January 2014 12:16:26AM 6 points [-]

I expect to need to solve the value-loading problem via indirect normativity rather than direct specification (see Bostrom 2014).

What does this mean?

Comment author: Prankster 26 January 2014 10:56:13AM 3 points [-]

The value-loading problem is the problem of getting an AI to value certain things; that is, writing its utility function. One approach is to try to hard-code something into the function, like "paperclips good!". This is direct specification: writing a function that directly values certain things. But when we want an AI to value something like "doing the right thing", this becomes unfeasible.

Instead, you could solve the problem by having the AI figure out what you want by itself. The idea is that the AI can work out the aggregate of human morality and act accordingly after simply being told to "do what I mean" or something similar. While this might require more cognitive work from the AI, it is almost certainly safer than trying to formalize morality ourselves. In theory, this way of solving the problem avoids an AI that suddenly breaks down on some border case, for example a smile-maximizer filling the galaxy with tiny smileys instead of happy humans having fun.

This is all a loose paraphrase of the last liveblogging event EY had in the FB group, where he discusses open problems in FAI.
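The contrast between the two approaches can be caricatured in code. This is only a toy sketch; all names are mine, and neither approach is remotely this simple in practice.

```python
# Toy contrast between direct specification and indirect normativity.
# A "world state" is just a list of labels here, purely for illustration.

def direct_utility(world_state):
    """Direct specification: the designer hard-codes what counts as good.

    Brittle on border cases the designer never anticipated (the
    smile-maximizer tiling the galaxy with smileys)."""
    return world_state.count("paperclip")

def indirect_utility(world_state, infer_human_values):
    """Indirect normativity: the agent is pointed at a *procedure*
    ("do what I mean") and must work out the values itself."""
    values = infer_human_values()  # the hard cognitive work moves here
    return sum(v(world_state) for v in values)
```

The point of the caricature is where the difficulty lives: in the first function the designer has to get the value specification right in advance, while in the second the AI's own inference (`infer_human_values`, a stand-in for something enormously hard) carries the load.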