Comments

This closely relates to the internalist/description theory of meaning in philosophy. The theory says that if we refer to something, we do so via a mental representation ("meanings are in the head"), which is something we can verbalize as a description. A few decades ago, some philosophers objected that we are often able to refer to things we cannot define, seemingly refuting the internalist theory in favor of an externalist theory ("meanings are not in the head"). For example, we can refer to gold even if we aren't able to define it via its atomic number.

However, the internalist/description theory only requires that there is some description that identifies gold for us, which doesn't necessarily mean we can directly define what gold is. For example, "the yellow metal that was highly valued throughout history and which chemists call 'gold' in English" would be sufficient to identify gold with a description. Another example: you don't know at all what's in the box in front of you, but you can still refer to its contents with "the contents of the box I see in front of me". Referring to things only requires that we can describe them at least indirectly.

For illustration, what would be an example of having different shards for "I get food" and "I see my parents again", compared to having one utility distribution over the four possible combinations of those two outcomes?

"I can do X" seems to be short for "If I wanted to do X, I would do X." It's a hidden conditional. The ambiguity is the underspecified time. I can do X -- when? Right now? After a few months of training?

Thanks for this post. I had two similar thoughts before.


One thing I'd like to discuss is Bostrom's definition of intelligence as instrumental rationality:

By ‘‘intelligence’’ here we mean something like instrumental rationality—skill at prediction, planning, and means-ends reasoning in general. (source)

This seems to be roughly similar to your "competence". I agree that this is probably too wide a notion of intelligence, at least in intuitive terms. For example, someone could plausibly suffer from akrasia (weakness of will) and thus be instrumentally irrational, while still being considered highly intelligent. Intelligence seems to be necessary for good instrumental reasoning, but not sufficient.

I think a better candidate for intelligence, to stay with the concept of rationality, would be epistemic rationality. That is, the ability to obtain well-calibrated beliefs from experience, or a good world model. Instrumental rationality requires epistemic rationality (having accurate beliefs is necessary for achieving goals), but epistemic rationality doesn't require the ability to effectively achieve goals. Indeed, epistemic rationality doesn't seem to require being goal-directed at all, except insofar as we describe "having accurate beliefs" as a goal.

We can imagine a system that only observes the world and forms highly accurate beliefs about it, while not having the ability or a desire to change it. Intuitively such a system could be very intelligent, yet the term "instrumentally rational" wouldn't apply to it.

As the instrumental rationality / epistemic rationality (intelligence) distinction seems to roughly coincide with your competence / intelligence distinction, I wonder which you regard as the better picture. And if you view yours as more adequate, how do competence and intelligence relate to rationality?


An independent idea is that it is apparently possible to divide "competence" or "instrumental rationality" into two independent axes: Generality, and intelligence proper (or perhaps: competence proper). The generality axis describes how narrow or general a system is. For example, AlphaGo is a very narrow system, since it is only able to do one specific thing, namely playing Go. But within this narrow domain, AlphaGo clearly exhibits very high intelligence.

Similarly, we can imagine a very general system, but with quite low intelligence. Animals come to mind. Indeed, I have argued before that humans are much more intelligent than other animals, but apparently not significantly more general. Animals seem to be already highly general, insofar as they solve things like "robotics" (the real-world domain) and real-time online learning. The reason e.g. Yudkowsky sees apes as less general than humans seems to have mainly to do with their intelligence, not with their generality.

One way to think about this: you have an AI model, and you create a version that is exactly the same, except that you scale up the model size. A scaled-up AlphaGo would be more intelligent, but arguably not more general. Similarly, the additional abilities of a scaled-up LLM would be examples of increased intelligence, not of increased generality. And humans seem to be mostly scaled-up versions of smaller animals as well; the main reason we are so smart seems to be our large brain / neuron count. Generality seems to be a matter of "architecture" rather than model size, in the sense that AlphaGo and GPT-3 have different architectures, such that GPT-3 is more general; while GPT-2 and GPT-3 have the same architecture, such that GPT-3 is more intelligent, but not any more general.
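
To make the distinction concrete, here is a minimal sketch in Python. The parameter counts for GPT-2 and GPT-3 are the commonly cited figures; the AlphaGo count and the `ModelSpec` structure are purely illustrative placeholders, not claims about how these systems are actually implemented.

```python
from dataclasses import dataclass

@dataclass
class ModelSpec:
    name: str
    architecture: str   # hypothesized to track generality (what kinds of tasks it can handle)
    n_parameters: int   # hypothesized to track intelligence within that scope

# Illustrative specs. GPT-2 (1.5B) and GPT-3 (175B) use commonly cited figures;
# the AlphaGo number is a placeholder, not an actual reported count.
alphago = ModelSpec("AlphaGo", "CNN policy/value nets + MCTS (Go only)", 10_000_000)
gpt2 = ModelSpec("GPT-2", "decoder-only transformer", 1_500_000_000)
gpt3 = ModelSpec("GPT-3", "decoder-only transformer", 175_000_000_000)

# Same architecture, more parameters: more intelligent, but (on this picture) not more general.
assert gpt2.architecture == gpt3.architecture and gpt3.n_parameters > gpt2.n_parameters

# Different architecture: a difference in generality, independent of parameter count.
assert alphago.architecture != gpt3.architecture
```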

Now your stratification of learning into several levels seems to be a case of such generality: the more levels a cognitive system implements, the more general it arguably is. I'm not sure whether your level 3 describes online learning or meta-learning. One could perhaps argue that humans exhibit meta-learning in contrast to other animals, and should therefore be considered more general. But again, maybe other animals also have this ability, just to a lesser degree, because they are less intelligent in the above sense (having smaller brains), not because they implement a less general cognitive architecture.

Anyway, I wonder whether you happen to have any comments on these related ideas.

I agree. This is unfortunately often done in various fields of research where familiar terms are reused as technical terms.

For example, in ordinary language "organic" means "of biological origin", while in chemistry "organic" describes a type of carbon compound. Those two definitions mostly coincide on Earth (most such compounds are of biological origin), but when astronomers announce they have found "organic" material on an asteroid this leads to confusion.

Yeah. It's possible to give quite accurate definitions of some vague concepts, because the words used in such definitions also express vague concepts. E.g. "cygnet" - "a young swan".

What's more likely: you being wrong about the obviousness of the spherical-Earth theory to sailors, or the entire written record of two thousand years of Chinese history and astronomy (which included information from people who had extensive access to the sea) somehow omitting the spherical-Earth theory? Not to speak of other pre-Hellenistic seafaring cultures, which also lack records of having discovered it.

There is a large difference between sooner and later. Highly non-obvious ideas will be discovered later, not sooner. The fact that China didn't rediscover the theory in more than two thousand years means that the ability to sail the ocean didn't make it obvious.

Kind of a long shot, but did Polynesian people have ideas on this, for example?

As far as we know, nobody did, except for early Greece. There is some uncertainty about India, but these sources are dated later and from a time when there was already some contact with Greece, so they may have learned it from them.

I see no reason to doubt that the article is accurate. Why would Chinese scholars completely miss the theory if it was obvious among merchants? There should in any case exist some records of it, some maps. Yet none exist. And why would it even be obvious that the Earth is a sphere from long distance travel alone?

Nevertheless, I don't think this is all that counterfactual. If you're obsessed with measuring everything, and like to travel (like the Greeks), I think eventually you'll have to discover this fact.

I don't think this makes sense. If the Chinese didn't reinvent the theory in more than two thousand years, this makes it highly "counterfactual". The longer a theory isn't reinvented, the less obvious it must be.

Answer by cubefox, Apr 24, 2024

That the earth is a sphere:

Today, we have lost sight of how counter-intuitive it is to believe the earth is not flat. Its spherical shape has been discovered just once, in Athens in the fourth century BC. The earliest extant reference to it being a globe is found in Plato’s Phaedo, while Aristotle’s On the Heavens contains the first examination of the evidence. Everyone who has ever known the earth is round learnt it indirectly from Aristotle.

Thus begins "The Clash Between the Jesuits and Traditional Chinese Square-Earth Cosmology". The article tells the dramatic story of how some Jesuits tried to establish the spherical-Earth theory in 16th century China, where it was still unknown, partly by creating an elaborate world map to gain the trust of the emperor.

They were ultimately not successful, and the spherical-Earth theory only gained influence in China when Western texts were increasingly translated into Chinese more than two thousand years after the theory was originally invented.

Which makes it a good candidate for one of the most non-obvious / counterfactual theories in history.
