
What is true of one apple may not be true of another apple; thus more can be said about a single apple than about all the apples in the world.
       —Twelve Virtues of Rationality 

Within their own professions, people grasp the importance of narrowness; a car mechanic knows the difference between a carburetor and a radiator, and would not think of them both as "car parts".  A hunter-gatherer knows the difference between a lion and a panther.  A janitor does not wipe the floor with window cleaner, even if the bottles look similar to one who has not mastered the art.

Outside their own professions, people often commit the misstep of trying to broaden a word as widely as possible, to cover as much territory as possible.  Is it not more glorious, more wise, more impressive, to talk about all the apples in the world?  How much loftier it must be to explain human thought in general, without being distracted by smaller questions, such as how humans invent techniques for solving a Rubik's Cube.  Indeed, it scarcely seems necessary to consider specific questions at all; isn't a general theory a worthy enough accomplishment on its own?

It is the way of the curious to lift up one pebble from among a million pebbles on the shore, and see something new about it, something interesting, something different. You call these pebbles "diamonds", and ask what might be special about them—what inner qualities they might have in common, beyond the glitter you first noticed. And then someone else comes along and says: "Why not call this pebble a diamond too? And this one, and this one?" They are enthusiastic, and they mean well. For it seems undemocratic and exclusionary and elitist and unholistic to call some pebbles "diamonds", and others not. It seems... narrow-minded... if you'll pardon the phrase. Hardly open, hardly embracing, hardly communal.

You might think it poetic, to give one word many meanings, and thereby spread shades of connotation all around. But even poets, if they are good poets, must learn to see the world precisely. It is not enough to compare love to a flower. Hot jealous unconsummated love is not the same as the love of a couple married for decades. If you need a flower to symbolize jealous love, you must go into the garden, and look, and make subtle distinctions—find a flower with a heady scent, and a bright color, and thorns. Even if your intent is to shade meanings and cast connotations, you must keep precise track of exactly which meanings you shade and connote.

It is a necessary part of the rationalist's art—or even the poet's art!—to focus narrowly on unusual pebbles which possess some special quality. And look at the details which those pebbles—and those pebbles alone!—share among each other.  This is not a sin.

It is perfectly all right for modern evolutionary biologists to explain just the patterns of living creatures, and not the "evolution" of stars or the "evolution" of technology.  Alas, some unfortunate souls use the same word "evolution" to cover the naturally selected patterns of replicating life, and the strictly accidental structure of stars, and the intelligently configured structure of technology.  And as we all know, if people use the same word, it must all be the same thing.  You should automatically generalize anything you think you know about biological evolution to technology.  Anyone who tells you otherwise must be a mere pointless pedant.  It couldn't possibly be that your abysmal ignorance of modern evolutionary theory is so total that you can't tell the difference between a carburetor and a radiator.  That's unthinkable.  No, the other guy—you know, the one who's studied the math—is just too dumb to see the connections.

And what could be more virtuous than seeing connections?  Surely the wisest of all human beings are the New Age gurus who say "Everything is connected to everything else."  If you ever say this aloud, you should pause, so that everyone can absorb the sheer shock of this Deep Wisdom.

There is a trivial mapping between a graph and its complement.  A fully connected graph, with an edge between every two vertices, conveys the same amount of information as a graph with no edges at all.  The important graphs are the ones where some things are not connected to some other things.
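As a minimal sketch of this point in Python (assuming the networkx library is available): the complement operation is invertible, so the complete graph and the empty graph pin down exactly the same information once the vertex set is known.

```python
import networkx as nx

complete = nx.complete_graph(5)   # an edge between every pair of vertices
empty = nx.complement(complete)   # its complement: no edges at all

print(complete.number_of_edges())  # 10
print(empty.number_of_edges())     # 0

# Complementing is invertible, so neither form loses anything: given the
# vertex set, "all edges" and "no edges" are equally uninformative.
print(nx.is_isomorphic(nx.complement(empty), complete))  # True
```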

When the unenlightened ones try to be profound, they draw endless verbal comparisons between this topic, and that topic, which is like this, which is like that; until their graph is fully connected and also totally useless. The remedy is specific knowledge and in-depth study. When you understand things in detail, you can see how they are not alike, and start enthusiastically subtracting edges off your graph.

Likewise, the important categories are the ones that do not contain everything in the universe.  Good hypotheses can only explain some possible outcomes, and not others.

It was perfectly all right for Isaac Newton to explain just gravity, just the way things fall down—and how planets orbit the Sun, and how the Moon generates the tides—but not the role of money in human society or how the heart pumps blood. Sneering at narrowness is rather reminiscent of ancient Greeks who thought that going out and actually looking at things was manual labor, and manual labor was for slaves.

As Plato put it (in The Republic, Book VII):

"If anyone should throw back his head and learn something by staring at the varied patterns on a ceiling, apparently you would think that he was contemplating with his reason, when he was only staring with his eyes... I cannot but believe that no study makes the soul look on high except that which is concerned with real being and the unseen. Whether he gape and stare upwards, or shut his mouth and stare downwards, if it be things of the senses that he tries to learn something about, I declare he never could learn, for none of these things admit of knowledge: I say his soul is looking down, not up, even if he is floating on his back on land or on sea!"

Many today make a similar mistake, and think that narrow concepts are as lowly and unlofty and unphilosophical as, say, going out and looking at things—an endeavor only suited to the underclass.  But rationalists—and also poets—need narrow words to express precise thoughts; they need categories which include only some things, and exclude others. There's nothing wrong with focusing your mind, narrowing your categories, excluding possibilities, and sharpening your propositions. Really, there isn't! If you make your words too broad, you end up with something that isn't true and doesn't even make good poetry.

And DON'T EVEN GET ME STARTED on people who think Wikipedia is an "Artificial Intelligence", that the invention of LSD was a "Singularity", or that corporations are "superintelligent"!

66 comments

Eliezer, Actually, I'd like to read good critiques of descriptions of corporations as superintelligent (or more nuanced versions of that assertion/theory, such as that some corporations may be intelligent, and more intelligent than individual humans).

Where can I find such critiques?

Well, I don't know about "superintelligent", but modern corporations do seem remarkably like "unfriendly AI" (as defined in the Sequences). They have a very simplified utility function (shareholder value) and tend to maximize it at the expense of all rival human values. They are also very powerful and potentially immortal.

The only open question is how intelligent they actually are. The naive answer is that any corporation is at least as intelligent as its most intelligent employee; but anyone who has actually worked for a modern corporation will know just how far from the truth this is. "As stupid as their stupidest manager" is maybe closer to the truth. So there's some hope there.

I'm sure I'm not the first on LW to draw this parallel...

Large corporations are not really very like AIs at all. An Artificial Intelligence is an intelligence with a single utility function, whereas a company is a group of intelligences with many complex utility functions. I remain unconvinced that aggregating intelligences and applying the same terms is valid - it is, roughly speaking, like trying to apply chromodynamics to atoms and molecules. Maximising shareholder value is also not a simple problem to solve (if it were, the stock market would be a lot simpler!), especially since "shareholder value" is a very vague concept. In reality, large corporations almost never seek to maximise shareholder value (that is, in theory one might, but I can't actually imagine such a firm). The relevant terms to look up are "satisficing" and "principal-agent problem".

This rather spoils the idea of firms being intelligent - the term does not appear applicable (which is, I think, Eliezer's point).

Corporations do not have a utility function, or at least not a single one; they have many utility functions. You might be able to "money pump" the corporation.


Who said anything about AI?

Superintelligence = a general intelligence that is much smarter than any human.

I consider myself to be an intelligence, even though my mind is made of many sub-processes, and I don't have a stable, coherent utility function (I am still working on that).

The relevant questions are: Is it sometimes useful to model corporations as single agents? - I don't know. Are corporations much smarter than any human? - No, they are not.

I say "sometimes useful", because, some other time you would want to study the corporations internal structure, and then it is defiantly not useful to see it as one entity. But since there are no fundamental indivisible substance of intelligence, any intelligence will have internal parts. Therefore having internal parts can not be exclusive to being an intelligent agent.

The only sense in which all AIs have utility functions is a 'map' sense: they are describable as having utility functions.

I'd say "artificial" is probably the wrong word for describing the intelligence demonstrated by corporations.  A corporation's decision calculations are constructed out of human beings, but only a very small part of the process is actually explicitly designed by human beings.

"Gestalt" intelligence is probably a better way to describe it.  Like an ant-hill.  Human brains are to the corporation what neurons are to the human brain.

I doubt one could say with any confidence that they are universally "smarter" or "dumber" than individual humans.  What they are is different.  They usually trade speed and flexibility of calculation for broader reach of influence and information gathering.  This is better for some purposes.  Worse for others.

What about this version:

The modern corporation is as intelligent as its leader, but has a learning/doing disability in areas such as __ {fill in areas looked after by least intelligent employees who have a free hand in decision making in those areas}.

I know this isn't a perfect version, but I feel that some thought needs to go into judging the performance ability of different corporations.

I don't think they resemble anything like an AI, or anything at all in the sense in which the phrase AI was originally coined, but it is sometimes useful to think of corporations as people.

Legally speaking, companies are treated as juristic persons. This is true of my jurisdiction, and my guess is that it is so for most.

HA: Shouldn't the burden be on the people claiming a corporation is "superintelligent" to justify their claim? It's not the job of the rest of us to write preemptive refutations of every possible incorrect argument. It's the job of the people making the claims to justify their claims. So, for what value of "superintelligent" are corporations superintelligent, and why?

So, for what value of "superintelligent" are corporations superintelligent, and why?

They can achieve complex optimisations that no individual could do by themselves. So I suppose the value of 'superintelligent' would be 'a little bit'.

Eliezer, I fear you are dangerously close to being labeled a "logical atomist" for being so fond of distinctions. :)

Eliezer,

I agree with what you're saying. But there is something to this "everything is connected" idea. Almost every statistical problem I work on is connected to other statistical problems I've worked on, and realizing these connections has been helpful to me.

The problem with harping on "everything is connected" is that it's true, but good systems are created bottom-up instead of top-down. You didn't sit down and say "All statistical problems are governed by overarching concept X, which leads to the inference of methods a, b, and c, which in turn lead to these problems." You said, "I have these problems, and certain similarities imply a larger system." It's like biology: Linnaeus did not come up with his classification system out of thin air; he first studied many individual animals and their properties, and only subsequently noticed similarities and differences which he could classify. Narrowness is where we need to start, because it gives us the building blocks for broader ideas.

Seems to me the ideal way for understanding systems is to analyse and then synthesise.

Jeff Kottalam, I'd also like to be directed to such claims and claim justifications (there's a protean claim justification on my blog). I'll resist the temptation of the thread-jacking bait that constitutes your last sentence, and encourage you -and Eliezer- to join me on my blog to continue the conversation on this topic.

I think the graph comparison isn't a completely valid metaphor. With the graph you describe, the relationship between two nodes is binary: either it's present or absent. But between topics there are numerous types of connections. For sure, the statement "everything is connected" conveys no useful information, but I believe it's very difficult to find two topics with no type of connection. For instance, Wikipedia couldn't be considered an artificial intelligence, but I would not be surprised if there are certain topics in artificial intelligence that could be applied to Wikipedia (associations between topics could be a possibility, though I don't know enough about AI to know if that would be useful). Simply drawing an edge from AI to Wikipedia tells little, but perhaps three unique edges describing the precise connections could be very informative. In this way one can achieve a connected graph that is still very informative.
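A minimal sketch of this idea in Python, assuming the networkx library; the relation labels below are invented purely for illustration. An unlabeled edge between "AI" and "Wikipedia" says only that some connection exists, while labeled parallel edges name the precise connections.

```python
import networkx as nx

# A bare edge says only "some connection exists".
bare = nx.Graph()
bare.add_edge("AI", "Wikipedia")

# Labeled parallel edges say *which* connections exist (labels hypothetical).
labeled = nx.MultiGraph()
labeled.add_edge("AI", "Wikipedia", relation="topic-association mining")
labeled.add_edge("AI", "Wikipedia", relation="knowledge representation")
labeled.add_edge("AI", "Wikipedia", relation="automated vandalism detection")

for _, _, data in labeled.edges(data=True):
    print(data["relation"])
```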

I have little to contribute to furthering the discussion in the post, but the "importance of narrowness" leads me to an observation.

Thousands of litigators litigate tens of thousands of cases before juries, and those litigators, and their specialized vendors, focus much of their attention on biases. Billions of dollars are bet in this market, where highly intelligent people hotly contest one another in overcoming (or, even better, seeding) bias and irrationality among jurors, judges, media commentators, and even scientific experts. Litigators grasp the importance of narrowness in this website's subject matter. Someone might look (or may already have looked) into that as a source of research material, although a lot of trade secrecy may need to be overcome.

Scientific experts might be a fertile area. The law imposes a list of requirements for scientific evidence (guess whether peer review is required), and litigators who discredit experts often expose biases. The legal system, of course, has its own entrenched biases: often judges prohibit expert testimony that eyewitness identification or fingerprinting has little credibility. Lie detectors have been successfully tossed from the courtroom. One odd development is that prosecutors have been hamstrung by, and defense attorneys have taken advantage of, the expectations of jurors who watch lots of police-procedural television shows.

That's my brain dump. I hope someone enjoyed it. Enjoy the website.

Eliezer: Excellent post, but I wonder if what you're saying is related to impressionist painting or the French Revolution? :-)

Seriously, I constantly meet people who ask me questions like: "could quantum algorithms have implications for biomechanical systems?" Or "could neural nets provide insight to the P versus NP problem?" And I struggle to get across to them what you've articulated so clearly here: that part of being a successful researcher is figuring out what isn't related to what else.

The answer to both "could-questions" is yes.

People who work on the theory of neural nets have created a lot of mathematical theorems. It's plausible that some of those theorems are helpful when you want to solve P versus NP.

Econophysics is a fairly established field. I think everyone involved understands that money is something different from atoms interacting with other atoms. That doesn't invalidate the field of econophysics.

You can make a pretty decent living as a scientist by using insight generated in field A to solve the problems of field B.

Research into quantum algorithms is likely to produce knowledge that's useful for people who work in other fields such as biomechanical systems.

On reflection, the saying at the top of this post is not true. The implicit assumption that fails is that the only thing one can say about all the apples in the world are things that are true of all those apples. But there are many other things one can say about all of the apples in the world. One can, for example, talk about the distribution of apple sizes, similar to the way one could talk about the size of a particular apple. For any feature of an apple we can talk about the distribution of that feature among all the apples of the world.

Robin, if you give a probabilistic distribution which describes apples in general, that distribution will have higher entropy than a deterministic description of one apple - it may legitimately be said to contain less information.

Eliezer, compare giving a probability distribution over the feature vector of a particular apple (giving a value for each feature), versus a probability distribution over the vector of vectors that describes the features of each apple in the set "all the apples in the world." Surely the second vector has more info, in an entropy or any other sense.

It certainly has more predictive power for future apples!
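A minimal sketch of the entropy point in Python; the apple-colour probabilities below are invented for illustration. A deterministic description of one particular apple has zero entropy, while a distribution over apples in general does not.

```python
import math

def entropy_bits(dist):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# A deterministic description of one particular apple: zero entropy.
this_apple = {"red": 1.0}

# A distribution over the colours of apples in general: positive entropy,
# hence less information about any single apple.
apples_in_general = {"red": 0.5, "green": 0.3, "yellow": 0.2}

print(entropy_bits(this_apple))         # 0.0
print(entropy_bits(apples_in_general))  # ~1.49
```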

Well, I googled superintelligence and corporations, and this came up as the top result for an articulated claim that corporations are superintelligent:

http://roboticnation.blogspot.com/2005/07/understanding-coming-singularity.html#112232394069813120

The top result for an articulated claim that corporations are not superintelligent came from our own Nick Bostrom:

http://64.233.169.104/search?q=cache:4SF3hsyMvasJ:www.nickbostrom.com/ethics/ai.pdf+corporations+superintelligent&hl=en&ct=clnk&cd=4&gl=us

Nick Bostrom "A superintelligence is any intellect that is vastly outperforms the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.1This definition leaves open how the superintelligence is implemented – it could be in a digital computer, an ensemble of networked computers, cultured cortical tissue, or something else."

If one is defining "superintelligent" as able to beat any human in any field, then I think it's reasonable to say that no corporations currently behave in a superintelligent manner. But that doesn't mean that the smartest corporations aren't smarter than the smartest humans. It may mean that it's just not rational for them to engage in those specific tasks. Anyway, given the way corporations operate, one wouldn't attempt, as a unit, to be more socially skilled than Bill Clinton. It would just pay to utilize Bill Clinton's social skills.

So Nick's point is interesting, but I don't think it's an ending point, it's a starting or midway point in the analysis of networked groups of humans (and nonhuman computers, etc.) as potentially distinct intelligences, in my opinion.

Here are some more personal thoughts on this in a recent blog post of mine:

http://hopeanon.typepad.com/my_weblog/2007/08/do-archetypes-e.html

Robin, it is hopefully obvious to anyone who knows enough to ask the question, that I meant "more propositions are true of one apple than are true over all apples", rather than "one apple considered in individual detail contains more information than two apples considered in individual detail". I have no objection to someone separately explaining natural selection and intelligence.

Humans are not monolithic egos. We have inconsistent desires, weird internal structure in our motivational systems, competing neural processes (not just goals), etc., but we are FAR more monolithic and integrated than corporations are. A corporation could probably hire Bill Clinton to do any of a fairly large space of tasks, but could it perfectly solve the problem of organizing incentives in such a manner as to make him serve its goals as well as he serves his own? If I recall correctly, he served his own goals to the detriment of his party to a significant degree a decade or so back. The Swiss patent office had some trouble a few years back motivating one of its clerks to be as productive for money as he was in following his own curiosity.


Scott wrote "Seriously, I constantly meet people who ask me questions like: 'could quantum algorithms have implications for biomechanical systems?' Or 'could neural nets provide insight to the P versus NP problem?' And I struggle to get across to them what you've articulated so clearly here: that part of being a successful researcher is figuring out what isn't related to what else."

But another part is looking at things from a different perspective -- sometimes, a researcher might ask herself a question such as: "What would it be like if biomechanical systems were governed by quantum algorithms?" Not because she thinks these things really must be related, but because anything that provides a new angle has the potential to spark a creative solution or insight.

Also... maybe they could be related! You don't want to rule it out too soon.

This is off-topic, but it might amuse some of the people here.

I am the very model of a Singularitarian (YouTube video)

The phrase "everything is connected", I agree, is too general to be helpful in anyway.

For example, say "everything" means all things, and "is connected" means related in any way whatsoever. Then any example "thing" is connected to any other "thing" by the simple fact that you know about both of them.

Thus, "deep truths" (which I take to mean something that is a general truth and often hidden beneath some reality) are often just generalizations about truths we take for granted already. Thus, they really cease to be helpful.

Certainly most of the things that have been called "deep truths" are really just clever sayings. Often they are Dennettian deepities---trivial in one reading and absurd in another.

A very good article.

The topic here - the virtues of specificity - is compelling because so much contemporary discourse is conducted through analogy and metaphor.

The devil is in the details and those crucial details are lost in discussions that focus on words where common and specific definitions are not set out clearly at the beginning of the debate.

Although general analogies are a useful way of understanding a new concept, true understanding can only come from specificity, as you say.

That said, there is still value in looking at the bigger picture. The best thinkers combine several areas of deep expertise with a much greater range of general knowledge.

So I guess it's worth knowing a little about a lot and a lot about a little.

ALSO: At the risk of completely going against the advice of this article, is your statement here a similar idea to Popperian empiricism {you can only ever prove a theory to be false}:

"When you understand things in detail, you can see how they are not alike, and start enthusiastically subtracting edges off your graph."

So subtracting edges off the graph entails falsifying the hypothesis that there is a connection between two nodes...

...or am I just generalising here? ;)

Newton focused on forces and gravity. Later physicists generalized Newtonian mechanics, coming up with formalisms for expressing a host of different problems using a common approach (Lagrangian mechanics with generalized coordinates). They weren't losing precision or sacrificing any power to anticipate reality by having the insight that many apparently different problems can be looked at as being essentially the same problem. A cylinder accelerating down a ramp as it rolls is the same problem as a satellite orbiting the L5 Lagrangian point. Another unification was Maxwell's equations for electrodynamics, which unified and linked a large number of earlier, more focused understandings (e.g. Ampere's law, Coulomb's law, the Biot-Savart law).

One more example: a physics-trained researcher studying the dynamic topology of the internet recognized a mathematical similarity between the dynamics of the network and the physics of bosons, and realized that the phenomenon of Google's huge connectedness is, in a very real sense, following the same mathematics as a Bose-Einstein condensate.

Eliezer's post seemed to denigrate people's interest in finding such connections and generalizations. Or did I miss the point? Are these sorts of generalizations not the kind he was referring to?

I think I agree with you, majus.

I would add as a counter-example that the problem of explaining mankind's nature and origin becomes solvable when the problem is extended to the problem of explaining the nature and origin of every species in the biosphere. The problem of explaining Mary's illness may become easier if it is broadened to the problem of explaining the illness of the 20 people who became sick immediately after the company picnic.

To my mind, narrowness should not be called a virtue. Instead we have the tactic, or heuristic, of narrowing, which is frequently successful. But a skilled pedagogue will present this tactic paired with the tactic of broadening, which is also frequently successful. The trick, of course, is to choose the right tactic - or perhaps to know when to switch tactics when the originally chosen one isn't working.

Agreed. Newton was in fact taking a broad view compared to his predecessors, who believed that Earthly happenings and celestial behaviour must have different explanations. The point of his law of gravity is that it uniformly applies to both moon and apple.

In point of fact, Isaac Newton did not "explain just gravity" - he also invented the calculus and developed important insights into the nature of light, among numerous other contributions to science.

During the same life (presumably as indivisible to him as mine seems to me - but that's another issue), he apparently wrote more on aspects of religiosity than he did on science (according to a lazy skim through the wikipedia entry), dabbled extensively in alchemical investigations, ran the Royal Mint (and as such was in fact deeply concerned with the "role of money in society" - to significant practical effect at the time), and became an MP.

Of course, this might not impact upon the point you are trying to make - you might just have selected a poor example.

However, casting about for a better example (immediately recognisable names who have made a singular contribution to science but did nothing else of note/had no significant, tangential side interests) - I find it hard to come up with one. Even if there is one, I think that s/he might well be an exception, rather than a rule.

So what's my point?

I feel that your defence of narrowness is too narrow, and that your denunciation of "everything is connected" is too broad.

Everything is indeed connected - this is trivially true; philosophically, logically and physically. As you say, though, the statement only becomes interesting when we start to examine what the connections are; how they function, what the relationship of different connections is, what networks these connections form which can be recognised as recurring patterns that have real effects/can be affected.

In the context of these investigations, narrowness is just a question of perspective, and any notion that operating only at particular level of perspective is 'correct' seems fatuous. Even the suggestion that one level of perspective is generally to be preferred would need careful justification.

In a current, 'real-world' context, consider the designer of a functional aspect of, say, a transport system. We expect the designer to produce something efficient, safe, economical, and practical. We might say; that's it - you have no other responsibility. But each of those requirements can be viewed more or less narrowly.

For the last three hundred years or so, western culture has been tending to suggest to people that they should view the requirements of their task more and more narrowly. And this has appeared to be highly 'successful' - in terms of valuable and significant parameters such as mortality, increasing education, enlarged franchise, standard of living etc. - so that the trend becomes reinforced.

However, it has become evident that this narrowness has led us to ignore the wider network within which we live - the ecosystem of the planet. Our transport designer should no longer consider environmental impacts as 'externalities' that can only distract from the task at hand.

It is becoming incumbent upon us to develop a range of perspectives, and to understand the usefulness and application of them, and how to change perspective while working on a single task. This is hard for an individual. For it to become a cultural mode is monumental.

Narrowness is an effective mode of operation only when it is appropriate. Opening our eyes wide and jumping into a sea of possible connections without prejudging them is another viable mode in appropriate circumstances.

As an architect, I find I need to employ a range of modes, from extreme breadth to extreme narrowness. One metric of an effective architect might well be to look at how well s/he judges what level of breadth/narrowness is appropriate in a given situation.

In point of fact, Isaac Newton did not "explain just gravity" - he also invented the calculus and developed important insights into the nature of light, among numerous other contributions to science.

[...]

Of course, this might not impact upon the point you are trying to make - you might just have selected a poor example.

However, casting about for a better example (immediately recognisable names who have made a singular contribution to science but did nothing else of note/had no significant, tangential side interests)...

Eliezer was not trying to give examples of people who made singular contributions but did nothing else. Rather, he was trying to give examples of singular contributions that had a lot to say about some things, but nothing of note to say about other things. His example was not Isaac Newton, but rather Newton's theory of gravity.

Inventing calculus could be said to be an integral element of Newton inventing his theory of gravity.

I see what you did there.

But seriously, the role of calculus is kinda interesting because he did it all geometrically, apparently: http://en.wikipedia.org/wiki/Philosophi%C3%A6_Naturalis_Principia_Mathematica

In formulating his physical theories, Newton developed and used mathematical methods now included in the field of calculus. But the language of calculus as we know it was largely absent from the Principia; Newton gave many of his proofs in a geometric form of infinitesimal calculus, based on limits of ratios of vanishingly small geometric quantities.

ran the Royal Mint (and as such was in fact deeply concerned with the "role of money in society" - to significant practical effect at the time)

This would be the precise point that immediately occurred to me too. So no, it's not just you.

Tyrrell seems correct about the point being made, but nevertheless this wasn't a great example.

It couldn't possibly be that your abysmal ignorance of modern evolutionary theory is so total that you can't tell the difference between a carburetor and a radiator. That's unthinkable. No, the other guy - you know, the one who's studied the math - is just too dumb to see the connections.

This is the point at which it became apparent that this is one of those EY essays where I think "so who annoyed him in this particular way?" It appears to be the sort of essay that's a reaction to (or, more generously, strongly inspired by) a particular incident or person, rather than a careful attempt to speak much more broadly. This does not make it incorrect or not useful; it is, however, important in trying to sufficiently duplicate the conditions in the writer's head to understand it properly.

It may be a particular incident or person in EY's head, but it's not a unique one. It was very reminiscent of a crank interviewed for a segment of This American Life, who evidently wasn't unique judging from the way physicists reacted to his communications. It's also reminiscent of at least one conversation I've had.


The last sentence doesn't read as well now as it did then.

This is silly. Generalizations are important. Generalizations that overlook important specific cases are a mistake. Not exactly a "deep" thought here!

Specific cases' importance isn't based on a generalization itself. It's based on a generalization's use. That's the important thing. So one can't determine a specific case's importance by looking at it and the generalization alone.

I've found this to be true: there are lumpers, and there are splitters.

Sometimes, individually, in some fields, lumpers can be splitters. Sometimes, individually, in some fields, splitters can be lumpers. Mostly, though, lumpers default to lumping; and splitters default to splitting.

I'm a splitter. I don't like using generic terms. I like using specifics. My blood boils when people misuse terms (this recently happened when someone used a term I use regularly, "cognitive dissonance", to describe someone not agreeing with someone else's opinion).

Many others I've butted heads and befriended have been lumpers. They group everything as much as possible, and seem to think that splitters are "too detail-oriented".


I am so glad I stumbled across this site, btw. Great work!

The key, it seems to me, is to learn when to lump and when to split.

Sometimes generality is exactly what we need; other times precision and specificity are required. How we know which is which is a problem that I think is difficult, but not insoluble.

Exactly. Many people seem angry because lumpers lump when they should split. And in those cases I am angry as well. But one could write the complementary article complaining about splitters splitting when they should lump. I am also angry in those cases. Daniel Dennett makes a good point about this in his article "Real Patterns".

And what could be more virtuous than seeing connections? Surely the wisest of all human beings are the New Age gurus who say "Everything is connected to everything else." If you ever say this aloud, you should pause, so that everyone can absorb the sheer shock of this Deep Wisdom.

There is a trivial mapping between a graph and its complement. A fully connected graph, with an edge between every two vertices, conveys the same amount of information as a graph with no edges at all. The important graphs are the ones where some things are not connected to some other things.

When the unenlightened ones try to be profound, they draw endless verbal comparisons between this topic, and that topic, which is like this, which is like that; until their graph is fully connected and also totally useless. The remedy is specific knowledge and in-depth study. When you understand things in detail, you can see how they are not alike, and start enthusiastically subtracting edges off your graph.

Here's a way to visualize this. Write down a horizontal list of all the things. Write down a vertical list of all the things. Now draw columns and rows so you have a table of all the things with all the things. Now colour a square white if two things are connected, and colour a square black if two things aren't connected. So if all the things are connected, then you have a white canvas. And if all the things are unconnected, you have a black canvas.

Now these are opposites, but they're not opposites like apples and democracy, they're opposites like heads and tails. They're two sides of the same coin is what I'm saying. On the axis of total colour they're as far apart as possible, but in the space of information, where distance is proportional to the complexity of transformations you have to do to transform one set of information to another, they're right next to each other. You just invert that thing. So saying everything's connected is a lot like saying nothing's connected.
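A minimal sketch of the two canvases in Python, assuming numpy is available; the canvas size is chosen arbitrarily for illustration. The transformation between them is a single global inversion.

```python
import numpy as np

n = 8  # number of things (size chosen arbitrarily for illustration)

everything_connected = np.ones((n, n), dtype=bool)   # the all-white canvas
nothing_connected = np.zeros((n, n), dtype=bool)     # the all-black canvas

# One inversion maps each canvas onto the other, so in the space of
# information they are right next to each other.
assert np.array_equal(~everything_connected, nothing_connected)
assert np.array_equal(~nothing_connected, everything_connected)
```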

This metaphor can be extended to apply to some other Yudkowskian wisdom:

  • Reversed stupidity is not intelligence.

Suppose we draw a person's set of beliefs as a pattern on this canvas. And suppose the set of correct beliefs looks like a black-and-white picture of a cat. (Please quote that line out of context.) Now if you take an idiot, his beliefs don't look like a cat. But they also don't look like a picture of an anticat, the inversion of a cat, because to draw an anticat, he'd have to go to all the trouble of knowing exactly what the cat looks like and then getting everything precisely wrong. He'd have to be exactly right about what to be wrong about. He'd have to know exactly what a cat looks like, to draw something that looks exactly not like a cat. So what do the idiot's beliefs look like? They're like a badly-drawn cat. It might have a really big nose, or only three legs. But it's still a lot more like a cat than an anticat.

So if you just decide to believe the opposite of what the idiot believes, you just invert his badly-drawn cat. What you get won't be a well-drawn cat. It'll be a badly-drawn anticat, with three badly-drawn antilegs and an antinose that's too big. The only way to get a better drawing is to actually look at the Canonical Cat and draw Her well. (Obviously, the Canonical Cat symbolizes reality.)
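A minimal simulation of the badly-drawn cat in Python; the pixel count and error rate are invented for illustration. Inverting beliefs that are 70% right yields beliefs that are 70% wrong, not a well-drawn cat.

```python
import random

random.seed(0)
N = 10_000
cat = [random.random() < 0.5 for _ in range(N)]  # the Canonical Cat's pixels

# An idiot gets each pixel wrong with probability 0.3: a badly-drawn cat.
idiot = [(not p) if random.random() < 0.3 else p for p in cat]
inverted_idiot = [not p for p in idiot]          # "believe the opposite"

def accuracy(beliefs):
    return sum(b == t for b, t in zip(beliefs, cat)) / N

print(accuracy(idiot))           # ~0.70  (badly-drawn cat)
print(accuracy(inverted_idiot))  # ~0.30  (badly-drawn anticat)
```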

  • No power hath noise.

Now I have a picture of a badly-drawn cat, and I want to maximize the number of pixels that are the same as they are in Omega's picture of the Canonical Cat. So I pick a few random pixels and flip them. Does this get me closer to a picture of Her furry-pawed splendour?

Well, maybe. If I started out with more pixels opposite to the Canonical Cat than pixels that truly reflect Her feline glory, then randomness will boost me closer to having half my pixels right. But if I started out with a picture that looks more like the Canonical Cat than like Her nemesis, the Anticanonical Anticat, then randomizing is bad, for it moves my picture further from an accurate representation of Her wispy whiskers and closer towards the hairball-choked darkness of the dread Anticat.

But since most people are closer to the Cat's light than to the darkness of Her nemesis, randomizing doesn't work. It only works to boost you back if you were originally dwelling in the valley of the shadow of the Anticat.
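And a sketch of the randomization step, with invented error rates: flipping random pixels drags any picture toward 50% accuracy, which only helps if you started out below it.

```python
import random

random.seed(1)
N = 10_000
cat = [random.random() < 0.5 for _ in range(N)]

def accuracy(beliefs):
    return sum(b == t for b, t in zip(beliefs, cat)) / N

def flip_random_pixels(beliefs, k):
    """Flip k randomly chosen pixels: pure noise, no knowledge of the Cat."""
    out = list(beliefs)
    for i in random.sample(range(N), k):
        out[i] = not out[i]
    return out

decent = [(not p) if random.random() < 0.3 else p for p in cat]  # ~70% right
dire = [(not p) if random.random() < 0.8 else p for p in cat]    # ~20% right

print(accuracy(flip_random_pixels(decent, N // 5)))  # ~0.62: down from 0.70
print(accuracy(flip_random_pixels(dire, N // 5)))    # ~0.32: up from 0.20
```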

  • The fallacy of grey.

Her pixels are so radiant and Her light so blinding that no mortal can truly gaze upon the Canonical Cat. So we don't know which pixels would be black and which would be white in a faithful portrayal of Her furry visage. (The Anticanonical Anticat is likewise shrouded in darkness.) In fact, we mortals are so weak before the Divine Pixels, their light so bright beyond our vision and their mysterious ways so far, so very far beyond our comprehension, that we know not the colour of a single pixel with absolute certainty.

The best that mortals such as we can do is to guess at how likely each pixel is to be white or black, and then colour the pixel grey with a value indicative of how confident our best felinosophers are that a white pixel there would be an accurate indication of Her eternal beauty, rather than one of the Marks of the Anticat. And in so doing, we may form a picture of Her, even if, being the work of mere mortals, it is a bit blurry and unclear. And we must be careful to not paint the Canonical Cat too darkly, for else She will smite us for our insolence. And neither may we colour the darkness of the Anticat too brightly, lest we see the hideous horrors that hide in His depths.

But some, seeing that not a single pixel has been coloured absolutely, now shout, as if they were the bearers of some new and deep wisdom, that all our pixels are the same, for they are all shades of grey! And, so steeped are they in wickedness, they do proclaim that, since no perfect image has ever been graven, all images are equally representative of the Canonical Cat (Her paws be praised). And they hold aloft their unholy tome, The Dog Delusion, and speak out against "The Doctrinal Dog, the Canonical Cat, and all other Orthodox Organisms". And so these heathens equate the Lady of the Light, Her holiness the Canonical Cat, with the Duke of Darkness, the Earl of Evil, the Anticat Himself! What blasphemy!, and O! what sacrilege!

(Sorry, I got a bit carried away.)
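A minimal sketch of the grey-pixels point in Python, using an invented scoring rule (the Brier score) and invented, idealized confidence levels: no pixel in either map is coloured absolutely, yet the two maps are nowhere near equally representative of the Cat.

```python
import random

random.seed(2)
N = 10_000
cat = [random.random() < 0.5 for _ in range(N)]  # True = white pixel

def brier_score(probs):
    """Mean squared error of probabilistic pixel guesses; lower is better."""
    return sum((p - t) ** 2 for p, t in zip(probs, cat)) / N

# An idealized careful felinosopher: 90% confident, in the right
# direction, on every pixel (constructed from the truth for illustration).
careful = [0.9 if t else 0.1 for t in cat]
# The "all greys are the same" map: maximum-entropy 50% everywhere.
shrug = [0.5] * N

print(brier_score(careful))  # 0.01
print(brier_score(shrug))    # 0.25
```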

Here's a way to visualize this. Write down a horizontal list of all the things. Write down a vertical list of all the things. Now draw columns and rows so you have a table of all the things with all the things. Now colour a square white if two things are connected, and colour a square black if two things aren't connected. So if all the things are connected, then you have a white canvas. And if all the things are unconnected, you have a black canvas.

Wouldn't it make more sense to use a grey scale? :-)

As Alfred Korzybski said, the map is not the territory. If you say Wikipedia is an "Artificial Intelligence" you have a map of Wikipedia that's not the standard map that people use to understand Wikipedia.

That map lets us see things in Wikipedia that might be interesting. If someone responds and argues that Wikipedia isn't an "Artificial Intelligence", that might lead to valuable insight into what it means to be an "Artificial Intelligence".

If you ask yourself how well Wikipedia fulfills its role as an "Artificial Intelligence", you get a list of things where Wikipedia fails. That list might be valuable if you want to further understand how to interact with Wikipedia. It can lead to creative thoughts.

If you hire a janitor, you won't expect him to do creative things like washing the floor with window cleaner. On the other hand, a modern artist might want to make a creative statement by washing the floor on purpose with window cleaner. Without knowing the context, it's very hard to say that the artist who washes the floor on purpose with window cleaner is wrong.

To make his statement, the artist has to know about the ideas that his audience has about floors and window cleaner. The artist is different from a person who simply knows nothing about floors or window cleaner. If you just know that someone washed the floor with window cleaner, you don't know whether the person is simply stupid or whether they are making a profound statement.

Wait, you criticize the fallaciousness of the ancient Greeks, and then follow up with a quote from Plato on the same subject? Doesn't that undermine your statement about them a bit?

He's taking a critical attitude to the position expressed in the quote, not quoting a passage from Plato which shares his criticism.

Maybe I'm misunderstanding Plato, then? It seems to me that Plato's advocating that you can't learn about things outside by staring at the ceiling, but by interacting with them, which is Yudkowsky's position as well.

I think you are misunderstanding the Plato quote. He's not saying that you have to go and look at things outside rather than "staring at the ceiling," but that "staring at the ceiling" (making observations about things) isn't a true exercise of reason. He's arguing that only contemplation of that which cannot be perceived by the senses is truly exalting.

Oh, okay. Thank you!


I agree with your perspective. I've noticed that in this fast-paced culture (fast food, instant messaging, etc.) more and more people are using hasty generalizations in their speech. If they do not, they risk losing their audience.

An example of this is a young woman that a friend and I passed while walking. She was conversing with her companion and said, 'And like, 80% of the world knows ___ [I don't remember exact details but it was something about technology/texting].'

That's what society is producing! My friend studies statistics and had a good time talking about how she was wrong.

Saying 'everything is connected' has turned from being potentially wise to just another hasty statement used typically to sound profound in a discussion/gain a winning point. Which is disconcerting, considering the implications of such a statement. I wonder at how many of the people who make that claim spend their time connecting everything. Seems like a daunting task.

You do realize you're generalizing about "this fast-paced culture"? Not saying it's a hasty generalization, mind, but still.

Two vertices are connected if there exists a walk between them

-- ProofWiki

Given this definition of connected, I believe "Everything is connected to everything else" is true.

Can you think of a counter-example?

Edit: Wow, downvotes? I wouldn't have expected that on this site. My point relates to the absence of "floating" ideas in reality. Everything really should be connected, because everything comes from reality. If a thing wasn't based on reality in some way, where could it come from? I thought this line of reasoning would be obvious though. My other point, however, is that Eliezer's blog posts often seem to grossly misinterpret things people say and mean, for example his definition of "connected" above.

A fully connected graph, with an edge between every two vertices, conveys the same amount of information as a graph with no edges at all. The important graphs are the ones where some things are not connected to some other things.

A graph with weighted edges provides more information than both of the above graphs. Also, "connected" in graph theory usually means "there is a path from every node to every other node", not "there are edges between every pair of nodes". Of course, when talking about real things connected to each other, it is usually more interesting to note in what way they are connected, rather than to observe that there exists a connection -- and that does require narrowing one's focus.

Also, "connected" in graph theory usually means "there is a path from every node to every other node", not "there are edges between every node".

The word "fully" there is meant to be a significant qualifier, and he explains what he means immediately afterwards. If he had referred to it as a "complete" graph instead, I don't imagine that as many readers would have understood that sentence, though I probably would have put a "directly" in between "not" and "connected."


I totally agree that it is often better to study narrow and deep.

But this word policing is not net helpful, all things considered.

Yudkowsky does not like it when people call the invention of LSD a Singularity. OK, I can see why. But I don't like Yudkowsky's use of the word singularity, because that is absolutely not what the word means in physics or math. I used to be quite upset over the fact that AI people had generalized the word "singularity" to mean "exponential or super-exponential growth". On the other hand, whatever. It is really not that big of a deal. I will have to say "mathematical singularity" sometimes, to specify what I mean, whenever it is not clear from the context. I can live with that compromise.

Different fields use the same word to mean different things. This sometimes leads to misunderstanding, which is bad. But the alternative would be for every field to make up its own strings of syllables for every technical word, which is just too impractical.

Also, I happen to know that when astrophysicists talk about the evolution of stars, they are not borrowing the word "evolution" from the biological use. They are using "evolution" in the more original meaning, which is "how something changes over time", from the word "evolve". The evolution of a star is the process of how the star changes over time, from creation to end. No one in the field thinks that they should borrow ideas from biology on the grounds that biologists use the same word. Neither can I imagine anyone in evolutionary biology deciding to draw conclusions from theories of the evolution of stars, just because of the common word "evolution".

I can totally imagine someone who knows close to nothing about both stars and biology being confused by this word "evolution" being used in different settings. Confusing the uneducated public is generally bad. More specifically, it is uncooperative, since in most fields you yourself are part of the uneducated public. But there is also a trade-off. How much effort should we put into this? Avoiding otherwise useful uses of words is a high cost.

The Singularity, Quantum Chromodynamics, neural networks, trees (in graph theory), imaginary numbers, magic (as a placeholder for the unexplained), energy, etc.

The use of metaphors and other types of borrowed words in technical language is widespread, because it is so damned practical. Sometimes we use metaphors the same way a good poet does, to lend the preciseness of one concept to another. But sometimes one just needs a label, and reusing an old word is less effort than coming up with, and remembering, an actual new sound.

Back to the trade-off. How much would it help if different topics never borrowed language from each other? Would the general public be significantly less confused? For this tactic to work, everyone, not just scientists, has to stop borrowing words from each other. And we would have to restrict the usage of hundreds (maybe thousands) of words that are already in use.

But maybe there is a third way? Instead of teaching everyone not to borrow words, we could teach everyone that words can have different meanings in different contexts. This is also a huge project, but considerably smaller, for several reasons:

  1. It is an easier lesson to learn. At least for me, generalizing from one example.
  2. It is more aligned with how natural language actually works.
  3. It is a lesson that can be taught one person at a time. We don't have to change all at once for it to work.

My model of Yudkowsky (which is created solely from reading many of his LessWrong posts) now complains that my suggestion will not work, because of how the brain works. Using the same words causes our brains to use the same mental bucket, or something like that.

But I know that my suggestion works, or at least it works for me. My brain has different mental settings for different topics and situations, where words can have different meanings in different settings. It does not mean that I have conflicting mental models of the world, just that I keep conflicting definitions of words. It is very much like switching to a different language. The word "barn" means a shed in English, but it means child in my native language, Swedish, and this is not a problem. I would never even have connected English::barn and Swedish::barn if it had not been pointed out to me in a totally unrelated discussion.

Unfortunately, I don't know how my brain ended up like this, so I can't show you the way. I can only testify that the destination exists. But if I were to guess, I would say that I just gradually built up different sets of technical vocabulary, which sometimes had overlapping sounds. Maybe being bilingual helps? Not overly thinking in words probably helps too.

Sometimes, when a conversation is sliding from one topic to another - maybe a physics conversation takes a turn into pure math - I will notice that my brain has switched language settings, because the sentence I remember just saying does not make sense to me anymore.

This text smells like emotional rationalization (in the psychological sense) of a certain biased point of view.

Actually, I'm not an enemy of narrow questions, and in the same way, I'm not an enemy of the plurality of meanings. The focused, narrow, formal approach is of great power indeed, but it is also restricted, and new theories are constructed again and again - outside of a narrow framework and back into some new one.

Consider a man who just learned to drink from a certain brown glass. Then he sees a steel mug. They are quite different objects, with different properties, different names, and meanings attached in different linguistic contexts. If he cannot grasp what is common between them, he won't be able to generalize the knowledge at all.

But somehow this trivial observation (the consequences of which play a role on every layer of abstraction in thinking) tends to be forgotten when dozens of layers of abstraction are being created; the definition starts to battle the actual meaning until the latter is completely lost, and one starts to rationalize upon those layers of abstraction while common sense whispers: "It's damn meaningless; it doesn't help to understand anything." What happens then? Then comes the time to go back to the connected, uncertain world.

There is more. A natural language contains a vast plurality of word meanings, which actually helps us look at things from different angles and learn such commonalities by reading words in different contexts. If you defy this reality of natural language and human thinking, you risk becoming isolated in bubbles of extremely precise meanings not understood by anyone except the Chosen Ones. It is already hard to extract meaning (read: "ideas") from books whose narrative is formalized too much. So, to make full use of people's knowledge, it might not be useful to be biased towards narrowness, which can disconnect people's knowledge and prevent understanding.

So, to me personally, it's not a virtue to put a narrow approach on a pedestal. Whatever thought trick humanity has come up with, so long as it works well, it is rational to use it in the right situation. But you can still go deeper and be as precise as you want, if that proves to be worthwhile in (how ironically) a precise way.

I agree with the benefits of narrowness, but let's not forget there is a (big) drawback here: science and math are, at their core, built around generalizations. If you only ever study the single apple, or any number of apples individually, and never take the step of generalizing to all apples, or at least all apples in a given farm, you have zero predictive power. The same goes for Rationality, by the way. What good is talking about biases and Bayesianism, if I can only apply it to Frank from down the street?

I'm arrogantly confident that you agree with me on this to some level, Eliezer, and just were not careful with your phrasing. But I think this is more than semantic nitpicking - there is a real, hard trade-off at play here between sticking to concrete, specific examples, about which we can have all the knowledge we want, and applying ideas to as many problems as possible, to gain more predictive power and understanding of the Laws of Reality. I think a more careful formulation is "do not generalize irresponsibly". Don't abandon the specific examples, as they anchor you down to reality and details, but do try to find patterns and commonalities where they appear - and pin them down in precise, well-defined, some-result-subspaces-excluding manners.

It was perfectly all right for Isaac Newton to explain just gravity, just the way things fall down—and how planets orbit the Sun, and how the Moon generates the tides—but not the role of money in human society or how the heart pumps blood.

This just reminds me of the Unix philosophy: "do one thing and do it well".

And DON’T EVEN GET ME STARTED on people who think Wikipedia is an “Artificial Intelligence,”

With the invention of LLMs, this aged poorly. It turns out that most of the research that goes into developing artificial intelligence consists of cataloguing the world and writing it up on the internet.