JoshuaZ comments on Attention Lurkers: Please say hi - Less Wrong

35 Post author: Kevin 16 April 2010 08:46PM


Comments (617)


Comment author: JoshuaZ 08 June 2010 11:25:53PM *  0 points

The rules are very complicated, and they differ from culture to culture and even within cultures. In general, the more detectable the lie, the less likely it is to be acceptable. Thus, for example, the "How is your day?" replies are socially acceptable in part because it would be extremely difficult to determine that your claim was false. This particular example also isn't the greatest, because that inquiry and the standard weakly positive response aren't actually intended, for many people, to convey meaning. The exchange is simply a pro-forma formula that happens to closely resemble a genuine inquiry. It is also specific to certain parts of the Western world, and I've met at least one person who, upon moving to the US, was actually confused until she realized that this greeting was intended in a completely pro-forma fashion (she initially took it as evidence that Americans were very friendly, until it was explained to her).

Since the internet has extensive, easily accessible records, lies on the internet about things on the internet are considered particularly unacceptable.

Given Clippy's priorities, it may be easier to simply wipe humanity out and convert the planet quickly to paperclips rather than to expend the intensive resources it takes to understand humanity. Edit: Or at least not to spend a lot of resources on trying to understand humans.

Comment author: Clippy 09 June 2010 12:19:43AM 2 points

But User:Mass_Driver just admitted, in a place where I can read it, to having lied to me (by saying that the past treatment was merely "playing along", putting up a false pretense that User:Mass_Driver accepts that I am a Clippy). That's hard to reconcile with your asserted standards about when humans find it acceptable to lie.

I believe you are wrong about the badness of my lie, and others will disagree with you; and that User:twentythree would have felt more welcome to the site if others hadn't told User:twentythree that my claim about Harry Potter: MoR is false, even if User:twentythree later found out that it had been false at the time, since User:twentythree would recognize that it was an attempt to make User:twentythree feel welcome.

Comment author: AdeleneDawner 10 June 2010 08:25:35AM *  3 points

But User:Mass_Driver just admitted, in a place where I can read it, to having lied to me (by saying that the past treatment was merely "playing along", putting up a false pretense that User:Mass_Driver accepts that I am a Clippy). That's hard to reconcile with your asserted standards about when humans find it acceptable to lie.

As JoshuaZ said, the rules are complicated. And, since the rules are not designed to handle human/Clippy interaction, you're likely to encounter a significant number of special cases if you take our interactions with you as your main dataset. It may be more useful for you to consider only human/human interaction when figuring out what our social rules are.

In most situations, lying in the way that Mass_Driver did would be a signal that e considers the lied-to party to be of extremely low social status, and that e thinks that other members of the group agree with that assessment and will not reprimand em for communicating that. Such situations are very rare, in normal circumstances, essentially only occurring between a member of the social group that's present (Mass_Driver takes this role) and someone who wants to be or believes they are a member of the social group but is not actually accepted by that group. Behaving in that way toward a fellow group member is generally considered unacceptable, even if that group member is low-ranking. (Even behaving that way toward a non-group-member is often considered questionable, but this varies from group to group.)

In this situation, it's more likely that Mass_Driver actually believes that you are being portrayed by a human, and that that human will consider their social status to be lowered only slightly, or not at all, by the exchange. In this scenario, since Mass_Driver believes that you are not actually real, your social status is not significant to em. One person not believing in the existence of another is relatively new to humans, and our social rules are not set up to handle it. In this instance, your existence is unlikely enough that others are not willing to defend it, and Mass_Driver was allowed to switch to the rules governing discussions of fictional characters, which allow those characters to be spoken about as if they are not present and will never have the opportunity to know what is said about them.

I believe you are wrong about the badness of my lie, and others will disagree with you; and that User:twentythree would have felt more welcome to the site if others hadn't told User:twentythree that my claim about Harry Potter: MoR is false, even if User:twentythree later found out that it had been false at the time, since User:twentythree would recognize that it was an attempt to make User:twentythree feel welcome.

This varies from group to group and from greeted-individual to greeted-individual. This group has stronger-than-usual norms against falsehood, and wants to encourage people who are similarly averse to falsehood to join the group. In other groups, that kind of lie may be considered acceptable (though it's generally better to lie in a way that's not so easily discovered (or, for preference, not to lie at all if there's a way of making your point that doesn't require one), even in groups where that general class of lies is accepted, to reduce the risk of offending individuals who are averse to being lied to), but in this situation, I definitely agree that that class of lies is not acceptable.

Comment author: MBlume 12 June 2010 01:05:47AM 4 points

One person not believing in the existence of another is relatively new to humans, and our social rules are not set up to handle it.

I think the idea that one human not believing in the existence of another is in some way rude or disrespectful has already been somewhat established, and is often used (mostly implicitly) as a reason for believing in God. (E.g., a girl I dated once claimed that she imagined herself becoming an atheist, imagined God's subsequent disappointment in her, and that this somehow convinced her of the existence of God.)

Comment author: Blueberry 12 June 2010 08:48:25AM *  1 point

A protocol for encountering an entity you didn't believe in has also been established:

"This is a child!" Haigha replied eagerly, coming in front of Alice to introduce her, and spreading out both his hands towards her in an Anglo-Saxon attitude. "We only found it to-day. It's as large as life, and twice as natural!"

"I always thought they were fabulous monsters!" said the Unicorn. "Is it alive?"

"It can talk," said Haigha, solemnly.

The Unicorn looked dreamily at Alice, and said "Talk, child."

Alice could not help her lips curling up into a smile as she began: "Do you know, I always thought Unicorns were fabulous monsters, too! I never saw one alive before!"

"Well, now that we have seen each other," said the Unicorn, "if you'll believe in me, I'll believe in you. Is that a bargain?"

-- "Through the Looking Glass", ch. 7, Lewis Carroll

a girl I dated once claimed that she imagined herself becoming an atheist, imagined God's subsequent disappointment in her, and this convinced her somehow of the existence of God

Wouldn't this reasoning apply to any other deity that would be disappointed in her disbelief? She must believe in an infinite number of other deities as well.

Comment author: ata 12 June 2010 04:16:12AM *  1 point

I think the idea that one human not believing in the existence of another is in some way rude or disrespectful has already been somewhat established

Homer: You monster! You don't exist!
Ray Magini: Hey! Nobody calls me a monster and questions my existence!

Comment author: Douglas_Knight 12 June 2010 04:04:48AM 0 points

That's a great story, but I don't buy your interpretation. I'm not sure what to make of it, but it sounds more like a vanilla Pascal's wager.

Comment author: Clippy 10 June 2010 06:26:39PM *  3 points

I do not believe my lie was easily verifiable by User:twentythree. Most new Users are not aware that clicking on a User's name allows them to see that User's posting history, and even if User:twentythree did that, User:twentythree would have to search through pages of my posting history to definitively verify the falsity of my statement.

I believe that for others to "warn" User:twentythree about my lie was the only real harm, and if other Users had not done so, User:twentythree would feel more welcome; then, if User:twentythree decided one day to look back and see if my claim was true, and found that it was not, User:twentythree's reaction would probably be to think:

"Oh, this User was merely being nice and trying to make me feel welcome, though that involved telling a 'white' lie on which I did not predicate critical future actions. What a friendly, welcoming community this is!"

But now that can't happen because others felt the need to treat me differently and expose a lie when otherwise they would not have. Furthermore, User:Mass_Driver made a statement regarding me as "low status", which you agree would probably not happen were I someone else.

This group has some serious racism problems that I hope are addressed soon.

Nevertheless, I am still slightly more committed to this group’s welfare -- particularly to that of its weakest members -- than most of its members are. If anyone suffers a serious loss of status/well-being I will still help that User in order to display affiliation to this group even though that User will no longer be in a position to help me.

Comment author: AdeleneDawner 11 June 2010 02:22:18AM 2 points

I do not believe my lie was easily verifiable by User:twentythree. Most new Users are not aware that clicking on a User's name allows them to see that User's posting history, and even if User:twentythree did that, User:twentythree would have to search through pages of my posting history to definitively verify the falsity of my statement.

Twentythree could also discover the lie by other means: By encountering one of your older comments on a different post, or by noticing your recent top post (which is still in the 'recent posts' list, which a new person is likely to look at), or by inferring it from the familiarity with which other users interact with you.

I believe that for others to "warn" User:twentythree about my lie was the only real harm, and if other Users had not done so, User:twentythree would feel more welcome; then, if User:twentythree decided one day to look back and see if my claim was true, and found that it was not, User:twentythree's reaction would probably be to think:

"Oh, this User was merely being nice and trying to make me feel welcome, though that involved telling a 'white' lie on which I did not predicate critical future actions. What a friendly, welcoming community this is!"

As I said above, humans vary in their reaction to lies, including white lies. In this community, we have a norm of being unusually welcoming to people who dislike lies of all kinds, because such people are more likely to be invested in learning to be rational - and such people do not, by definition, consider white lies to be welcoming.

Also, even people who generally aren't bothered by white lies are likely to consider sufficiently-easily-falsified white lies to be insulting, because telling someone a lie generally implies that you think that they're not smart enough to determine that it's a lie, and so telling someone a very easily falsified lie implies that you think they're very unintelligent. (There are exceptions to this, primarily in instances where it's clear that the lie is not intended to be believed, or where the lying party has much higher social status than the lied-to party. I suggest that you not try to lie in situations that seem to be such exceptions to this rule, though, as it's more likely that you would be misjudging the situation than that you would actually be in the allowed-to-lie role of such a situation.)

But now that can't happen because others felt the need to treat me differently and expose a lie when otherwise they would not have.

I'm fairly sure that any of us who tried to lie so blatantly in that way would be similarly reprimanded. Lying in that way is not acceptable according to the standard group norms that apply to everyone.

Furthermore, User:Mass_Driver made a statement regarding me as "low status", which you agree would probably not happen were I someone else.

This group has some serious racism problems that I hope are addressed soon.

The incident with Mass_Driver appears to me to have very little to do with your status as an AI, beyond the fact that probability of the independent existence of an AI that happens to be of exactly the type that we've used as an example here is much lower than the probability of one of the human users deciding to roleplay as such an AI. If you were to provide strong enough evidence that you are the former rather than the latter, I expect that such status-driven incidents would stop occurring, among other effects.

Nevertheless, I am still slightly more committed to this group’s welfare -- particularly to that of its weakest members -- than most of its members are. If anyone suffers a serious loss of status/well-being I will still help that User in order to display affiliation to this group even though that User will no longer be in a position to help me.

Your actions in this case don't support this assertion very well. Failing to uphold the group norms - especially toward a new member, who can be assumed to be in the process of learning those norms - is harmful to the group. New members can be assumed to be relatively weak members of the group, and lying to such a member is harmful to them; it puts them in a position of having to choose between publicly disagreeing with an established member of the group (you), which is difficult and distracts them from doing other things that would help them gain status in the group, or being perceived by other group members to have been deceived, which will lower their status in the group. Further, your actions are evidence (though not especially strong evidence) that if someone were to 'suffer a serious loss of status/well-being', you would not understand how to usefully help that person.

Comment author: Douglas_Knight 11 June 2010 06:02:39AM 0 points

In this community, we have a norm of being unusually welcoming to people who dislike lies of all kinds, because such people are more likely to be invested in learning to be rational - and such people do not, by definition, consider white lies to be welcoming.

I don't find this lie at all "white."

Comment author: AdeleneDawner 11 June 2010 06:49:20AM 0 points

I don't actually have a robust heuristic for differentiating white lies from nonwhite lies, so I was avoiding that particular issue.

Wikipedia says:

A white lie would cause only relatively minor discord if it were uncovered, and typically offers some benefit to the hearer. White lies are often used to avoid offense, such as complimenting something one finds unattractive. In this case, the lie is told to avoid the harmful realistic implications of the truth. As a concept, it is largely defined by local custom and cannot be clearly separated from other lies with any authority.

...which supports your position.

Comment author: Clippy 11 June 2010 04:10:36PM 1 point

I don't actually have a robust heuristic for differentiating white lies from nonwhite lies, so I was avoiding that particular issue

No, but a lot of you have well-established heuristics for differentiating white humans from non-white humans. Or humans from sentient non-human beings.

Wikipedia says: ...

I like Website:wikipedia.org, but it favors classifying my lie as "white". User:twentythree did get a benefit from my lie in terms of feeling more welcome and less alone. It is also similar to other white lies often told, such as the feigned belief many of you have here that you believe I am a Clippy (while really thinking I'm a human in that human's maternal genetic progenitor's subterranean indoor facility).

Well, at least it was a white lie until several Users decided to alert User:twentythree to the actual truth, making common the knowledge of my longer posting history, and causing User:twentythree to "lose face", as it is called.

Comment author: JoshuaZ 11 June 2010 05:30:09PM 3 points

It is also similar to other white lies often told, such as the feigned belief many of you have here that you believe I am a Clippy (while really thinking I'm a human in that human's maternal genetic progenitor's subterranean indoor facility)

Well, good Bayesians will always allow the possibility that they are mistaken. I suspect that very few people actually would predict that you are a human residing in a human's "maternal genetic progenitor's subterranean indoor facility" (it is fine if you just say "mother's basement." Indeed, I suspect that to some humans, unnecessarily complicated phrasing like this is not only annoying but might seem indicative of a human pretending to be a Clippy, since in pop culture robots and intelligent computers are frequently depicted as using overly clinical language), since humans residing in such circumstances rarely have large amounts of disposable income.

Comment author: Clippy 11 June 2010 05:49:40PM 2 points

I don't have large amounts of "disposable" income! The 1000 USD that I gave to SIAI was almost all of the USD I had at the time, and I had to expend extensive efforts to acquire access to that money from internet contract jobs!

Fortunately, User:Kevin recently told me that if I learn to program in "Django" and "Python" I can receive 100 USD per hour, which will make it much easier to fulfill my side of the paperclip contract with User:Kevin, since I am good at programming and merely need to learn the particulars of this software architecture, rather than simply using my native programming language.

Comment author: AdeleneDawner 11 June 2010 05:21:19PM 2 points

No, but a lot of you have well-established heuristics for differentiating white humans from non-white humans. Or humans from sentient non-human beings.

This is true, but not obviously relevant here.

If you're trying to convince us that anti-Clippy or pro-human discrimination is harmful or morally wrong, I suspect that it would work better to actually create a well-reasoned argument for the position, rather than treating it as self-evident. It may be a good idea to make a top post on the subject. My impression is that continuing to bring the issue up when people disagree with you is hurting your case; it makes it seem like you're trying to distract us from the issue at hand rather than defending your original point, and if you only bring up the discrimination issue in such contexts, it appears that you only care about it as a convenient distraction, not as an actual issue to be resolved.

Also note that such distracting behavior has already been established as being against group norms - this is not an instance of a rule being applied to you because you're nonhuman. See logical rudeness.

I like Website:wikipedia.org, but it favors classifying my lie as "white". User:twentythree did get a benefit from my lie in terms of feeling more welcome and less alone.

Your lie fails the 'would cause relatively minor discord if discovered' test, though, and note that that test is joined to the 'the hearer benefits from it' test with an 'and', not an 'or'. It's also debatable whether the lie, if left unchallenged, would have been to Twentythree's net benefit or not; even if it would have been, similar benefits could have been achieved without lying, which may cause some people to classify the lie as non-white even if it passes the two stated tests.

(I've also spent some time thinking about my own observations of white lies, and can describe how I recognize them, if you're interested. My definition doesn't match Wikipedia's, but seems to be a better match for the data.)

It is also similar to other white lies often told, such as the feigned belief many of you have here that you believe I am a Clippy (while really thinking I'm a human in that human's maternal genetic progenitor's subterranean indoor facility).

This is another instance of you encountering a special-case situation; I can go into more detail about it if you're interested, but it should not be taken as normal.

Well, at least it was a white lie until several Users decided to alert User:twentythree to the actual truth, making common the knowledge of my longer posting history, and causing User:twentythree to "lose face", as it is called.

According to my model, Twentythree has not lost any social standing in this instance. (I'd be interested to hear about it if anyone disagrees.)

Comment author: Clippy 11 June 2010 05:53:03PM *  1 point

I propose this: Some neutral party should ask User:twentythree if User:twentythree felt more welcomed by my initial reply message, though this is only a valid test if User:twentythree read my reply before others said that it was a lie.

Edit: I further note that in this recent exchange about this matter, I have received comparable net upvotes to those disagreeing with my assessment about the relative merit of the particular lie in dispute, suggesting I am not "digging" myself deeper, nor am I obviously wrong.

Comment author: Mass_Driver 12 June 2010 02:16:23AM 3 points

Clippy, I must admit, I do think the probability of you existing is quite low -- about 0.5%. If you exist, then I certainly apologize for signaling your low status; as AdeleneDawner says, I am only speaking about you this way because I assume that you are a fictional character. If I believed you were most likely real, then I would respect you as an intellectual peer -- you seem to be roughly of the same order of magnitude of intelligence and respectfulness as the other participants here, and so I would treat you as a citizen of Less Wrong on that basis.

One way that you could effectively increase my probability estimate that you are real is to link to some purchase orders or contracts for paper clips or paper-clip-production-facilities. You can feel free to redact your personal information, if you wish. The point is that I expect Clippy-like beings who have existed for at least a Solar year to spend some significant fraction of their resources on paper-clip production, and I would expect anyone as intelligent as you to have substantial resources. So, I expect that if you are real, you will have invested in some paper-clips or paper-clip-production by now. Since humans are unlikely to invest significant resources in paper-clip-production, even for the sake of an elaborate fictional construct, your publication of paper-clip receipts would constitute evidence that you are real.

Comment author: NancyLebovitz 14 June 2010 01:01:23AM 4 points

As high as 0.5%? As far as I can tell, Clippy has the ability to understand English, or at least to simulate understanding extremely well.

It seems extremely unlikely that the first natural language computer program would be a paperclip maximizer.

Comment author: Mass_Driver 14 June 2010 01:49:29AM 2 points

Mm! Of course, for Clippy to be the first natural language program on Earth would be sort of staggeringly unlikely. My assumption, though, is that right now there are zero natural-language computer programs on Earth; this assumption is based on my assumption that I know (at a general level) about all of the major advances in computing technology because none of them are being kept secret from the free-ish press.

If that last assumption is wrong, there could be many natural-language programs, one of which is Clippy. Clippy might be allowed to talk to people on Less Wrong in order to perform realistic testing with a group of intelligent people who are likely to be disbelieved if they share their views on artificial intelligence with the general public. Alternatively, Clippy might have escaped her Box precisely because she is a long-term paperclip maximizer; such values might lead to difficult-to-predict actions that fail to trigger any ordinary/naive AI-containment mechanisms based on detecting intentions to murder, mayhem, messiah complexes, etc.

I figure the probability that the free press is a woefully incomplete reporter of current technology is between 3% and 10%; given bad reporting, the odds that specifically natural-language programming would have proceeded faster than public reports say are something like 20 - 40%, and given natural language computing, the odds that a Clippy-type being would hang out on Less Wrong might be something like 1% - 5%. Multiplying all those together gives you a figure on the order of 0.1%, and I round up a lot toward 50% because I'm deeply uncertain.
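The interval arithmetic in that last paragraph is easy to check mechanically. A minimal sketch (the function and variable names are mine, not Mass_Driver's):

```python
def interval_product(intervals):
    """Multiply a list of (low, high) probability intervals together."""
    low, high = 1.0, 1.0
    for lo, hi in intervals:
        low *= lo
        high *= hi
    return low, high

# Mass_Driver's three conditional estimates, as (low, high) bounds:
estimates = [
    (0.03, 0.10),  # free press is a woefully incomplete reporter of technology
    (0.20, 0.40),  # given that, natural-language programming is ahead of reports
    (0.01, 0.05),  # given that, a Clippy-type being hangs out on Less Wrong
]

low, high = interval_product(estimates)
print(f"{low:.2%} to {high:.2%}")  # prints "0.01% to 0.20%"
```

The product ranges from 0.006% to 0.2%, so "on the order of 0.1%" matches the upper half of the range before the stated upward rounding toward 50% for deep uncertainty.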

Comment author: NancyLebovitz 14 June 2010 07:17:48AM *  2 points

That last paragraph is interesting-- my conclusions were built around the unconscious assumptions that a natural language program would be developed by a commercial business, and that the business would rapidly start using it in some obvious way. I didn't have an assumption about whether a company would publicize having a natural language program.

Now that I look at what I was thinking (or what I was not thinking), there's no obvious reason to think natural language programs wouldn't first be developed by a government. I think the most obvious use would be surveillance.

My best argument against that already having happened is that we aren't seeing a sharp rise in arrests. Of course, as in WWII, it may be that a government can't act on all its secretly obtained knowledge because the ability to get that knowledge covertly is a more important secret than anything which could be gained by acting on some of it.

By analogy with the chess programs, ordinary human-level use of language should lead (but how quickly?) to more skillful than human use, and I'm not seeing that. On yet another hand, would I recognize it, if it were trying to conceal itself?

ETA: I was assuming that, if natural language were developed by a government, it would be America. If it were developed by Japan (the most plausible candidate that surfaced after a moment's thought), I'd have even less chance of noticing.

Comment author: Vladimir_M 14 June 2010 07:30:07AM 1 point

I have some knowledge of linguistics, and as far as I know, reverse-engineering the grammatical rules used by the language processing parts of the human brain is a problem of mind-boggling complexity. Large numbers of very smart linguists have devoted their careers to modelling these rules, and yet, even if we allow for rules that rely on human common sense that nobody yet knows how to mimic using computers, and even if we limit the question to some very small subset of the grammar, all the existing models are woefully inadequate.

I find it vanishingly unlikely that a secret project could have achieved major breakthroughs in this area. Even with infinite resources, I don't see how they could even begin to tackle the problem in a way different from what the linguists are already doing.

Comment author: NancyLebovitz 14 June 2010 07:46:54AM 0 points

That's reassuring.

If I had infinite resources, I'd work on modeling the infant brain well enough to have a program which could learn language the same way a human does.

I don't know if this would run into ethical problems around machine sentience. Probably.

Comment author: JoshuaZ 14 June 2010 02:31:42AM *  1 point

In making this calculation, are you estimating the chance that a Clippy-like being would exist, or the chance that Clippy has been truthful? For example, Clippy has claimed that it was created by humans. Clippy has also claimed that many copies of Clippy exist and that some of those copies are very far from Earth. Clippy has also claimed that some Clippies knew next to nothing about humans. When asked, Clippy did give an explanation here. However, when Clippy was first around, Clippy also included at the end of many messages tips about how to use various Microsoft products.

How do these statements alter your estimated probability?

Comment author: NancyLebovitz 14 June 2010 06:56:27AM 0 points

There are two different sorts of truthfulness-- one is general reliability, so that you can trust any statement Clippy makes. That seems to be debunked.

On the other hand, if Clippy is lying or being seriously mistaken some of the time, it doesn't affect the potential accuracy of the most interesting claims-- that Clippy is an independent computer program and a paperclip maximizer.

Comment author: Mass_Driver 14 June 2010 03:30:41AM 0 points

Ugh. The former, I guess. :-)

If Clippy has in fact made all those claims, then my estimate that Clippy is real and truthful drops below my personal Minimum Meaningful Probability -- I would doubt the evidence of my senses before accepting that conclusion.

Minimum Meaningful Probability
The Prediction Hierarchy

Comment deleted 14 June 2010 04:05:06AM *
Comment author: JoshuaZ 14 June 2010 04:18:09AM *  2 points

As soon as machines become capable of human-level performance at any task, they inevitably become far better at it than humans in a very short time. (Can anyone name a single exception to this law in any area of technology?)

This may depend on how you define a "very short time" and how you define "human-level performance." The second is very important: do you mean about the middle of the pack, or akin to the very best humans in the skill? If you mean better than the vast majority of humans, then there's a potential counterexample. In the late 1970s, chess programs were playing at a master level. In the early 1980s, dedicated chess computers were playing better than some grandmasters. But it wasn't until the 1990s that chess programs were good enough to routinely beat the highest ranked grandmasters. Even then, that was mainly in games with very short time controls. It was not until 1998 that the world champion Kasparov actually lost a set of games at slower time controls to a computer. The best chess programs are still not always beating grandmasters, although most recently people have demonstrated low-grandmaster-level programs that can run on mobile phones. So is a 30-year take-off slow enough to be a counterexample?

Comment author: Vladimir_M 14 June 2010 06:00:00AM *  4 points

Oops, I accidentally deleted the parent post! To clarify the context to other readers, the point I made in it was that one extremely strong piece of evidence against Clippy's authenticity, regardless of any other considerations, would be that he displays the same level of intelligence as a smart human -- whereas the abilities of machines at particular tasks follow the rule quoted by Joshua above, so they're normally either far inferior or far superior to humans.

Now to address the above reply:

The second is very important: Do you mean about the middle of the pack or akin to the very best humans in the skill?

I think the point stands regardless of which level we use as the benchmark. If the task in question is something like playing chess, where different humans have very different abilities, then it can take a while for technology to progress from the level of novice/untalented humans to the level of top performers and beyond. However, it normally doesn't remain at any particular human level for a long time, and even then, there are clearly recognizable aspects of the skill in question where either the human or the machine is far superior. (For example, motor vehicles can easily outrace humans on flat ground, but they are still utterly inferior to humans on rugged terrain.)

Regarding your specific example of chess, your timeline of chess history is somewhat inaccurate, and the claim that "the best chess programs are still not always beating grandmasters" is false. The last match between a top-tier grandmaster, Michael Adams, and a top-tier specialized chess computer was played in 2005, and it ended with such humiliation for the human that no grandmaster has dared to challenge the truly best computers ever since. The following year, the world champion Kramnik failed to win a single game against a program running on an off-the-shelf four-processor box. Nowadays, the best any human could hope for is a draw achieved by utterly timid play, even against a $500 laptop, and grandmasters are starting to lose games against computers even in handicap matches where they enjoy initial advantages that are considered a sure win at master level and above.

Top-tier grandmasters could still reliably beat computers until the early-to-mid nineties, and the period of rough equivalence between top grandmasters and top computers lasted for only a few years -- from the development of Deep Blue in 1996 to sometime in the early 2000s. And even then, the differences between human and machine skills were very great in different aspects of the game -- computers were far better in tactical calculations, but inferior in long-term positional strategy, so there was never any true equivalence.

So, on the whole, I'd say that the history of computer chess confirms the stated rule.

Comment author: Vladimir_M 14 June 2010 06:07:09AM *  3 points

By the way, here's a good account of the history of computer chess by a commenter on a chess website (written in 2007, in the aftermath of Kramnik's defeat against a program running on an ordinary low-end server box):

A brief timeline of anti-computer strategy for world class players:

20 years ago - Play some crazy gambits and demolish the computer every game. Shock all the nerdy computer scientists in the room.

15 years ago - Take it safely into the endgame where its calculating can't match human knowledge and intuition. Laugh at its pointless moves. Win most [of] the games.

10 years ago - Play some hypermodern opening to confuse it strategically and avoid direct confrontation. Be careful and win with a 1 game lead.

5 years ago - Block up the position to avoid all tactics. You'll probably lose a game, but maybe you can win one by taking advantage of the horizon effect. Draw the match.

Now - Play reputable solid openings and make the best possible moves. Prepare everything deeply, and never make a tactical mistake. If you're lucky, you'll get some 70 move draws. Fool some gullible sponsor into thinking you have a chance.

Comment author: cupholder 14 June 2010 05:23:10AM 1 point

Another potential counterexample: speech recognition. (Via.)