# shminux comments on Assessing Kurzweil: the results - Less Wrong

45 points · 16 January 2013 04:51PM


Comment author: 15 January 2013 04:44:58PM 13 points [-]

> even a true rate of 30% is much higher than chance.

What is the chance rate, and how do you calculate it?

Comment author: 16 January 2013 07:59:29AM 12 points [-]

I'd also like to compare Kurzweil against the success rate of other predictors.

Some predictions might be very obvious.

Comment author: 16 January 2013 08:04:20PM 7 points [-]

Yes, if one has an abundant source of likely, obvious predictions, one can arbitrarily 'juice' one's overall accuracy rate even if most of the surprising predictions go wrong. On the other hand, judging 'obviousness' in hindsight is very tricky.

One also has to pay attention to the independence of predictions. E.g. one could predict the continuation of Moore's Law as one prediction or as many predictions with connected answers: a prediction about chips in laptops, a prediction about chips in supercomputers, a prediction about the performance of algorithms with well-understood hardware scaling, etc. In the extreme, one could make 1000 predictions about computer performance in consecutive minutes, which would almost certainly rise or fall together.

Kurzweil's separate predictions aren't perfectly correlated (e.g. serial speed broke off from supercomputer performance in FLOPS) but many of them are far from independent.
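The 'juicing' effect described above is easy to see with a toy calculation (a sketch only; the counts and hit rates are invented for illustration, not taken from the Kurzweil data):

```python
# Toy model: 20 genuinely hard predictions with a 30% hit rate are
# padded with 80 near-certain "obvious" ones with a 95% hit rate.
hard_hits = 0.30 * 20   # expected hits on the hard predictions -> 6
easy_hits = 0.95 * 80   # expected hits on the obvious predictions -> 76

hard_only = hard_hits / 20
padded = (hard_hits + easy_hits) / 100

print(f"hard-only accuracy: {hard_only:.0%}")  # 30%
print(f"padded accuracy:    {padded:.0%}")     # 82%
```

The padded headline number says almost nothing about performance on the surprising predictions, which is the whole point of the comment above.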

Comment author: 29 January 2013 10:13:35PM -1 points [-]

Carl is basically pointing out that assessing predictions is tricky business, because it's hard to be objective.

Here are a few points that need to be taken into account:

1. People have a lot to gain from being defensively pessimistic. It prevents them from being disappointed at some point in the future, while the option of being pleasantly surprised remains open. Defensive pessimism also keeps you from looking crazy to your peers. After all, who wants to be the only one in a group of 10 to think that by 2030 we'll have nanobots in our brains?

2. The poster assessed Kurzweil's predictions because he felt the need to do so. Why did he feel the need to do so? Is this about defensive pessimism?

3. It is safe to assume that a random selection of assessors would be biased towards judging 'False' for two obvious reasons. The first is that they are uninformed about technology and simply can't properly judge the lion's share of the predictions. The second is defensive pessimism.

4. Why is it judged that a 30% 'Strong True' is a weak score? In comparison to the predictions of futurologists before Kurzweil, 30% seems like an excellent score to me. It strikes me as a score that a man with a blurred vision of the future would have. But blurred vision of the future is all you can ever have. If the future were here, you'd be able to see it sharply in focus. Having blurred vision of the future is a real skill. Most people (SL0) have no vision of the future whatsoever.

5. How many years does a prediction have to be off in order for it to be wrong? How would you determine this number of years objectively?

6. Why did the assessors have to go with the 5-step-true-to-false system? Is that really the best way of assessing a futurologist's predictions? I understand that we are a group of rational people here, but sometimes, you've gotta let go of the numbers, the measurements, the data and the whole binary thinking. Sometimes, you have to go back to just being a guy with common sense.

Take Kurzweil's long-standing predictions for solar power, for example. He's been predicting for years that the solar tipping point would be around 2010. Spain hit grid parity in 2012 and news outlets are saying that the USA and parts of Europe will hit grid parity in 2013.

Calling Kurzweil's prediction on solar power wrong just because it's happening 2 to 3 years after 2010 is wrong, in my opinion.

Kurzweil deserves some slack here. In the 1980s he predicted a computer would beat a human chess player in 1998. And that ended up happening a year earlier, in 1997.

Kurzweil has blurry vision of the future. He might be a genius, but he is also just a human being that doesn't have anything better to go on than big data. Simple as that.

Instead of bickering about his predictions, we would do better to just look at the big picture of things.

Nanotech, wireless, robotics, biotech, AI... all of it is happening.

And be honest about Google's self-driving car, which came out 2 years ago already: that was just an unexpected leap into the future right there!

I don't think Kurzweil himself saw self-driving cars coming in 2011 already.

And to really hammer the point home, the self-driving car had thousands of registered miles when it was introduced at the start of 2011. So it was probably already finished in 2010.

For all we know, the Singularity will occur in 2030. We just don't know.

Kurzweil has brought awareness to the world. Rather than sit around and count the right and wrong ones as the years pass by, the world would do better if it tried turning those predictions into self-fulfilling prophecies.

Comment author: 17 January 2013 08:07:26AM *  3 points [-]

I don't know which predictions, if any, are obvious, but by comparing Kurzweil to other predictors at the same time when he wrote the book (if there were any), we could see how much better he does.

Comment author: 16 January 2013 06:33:43PM 0 points [-]

Which predictions are very obvious?

Comment author: 22 January 2013 06:59:32PM 4 points [-]

As a (perhaps) trivial example, consider the pair of predictions:

• "Intelligent roads are in use, primarily for long-distance travel."
• "Local roads, though, are still predominantly conventional."

As one of the people who participated in this study, I marked the first as false and the second as true. Yet the second "true" prediction seems like it is only trivially true. (Or perhaps not; I might be suffering from hindsight bias here.)

Comment author: 30 January 2013 01:43:52PM 0 points [-]

But why was this counted as two separate predictions? The two statements are even syntactically linked by the "though" conjunction.

Comment author: 30 January 2013 02:56:08PM *  2 points [-]

Why oughtn't it be? The construction "A, though B" is an independent assertion of A and B. Syntactic linkage is not enough to establish contingency.

It is not like "A, because B" for example, where it's arguably unfair if B and A are both false to count it as two failures... in that case, the claim of A can be seen as contingent on the claim of B, and not independent.

To put this differently, "A, though B" makes the following claims:

• A
• B
• You might (mistakenly) expect ¬B given A, which is why I mention B explicitly.

Whereas "A, because B" makes the following claims:

• B
• If B, then A

If A happens in the first case, the first claim is correct. If B happens, the second is correct. If both happen, both claims are correct.

If, in the second case, A happens but B doesn't, the claim of B is incorrect and the conditional claim is unevaluatable.

(I suppose one could argue that the second case implicitly claims "if -B, then -A" as well... "because" is somewhat ambiguous in English.)
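The distinction sketched above can be written as two small scoring functions (the helper names are mine, and treating a failed conditional as "unevaluatable" follows the convention in the comment rather than classical material implication):

```python
def claims_though(a: bool, b: bool) -> dict:
    """'A, though B' asserts A and B as two independent claims."""
    return {"A": a, "B": b}

def claims_because(a: bool, b: bool) -> dict:
    """'A, because B' asserts B, plus the conditional 'if B, then A',
    which we treat as unevaluatable when B does not occur."""
    return {"B": b, "if B then A": a if b else "unevaluatable"}

# A happened but B didn't: 'though' scores one hit and one miss,
# while 'because' scores one miss and one unevaluatable claim.
print(claims_though(True, False))
print(claims_because(True, False))
```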

Comment author: 30 January 2013 03:06:32PM 2 points [-]

This is only a problem because we haven't been comparing the relative "difficulty" of predictions. Admittedly this is hard to do, but I think it's clear that:

1. "Intelligent roads are in use, primarily for long-distance travel." is a much more ambitious prediction than "Local roads, though, are still predominantly conventional."

2. Treating the two statements as a single prediction "A, though B" is more ambitious than either, and should be worth as many points as the two of them combined.

Moreover, any partial credit for "A, though B" would take into account that B happened though A didn't. Or rather, a prediction that intelligent roads are only somewhat in use should receive more credit than a prediction that intelligent roads are ubiquitous.
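One way to make "worth as many points as the two of them combined" precise is logarithmic scoring, under which the score of a joint prediction decomposes exactly into the sum of the scores of its parts. A minimal sketch, with probabilities invented purely for illustration:

```python
import math

# Hypothetical probabilities a forecaster might assign:
p_a = 0.2          # "intelligent roads in use" (the ambitious claim)
p_b_given_a = 0.9  # "local roads still conventional", given the first

# If both claims come true, the log score of the joint prediction
# "A, though B" equals the sum of the scores of the separate claims.
joint = math.log(p_a * p_b_given_a)
separate = math.log(p_a) + math.log(p_b_given_a)

print(math.isclose(joint, separate))  # True: joint credit = combined credit
```

Under such a rule, the more ambitious (lower-probability) claim automatically carries more of the score, which addresses the difficulty concern directly.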

Comment author: 30 January 2013 05:33:19PM 0 points [-]

Agreed that understanding the "difficulty" of a prediction is key if we're going to evaluate the reliability of a predictor in a useful way.

Comment author: 31 January 2013 08:47:05PM 1 point [-]

In the future, we might distinguish "difficult" predictions from trivial ones by seeing if the predictions are unlike the predictions made by others at the same time. This is easy to do if we evaluate contemporary predictions.

But I have no idea how to accomplish this when looking back on past predictions. I can't help but feel that some of Kurzweil's predictions are trivial, yet how can we tell for sure?

Comment author: 30 January 2013 05:13:22PM *  0 points [-]

Well, if you analyze the statements in terms of propositional logic, then all the English-language conjunctions "and", "but", "though", etc. map to the single logical conjunction ∧.

But natural language is richer than (directly mapped) propositional logic. I interpret the statement "Local roads, though, are still predominantly conventional." as a clarification of "Intelligent roads are in use, primarily for long-distance travel.".

Formally, if you just claim:
"Intelligent roads are in use, primarily for long-distance travel."
it is equivalent to:
"Intelligent roads are in use, primarily for long-distance travel." ∧ ( "Local roads are still predominantly conventional" ∨ ¬"Local roads are still predominantly conventional" )
which is different from
"Intelligent roads are in use, primarily for long-distance travel." ∧ "Local roads are still predominantly conventional"

However, we can assume that if you claim:
"Intelligent roads are in use, primarily for long-distance travel."
you also wanted to communicate that
"Local roads are still predominantly conventional"
not that you are undecided between
"Local roads are still predominantly conventional", ¬"Local roads are still predominantly conventional"
otherwise you would probably have stated that explicitly.

Therefore, the information content of:
"Intelligent roads are in use, primarily for long-distance travel."
and:
"Intelligent roads are in use, primarily for long-distance travel. Local roads, though, are still predominantly conventional."
is roughly the same.
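The equivalence claimed above can be checked mechanically with a four-row truth table (a quick sketch in plain Python):

```python
from itertools import product

for a, b in product([False, True], repeat=2):
    plain = a                            # just "A"
    with_tautology = a and (b or not b)  # A ∧ (B ∨ ¬B)
    conjunction = a and b                # A ∧ B
    # Conjoining a tautology never changes the truth value of A...
    assert with_tautology == plain
    print(f"A={a!s:5} B={b!s:5}  A∧(B∨¬B)={with_tautology!s:5}  A∧B={conjunction}")
# ...but A ∧ B differs from plain A in the row A=True, B=False.
```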

Comment author: 15 January 2013 05:51:14PM 11 points [-]

Subjective impression. The predictions are so varied and sometimes so ambiguous that there's no decent way of establishing a base rate. But picking some predictions at random, they appear to be quite specific (certainly better than a dart-throwing chimp):

• "For the most part, these truly personal computers have no moving parts."
• "Unused computes on the Internet are being harvested, creating virtual parallel supercomputers with human brain hardware capacity."
• "The security of computation and communication is the primary focus of the U.S. Department of Defense."
• "Military conflicts between nations are rare, and most conflicts are between nations and smaller bands of terrorists."

Comment author: 16 January 2013 07:28:14PM 2 points [-]

A chance rate isn't the right thing to compare to, I think. It would have to be randomly generated predictions, wouldn't it? But any non-expert human will do much better than that, since basic knowledge such as that the Earth will stay in orbit around the sun rules out most of these.

I think the right thing to compare to is if he did significantly better than I would have. Which he probably did, which means I can improve my vision of the future by reading Kurzweil.

Comment author: 16 January 2013 08:31:34PM 2 points [-]

> I think the right thing to compare to is if he did significantly better than I would have.

How do you know how you would have done? Have you tried?