Kindly comments on Assessing Kurzweil: the results - Less Wrong
Why oughtn't it be? The construction "A, though B" is an independent assertion of A and B. Syntactic linkage is not enough to establish contingency.
It is not like "A, because B", for example, where it is arguably unfair to count two failures when both A and B are false... in that case, the claim of A can be seen as contingent on the claim of B, not independent of it.
To put this differently, "A, though B" makes the following claims:
A
B
You might (mistakenly) expect -B given A, which is why I mention B explicitly.
Whereas "A, because B" makes the following claims:
B
If B, then A
If A happens in the first case, the first claim is correct. If B happens, the second is correct. If both happen, both claims are correct.
If A happens in the second case but B doesn't, the first claim is incorrect and the second claim is unevaluatable.
(I suppose one could argue that the second case implicitly claims "if -B, then -A" as well... "because" is somewhat ambiguous in English.)
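The two readings above can be sketched as scoring rules. This is a minimal illustration of the distinction being drawn, not anyone's actual scoring method; the function names and the use of `None` for "unevaluatable" are my own choices.

```python
def score_though(a: bool, b: bool) -> list:
    """Score "A, though B" as two independent claims: A, and B."""
    return [a, b]  # each claim is simply right or wrong

def score_because(a: bool, b: bool) -> list:
    """Score "A, because B" as: B, and "if B, then A".

    The conditional is marked unevaluatable (None) when B is false.
    """
    conditional = a if b else None
    return [b, conditional]

# Illustration: intelligent roads (A) did not materialize,
# but conventional local roads (B) did.
print(score_though(False, True))    # [False, True]: one miss, one hit
print(score_because(False, True))   # [True, False]: B right, conditional wrong
print(score_because(False, False))  # [False, None]: conditional unevaluatable
```

Under this sketch, "A, though B" always yields two gradable claims, while "A, because B" can leave one claim ungraded, which is the asymmetry the comment is pointing at.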
This is only a problem because we haven't been comparing the relative "difficulty" of predictions. Admittedly, that is hard to do, but I think it's clear that:
"Intelligent roads are in use, primarily for long-distance travel." is a much more ambitious prediction than "Local roads, though, are still predominantly conventional."
Treating the two statements as a single prediction "A, though B" is more ambitious than either, and should be worth as many points as the two of them combined.
Moreover, any partial credit for "A, though B" would take into account that B happened though A didn't. Or rather, a prediction that intelligent roads are only somewhat in use should receive more credit than a prediction that intelligent roads are ubiquitous.
Agreed that understanding the "difficulty" of a prediction is key if we're going to evaluate the reliability of a predictor in a useful way.
In the future, we might distinguish "difficult" predictions from trivial ones by checking whether they differ from the predictions others are making at the same time. This is easy to do when evaluating contemporary predictions.
But I have no idea how to accomplish this when looking back on past predictions. I can't help but feel that some of Kurzweil's predictions are trivial, yet how can we tell for sure?