Technologos comments on Reference class of the unclassreferenceable - Less Wrong
I'm perfectly willing to grant that, over the scope of human history, the reference classes for cryo/AGI/Singularity have produced near-0 success rates. I'd modify the classes slightly, however:
The point is, each of these reference classes, given a small tweak, has experienced infrequent but nonzero successes--and that over the course of all of human history! Once we update the "all of human history" reference class/prior to account for the last century--in which technology has probably developed faster than in the entire previous millennium--the posterior ends up looking much more promising.
I think taw asked about reference classes of predictions. It's easy to believe in penicillin after it's been invented.
People discovered it because they were LOOKING for antibiotics explicitly. Fleming had previously found lysozyme, had cultivated slides where he could see growth irregularities very well, etc. The claim of fortuitous discovery is basically false modesty (see "Discovering" by Robert Root-Bernstein).
Even if we prefer to frame the reference class that way, we can instead note that anybody who predicted that things would remain the way they are (in any of the above categories) would have been wrong. People making that prediction in the last century have been wrong with increasing speed. As Eliezer put it, "beliefs that the future will be just like the past" have a zero success rate.
Perhaps the inventions listed above suggest that it's unwise to assign 0% chance to anything on the basis of present nonexistence, even if you could construct a reference class that has that success rate.
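One standard formalization of that point (my own illustration, not something from the thread) is Laplace's rule of succession: even a reference class with zero recorded successes leaves a small but nonzero posterior probability for the next attempt, provided you start from a prior that doesn't rule success out entirely.

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Posterior probability that the next trial succeeds, given
    `successes` out of `trials` so far, under a uniform prior on
    the unknown success rate (Laplace's rule of succession)."""
    return Fraction(successes + 1, trials + 2)

# A reference class with 0 successes in 100 recorded attempts still
# assigns the next attempt probability 1/102, not 0:
print(rule_of_succession(0, 100))  # 1/102
```

The choice of a uniform prior is of course contestable, but the qualitative conclusion survives any prior with nonzero mass away from 0%: an observed success rate of zero does not license a probability of exactly zero.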
Either way, people who predicted that human life would be lengthened considerably, that humanity would fundamentally change in structure, or that some people would interact with beings that appear nigh-omnipotent have all been right with some non-zero success rate, and there's no particular reason to reject those data.
The negation of "a Singularity will occur" is not "everything will stay the same", it's "a Singularity as you describe it probably won't occur". I've no idea why you (and Eliezer elsewhere in the thread) are making this obviously wrong argument.
Perhaps I was simply unclear. Both my immediately prior comment and its grandparent were arguing only that there should be a nonzero expectation of a technological Singularity, even from a reference class standpoint.
The reference class of predictions about the Singularity can, as I showed in the grandparent, include a wide variety of predictions about major changes in the human condition. The complement or negation of that reference class is a class of predictions that things will remain largely the same, technologically.
Often, when people appear to be making an obviously wrong argument in this forum, it's a matter of communication rather than massive logic failure.