pjeby comments on Open Thread: February 2010 - Less Wrong
A query to Unknown, with whom I have this bet going:
I recently found within myself a tiny shred of anticipation-worry about actually surviving to pay off the bet. Suppose that the rampant superintelligence proceeds to take over its future light cone but, in the process of disassembling existing humans, stores their mind-states. Some billions of years later, the superintelligence runs across an alien civilization which succeeded on their version of the Friendly AI problem and is at least somewhat "friendly" in the ordinary sense, concerned about other sentient lives; and the superintelligence ransoms us to them in exchange for some amount of negentropy which outweighs our storage costs. The humans alive at the time are restored and live on, possibly having been rescued by the alien values of the Super Happy People or some such, but at least surviving.
In this event, who wins the bet?
Perhaps you've already defined "superintelligent" as meaning "self-directed, motivated, and recursively self-improving" rather than merely "able to provide answers to general questions faster and better than human beings"... but if you haven't, it seems to me that the latter definition of "superintelligent" would give you a much higher probability of losing the bet. (For example, a Hansonian "em" running on faster hardware, perhaps with a few software upgrades, might fit the latter definition.)