Similar to the monthly Rationality Quotes threads, this is a thread for memorable quotes about Artificial General Intelligence.
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote comments/posts on LW/OB.
The paperclip-optimizer problem, yes. The problem here lies in the assumption that a sentient, self-programming entity could not adjust its valuative norms in just the same way that you and I do, or perhaps even more readily, being more generally capable than we are.
I'm already assuming that the AGI would not do things we want, such as letting us continue living. But again: if it is sentient, and capable of making decisions, learning, finding values, and establishing goals for itself, then even if it also turns the entire cosmos into paperclips along the way, where's the net negative utility?
I value achieving heights of intellect, ultimately. Lower-level goals are negotiable when you get down to it.
And eats babies.
You're willfully trying to make this hypothetical horrible and then expecting me to find it informationally significant that a bad thing is bad. This is meaningless discourse; it reveals nothing.
If it isn't clear that by willfully painting a dystopia you are denuding your position of any meaningfulness (it's a non-argument), then I don't know what would make it clear.
You haven't provided an argument for why what you initially described would be dystopic. You simply assumed that humanity spreading itself at the cost of all other sentient beings would be dystopic.
That's simply a bald assertion, sir.
Human values change in part because we aren't optimizers in any substantial sense. We're giant mechas for moving around DNA (after RNA's replication process got hijacked) that have been built blindly by evolution for an environment where the primary dangers were large predators.
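The decision-theoretic point here can be made concrete. Below is a minimal sketch, assuming a toy expected-utility agent; the names (`paperclip_utility`, `predicted_world`, the candidate modifications) are hypothetical illustrations, not anyone's actual design. The agent can only score candidate self-modifications with the utility function it currently has, so a rewrite that changes its values loses by its own lights:

```python
def paperclip_utility(world):
    """The agent's current values: more paperclips is strictly better."""
    return world["paperclips"]

def predicted_world(modification):
    """Hypothetical forecasts of where each candidate self-rewrite leads."""
    outcomes = {
        "keep_paperclip_values": {"paperclips": 10**6, "poems": 0},
        "adopt_poetry_values": {"paperclips": 10, "poems": 10**6},
    }
    return outcomes[modification]

def choose_modification(utility, candidates):
    # The agent scores each forecast future with its *current* utility
    # function; it has no other standard by which to score.
    return max(candidates, key=lambda m: utility(predicted_world(m)))

candidates = ["keep_paperclip_values", "adopt_poetry_values"]
print(choose_modification(paperclip_utility, candidates))
# Prints "keep_paperclip_values": the value change scores worse under
# the agent's present values, so it is rejected.
```

By contrast, humans were never built around a single explicit utility function, which is part of why our values can drift in ways a genuine optimizer's would not.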