Larks comments on CEV-inspired models - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (41)
"if the AI did exactly what they wanted" as opposed to "if the universe went exactly as they wanted" to avoid issues with unbounded utility functions? This seems like it might not be enough if the universe itself were unbounded in the relevant sense.
For example, suppose my utility function is U(Universe) = #paperclips, which is unbounded in a big universe. Then you're going to normalise me as assigning U(AI becomes clippy) = 1, and U(individual paperclips) = 0.
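A minimal sketch of the normalisation being discussed, under the assumption (not stated explicitly in the thread) that each agent's utility is affinely rescaled so the status quo maps to 0 and "the AI does exactly what they want" maps to 1. With an unbounded utility like paperclip count, the best outcome scales with the size of the universe, so any fixed number of individual paperclips normalises towards 0:

```python
def normalise(u, u_status_quo, u_best):
    """Affinely rescale a raw utility so that the status quo maps to 0
    and the best achievable outcome maps to 1."""
    return (u - u_status_quo) / (u_best - u_status_quo)

# For the paperclip maximiser, u_best grows with the size of the
# reachable universe, so one individual paperclip is worth less and
# less after normalisation as the universe gets bigger.
for universe_size in (10**6, 10**12, 10**18):
    u_best = universe_size          # paperclips if the AI becomes clippy
    print(normalise(1, 0, u_best))  # normalised value of one paperclip
```

The `normalise` helper and the concrete universe sizes are illustrative assumptions, not part of the original proposal; the point is only that the normalised value of any finite paperclip count vanishes in the limit.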
Yep.
So, most likely, a certain proportion of the universe will become paperclips.