Vladimir_Nesov comments on What I Think, If Not Why - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (100)
Phil, I don't see the point in criticizing a flawed implementation of CEV. If we don't know how to implement it properly, if we don't understand how it's supposed to work in much more technical detail than the CEV proposal provides, it shouldn't be implemented at all, any more than a garden-variety unFriendly AI should. If you can point out a genuine flaw in a specific scenario of an FAI's operation, a correct implementation of CEV shouldn't lead to that scenario. To answer your question: yes, CEV could decide to disappear completely, construct an unintelligent artifact, or produce an AI with some strange utility function. It makes a single decision, an attempt to deliver humane values across the threshold of our inability to self-reflect, and what comes of it is anyone's guess.