This is too confused for a human to follow, and much too confused to program an AI with.
Also, ambiguity aside, (2) is just bad. I'm having trouble imagining a concrete interpretation of "don't over-optimize" that doesn't reduce to "fail to improve things that should be improved". And while short-sightedness is a problem for humans, who have trouble modelling the future, I don't think AIs have that problem, and there are some interesting failure modes (of the destroys-humanity variety) that arise when an AI takes too long a view.
Then apparently Less Wrong readers are more stupid or more ignorant than your previous audience, in which case I am afraid you will have to dumb down your writing so that it is comprehensible and useful to your current target audience.
Then apparently Less Wrong readers are more stupid or more ignorant than your previous audience.
This is the type of strawman that frustrates me. I said nothing of the sort.
An equally valid interpretation (and my belief) is that LessWrong readers are much less tolerant of common English phrases and much more prone to inventing strawmen, to the point of making communication at any reasonable speed nearly impossible. I'm really starting to get the lesson that LessWrong is conservative to an extreme (this is not a criticism at all).
Your point...
In the spirit of Asimov’s 3 Laws of Robotics, it is my contention that Yudkowsky’s CEV converges to the following 3 points:
I further contend that, if this CEV is translated into the 3 Goals above and implemented in a Yudkowskian Benevolent Goal Architecture (BGA), the result would be a Friendly AI.
It should be noted that evolution and history suggest that cooperation and ethics are stable attractors, while submitting to slavery (when you don’t have to) is not. This formulation expands Singer’s Circles of Morality as far as they’ll go and tries to eliminate irrational Us-Them distinctions based on anything other than optimizing goals for everyone. That is the same direction humanity seems headed in, and exactly where current SIAI proposals come up short.
Once again, this is cross-posted here on my blog (unlike with my last article, I have no idea whether this will be karma'd out of existence or not ;-)