This is too confused for a human to follow, and much too confused to program an AI with.
Also, ambiguity aside, (2) is just bad. I'm having trouble imagining a concrete interpretation of "don't over-optimize" that doesn't reduce to "fail to improve things that should be improved". And while short-sightedness is a problem for humans who have trouble modelling the future, I don't think AIs have that problem, and there are some interesting failure modes (of the destroys-humanity variety) that arise when an AI takes too much of a long view.
The burden of clarity falls on the writer. Not all confusion is the writer's fault, but confused writing is a very major problem in philosophy. In fact, I would say it's more of a problem than falsehood is. There's no shame in being confused - almost everyone is, especially around complex topics like morality. But you can't expect to make novel contributions that are any good until you've untangled the usual confusions and understood the progress that's previously been made.
A good point, and well written. My counter-point is that numerous other people have not had problems with my logic; have not needed special definitions for terms that were pretty clearly standard English; and have not insisted on throwing up strawmen, etc.
Your assumption is that I haven't untangled the usual confusions and that I haven't read the literature. It's an argument from authority, but I can't help pointing out that I was a Philosophy major 30 years ago and have been reading and learning constantly since then. Further, the outside view is that it is LessWrong that is confused and intolerant of outside views.
===
Your second argument is a classic case of a stupid super-intelligent AI.
Then apparently Less Wrong readers are more stupid or more ignorant than your previous audience, in which case I am afraid you will have to dumb down your writing so that it is comprehensible and useful to your current target audience.
In the spirit of Asimov’s 3 Laws of Robotics
It is my contention that Yudkowsky's Coherent Extrapolated Volition (CEV) converges to the following 3 points:
I further contend that, if this CEV is translated into the 3 Goals above and implemented in a Yudkowskian Benevolent Goal Architecture (BGA), the result would be a Friendly AI.
It should be noted that evolution and history say that cooperation and ethics are stable attractors, while submitting to slavery (when you don't have to) is not. This formulation expands Singer's Circles of Morality as far as they will go and tries to eliminate irrational Us-Them distinctions based on anything other than optimizing goals for everyone; that is the same direction humanity seems to be headed in, and exactly where current SIAI proposals come up short.
Once again, this is cross-posted here on my blog (unlike with my last article, I have no idea whether this one will be karma'd out of existence or not ;-)