This is too confused for a human to follow, and much too confused to program an AI with.
Ambiguity aside, (2) is also just bad. I'm having trouble imagining a concrete interpretation of "don't over-optimize" that doesn't reduce to "fail to improve things that should be improved". And while short-sightedness is a problem for humans, who have trouble modelling the future, I don't think AIs have that problem, and there are some interesting failure modes (of the destroys-humanity variety) that arise when an AI takes too long a view.
Um. Do I have a choice about creating multiple top-level posts? (Yes, that is a serious question) Once a post is below threshold...
Yes, you have a choice about making top-level posts. If you keep making such poor ones, so often, with so little improvement, that choice will be taken away. If you made better ones, you could become a valuable contributor; if you made poor ones infrequently, you'd be a very low-level nuisance not worth tackling. As it is, every time you post I'm more tempted, in defense of the signal-to-noise ratio, to ban the noise you throw around.
I endorse everything Kingreaper said in the grandparent. You would do well to take such kind advice more seriously.
Got it. Believe it or not, I am trying to figure out the rules (which are radically different from a number of my initial assumptions) and not trying solely to be a pain in the ass.
I'll cool it on the top level posts.
Admittedly, a lot of my problem is that either there is a really huge double standard or I'm missing something critical. To illustrate: Kingfisher's comment says, "Something is clear if it is easily understood by those with the necessary baseline knowledge." My posts are, elsewhere, considered very clear by people with less baseline knowledge...
In the spirit of Asimov’s 3 Laws of Robotics
It is my contention that Yudkowsky’s CEV converges to the following 3 points:
I further contend that, if this CEV is translated into the 3 Goals above and implemented in a Yudkowskian Benevolent Goal Architecture (BGA), the result would be a Friendly AI.
It should be noted that evolution and history say that cooperation and ethics are stable attractors, while submitting to slavery (when you don't have to) is not. This formulation expands Singer's Circles of Morality as far as they will go and tries to eliminate irrational Us-Them distinctions based on anything other than optimizing goals for everyone: the same direction in which humanity seems headed, and exactly where current SIAI proposals come up short.
Once again, cross-posted here on my blog (unlike with my last article, I have no idea whether this one will be karma'd out of existence or not). ;-)