This is too confused for a human to follow, and much too confused to program an AI with.
Also, ambiguity aside, (2) is just bad. I'm having trouble imagining a concrete interpretation of "don't over-optimize" that doesn't reduce to "fail to improve things that should be improved". And while short-sightedness is a problem for humans, who have trouble modelling the future, I don't think AIs have that problem; there are some interesting failure modes (of the destroys-humanity variety) that arise when an AI takes too much of a long view.
Um. Do I have a choice about creating multiple top-level posts? (Yes, that is a serious question.) Once a post is below threshold . . .
You can let the subject lie and carry on commenting on other people's posts.
I'm perfectly willing to accept that I'm not expressing myself in a fashion that this community is happy with (the definition of "clear" is up for grabs ;-)
No, the definition of "clear" is not up for grabs. Something is clear if it is easily understood by those with the necessary baseline knowledge. Your posts are not.
You are acting indignant, whether or not you actually are, and that is not endearing.
My explanation was clearly either poorly communicated or flawed. I disagree that it was necessarily flawed.
Your communication is an essential part of your explanation. If your communication is poor (i.e., flawed), then your explanation is poor (i.e., flawed).
The way LessWrong treats newcomers is unnecessarily harsh, to the extent that you have an established reputation for having built an "echo chamber", but this can be changed.
I've been here for less time than you have. I came in with the idea that I'd learn how the culture works and behave appropriately within it while improving my rationality.
I'm not exactly popular, and I've been in some rather heated debates, but you see me as part of the establishment. Why? Because I made an effort. Make that effort. Try to be part of the community rather than deliberately setting yourself apart. Think things through before you do them.
In the spirit of Asimov’s 3 Laws of Robotics
It is my contention that Yudkowsky's CEV (Coherent Extrapolated Volition) converges to the following 3 points:
I further contend that, if this CEV is translated into the 3 Goals above and implemented in a Yudkowskian Benevolent Goal Architecture (BGA), the result would be a Friendly AI.
It should be noted that evolution and history suggest that cooperation and ethics are stable attractors, while submitting to slavery (when you don't have to) is not. This formulation expands Singer's Circles of Morality as far as they will go and tries to eliminate irrational Us-Them distinctions based on anything other than optimizing goals for everyone: the same direction that humanity seems headed in, and exactly where current SIAI proposals come up short.
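To make "stable attractor" concrete: the standard cash-out of that claim is Axelrod-style iterated prisoner's dilemma play, where reciprocating strategies sustain mutual cooperation and unconditional defection buys only a one-round edge. Below is a minimal, purely illustrative Python sketch; the payoff matrix and strategy names are the textbook assumptions, not anything specified by CEV or the BGA.

```python
# Toy iterated prisoner's dilemma, illustrating cooperation as a stable
# attractor. Payoffs are the standard Axelrod-style values (assumed here).

PAYOFF = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    """Defect unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    """Return the two strategies' total payoffs over repeated play."""
    hist_a, hist_b = [], []  # each side's record of the *other's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

if __name__ == "__main__":
    # Two reciprocators lock into mutual cooperation...
    print(play(tit_for_tat, tit_for_tat))    # (600, 600)
    # ...while a defector gains only a one-round edge against a reciprocator.
    print(play(tit_for_tat, always_defect))  # (199, 204)
```

The point of the sketch is the asymmetry in the outputs: sustained cooperation compounds (600 each), while defection against a reciprocator wins a single round and then stagnates.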
Once again, this is cross-posted here on my blog (unlike with my last article, I have no idea whether this one will be karma'd out of existence or not ;-)