This is too confused for a human to follow, and much too confused to program an AI with.
Also, ambiguity aside, (2) is just bad. I'm having trouble imagining a concrete interpretation of "don't over-optimize" that doesn't reduce to "fail to improve things that should be improved". And while short-sightedness is a problem for humans, who have trouble modelling the future, I don't think AIs have that problem; there are some interesting failure modes (of the destroys-humanity variety) that arise when an AI takes too long a view.
Um. Do I have a choice about creating multiple top-level posts? (Yes, that is a serious question) Once a post is below threshold . . . .
I'm perfectly willing to accept that I'm not expressing myself in a fashion that this community is happy with (the definition of "clear" is up for grabs ;-)
I'm not willing to accept that my different posts are not clearly thought through (and the short time between them is an artifact of my having written a lot of posts during the several months when I wasn't updating my blog)
Indignant is your interpretation. I haven't felt that emotion after the first few days. ;-)
My explanation was clearly either poorly communicated or flawed. I disagree that it was necessarily flawed.
I will argue that pointing out that LessWrong denizens are very quick to say "This is too confused" when they should be saying "I don't understand and don't care to take the time to try to understand" is much more in the line of constructive criticism than insult.
Yes, as a newbie and a boat-rocker, I will not get the benefit of the doubt regardless of what I do.
My positions are pretty clear and have been vetted by a decent number of other people. My admittedly biased view is that I am not taking an adversarial role (except for some idiot slips), and that most of my statements about bad argumentation practices, while necessarily biased, are meant to be constructive, not insulting. I also believe that the way LessWrong treats all newcomers is unnecessarily harsh, to the extent that you all have an established reputation for having built an "echo chamber", but that this can be changed.
Um. Do I have a choice about creating multiple top-level posts? (Yes, that is a serious question) Once a post is below threshold . . . .
You can let the subject lie, and carry on commenting on other people's posts.
I'm perfectly willing to accept that I'm not expressing myself in a fashion that this community is happy with (the definition of "clear" is up for grabs ;-)
No, the definition of "clear" is not up for grabs. Something is clear if it is easily understood by those with the necessary baseline knowledge. Your posts are not.
You are acting indignant, whet...
In the spirit of Asimov’s 3 Laws of Robotics
It is my contention that Yudkowsky’s CEV converges to the following 3 points:
I further contend that, if this CEV is translated into the 3 Goals above and implemented in a Yudkowskian Benevolent Goal Architecture (BGA), the result would be a Friendly AI.
It should be noted that evolution and history suggest that cooperation and ethics are stable attractors, while submitting to slavery (when you don't have to) is not. This formulation expands Singer's Circles of Morality as far as they will go and tries to eliminate irrational Us-Them distinctions based on anything other than optimizing goals for everyone: the same direction that humanity seems headed in, and exactly where current SIAI proposals come up short.
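To make the "stable attractor" claim concrete, here is a minimal sketch of the standard game-theoretic cash-out: an Axelrod-style iterated prisoner's dilemma in which a reciprocating strategy (tit-for-tat) resists invasion by unconditional defectors. The payoff values, round count, and strategies below are illustrative assumptions of mine, not anything taken from the CEV or BGA proposals themselves.

```python
# Illustrative-only: an Axelrod-style iterated prisoner's dilemma, checking
# whether reciprocal cooperation ("tit-for-tat") resists invasion by
# unconditional defection. Payoffs and round count are arbitrary choices
# satisfying the standard PD conditions T > R > P > S and 2R > T + S.
T, R, P, S = 5, 3, 1, 0
ROUNDS = 200  # long horizons are what favor reciprocity

def tit_for_tat(my_history, their_history):
    # Cooperate first, then mirror the opponent's previous move.
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    return "D"

PAYOFF = {("C", "C"): (R, R), ("C", "D"): (S, T),
          ("D", "C"): (T, S), ("D", "D"): (P, P)}

def play(strat1, strat2, rounds=ROUNDS):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1 = strat1(h1, h2)
        m2 = strat2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

tft_vs_tft, _ = play(tit_for_tat, tit_for_tat)          # 600 each
_, defector_vs_tft = play(tit_for_tat, always_defect)   # defector gets 204

# A rare defector can invade a tit-for-tat population only if it out-scores
# the residents; over long games it does not.
print("TFT vs TFT:       ", tft_vs_tft)
print("Defector vs TFT:  ", defector_vs_tft)
print("Defection invades:", defector_vs_tft > tft_vs_tft)
```

With a long enough horizon, the lone defector scores worse against the reciprocating population (204) than the reciprocators score against each other (600), which is the usual formal sense in which cooperation is "stable".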
Once again, cross-posted here on my blog (unlike with my last article, I have no idea whether this one will be karma'd out of existence ;-)