Well, the first bold section is a true, general and relevant statement.
That doesn't mean the ink isn't green. In this particular case, he is persistently claiming that his remarks are being attacked because of various biases on the part of those reading them, and he is doing so:
That's green ink.
Edited for pronouns.
Edited for pronouns again, properly this time. Curse you, Picornaviridae Rhinovirus!
I think http://en.wikipedia.org/wiki/Green_ink makes it pretty clear that green ink is barely-coherent rambling coming from nutcases.
Someone disagreeing with other people and explaining why he thinks they are wrong is not "green ink" - unless that person is behaving in a crazy fashion.
I don't think anyone has any evidence that my behaviour is anything other than rational and sane in this case. At any rate, so far no such evidence has been presented AFAICS. So: I think "green ink" is a fairly clear mis-characterisation.
A friend of mine is about to launch himself heavily into the realm of AI programming. The details of his approach aren't important; probabilities dictate that he is unlikely to score a major success. He's asked me for advice, however, on how to design a safe(r) AI. I've been pointing him in the right directions and sending him links to useful posts on this blog and the SIAI.
Do people here have any recommendations they'd like me to pass on? Hopefully, these may form the basis of a condensed 'warning pack' for other AI makers.
Addendum: Advice along the lines of "don't do it" is vital and good, but unlikely to be followed. The coding will almost certainly happen; is there any way of making it less genocidally risky?