A friend of mine is about to throw himself headlong into AI programming. The details of his approach aren't important; the odds are that he is unlikely to score a major success. He has asked me for advice, however, on how to design a safe(r) AI. I've been pointing him in the right directions and sending him links to useful posts on this blog and from the SIAI.
Do people here have any recommendations they'd like me to pass on? Hopefully, these may form the basis of a condensed 'warning pack' for other AI makers.
Addendum: Advice along the lines of "don't do it" is vital and good, but unlikely to be followed. The coding will almost certainly happen; is there any way of making it less genocidally risky?
Really, never write comments proffering self-aggrandising explanations of why your comments are being badly received. You are way too smart and thoughtful to go all green ink on us like this.
Incidentally, I hope you don't mean the "self-aggrandising" / "green ink" comments literally!
Disagreeing with majorities is often a bad sign. Delusional individuals may produce "green ink" explanations of why others are foolish enough to disagree with them. However, critics may also find themselves disagreeing with majorities - for example, when in the company of the associates of those being criticised. That is fairly often my role here: I am someone not in thrall to the prevailing reality distortion field. Under those circumstances, disagreement does not carry the same significance.