Hugo de Garis is around two orders of magnitude less dangerous than Ben.
What about all the other people Ben might help obtain funding for, partly due to his position at SIAI?
And what about the public relations/education aspect? Is it harmless that SIAI appears not to consider AI a serious existential risk?
This part was not answered. It may be a question to ask someone other than Eliezer. Or just ask really loudly. That sometimes works too.
A friend of mine is about to launch himself heavily into the realm of AI programming. The details of his approach aren't important; probabilities dictate that he is unlikely to score a major success. He has asked me for advice, however, on how to design a safe(r) AI. I've been pointing him in the right directions and sending him links to useful posts on this blog and on the SIAI site.
Do people here have any recommendations they'd like me to pass on? Hopefully, these may form the basis of a condensed 'warning pack' for other AI makers.
Addendum: Advice along the lines of "don't do it" is vital and good, but unlikely to be followed. Coding will nearly certainly happen; is there any way of making it less genocidally risky?