A friend of mine is about to launch himself heavily into the realm of AI programming. The details of his approach aren't important; probabilities dictate that he is unlikely to score a major success. He's asked me for advice, however, on how to design a safe(r) AI. I've been pointing him in the right directions and sending him links to useful posts on this blog and the SIAI.
Do people here have any recommendations they'd like me to pass on? Hopefully, these may form the basis of a condensed 'warning pack' for other AI makers.
Addendum: Advice along the lines of "don't do it" is vital and good, but unlikely to be followed. Coding will almost certainly happen; is there any way of making it less genocidally risky?
Yay, that really helped!
Roko and I don't see eye to eye on this issue. From my POV, we have had 50 years of unsuccessful attempts. That is not exactly "getting it right the first time".
Google was not the first search engine, Microsoft was not the first OS maker - and Diffie–Hellman didn't invent public key crypto.
Being first does not necessarily make players uncatchable - and there's a selection process at work in the meantime that weeds out certain classes of failures.
From my perspective, this is mainly an SIAI confusion. Because their funding is all oriented around the prospect of them saving the world from imminent danger, the execution of their mission apparently involves exaggerating the associated risks - which has the effect of stimulating funding from those whom they convince that DOOM is imminent - and that the SIAI can help with averting it.
Humans will most likely get the machines they want - because people will build them to sell them - and because people won't buy bad machines.
Tim, I think that what worries me is the "detailed reliable inheritance from human morals and meta-morals" bit. The worry that there will not be "detailed reliable inheritance from human morals and meta-morals" holds regardless of which specific way you think the future will go. Ems can break the inheritance. The first, second or fifteenth AGI system can break it. Intelligence enhancement gone wrong can break it. Any super-human "power" that doesn't explicitly preserve it will break it.
All the examples you cite differ in the substant...