
Evan_Gaensbauer comments on Room For More Funding In AI Safety Is Highly Uncertain - Less Wrong Discussion

12 Post author: Evan_Gaensbauer 12 May 2016 01:57PM




Comment author: AlexMennen 14 May 2016 12:09:13AM 1 point [-]

But the most dangerous thing is the creation of many incomparable theories of friendliness, and even AIs based on them, which would result in AI wars and extinction.

I strongly disagree.

First, there are multiple reasons why the creation of many distinct theories of friendliness would not be dangerous. The first AI to reach superintelligence should be able to establish a monopoly on power, and then we wouldn't have to worry about the others. Even if that didn't happen, a reasonable decision theory should be able to cooperate with other agents running different reasonable decision theories when it is in both of their interests to do so. And even if we end up with multiple friendly AIs that are not great at cooperation, cooperating with agents that have similar goals (as is implied by all of them being friendly) is a particularly easy problem. Finally, even if we end up with a "friendly AI" that is incapable of establishing a monopoly on power but will cause a great deal of destruction when another similarly capable but differently designed agent comes into existence, even one with broadly similar goals (I would not call this a successful friendly AI), convincing people not to create such AIs does not get much easier if the people planning to create the AI have not been thinking about how to make it friendly. So preventing people from developing different theories of friendliness still doesn't help.

But beyond all that, I would also say that not creating many incomparable theories of friendliness is itself dangerous. If there is only one theory that anyone is working on, it will likely be misguided, and by the time anyone notices, enough time may have been wasted that friendliness research will have lost too much ground in the race against general AI.

Comment author: Evan_Gaensbauer 14 May 2016 06:54:19AM 0 points [-]

Just pointing out that I upvoted Turchin's comment above, but I agree with your clarification here on the last part of his comment. Nothing I've read thus far raises concern about warring superintelligences.