WilliamKiely comments on Nick Bostrom's TED talk on Superintelligence is now online - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I'm not sure your argument proves your claim. I think what you've shown is that there exist reasons other than the inability to create perfect boxes to care about the value alignment problem.
We can flip your argument around and apply it to your claim: imagine a world where there were only one team with the ability to build superintelligent AI. I would argue that it would still be extremely unsafe to build an AI and try to box it. I don't think that lets me conclude that a lack of boxing ability is the true reason the value alignment problem is so important.
I agree that there are several reasons why solving the value alignment problem is important.
Note that when I said Bostrom should "modify" his reply, I didn't mean that he should make a different point instead of the one he made, but rather that he should make another point in addition to the one he already made. As I said:
Ah, I see. Fair enough!