gjm comments on To contribute to AI safety, consider doing AI research - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
The implication I'm reading in TRIZ-Ingenieur's words is that humans are weak, fallible, corruptible -- but a regulatory body is not. To quote him,
This is a common fallacy where some body (organization, committee, council, etc.) is considered to be immune to human weaknesses as if it were composed of selfless enlightened philosopher-kings.
Essentially, the argument here is that mere humans can't be trusted with AI development. Without opining on the truth of the subject claim, my point is that if they can't, having a regulatory body won't help.
I agree that if TRIZ-Ingenieur thinks regulatory bodies are strong, infallible, and incorruptible, then he is wrong. I don't see any particular reason to think he thinks that, though. It may in fact suffice for regulatory bodies' weaknesses, errors, and corruptions to be different from those of the individual humans being regulated, which they often are.
(I do not get the impression that T-I thinks "mere humans can't be trusted with AI development" in any useful sense[1].)
[1] Example of a not-so-useful sense: it is probably true that mere humans can't be trusted with AI development with 100% confidence of safety, or with anything else, and indeed the same will be true of regulatory bodies. But this doesn't yield a useful argument against AI development for anyone who cares about averages and probabilities rather than only about the very worst case.