SaidAchmiz comments on No Universally Compelling Arguments in Math or Science - Less Wrong
That was well expressed, in a way, but it seems to me to miss the central point. People who think there are universally compelling arguments in science or maths don't mean the same thing by "universal". They don't think their universally compelling arguments would work on crazy people, and don't need to be told they wouldn't work on crazy AIs or pocket calculators either; they are just not including those in the set "universal".
ADDED:
It has been mooted that NUCA is intended as a counterblast to "Why Can't an AGI Work Out Its Own Morality?". It does work against a strong version of that argument: one that says any mind randomly selected from mindspace will be persuadable into morality, or be able to figure it out. Of course, the proponents of WCAGIWOM (e.g. Wei Dai, Richard Loosemore) aren't asserting that. They are assuming that the AGIs in question will come out of a realistic research project, not a random dip into mindspace. They are assuming that the researchers aren't malicious, and that the project is reasonably successful. Those constraints affect the argument: a successful AGI would be an intelligent AGI, which would be a rational AGI, which would be a persuadable AGI.
So... what if you try to build a rational/persuadable AGI, but fail, because building an AGI is hard and complicated?
This idea that because AI researchers are aiming for the rational/persuadable chunk of mindspace, they will therefore of course hit their target, seems to me absurd on its face. The entire point is that we don't know exactly how to build an AGI with the precise properties we want it to have, and AGIs with properties different from the ones we want them to have may well kill us.
What if you instead try to hardwire in friendliness and fail? Of the two, the latter seems more brittle to me: if it fails, it will fail hard. A merely irrational AI would be about as dangerous as David Icke.
If you phrase it, as I didn't, in terms of necessity, yes. The actual point was that our probability of hitting a given point in mindspace will be heavily weighted by what we are trying to do, and how we are doing it. An unweighted mindspace may be populated with many Lovecraftian horrors, but that theoretical possibility is no more significant than p-zombies.
"Possibly, but with low probability" is a Pascal's Mugging. MIRI needs significant probability.
I see. Well, that reduces to the earlier argument, and I refer you to the mounds of stuff that Eliezer et al have written on this topic. (If you've read it and are unsatisfied, well, that is in any case a different topic.)
I refer you to the many unanswered objections.