I've never seen anyone address why that is not the case.
It's solving a different problem.
Problem One: You know exactly what you want your software to do, at a level of detail sufficient to write the software, but you are concerned that you may introduce bugs in the implementation or that it may be fed bad data by a malicious third party, and that in either case terrible consequences will ensue.
Problem Two: You know in a vague, handwavy way what you want your software to do, but you don't yet know with enough precision to write the software. You are concerned that if you get this wrong, the software will do something subtly different from what you really wanted, and terrible consequences will ensue.
Software verification and crypto address Problem One. AI safety is an instance of Problem Two, and potentially an exceptionally difficult one.
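To make the distinction concrete, here's a minimal sketch in Python (an illustrative example of my own, with hypothetical names like `satisfies_sort_spec`). For Problem One, the specification can be written down as a checkable predicate even if the implementation is buggy; for Problem Two, writing down the predicate at all is the hard part.

```python
def my_sort(xs: list[int]) -> list[int]:
    """Implementation may be buggy -- but the spec below is exact."""
    return sorted(xs)  # stand-in implementation

def satisfies_sort_spec(inp: list[int], out: list[int]) -> bool:
    """Problem One: the spec is a precise, checkable predicate.
    Output must be ordered and a permutation of the input."""
    ordered = all(a <= b for a, b in zip(out, out[1:]))
    permutation = sorted(inp) == sorted(out)
    return ordered and permutation

# Verification tooling has traction here because the predicate exists.
assert satisfies_sort_spec([3, 1, 2], my_sort([3, 1, 2]))

# Problem Two has no analogous predicate: for a goal like "recommend
# content the user would endorse on reflection," we cannot yet write
# the satisfies_spec function at all. Stating it precisely *is* the
# open problem, and no amount of verification helps until it's stated.
```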
Yes, verification is a strictly simpler problem, and one that's fairly thoroughly addressed by existing research -- which is why people working specifically on AI safety are paying attention to other things.
(Maybe they should actually be working on doing verification better first, but that doesn't seem obviously a superior strategy.)
Some AI takeover scenarios involve hacking (by the AI, of other systems). We might hope to make AI safer by making that harder, but that would require securing all the other important computer systems in the world. Even though making an AI safe is really hard, it may well be easier than that.
I would be somewhat more convinced that MIRI was up to its mission if it could contribute to much simpler problems in prerequisite fields.