I study Logic and have worked on Machine Learning for a bit. Interested in transitioning to AI Safety. Reach out to me about Formal Verification of AI.
We simply require them to generate formal proofs for any designs or software they create. These proofs can then be checked easily and rigorously, and the artifacts can be used without concern for the properties of the AI that generated them.
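To make the idea concrete, here is a toy sketch of what "ship the artifact together with a machine-checkable proof" could look like. The function and the property are invented for illustration, not taken from any actual AI-generated design:

```lean
-- Toy example: the "artifact" is a saturating-addition function, and the
-- "specification" is that its output never exceeds the given bound.
def satAdd (bound a b : Nat) : Nat :=
  if a + b ≤ bound then a + b else bound

-- The proof object that accompanies the artifact. A proof checker (Lean's
-- kernel here) verifies it mechanically, with no trust placed in whoever
-- or whatever wrote it.
theorem satAdd_le_bound (bound a b : Nat) : satAdd bound a b ≤ bound := by
  unfold satAdd
  split
  · assumption               -- case a + b ≤ bound: the result is a + b
  · exact Nat.le_refl bound  -- case a + b > bound: the result is bound itself
```

The point is that checking the proof is cheap and mechanical even if producing it was hard, so the checker, not the producer, is the trusted component.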
That would be neat! I'm overall quite excited about this approach, even though there are quite a few details to iron out. My main skepticism (as harfe pointed out before) is indeed how to specify the things we care about in a formal format which can then be formally verified. Do you know of any ongoing (or past) efforts which try to convert natural language specifications into formal ones?
I've heard of formal verification efforts at NASA where they gather a bunch of domain experts who, using a standardized template, write down the safety specifications of a spacecraft. Then formal methods researchers devised a logic expressive enough to encode these specifications and formally verified them.
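I don't know the details of that NASA work, but to illustrate the kind of translation involved, here is a hypothetical requirement rendered as a formal property. The state fields and names are made up for the example, and it's a deliberately simplified "always" (globally) property in the temporal-logic style:

```lean
-- Hypothetical spacecraft state; the fields are invented for illustration.
structure State where
  hatchOpen      : Bool
  thrusterFiring : Bool

-- A system behaviour modelled as an infinite, time-indexed trace of states.
abbrev Trace := Nat → State

-- Natural-language requirement: "The thruster shall never fire while the
-- hatch is open." Formalised as an 'always' property over traces:
def thrusterSafe (tr : Trace) : Prop :=
  ∀ t : Nat, ¬ ((tr t).hatchOpen = true ∧ (tr t).thrusterFiring = true)
```

Once requirements are in that shape, the verification step amounts to proving that every trace the system can actually produce satisfies each such property.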