MattG comments on Open thread, Oct. 12 - Oct. 18, 2015 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
At least two major classes of existential risk, AI and physics experiments, are areas where a lot of math can come into play. In the case of AI, this means understanding whether hard take-offs are possible or likely and whether an AI can be provably Friendly. In the case of physics experiments, the issue is the analysis establishing that the experiments are safe.
In both these cases, little attention is paid to the precise axiomatic system in which the results are proven. Should this be concerning? If, for example, some result about Friendliness is proven rigorously, but the proof lives in ZFC set theory, then there's the risk that ZFC may turn out to be inconsistent. Similar remarks apply to analyses concluding that various physics experiments are unlikely to cause serious problems like a false vacuum collapse.
In this context, should more resources be spent on making sure that proofs are carried out in the weakest axiomatic systems that suffice, such as conservative (or near-conservative) extensions of Peano Arithmetic?
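As an aside, modern proof assistants make this kind of audit mechanical. A minimal sketch in Lean 4 (assuming a standard Lean installation; the theorem name `add_comm'` is just an illustrative label): the `#print axioms` command reports exactly which axioms a given proof depends on, so one can check whether a result needs strong assumptions like choice or sits in a weaker constructive fragment.

```lean
-- Commutativity of addition on ℕ, proved by appeal to the
-- library lemma Nat.add_comm (itself proved by induction).
theorem add_comm' (a b : Nat) : a + b = b + a := Nat.add_comm a b

-- Report the axioms this proof depends on; for a purely inductive
-- proof like this one, Lean should report no axiom dependencies.
#print axioms add_comm'
```

By contrast, a proof that invokes classical reasoning would show `Classical.choice` (and related axioms) in the output, flagging exactly where the proof exceeds the constructive core.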
One of the open problems MIRI is working on for FAI is exactly this type of logical uncertainty. An FAI should be able to modify itself if it finds out that the logic underlying its basic programming is incorrect.