This is the only such essay/post I've found and I really enjoyed reading it. I might go away and think a lot more about it, so thank you for writing it. If I come up with a rejoinder, I'll let you know.
There are a few aspects in which natural language and mathematics are presented as dichotomous.
there is an important difference between mathematical reasoning and "informal" reasoning: the latter virtually always involves a component that is not conscious and that cannot be easily verbalized. So, although thinking always involves models, a lot of the time these models are fully or partially hidden, encoded somewhere in the neural networks of the brain. This hidden, unconscious part is often called "intuition".
It seems to be suggested that 'mathematical reasoning' often doesn't involve intuition (since the claim is that the difference is that "informal" reasoning virtually always does). OK. But understood in any normal, colloquial way, surely mathematical reasoning often does involve intuition? I might even say that it almost always involves intuition. I guess we might be able to get round this by saying that when one "does mathematics" (whatever that means), any part of the reasoning that is not attributable to either unconscious intuition or conscious, informal, verbal reasoning is called "conscious mathematical reasoning", and that's what we're really talking about; i.e. we define it so as to exclude any purely intuitive thoughts.
Regarding 'precision' as a difference between mathematics and natural language, the author says:
mathematical statements have unambiguous meaning within the ontology of mathematics....
natural language statements have meaning that often depends on context and background assumptions.
I don't really understand what the phrase "the ontology of mathematics" is doing in this sentence, but if I skip over it, I find it hard to agree with the point being made (perhaps there is more going on with how 'ontology' is used here than I can tell). I think most people who have done research in pure mathematics would agree that mathematical statements often (or even always) have meaning that depends on context and background assumptions.
Regarding 'objectivity' as a difference between mathematics and natural language, the author says:
Natural language is heavily biased towards a human-centric view of the world, and to some extent towards the conditions in which human existed historically. Natural language evolved in a process which was largely unconscious and not even quite human (in the same sense that biological evolution of humans is not in itself human).
I find it very hard to disagree with the parallel claim "Mathematics is biased towards a human-centric view of the world". In some sense, how could it not be so? Mathematics is an area of human thought and inquiry, and so is prima facie biased towards our condition; it is also clearly rooted in the conditions in which we existed historically. I think the onus is again on the author to say why this would not be the case. Here, I find it hard to work out whether this is a point I could ever believe (personally). Is there something 'more objective' about '1 + 2 = 3' compared with 'Seattle is in Washington'? I guess a fair number of people would say 'yes, the first statement is more objective'. But why? I don't really know.
It's interesting that, say, collaborators in mathematics are able to have long conversations in which they work out complex mathematical arguments without writing anything down in mathematical notation. Sure, they would use specialised terminology, but things like this really blur the distinction: it is reasoning in natural language, conducted verbally, yet it is clearly conscious and mathematical. I'm not sure there is a good distinction here.
At first reading, one source of confusion I can imagine is conflation of 'mathematics' (whatever that may be) with something closer to 'the symbolic manipulation of the notation of basic, well-understood, applicable mathematics'. There is some kind of conscious process, which feels extremely 'objective' and unambiguous, when you go from '2x = 4' to 'x = 2', or something like that (it doesn't have to be this basic; computing a partial derivative of an explicit function feels like this, as does solving a linear system; see the short sketch a little further down). But in the big scheme of things, this is very basic mathematics, for which the notation is very refined and which humans understand extremely well. I think it is a mistake to think that 'mathematical reasoning' more generally feels like this.

I keep coming back to this, but anyone who has done research in pure mathematics will understand how fuzzy and intuitive it can feel. I've been a mathematician for many years, and something that is hard to convey to students is how you have to reason when you don't know the answer, or even whether there is an answer findable by you (or by humans at all). Most people who study mathematics will never have to deal with this level of uncertainty in their mathematical reasoning, but as a researcher it is what you spend most of your energy on, and the way you think and communicate develops to match it. For example:
Other people can evaluate your math and build on it, without any risk that they misunderstand your definitions or having to deal with difficult to convey intuitions
I cannot help but say that this is very unlikely to be the viewpoint of someone who has spent time doing research in pure mathematics.
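To make concrete the kind of mechanical, well-understood manipulation I mentioned above (and which I am contrasting with research-level reasoning), here is a minimal sketch; using Python's sympy is just my choice for the illustration, not anything taken from the essay:

```python
# A minimal sketch of the kind of mechanical symbolic manipulation meant above:
# each step feels unambiguous because the notation and rules are so refined.
import sympy as sp

x, y = sp.symbols('x y')

# Going from 2x = 4 to x = 2.
root = sp.solve(sp.Eq(2 * x, 4), x)                          # [2]

# A partial derivative of an explicit function, f(x, y) = x**2 * y + sin(y).
f = x**2 * y + sp.sin(y)
df_dy = sp.diff(f, y)                                        # x**2 + cos(y)

# A small linear system: x + y = 3, x - y = 1.
lin = sp.solve([sp.Eq(x + y, 3), sp.Eq(x - y, 1)], [x, y])   # {x: 2, y: 1}

print(root, df_dy, lin)
```

Every line here is the 'refined notation, well-understood rules' end of mathematics; my point is that research-level reasoning rarely feels like this.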
Quantitative answers - and using computers to get those quantitative answers from data and big calculations - are among the undeniable advantages of mathematics, I agree. But as you hint, these alone don't show that mathematics is at all necessary for AI alignment/safety problems. As I said, I'm a mathematician, so I would actually like it if it were somehow necessary to develop deeper/more advanced mathematical theory in order to help with the problem, but at the moment that seems uncertain to my mind.