I think it might be helpful to have a variant of 3a that likewise says the orthogonality thesis is false, but is not quite so optimistic as to say the alternative is that AI will be "benevolent by default". One way the orthogonality thesis could be false is if an AI capable of human-like behavior (and buildable with near-future computing power, say no more than the computing power needed for mind uploading) would have to be significantly more similar to biological brains than current AI approaches are. In particular, it might have to go through an extended period of embodied social learning similar to a child's, with this learning process depending on certain kinds of sociable drives along with related features like curiosity, playfulness, and a bias towards sensory data a human might consider "complex" and "interesting". This degree of convergence with biological structure and drives might make it unlikely the AI would end up optimizing for arbitrary goals we would see as boring and monomaniacal, like paperclip-maximizing, but it wouldn't necessarily guarantee friendliness towards humans either. It'd be more akin to reaching into a parallel universe where a language-using intelligent biological species had evolved from different ancestors, grabbing a bunch of their babies, and raising them in human society--they might be similar enough to learn language and engage in the same kind of complex problem-solving as humans, but even if they didn't pursue what we would see as boring/monomaniacal goals, their drives and values might be different enough to cause conflict.
Eliezer Yudkowsky's 2013 post at https://www.facebook.com/yudkowsky/posts/10152068084299228 imagined a "cosmopolitan cosmist transhumanist" who would be OK with a future dominated by beings significantly different from us, but who still wants future minds to "fall somewhere within a large space of possibilities that requires detailed causal inheritance from modern humans", as opposed to minds completely outside this space like paperclip maximizers (he made a similar point in his tweet this May at https://twitter.com/ESYudkowsky/status/1662113079394484226 ). So one could have a scenario where orthogonality is false in the sense that paperclip-maximizer-type AIs aren't overwhelmingly likely even if we fail to develop good alignment techniques, and where the degree of convergence with biological brains makes it likely we'd get a mind a cosmopolitan cosmist transhumanist would be OK with (one that would still pursue science, art, etc.), but where we still can't be confident we'll get something completely benevolent by default towards human beings. I'm a sort of Star Trek-style optimist about different intelligent beings with broadly similar goals being able to live in harmony, especially in some kind of post-scarcity future of widespread abundance, but it's just a hunch--even if orthogonality is false in the way I suggested, I don't think there's any knock-down argument that creating a new form of intelligence would be free of risk to humanity.
Perhaps one can think of a sort of continuum: on one end there's a full understanding that it's a characteristic of language that "everything has a name", as in the Anne Sullivan quote; on the other end, an individual knows certain gestures are associated with getting another person to exhibit certain behaviors, like bringing desired objects, but has no intuition that there's a whole system of gestures they mostly haven't learned yet (as an example, a cat might know that rattling its food bowl will cause its owner to come over and refill it). Even if Helen Keller was not all the way at the latter end of the continuum at the beginning of the story--she could already request new gestures for things she regularly wanted Anne Sullivan to bring to her or take her to--in the course of the story she might have made some significant leap towards the former end. In particular, she might have realized that she could ask for the names of all sorts of things even when there was no regular instrumental purpose to the request, like being thirsty and wanting Sullivan to bring water.
On the general topic of what the Helen Keller story can tell us about AI, and whether complex sensory input is needed for humanlike understanding of words, a while ago I read an article at https://web.archive.org/web/20161010021853/http://www.dichotomistic.com/mind_readings_helen%20keller.html that suggests some reasons for caution. It notes that she was not born blind and deaf, but "lost her sight and hearing after an illness at the age of two", so even if she had no conscious memory of what vision and hearing were like, they would have figured into her brain development up until that point, as would her exposure to language up to that age. The end of the article discusses the techniques developed in Soviet institutions to help people who actually were born blind and deaf, like developing their sense of space by "gradually making the deaf/blind child reach further and further for a spoon of food." It says that such children can eventually learn simple fingerspelt commands and basic bodily tasks like getting dressed, but that only those children who lost their sight and hearing a few years after birth ever develop complex language abilities.
I don't think it's quite right to say the idea of the universe being in some sense mathematical is purely a carry-over of Judeo-Christian heritage--what about Greek atomists like Leucippus and Democritus? Most of their writings have been lost, but we do know that Democritus made a distinction similar to the later notion of primary (quantitative) vs. secondary (qualitative) properties discussed at https://plato.stanford.edu/entries/qualities-prim-sec/ , with his comment that qualitative sensations are matters of human convention: "By convention sweet and by convention bitter, by convention hot, by convention cold, by convention colours; but in reality atoms and void." C.C.W. Taylor's book "The Atomists: Leucippus and Democritus" gathers together all the known fragments from the first two major atomists, as well as commentary by other ancient Greek philosophers; it says that various other philosophers attributed to them the position that the only properties of atoms were geometric ones like size, shape, and relative position. For example, Aristotle's "Metaphysics" says at http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.01.0052%3Abook%3D1%3Asection%3D985b that for the atomists the "differences" between atoms and groups of atoms were the explanation for all physical reality, and that "These differences, they say, are three: shape, arrangement, and position". Aristotle's "On the Heavens" at http://classics.mit.edu/Aristotle/heavens.3.iii.html also says of the atomists: "Now this view in a sense makes things out to be numbers or composed of numbers. The exposition is not clear, but this is its real meaning."
Personally I'm sympathetic to certain forms of panpsychism, but I don't think it's inconsistent with a mathematical view of nature. Ever since I read Roger Penrose's book "Shadows of the Mind" as a teenager I've been interested in the notion of the three interconnected "worlds" we have to deal with in any broad philosophical account of reality: the physical world, the world of subjective experience, and the world of mathematical truth (you can see Penrose's memorable diagram of the three worlds and their connections at https://astudentforever.wordpress.com/2015/03/13/roger-penroses-three-worlds-and-three-deep-mysteries-theory/ ). I suppose I have an instinctive monist streak, because it always seemed to me philosophers should try to unify these three worlds the way physicists seek to unify the forces of nature. The notion of "structure" might be a good starting point, since there are good cases for the structuralist perspective (where each part is defined wholly by its relations to other parts, with no purely intrinsic properties) in all three: see mathematical structuralism at https://plato.stanford.edu/entries/structuralism-mathematics/ , structural realism in physics at https://plato.stanford.edu/entries/structural-realism/ (Ladyman and Ross's book "Every Thing Must Go" makes a good extended case for this), and a "structuralist" view of qualia at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3957492/ . And as a further point against the idea that this is just Judeo-Christian, the structuralist view of the mind also has some parallels with branches of Mahayana Buddhism which hold that all parts of experience and reality exist only in an interdependent way, using the metaphor of "Indra's Net"--see http://dharma-rain.org/wp-content/uploads/2016/02/Hua_Yen_Buddhism_Emptiness_Identity_Inte.pdf , and note that p. 66 even cites a Buddhist text that can be interpreted as applying this view to numbers as well.
Finally, I'd say that the notion of "reductionism" at the level of predicting physical behavior (the idea that all behavior of more complex systems is in principle derivable from fundamental physical laws acting on basic physical states, whatever those turn out to be exactly) is not primarily a matter of philosophical preconceptions, but more a matter of how successful this paradigm has been in science, continually expanding the range of phenomena that can be explained, even if we are far from being able to predict everything in a reductionist way in practice. For example, the range of molecular/chemical behaviors that can be explained in an "ab initio" way from quantum laws has continually expanded over time; likewise the range of cell behaviors that can be explained in terms of biochemical interactions and physical forces, the range of simple brain behaviors or aspects of early embryological development that can be explained in terms of local interactions of cells with one another and with their chemical environment, and so on.
I'd make a comparison here to the idea that all adaptive structures in the bodies of living organisms have developed through a process of natural selection acting on mutations that are random with respect to fitness (allowing for the possibility that some adaptive features might be side effects of others ('spandrels'), like the brain's pattern-seeking abilities being applied in new scenarios that were not part of an organism's evolutionary history). We can't hope in practice to have strong evidence this is true for every adaptive structure in every organism, but evolutionary biologists continually expand the evidence that it holds in all sorts of specific cases, which makes for a good Occam's-razor-style case that it's true for all of them. I think the same can be said about the reductionist view that all physical behavior is in principle reducible to physics.