If the people involved are good naturalists, they will agree that both the symbolic and the connectionist approaches are making claims about high-level descriptions that can apply to things made of atoms. Jerry Fodor, famously a proponent of the view that brains have a "language of thought," would still say that the language of thought is a high-level description of collections of low-level things like atoms bumping into other atoms.
My point is that arguments about what high-level descriptions are useful are also arguments about what things "are." When a way of thinking about the world is powerful enough, we call its building blocks real.
I would still distinguish, here, between describing human minds and trying to build artificial ones. You might have different opinions about how useful different ideas are for each task. Someone will at some point say "We didn't build airplanes that flap their wings." I think a lot of the "old guard" of AI researchers picked sides in this battle over the years, and the heavily symbolicist side is in disrepute, but a pretty wide spectrum of views is represented, from "mostly symbolic reasoning with some learned components" to "all learned."
I think there's plenty of machine learning that doesn't look like connectionism. SVMs were successful for a long time, and they're not very neuromorphic (see the sketch below). I would expect ML that extracts the maximum value from TPUs to be denser and more nonlocal than actual brains, and probably to violate the analogy to brains in some other ways too.
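To make the SVM point concrete, here's a minimal sketch, assuming scikit-learn is available; the dataset and hyperparameters are illustrative choices, not anything from the discussion above. A kernel SVM learns a nonlinear classifier by solving a convex margin-maximization problem; there are no layers, neurons, or local connectivity anywhere in the model.

```python
# Minimal sketch: effective ML with no neural analogy.
# Assumes scikit-learn is installed; data and settings are illustrative.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An RBF-kernel SVM fits a maximum-margin boundary in an implicit
# feature space; nothing in it is "neuron-like" or brain-inspired.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

The model is defined entirely by a subset of training points (the support vectors) and a kernel function, which is part of why the brain analogy doesn't carry over.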
I have found that comprehensive overviews of artificial intelligence (Wikipedia, the SEP article, Russell and Norvig's AI: A Modern Approach) discuss symbolic AI and statistical AI in their historical context: the former preceding the latter, their respective limitations, etc. But I have found it really difficult to disentangle this from the question of whether the divide / cooperation between these paradigms is about the implementation and engineering of intelligent agents, or whether it gets at something more fundamental about the space of possible minds (I use this term to be as broad as possible, covering anything we would label as a mind, regardless of ontogeny, architecture, physical components, etc.).
I have given a list of questions below, but some of them are mutually exclusive, i.e. some answers to one question make other questions irrelevant. That I even have a list of questions shows how hard I find it to tell where the boundaries of the discussion are supposed to be. Basically, I haven't been able to find anything that begins to answer the title question. So I wouldn't expect any comment to answer each of my subquestions one by one, but rather to treat them as an expression of my confusion and maybe point me in some good directions. Immense thanks in advance; this has been one of those questions strangling me for a while now.