In the context of cognitive science, a 'domain' is a subject matter, or class of problems, that is being reasoned about by some algorithm or agent. A robotic car is operating in the domain of "legally and safely driving in a city". The subpart of the robotic car that plots a route might be operating in the domain "finding low-cost traversals between two nodes in a graph with cost-labeled edges".
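To make the route-plotting sub-domain concrete: it is an ordinary shortest-path search over cost-labeled edges. The sketch below is purely illustrative (Python, with an invented toy street graph), using Dijkstra's algorithm as one standard way of solving that sub-domain:

```python
import heapq

def shortest_path(graph, start, goal):
    """Find a lowest-cost traversal between two nodes in a graph with
    cost-labeled edges, via Dijkstra's algorithm.

    `graph` maps each node to a dict of {neighbor: edge_cost}.
    Returns (total_cost, path), or (float('inf'), []) if unreachable.
    """
    frontier = [(0, start, [start])]   # (cost so far, node, path taken)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + edge_cost, neighbor, path + [neighbor]))
    return float('inf'), []

# Toy city map: nodes are intersections, edge costs are travel times.
city = {
    "A": {"B": 4, "C": 2},
    "B": {"D": 5},
    "C": {"B": 1, "D": 8},
    "D": {},
}
print(shortest_path(city, "A", "D"))   # (8, ['A', 'C', 'B', 'D'])
```

Note that nothing in this routine knows anything about traffic law or pedestrians; that is exactly the sense in which the route-plotter's domain is narrower than the car's.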
In the context of AI alignment, we may care a lot about the degree to which competence in two different domains is separable, or alternatively highly tangled, relative to the class of algorithms reasoning about them.
For example: If the domains X and Y are 'blue cars' and 'red cars', then it seems unlikely that X and Y would be well-separated domains, because an agent that knows how to reason well about blue cars is almost surely extremely close to being an agent that can reason well about red cars, in the sense that only a small amount of further adaptation or learning, if any, should be needed to carry competence over from one to the other.
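As a cartoonishly small illustration of that closeness (a toy Python sketch with invented numbers, not anything from the original argument), note that almost all of the machinery for reasoning about cars is shared, and 'adapting' a blue-car reasoner into a red-car reasoner amounts to changing a single parameter:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Car:
    color: str
    mass_kg: float

def braking_distance_m(car: Car, speed_kph: float) -> float:
    """Domain knowledge that has nothing to do with paint color: a rough
    stopping distance, assuming a fixed deceleration of 7 m/s^2."""
    v = speed_kph / 3.6                # convert km/h to m/s
    return v * v / (2 * 7.0)

def make_car_reasoner(color: str):
    """All the real machinery is shared between colors; specializing the
    reasoner to blue or red cars changes only this one argument."""
    def reason(cars, speed_kph):
        return [(car, braking_distance_m(car, speed_kph))
                for car in cars if car.color == color]
    return reason

reason_about_blue_cars = make_car_reasoner("blue")
reason_about_red_cars = make_car_reasoner("red")

fleet = [Car("blue", 1500.0), Car("red", 1200.0)]
print(reason_about_blue_cars(fleet, 50.0))
print(reason_about_red_cars(fleet, 50.0))
```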
In more complicated cases, which domains are truly close or far from each other, or can be compactly separated out, is a theory-laden assertion. Few people are likely to disagree that blue cars and red cars are very close domains (if they're not specifically trying to be disagreeable). Researchers are more likely to disagree in their predictions about harder, less obvious cases.
A key parameter in these potential disagreements may be what the speaker thinks about the notion of general intelligence. Specifically: to what extent is the natural or most straightforward way to get par-human or superhuman performance in key domains to take relatively general learning algorithms and deploy them on the domain in question as a special case?
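One way to make that notion concrete is a deliberately domain-agnostic learner applied to two unrelated toy 'domains'. The sketch below is purely illustrative, with invented toy tasks and crude hill climbing standing in for 'relatively general learning algorithms'; it is not a claim about how actual general intelligence works:

```python
import random

def general_learner(score, propose, initial, steps=5000):
    """A deliberately domain-agnostic learner: naive hill climbing.
    It knows nothing about any particular domain; each domain is plugged
    in as a (score, propose, initial) triple, i.e. as a special case."""
    best, best_score = initial, score(initial)
    for _ in range(steps):
        candidate = propose(best)
        candidate_score = score(candidate)
        if candidate_score > best_score:
            best, best_score = candidate, candidate_score
    return best

# Toy domain 1: fit y = a*x + b to noisy data.
data = [(x, 3.0 * x + 1.0 + random.gauss(0, 0.1)) for x in range(10)]
line = general_learner(
    score=lambda p: -sum((y - (p[0] * x + p[1])) ** 2 for x, y in data),
    propose=lambda p: (p[0] + random.gauss(0, 0.1), p[1] + random.gauss(0, 0.1)),
    initial=(0.0, 0.0),
)

# Toy domain 2: find the peak of an unrelated one-dimensional reward
# surface; the learner's code is reused unchanged.
peak = general_learner(
    score=lambda x: -(x - 4.2) ** 2,
    propose=lambda x: x + random.gauss(0, 0.1),
    initial=0.0,
)

print(line)   # should approach (3.0, 1.0)
print(peak)   # should approach 4.2
```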
If you think that it would take a weird or twisted design to build a mind that was superhumanly good at designing cars, including writing their software, without using general algorithms and methods that could, with little adaptation, stare at mathematical proof problems and figure them out, then you think 'design cars' and 'prove theorems' and many other domains are in some sense naturally not all that separated. Which (arguendo) is why humans are so much better than chimpanzees at so many apparently different cognitive domains: the same competency, general intelligence, solves all of them.
If, on the other hand, you are more inspired by the way that superhuman chess AIs can't play Go and AlphaGo can't drive a car, you may think that humans using general intelligence on everything is just an instance of us having a single hammer and treating everything as a nail; and predict that specialized mind designs that were superhuman engineers, but very far in mind design space from any kind of mind that could prove Fermat's Last Theorem, would actually be a more natural or efficient way to create a superhuman engineer.
It seems likely that a key general parameter in disagreements about "How many domains that look dissimilar on the surface can be solved by pretty similar agents?" is a belief about "How much of the real work or deep cognitive labor in solving sufficiently rich domains is pretty internally similar?"
Arguendo (per Yudkowsky), greater knowledge of the human mind's deep architecture seems likely to point, predictably, toward our discovering more similar or shared factors of cognitive labor. If you're an ancient Greek who doesn't know anything about the brain having a visual cortex, then all the machinery of general intelligence fades into the invisible background; you just see that shipbuilders and smiths seem to have different jobs and different roles in society, or that people who learn one kind of game still need to spend time learning another. (You also don't ponder why you're all better at every cognitive problem than chimpanzees, because you don't know that humans and chimps are related.) Only after learning of the existence of the cerebellar cortex are you likely to think, "Huh, these tasks all have realtime motor control and error correction in common, and those realtime control tasks are similar enough to all be performed by the same cerebellar architecture, even if the specific motor patterns need to be learned independently." The more we know about how the deep cognitive work actually gets done, the more that data is predictably likely to show us new factors of commonality like the cerebellar cortex, as distinct from our intuitive sense, driven by surface motor patterns, that the tasks are different.