Yeah, I agree with a lot of this. Especially:
If you want to have some fun, you can reach for Rice's theorem (which basically follows from Turing's halting problem), which shows that you can't logically infer any non-trivial semantic property of an undocumented computer program from its code alone. Various existing property-inferrer groups like hackers and biologists will nod along and then go back to poking the opaque mystery blobs with various clever implements and taking copious notes of what they do when poked, even though full logical closure is not available.
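For the curious, the standard reduction behind Rice's theorem can be sketched in a few lines. This is an illustration, not a proof: every name here (`build_gadget`, `decides_P`, `witness`) is a hypothetical placeholder I've made up, and the whole point is that `decides_P` cannot actually exist.

```python
def build_gadget(machine, machine_input, witness):
    """Return a program that behaves exactly like `witness` (a program
    known to have some semantic property P) if and only if `machine`
    halts on `machine_input`; otherwise it runs forever."""
    def gadget(x):
        machine(machine_input)  # loops forever if machine doesn't halt
        return witness(x)       # otherwise, behave exactly like witness
    return gadget

def halts(machine, machine_input, decides_P, witness):
    """A halting decider built from a hypothetical decider for any
    non-trivial semantic property P. Since halting is undecidable,
    no such `decides_P` can exist -- that's Rice's theorem."""
    return decides_P(build_gadget(machine, machine_input, witness))
```

The gadget is the whole trick: deciding whether it has property P is the same as deciding whether the embedded machine halts, so a P-decider would smuggle in a halting decider.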
I ...
Good - though I'd want to clarify that there are some reductionists who think that there must be a reductive explanation for all natural phenomena, even if some will remain unknowable to us (for practical or theoretical reasons).
Other non-reductionists believe that the idea of giving a causal explanation of certain facts is actually confused - it's not just that there is no such explanation, it's that the very demand for certain kinds of explanation shows we don't fully understand the propositions involved. E.g. if someone were to ask why certain mathematical facts are true, hoping for a causal explanation in terms of brain-facts or historical-evolutionary facts, we might wonder whether they understood what math is about.
If you think there's some impossible gap between the human and the nonhuman worlds, then how do you think actual humans got here?
There are many types of explanatory claims in our language. Some are causal (how did something come to be), others are constitutive (what is it to be something), others still are normative (why is something good or right). Most mathematical explanation is constitutive, most action explanation is rational, and most material explanation is causal. It's totally possible to think there's a plain causal explanation about how hum...
Totally get it. There are lots of folks practicing philosophy of mind and technology today in that Aussie tradition who I think take these questions seriously and try to cash out what we mean when we talk about agency, mentality, etc. as part of their broader projects.
I'd resist your characterization that I'm insisting words shouldn't be used a particular way, though I can understand why it might seem that way. I'm rather hoping to shed more light on the idea raised by this post that we don't actually know what many of these words even mean when the...
Naturalizing normativity just means explaining normative phenomena in terms of other natural phenomena whose existence we accept as part of our broader metaphysics. E.g. explaining biological function in terms of evolution by natural selection, where natural selection is explained by differential survival rates and other statistical facts. Or explaining facts about minds, beliefs, attitudes, etc., in terms of non-homuncular goings-on in the brain. The project is typically aimed at humans, but shows up as soon as you get to biology and the handful of normative concepts (life, function, health, fitness, etc.) that constitute its core subject matter.
Hope that helps.
No - but perhaps I'm not seeing how they would make the case. Is the idea that somehow their existence augurs a future in which tech gets more autonomous, to the point where we can no longer control it? I guess I'd say, why should we believe that's true? It's probably uncontroversial to believe many of our tools will get more autonomous - but why should we think that'll lead to the kind of autonomy we enjoy?
Even if you believe that the intelligence and autonomy we enjoy exist on a kind of continuum, from, like, single-celled organisms through chess-playing...
I'm perpetually surprised by the amount of thought that goes into this sort of thing coupled with the lack of attention to the philosophical literature on theories of mind and agency in the past, let's just say, 50 years. I mean, look at the entire debate around whether or not it's possible to naturalize normativity - most of the philosophical profession has given up on this, accepting that the question was at best too hard to answer and at worst ill-conceived from the start.
These literatures are very aware of, and conversant with, the latest and greatest in cogsci...
This is great work. Glad that folks here take these Ryle-influenced ideas seriously and understand what it means for a putative problem about mind or agency to dissolve. Bravo.
To take the next (and, I think, final) step towards dissolution, I would recommend reading and reacting to a 1998 paper by John McDowell called "The Content of Perceptual Experience," which is critical of Dennett's view and even more Rylean and Wittgensteinian in its spirit (Gilbert Ryle was one of Dennett's teachers).
I think it's the closest you'll get to de-mystification and ...