Paradiddle

Section 1.6 is another appendix about how this series relates to Philosophy Of Mind. My opinion of Philosophy Of Mind is: I’m against it! Or rather, I’ll say plenty in this series that would be highly relevant to understanding the true nature of consciousness, free will, and so on, but the series itself is firmly restricted in scope to questions that can be resolved within the physical universe (including physics, neuroscience, algorithms, and so on). I’ll leave the philosophy to the philosophers.

At the risk of outing myself as a thin-skinned philosopher, I want to push back on this a bit. If we are taking "philosophy of mind" to mean, "the kind of work philosophers of mind do" (which I think we should), then your comment seems misplaced. Crucially, one need not be defending particular views on "big questions" about the true nature of consciousness, free will, and so on to be doing philosophy of mind. Rather, much of the work philosophers of mind do is continuous with scientific inquiry. Indeed, I would say some philosophy of mind is close to indistinguishable from what you do in this post! For example, lots of this work involves trying to carve up conceptual space in a way that coheres with empirical findings, suggests avenues for further research, and renders fruitful discussion easier. Your section 1.3 in this post features exactly the kind of conceptual work that is the bread-and-butter of philosophy. So, far from leaving philosophy to the philosophers, I actually think your work would fit comfortably into the more empirically informed end of contemporary philosophy of mind. To end on a positive note, I think it's really clearly written, fascinating, and fun to read. So thanks! 

I strongly believe that step 1 is sufficient or almost sufficient for step 2, i.e., that it's impossible to give an adequate account of human phenomenology without figuring out most of the computational aspects of consciousness.

Apologies for nitpicking, but your strong belief that step 1 is (almost) sufficient for step 2 would be more faithfully rephrased as: it will (probably) be possible/easy to give an adequate account of human phenomenology by figuring out most of the computational aspects of consciousness. The way you phrased it (viz., "impossible...without") is equivalent to saying that step 1 is necessary for step 2, an importantly different claim (on this phrasing, something besides the computational aspects may be required). Of course, you may think it is both necessary and sufficient; I'm just pointing out the distinction.
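To make the distinction explicit, here is a minimal propositional rendering of my own (writing \(C\) as shorthand for "most of the computational aspects of consciousness are figured out" and \(P\) for "an adequate account of human phenomenology is given"):

\[ \text{"it is impossible to have } P \text{ without } C\text{"} \;\equiv\; \neg(P \land \neg C) \;\equiv\; P \rightarrow C \quad (C \text{ is necessary for } P) \]

\[ \text{"} C \text{ is sufficient for } P \text{"} \;\equiv\; C \rightarrow P \]

The two only coincide if one also endorses the biconditional \(C \leftrightarrow P\).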

I agree with the thrust of this comment, which I read as saying something like "our current physics is not sufficient to explain, predict, and control all macroscopic phenomena". However, this is a point which Sean Carroll would agree with. From the paper under discussion (p.2): "This is not to claim that physics is nearly finished and that we are close to obtaining a Theory of Everything, but just that one particular level in one limited regime is now understood." 

The claim he is making, then, is totally consistent with the need to find further approximations and abstractions to model macroscopic phenomena. His point is that none of that will dictate modifications to the core theory (effective quantum field theory) when applied to "everyday" phenomena occurring in the regions of the universe we currently interact with (because the boundary conditions of this region of the universe are compatible with EQFT). Another way to put this is that Carroll claims no possible experiment conducted within the "everyday regime" can falsify the core theory. Do you still disagree?

For the record, this is just to clarify what Carroll's claim is. I totally agree that none of this is relevant to overcoming the limitations of formal verification, which very clearly depend on many abstractions and approximations and will continue to do so for the foreseeable future.

I see. I'm afraid I don't have much great literature to recommend on computational semantics (though Josh Tenenbaum's PhD dissertation seems relevant). I still wonder whether, even if you disagree with the approaches you have seen in that domain, those might be the kind of people well-placed to help with your project. But that's your call of course. 

Depending on your goals with this project, you might get something out of reading work by relevance theorists like Sperber, Wilson, and Carston (if you haven't before). I find Carston's reasoning about how various aspects of language work quite compelling. You won't find much to help solve your mathematical problems there, but you might find considerations that help you disambiguate between possible things you want your model of semantics to do (e.g., do you really care about semantics, per se, or rather concept formation?).

Thanks for the response. Personally, I think your opening sentence as written is much, much too broad to do the job you want it to do. For example, I would consider "natural language semantics as studied in linguistics" to include computational approaches, including some Bayesian approaches which are similar to your own. If I were a computational linguist reading your opening sentence, I would be pretty put off (presumably, these are the kind of people you are hoping not to put off). Perhaps including a qualification that it is classical semantics you are talking about (with optional explanatory footnote) would be a happy medium.

I enjoyed the content of this post; it was nicely written, informative, and interesting. I also realise that the "less bullshit" framing is just a bit of fun that shouldn't be taken too seriously. Those caveats aside, I really dislike your framing and want to explain why! Reasons below.

First, the volume of work on "semantics" in linguistics is enormous and very diverse. The suggestion that all of it is bullshit comes across as juvenile, especially without providing further indication as to what kind of work you are talking about (the absence of a signal that you are familiar with the work you think is bullshit is a bit galling).

Second, this work might interest people who work on similar things. Indeed, this seems like something you are explicitly after. However, your casual dismissal of prior work on semantics as bullshit, combined with a failure to specify the nature of the project you are pursuing in terms a linguist would recognise (i.e., your project is far more specific than "semantics"), could prevent engagement and useful feedback from the very people who are best-placed to provide it.

Third, on the object level, I think there is a gulf in numerosity (from more to less numerous) separating 1) human concepts (these might roughly be the "latent variables in probabilistic generative models" in your comment chain with Steven Byrnes), 2) communicable human concepts (where communicability is some kind of equivalence, as in your model), and 3) human concepts with stable word meanings in the current lexicon (like common nouns). I think your framing in this post conflates the three (even if you yourself do not). The reason I include this object-level worry here is that, if it is indicative of how others might react, it could be more of a turn-off for potential collaborators to see these notions conflated in the same post that derides other work as not capturing what semantics is really about. (If you think my distinctions are reasonable, which of them do you think semantics is really about, and what exactly do you worry linguists have been erroneously working on all this time?)

Again, interesting work, hope this didn't come off too combative!

Fair enough if literally any approach using symbolic programs (e.g. a python interpreter) is considered neurosymbolic, but then there isn't any interesting weight behind the claim "neurosymbolic methods are necessary".

If somebody achieved a high score on the ARC challenge by providing the problems to an LLM as prompts and having it return the solutions as output, then the claim "neurosymbolic methods are necessary" would be falsified. So there is weight to the claim. Whether it is interesting or not is obviously in the eye of the beholder.

I think the kind of sensible goalpost-moving you are describing should be understood as run-of-the-mill conceptual fragmentation, which is ubiquitous in science. As scientific communities learn more about the structure of complex domains (often in parallel across disciplinary boundaries), numerous distinct (but related) concepts become associated with particular conceptual labels (this is just a special case of how polysemy works generally). This has already happened with scientific concepts like gene, species, memory, health, attention and many more. 

In this case, it is clear to me that there are important senses of the term "general" which modern AI satisfies the criteria for. You made that point persuasively in this post. However, it is also clear that there are important senses of the term "general" which modern AI does not satisfy the criteria for. Steven Byrnes made that point persuasively in his response. So far as I can tell, you will agree with this.

If we all agree with the above, the most important thing is to disambiguate the sense of the term being invoked when applying it in reasoning about AI. Then, we can figure out whether the source of our disagreements is about semantics (which label we prefer for a shared concept) or substance (which concept is actually appropriate for supporting the inferences we are making).

What are good discourse norms for disambiguation? An intuitively appealing option is to coin new terms for variants of umbrella concepts. This may work in academic settings, but the familiar terms are always going to have a kind of magnetic pull in informal discourse. As such, I think communities like this one should rather strive to define terms wherever possible and approach discussions with a pluralistic stance. 

I actually think what you are going for is closer to JL Austin's notion of an illocutionary act than anything in Wittgenstein, though as you say, it is an analysis of a particular token of the type ("believing in"), not an analysis of the type. Quoting Wikipedia:

"According to Austin's original exposition in How to Do Things With Words, an illocutionary act is an act:

  • (1) for the performance of which I must make it clear to some other person that the act is performed (Austin speaks of the 'securing of uptake'), and
  • (2) the performance of which involves the production of what Austin calls 'conventional consequences' as, e.g., rights, commitments, or obligations (Austin 1975, 116f., 121, 139)."

Your model of "believing in" is essentially an unpacking of the "conventional consequences" produced by using the locution in various contexts. I think it is a good unpacking, too!

I do think that some of the contrasts you draw (belief vs. believing in) would work equally well (and with more generality) as contrasts between beliefs and illocutionary acts, though.

In Leibniz’ case, he’s known almost exclusively for the invention of calculus.

Was this supposed to be a joke (if so, consider me well and truly whooshed)? At any rate, it is most certainly not the case. Leibniz is known for a great many things (both within and without mathematics), as can be seen from a cursory glance at his Wikipedia page.
