I don't think so. Compare the following two requests:
(1) Describe a refrigerator without using the word refrigerator or near-synonyms.
(2) Describe the structure of a refrigerator in terms of moving parts and/or subprocesses.
The first request demands the tabooing of words; the second request demands an answer of a particular (theory-laden) form. I think the OP's request is like request 2. What's more, I expect submitting request 2 to a random sample of people would license the same erroneous conclusion about "refrigerator" as it did about "consciousnes...
Section 1.6 is another appendix about how this series relates to Philosophy Of Mind. My opinion of Philosophy Of Mind is: I’m against it! Or rather, I’ll say plenty in this series that would be highly relevant to understanding the true nature of consciousness, free will, and so on, but the series itself is firmly restricted in scope to questions that can be resolved within the physical universe (including physics, neuroscience, algorithms, and so on). I’ll leave the philosophy to the philosophers.
At the risk of outing myself as a thin-skinned philosop...
I strongly believe that step 1 is sufficient or almost sufficient for step 2, i.e., that it's impossible to give an adequate account of human phenomenology without figuring out most of the computational aspects of consciousness.
Apologies for nitpicking, but your strong belief that step 1 is (almost) sufficient for step 2 would be more faithfully re-phrased as: it will (probably) be possible/easy to give an adequate account of human phenomenology by figuring out most of the computational aspects of consciousness. The way you phrased it (viz., "impossible......
I agree with the thrust of this comment, which I read as saying something like "our current physics is not sufficient to explain, predict, and control all macroscopic phenomena". However, this is a point which Sean Carroll would agree with. From the paper under discussion (p.2): "This is not to claim that physics is nearly finished and that we are close to obtaining a Theory of Everything, but just that one particular level in one limited regime is now understood."
The claim he is making, then, is totally consistent with the need to find further appro...
I see. I'm afraid I don't have much great literature to recommend on computational semantics (though Josh Tenenbaum's PhD dissertation seems relevant). I still wonder whether, even if you disagree with the approaches you have seen in that domain, those might be the kind of people well-placed to help with your project. But that's your call of course.
Depending on your goals with this project, you might get something out of reading work by relevance theorists like Sperber, Wilson, and Carston (if you haven't before). I find Carston's reasoning about how...
Thanks for the response. Personally, I think your opening sentence as written is much, much too broad to do the job you want it to do. For example, I would consider "natural language semantics as studied in linguistics" to include computational approaches, including some Bayesian approaches which are similar to your own. If I were a computational linguist reading your opening sentence, I would be pretty put off (presumably, these are the kind of people you are hoping not to put off). Perhaps including a qualification that it is classical semantics you are talking about (with optional explanatory footnote) would be a happy medium.
I enjoyed the content of this post; it was nicely written, informative, and interesting. I also realise that the "less bullshit" framing is just a bit of fun that shouldn't be taken too seriously. Those caveats aside, I really dislike your framing and want to explain why! Reasons below.
First, the volume of work on "semantics" in linguistics is enormous and very diverse. The suggestion that all of it is bullshit comes across as juvenile, especially without providing further indication as to what kind of work you are talking about (the absence of a signal th...
Fair enough if literally any approach using symbolic programs (e.g. a Python interpreter) is considered neurosymbolic, but then there isn't any interesting weight behind the claim "neurosymbolic methods are necessary".
If somebody achieved a high-score on the ARC challenge by providing the problems to an LLM as prompts and having it return the solutions as output, then the claim "neurosymbolic methods are necessary" would be falsified. So there is weight to the claim. Whether it is interesting or not is obviously in the eye of the beholder.
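To make the shape of that test concrete, here is a minimal sketch of what "providing the problems to an LLM as prompts and having it return the solutions as output" could look like. The `call_llm` helper and the prompt wording are hypothetical stand-ins rather than any particular model's API; the only real-world assumption is the public ARC task JSON format (train/test pairs of integer grids).

```python
import json

def solve_arc_task_with_llm(task_path: str, call_llm) -> list:
    """Ask an LLM for the test output of one ARC task, purely via prompting.

    call_llm is a hypothetical (prompt: str) -> str function standing in for
    whatever model one actually queries; no symbolic program is run anywhere.
    """
    with open(task_path) as f:
        task = json.load(f)  # ARC format: {"train": [{"input", "output"}, ...], "test": [...]}

    lines = ["Each grid is a list of rows of integers 0-9. Infer the rule from the examples."]
    for i, pair in enumerate(task["train"], start=1):
        lines.append(f"Example {i} input: {pair['input']}")
        lines.append(f"Example {i} output: {pair['output']}")
    lines.append(f"Test input: {task['test'][0]['input']}")
    lines.append("Reply with only the test output grid, as a JSON list of lists.")

    response = call_llm("\n".join(lines))
    return json.loads(response)  # predicted grid, to be scored against the hidden answer
```

If a pipeline no more structured than this reached a high score, the necessity claim would be falsified; a pipeline that routed the model's output through an interpreter or program synthesiser would not count.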
I think the kind of sensible goalpost-moving you are describing should be understood as run-of-the-mill conceptual fragmentation, which is ubiquitous in science. As scientific communities learn more about the structure of complex domains (often in parallel across disciplinary boundaries), numerous distinct (but related) concepts become associated with particular conceptual labels (this is just a special case of how polysemy works generally). This has already happened with scientific concepts like gene, species, memory, health, attention and many more. ...
I actually think what you are going for is closer to JL Austin's notion of an illocutionary act than anything in Wittgenstein, though as you say, it is an analysis of a particular token of the type ("believing in"), not an analysis of the type. Quoting Wikipedia:
"According to Austin's original exposition in How to Do Things With Words, an illocutionary act is an act:
In Leibniz’ case, he’s known almost exclusively for the invention of calculus.
Was this supposed to be a joke (if so, consider me well and truly whooshed)? At any rate, it is most certainly not the case. Leibniz is known for a great many things (both within and without mathematics) as can be seen from a cursory glance at his Wikipedia page.
Rather, they might be mere empty machines. Should you still tolerate/respect/etc them, then?"
My sense is that I'm unusually open to "yes," here.
I think the discussion following from here is a little ambiguous (perhaps purposefully so?). In particular, it is unclear which of the following points are being made:
1: Sufficient uncertainty with respect to the sentience (I'm taking this as synonymous with phenomenal consciousness) of future AIs should dictate that we show them tolerance/respect etc...
2: We should not be confident that sentience is a good c...
Apologies, I had thought you would be familiar with the notion of functionalism. Meaning no offence at all, but it's philosophy of mind 101, so if you're interested in consciousness, it might be worth reading about it. To clarify further, you seem to be a particular kind of computational functionalist. Although it might seem unlikely to you, since I am one of those "masturbatory" philosophical types who thinks it matters how behaviours are implemented, I am also a computational functionalist! What does this mean? It means that computational functionalism is...
I am sorry that you got the impression I was trolling. Actually I was trying to communicate to you. None of the candidate criteria I suggested were conjured ex nihilo out of a hat or based on anything that I just made up. Unfortunately, collecting references for all of them would be pretty time consuming. However, I can say that the global projection phrase was gesturing towards global neuronal workspace theory (and related theories). Although you got the opposite impression, I am very familiar with consciousness research (including all of the references y...
I think you're missing something important.
Obviously I can't speak to the reason there is a general consensus that LLM-based chatbots aren't conscious (and therefore don't deserve rights). However, I can speak to some of the arguments that are still sufficient to convince me that LLM-based chatbots aren't conscious.
Generally speaking, there are numerous arguments which essentially have the same shape to them. They consist of picking out some property that seems like it might be a necessary condition for consciousness, and then claiming that LLM-based...
I kinda feel like you have to be trolling with some of these?
The very first one, and then some of the later ones are basically "are you made of meat". This would discount human uploads for silly reasons. Like if I uploaded and was denied rights for lack of any of these things then I would be FUCKING PISSED OFF (from inside the sim where I was hanging out, and would be very very likely to feel like I had a body, depending on how the upload and sim worked, and whether they worked as I'd prefer). This is just "meat racism" I think?
...Metabolism, Nociceptors, Hor
Enjoyable post, I'll be reading the rest of them. I especially appreciate the effort that went into warding off the numerous misinterpretations that one could easily have had (but I'm going to go ahead and ask something that may signal I have misinterpreted you anyhow).
Perhaps this question reflects poor reading comprehension, but I'm wondering whether you are thinking of valence as being implemented by something specific at a neurobiological level or not? To try and make the question clearer (in my own head as much as anything), let me lay out two al...
In other words, you think that even in a world where the distribution of mathematical methods were very specific to subject areas, this methodology would have failed to show that? If so, I think I disagree (though I agree the evidence of the paper is suggestive, not conclusive). Can you explain in more detail why you think that? Just to be clear, I think the methodology of the paper is coarse, but not so coarse as to be unable to pick out general trends.
Perhaps to give you a chance to say something informative, what exactly did you have in mind by "united around methodology" when you made the original comment I quoted above?
Ok, I do really like that move, and generally think of fields as being much more united around methodology than they are around subject-matter. So maybe I am just lacking a coherent pointer to the methodology of complex-systems people.
The extent to which fields are united around methodologies is an interesting question in its own right. While there are many ways we could break this question down which would probably return different results, a friend of mine recently analysed it with respect to mathematical formalisms (paper: https://link.springer.com/arti...
I don't have an answer for your question about how you might become confident that something really doesn't exist (other than a generic 'reason well about social behaviour in general, taking all possible failure modes into account'). However, I would point out that the example you give is about your group of friends in particular, which is a very different case from society at large. Shapeshifting lizardmen are almost certainly not evenly distributed across friendship groups such that every group of a certain size has one, but rather clumped together as we would expect due to homophily.
Edit: I see this point was already addressed in Bezzi's response on filter bubbles.
Thanks for the response.
Personally I'm confident that whatever people are managing to refer to by "consciousness" is a process that runs on matter
I don't disagree that consciousness is a process that runs on matter, but that is a separate question from whether the typical referent of consciousness is that process. If it turned out my consciousness was being implemented on a bunch of grapes it wouldn't change what I am referring to when I speak of my own consciousness. The referents are the experiences themselves from a first-person perspective.
...I asked peop
Really interesting stuff, thanks for sharing it!
I'm afraid I'm sceptical that your methodology licenses the conclusions you draw. You state that you pushed people away from "using common near-synonyms like awareness or experience" and "asked them to instead describe the structure of the consciousness process, in terms of moving parts and/or subprocesses". You end up concluding, on the basis of people's radically divergent responses when so prompted, that they are referring to different things with the term 'consciousness'.
The problem I see is that the...
This seems like an important comment to me. Before the discovery of atoms, if you asked people to talk about "the thing stuff was made out of," in terms of moving parts and subprocesses, you'd probably get a lot of different confused responses, and focus on different aspects. However, that doesn't mean people are necessarily referring to different concepts - they just have different underlying models of the thing they're all pointing at.
The distinction is that without the initial 0-1 phase transition, none of the other stuff is possible. They are all instances of cumulative cultural accretion, whereas the transition constitutes entering the regime of cumulative cultural accretion (other biological organisms and extant AI systems are not in this regime). If I understand the author correctly, the creation of AGI will increase the pace of cumulative cultural accretion, but will not lead us (or them) to exit that regime (since, according to the point about universality, there is no further re...
I have to say I agree that there is vagueness in the transition to universality. That is hardly surprising seeing as it is a confusing and contentious subject that involves integrating perspectives on a number of other confusing and contentious subjects (language, biological evolution, cultural evolution, collective intelligence etc...). However, despite the vagueness, I personally still see this transition, from being unable to accrete cultural innovations to being able to do so, as a special one, different in kind from particular technologies that have b...
Okay, sure. If my impression of the original post is right, the author would not disagree with you, but would rather claim that there is an important distinction to be made among these innovations. Namely, one of them is the 0-1 transition to universality, and the others are not. So, do you disagree that such a distinction may be important at all, or merely that it is not a distinction that supports the argument made in the original post?
At the risk of going round in circles, you begin your post by saying you don't care which ones are special or qualitative, and end it by wondering why the author is confident certain kinds of transition are not "major". Is this term, like the others, just standing in for 'significant enough to play a certain kind of role in an "AI leads to doom" argument'? Or does it mean something else?
I get the impression that you want to avoid too much wrangling over which labels should be applied to which kinds of thing, but then, you brought up the worry about the original post, so I don't quite know what your point is.
I think this is partially a matter of ontological taste. I mean, you are obviously correct that many innovations coming after the transition the author is interested in seem to produce qualitative shifts in the collective intelligence of humanity. On the other hand, if you take the view that all of these are fundamentally enabled by that first transition, then it seems reasonable to treat that as special in a way that the other innovations are not.
I suppose where the rubber meets the road, if one grants both the special status of the transition to un...
One distinction I think is important to keep in mind here is between precision with respect to what software will do and precision with respect to the effect it will have. While traditional software engineering often (though not always) involves knowing exactly what software will do, it is very common that the real-world effects of deploying some software in a real-world environment are impossible to predict with perfect accuracy. This reduces the perceived novelty of unintended consequences (though obviously, a fully-fledged AGI would lead to significantly more novelty than anything that preceded it).
I don't want to cite anyone as your 'leading technical opposition'. My point is that many people who might be described as having 'coherent technical views' would not consider your arguments for what to expect from AGI to be 'technical' at all. Perhaps you can just say what you think it means for a view to be 'technical'?
As you say, readers can decide for themselves what to think about the merits of your position on intelligence versus Chollet's (I recommend this essay by Chollet for a deeper articulation of some of his views: https://arxiv.org/pdf/1911.01...
Yes, I've read it. Perhaps that does make it a little unfair of me to criticise lack of engagement in this case. I should be more precise: Kudos to Yudkowsky for engaging, but no kudos for coming to believe that someone having a very different view to the one he has arrived at must not have a 'coherent technical view'.
I'd consider myself to have easily struck down Chollet's wack ideas about the informal meaning of no-free-lunch theorems, which Scott Aaronson also singled out as wacky. As such, citing him as my technical opposition doesn't seem good-faith; it's putting up a straw opponent without much in the way of argument and what there is I've already stricken down. If you want to cite him as my leading technical opposition, I'm happy enough to point to our exchange and let any sensible reader decide who held the ball there; but I would consider it intellectually dishonest to promote him as my leading opposition.
Eliezer: Well, the person who actually holds a coherent technical view, who disagrees with me, is named Paul Christiano.
What does Yudkowsky mean by 'technical' here? I respect the enormous contribution Yudkowsky has made to these discussions over the years, but I find his ideas about who counts as a legitimate dissenter from his opinions utterly ludicrous. Are we really supposed to think that Francois Chollet, who created Keras, is a major contributor to TensorFlow, and designed the ARC dataset (demonstrating actual, operationalizable knowledge about the ...
He wrote a whole essay responding specifically to Chollet! https://intelligence.org/2017/12/06/chollet/
I upvoted, because these are important concerns overall, but this sentence stuck out to me:
The fact that Yudkowsky doesn't even know enough about Chollet to pronounce his name displays a troubling lack of effort to engage seriously with opposing views.
I'm not claiming that Yudkowsky does or does not display a troubling lack of effort to engage seriously with opposing views, but surely this can be decided more accurately by looking at his written output online than at his ability to correctly pronounce names in languages he is not a native speaker of....
This analogy is misleading because it pumps the intuition that we know how to generate the algorithmic innovations that would improve future performance, much as we know how to tie our shoelaces once we notice they are untied. This is not the case. Research programmes can and do stagnate for long periods because crucial insights are hard to come by and hard to implement correctly at scale. Predicting the timescale on which algorithmic innovations occur is a very different proposition from predicting the timescale on which it will be feasible to increase parameter count.
As some other commenters have said, the analogy with other species (flowers, ants, beavers, bears) seems flawed. Human beings are already (limited) generally intelligent agents. Part of what that means is that we have the ability to direct our cognitive powers to arbitrary problems in a way that other species do not (as far as we know!). To my mind, the way we carelessly destroy other species' environments and doom them to extinction is a function of both the disparity in power and the disparity in generality, not just the former. That is not to say t...
I think you are very confused about how to interpret disagreements around which mental processes ground consciousness. These disagreements do not entail a fundamental disagreement about what consciousness is as a phenomenon to be explained.
Regardless of that though, I just want to focus on one of your "referents of consciousness" here, because I also think the reasoning you provide for your particular claims is extremely weak. You write the following
...