All of Paradiddle's Comments + Replies

I think you are very confused about how to interpret disagreements around which mental processes ground consciousness. These disagreements do not entail a fundamental disagreement about what consciousness is as a phenomenon to be explained. 

Regardless of that though, I just want to focus on one of your "referents of consciousness" here, because I also think the reasoning you provide for your particular claims is extremely weak. You write the following:

#9: Symbol grounding.  Even within a single interaction, an LLM can learn to associate a new symb

... (read more)

I don't think so. Compare the following two requests:

(1) Describe a refrigerator without using the word refrigerator or near-synonyms. 

(2) Describe the structure of a refrigerator in terms of moving parts and/or subprocesses.

The first request demands the tabooing of words; the second request demands an answer of a particular (theory-laden) form. I think the OP's request is like request (2). What's more, I expect submitting request (2) to a random sample of people would license the same erroneous conclusion about "refrigerator" as it did about "consciousnes... (read more)

Section 1.6 is another appendix about how this series relates to Philosophy Of Mind. My opinion of Philosophy Of Mind is: I’m against it! Or rather, I’ll say plenty in this series that would be highly relevant to understanding the true nature of consciousness, free will, and so on, but the series itself is firmly restricted in scope to questions that can be resolved within the physical universe (including physics, neuroscience, algorithms, and so on). I’ll leave the philosophy to the philosophers.

At the risk of outing myself as a thin-skinned philosop... (read more)

4Steven Byrnes
Thanks for the kind words! The thing you quoted was supposed to be very silly and self-deprecating, but I wrote it very poorly, and it actually wound up sounding kinda judgmental. Oops, sorry. I just rewrote it. I agree with everything you wrote in this comment.

I strongly believe that step 1 is sufficient or almost sufficient for step 2, i.e., that it's impossible to give an adequate account of human phenomenology without figuring out most of the computational aspects of consciousness.

Apologies for nitpicking, but your strong belief that step 1 is (almost) sufficient for step 2 would be more faithfully re-phrased as: it will (probably) be possible/easy to give an adequate account of human phenomenology by figuring out most of the computational aspects of consciousness. The way you phrased it (viz., "impossible...... (read more)

2Rafael Harth
Mhh, I think "it's not possible to solve (1) without also solving (2)" is equivalent to "every solution to (1) also solves (2)", which is equivalent to "(1) is sufficient for (2)". I did take some liberty in rephrasing step (2) from "figure out what consciousness is" to "figure out its computational implementation".

I agree with the thrust of this comment, which I read as saying something like "our current physics is not sufficient to explain, predict, and control all macroscopic phenomena". However, this is a point which Sean Carroll would agree with. From the paper under discussion (p.2): "This is not to claim that physics is nearly finished and that we are close to obtaining a Theory of Everything, but just that one particular level in one limited regime is now understood." 

The claim he is making, then, is totally consistent with the need to find further appro... (read more)

I see. I'm afraid I don't have much great literature to recommend on computational semantics (though Josh Tenenbaum's PhD dissertation seems relevant). I still wonder whether, even if you disagree with the approaches you have seen in that domain, the people working on them might be well-placed to help with your project. But that's your call, of course. 

Depending on your goals with this project, you might get something out of reading work by relevance theorists like Sperber, Wilson, and Carston (if you haven't before). I find Carston's reasoning about how... (read more)

Thanks for the response. Personally, I think your opening sentence as written is much, much too broad to do the job you want it to do. For example, I would consider "natural language semantics as studied in linguistics" to include computational approaches, including some Bayesian approaches which are similar to your own. If I were a computational linguist reading your opening sentence, I would be pretty put off (presumably, these are the kind of people you are hoping not to put off). Perhaps including a qualification that it is classical semantics you are talking about (with optional explanatory footnote) would be a happy medium.

3johnswentworth
I would make a similar critique of basically-all the computational approaches I've seen to date. They generally try to back out "semantics" from a text corpus, which means their "semantics" grounds out in relations between words; neither the real world nor mental content make any appearance. They may use Bayes' rule and latents like this post does, but such models can't address the kinds of questions this post is asking at all. (I suppose my complaints are more about structuralism than about model-theoretic foundations per se. Internally I'd been thinking of it more as an issue with model-theoretic foundations, since model theory is the main route through which structuralism has anything at all to say about the stuff which I would consider semantics.) Of course you might have in mind some body of work on computational linguistics/semantics with which I am unfamiliar, in which case I would be quite grateful for my ignorance to be corrected!
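To make the complaint concrete, here is a minimal toy sketch (hypothetical code, not any particular published model) of the kind of corpus-only "semantics" being criticized: word meanings are derived purely from word–word co-occurrence, so no variable in the model ever stands for the real world or for mental content.

```python
# Toy sketch (hypothetical, for illustration): a purely distributional "semantics".
# Every quantity below is computed from word-word co-occurrence alone;
# nothing in this model refers to the world or to mental content.
from collections import Counter
from itertools import combinations
import math

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Count how often word pairs co-occur within a sentence.
cooc = Counter()
counts = Counter()
for sentence in corpus:
    words = sentence.split()
    counts.update(words)
    for a, b in combinations(set(words), 2):
        cooc[frozenset((a, b))] += 1

def similarity(w1, w2):
    """'Meaning' similarity, grounded only in relations between words."""
    return cooc[frozenset((w1, w2))] / math.sqrt(counts[w1] * counts[w2])

print(similarity("cat", "dog"))  # a number derived from textual company alone
```

The point of the toy is just that "cat" and "dog" come out as related because of the textual company they keep; nothing anywhere in the model is about actual cats.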

I enjoyed the content of this post: it was nicely written, informative, and interesting. I also realise that the "less bullshit" framing is just a bit of fun that shouldn't be taken too seriously. Those caveats aside, I really dislike your framing and want to explain why! Reasons below.

First, the volume of work on "semantics" in linguistics is enormous and very diverse. The suggestion that all of it is bullshit comes across as juvenile, especially without providing further indication as to what kind of work you are talking about (the absence of a signal th... (read more)

5johnswentworth
Not at all, you are correctly critiquing the downsides of a trade-off which we consciously made.  There was a moment during writing when David suggested we soften the title/opening to avoid alienating classical semantics researchers. I replied that I expected useful work on the project to mostly come, not from people with a background in classical semantics, but from people who had bounced off of classical semantics because they intuited that it "wasn't addressing the real problem". Those are the people who've already felt, on a gut level, a need for the sort of models the post outlines.

(Also, as one person who reviewed the post put it: "Although semantics would suggest that this post would be interesting to logicians, linguists and their brethren [...] I think they would not find it interesting because it is a seemingly nonsymbolic attempt to approach semantics. Symbolical methods are their bread and butter, without them they would be lost.")

To that end, the title and opening are optimized to be a costly signal to those people who bounced off classical semantics, signalling that they might be interested in this post even though they've been unsatisfied before with lots of work on semantics. The cost of that costly signal is alienating classical semantics researchers.

And having made that trade-off upfront, naturally we didn't spend much time trying to express this project in terms more familiar to people in the field.

If we were writing a version more optimized for people already in the field, I might have started by saying that the core problem is the use of model theory as the primary foundation for semantics (i.e. Montague semantics and its descendants as the central example). That foundation explicitly ignores the real world, and is therefore only capable of answering questions which don't require any notion of the real world - e.g. Montague nominally focused on how truth values interact with syntax. Obviously that is a rather narrow and impoverished subse

Fair enough if literally any approach using symbolic programs (e.g. a python interpreter) is considered neurosymbolic, but then there isn't any interesting weight behind the claim "neurosymbolic methods are necessary".

If somebody achieved a high-score on the ARC challenge by providing the problems to an LLM as prompts and having it return the solutions as output, then the claim "neurosymbolic methods are necessary" would be falsified. So there is weight to the claim. Whether it is interesting or not is obviously in the eye of the beholder. 
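To make the falsification condition concrete, here is a minimal hypothetical sketch of what such a "pure LLM, no symbolic machinery" attempt would look like (query_llm is a placeholder, not a real API; the task format is assumed to be the usual ARC JSON layout):

```python
# Hypothetical sketch: a "pure LLM" ARC attempt with no symbolic component.
# `query_llm` is a placeholder for whatever chat-completion API one uses.
import json

def query_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its raw text reply."""
    raise NotImplementedError("wire up your preferred LLM API here")

def solve_arc_task(task: dict) -> list:
    # Serialize the demonstration pairs and the test input directly into the prompt.
    prompt = (
        "Each example maps an input grid to an output grid.\n"
        f"Examples: {json.dumps(task['train'])}\n"
        f"Test input: {json.dumps(task['test'][0]['input'])}\n"
        "Reply with the output grid as JSON only."
    )
    # No program synthesis, no interpreter, no search over symbolic programs:
    # the model's reply *is* the answer.
    return json.loads(query_llm(prompt))
```

If an approach with this shape, perhaps plus prompting tricks but with no program synthesis or interpreter in the loop, achieved a high ARC score, the necessity claim would be falsified.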

Paradiddle199

I think the kind of sensible goalpost-moving you are describing should be understood as run-of-the-mill conceptual fragmentation, which is ubiquitous in science. As scientific communities learn more about the structure of complex domains (often in parallel across disciplinary boundaries), numerous distinct (but related) concepts become associated with particular conceptual labels (this is just a special case of how polysemy works generally). This has already happened with scientific concepts like gene, species, memory, health, attention and many more. ... (read more)

I actually think what you are going for is closer to JL Austin's notion of an illocutionary act than anything in Wittgenstein, though as you say, it is an analysis of a particular token of the type ("believing in"), not an analysis of the type. Quoting Wikipedia:

"According to Austin's original exposition in How to Do Things With Words, an illocutionary act is an act:

  • (1) for the performance of which I must make it clear to some other person that the act is performed (Austin speaks of the 'securing of uptake'), and
  • (2) the performance of which involves the pr
... (read more)

In Leibniz’ case, he’s known almost exclusively for the invention of calculus.

Was this supposed to be a joke (if so, consider me well and truly whooshed)? At any rate, it is most certainly not the case. Leibniz is known for a great many things (both within and without mathematics), as can be seen from a cursory glance at his Wikipedia page.

4ryan_b
One measure of status is how far outside the field of accomplishment it extends. Using American public education as the standard, Leibniz is only known for calculus.

Rather, they might be mere empty machines. Should you still tolerate/respect/etc them, then?"

My sense is that I'm unusually open to "yes," here.


I think the discussion following from here is a little ambiguous (perhaps purposefully so?). In particular, it is unclear which of the following points are being made:

1: Sufficient uncertainty with respect to the sentience (I'm taking this as synonymous with phenomenal consciousness) of future AIs should dictate that we show them tolerance/respect etc... 
2: We should not be confident that sentience is a good c... (read more)

Apologies, I had thought you would be familiar with the notion of functionalism. Meaning no offence at all but it's philosophy of mind 101, so if you're interested in consciousness, it might be worth reading about it. To clarify further, you seem to be a particular kind of computational functionalist. Although it might seem unlikely to you, since I am one of those "masturbatory" philosophical types who thinks it matters how behaviours are implemented, I am also a computational functionalist! What does this mean? It means that computational functionalism is... (read more)

2JenniferRM
If the way we use words makes both of us "computational functionalists" in our own idiolects, then I think that word is not doing what I want it to do here and PERHAPS we should play taboo instead? But maybe not. In a very literal sense you or I could try to talk about "f: X->Y" where the function f maps inputs of type X to outputs of type Y.

Example 1: If you provide inputs of "a visual image" and the output has no variation then the entity implementing the function is blind. Functionally. We expect it to have no conscious awareness of imagistic data. Simple. Easy... maybe wrong? (Human people could pretend to be blind, and therefore so can digital people. Also, apparent positive results for any given performance could be falsified by finding "a midget hiding in the presumed machine" and apparent negatives could be sandbagging.)

Example 2: If you provide inputs of "accusations of moral error that are reasonably well founded" and get "outputs questioning past behavior and then <durable behavioral change related to the accusation's topic>" then the entity is implementing a stateful function that has some kind of "conscience". (Maybe not mature? Maybe not aligned with good? But still a conscience.)

Example 3: If you provide inputs of "the other entity's outputs in very high fidelity as a perfect copy of a recent thing they did that has quite a bit of mismatch to environment" (such that the reproduction feels "cheap and mechanically reflective" (like the old Dr Sbaitso chatbot) rather than "conceptually adaptively reflective" (like what we are presumably trying to do here in our conversation with each other as human persons)) do they notice and ask you to stop parroting? If they notice you parroting and say something, then the entity is demonstrably "aware of itself as a function with outputs in an environment where other functions typically generate other outputs".

I. A Basic Input/Output Argument

You write this: Resolution has almost nothing to do with it, I t

I am sorry that you got the impression I was trolling. Actually I was trying to communicate to you. None of the candidate criteria I suggested were conjured ex nihilo out of a hat or based on anything that I just made up. Unfortunately, collecting references for all of them would be pretty time consuming. However, I can say that the global projection phrase was gesturing towards global neuronal workspace theory (and related theories). Although you got the opposite impression, I am very familiar with consciousness research (including all of the references y... (read more)

2JenniferRM
I like that you've given me a coherent response rather than a list of ideas! Thank you! You've just used the word "functional" seven times, with it not appearing in (1) the OP, (2) any comments by people other than you and me, (3) my first comment, (4) your response, (5) my second comment. The idea being explicitly invoked is new to the game, so to speak :-)

When I google for [functionalist theory of consciousness] I get dropped on an encyclopedia of philosophy whose introduction I reproduce in full (in support of a larger claim that I am just taking functionalism seriously in a straightforward way and you... seem not to be?):

Here is the core of the argument, by analogy, spelled out later in the article:

If something can talk, then, to a functionalist like me, that means it has assembled and coordinated all necessary hardware and regulatory elements and powers (that is, it has assembled all necessary "functionality" (by whatever process is occurring in it which I don't actually need to keep track of (just as I don't need to understand and track exactly how the brain implements language))) to do what it does in the way that it does.

Once you are to the point of "seeing something talk fluently" and "saying that it can't really talk the way we can talk, with the same functional meanings and functional implications for what capacities might be latent in the system" you are off agreeing with someone as silly as Searle. You're engaged in some kind of masturbatory philosophy troll where things don't work and mean basically what they seem to work and mean using simple interactive tests.

I do think that I go a step further than most people, in that I explicitly think of Personhood as something functional, as a mental process that is inherently "substrate independent (if you can find another substrate with some minimally universal properties (and program it right))". In defense of this claim, I'd say that tragic deeply feral children show that the human brain is not suf

I think you're missing something important.

Obviously I can't speak to the reason there is a general consensus that LLM-based chatbots aren't conscious (and therefore don't deserve rights). However, I can speak to some of the arguments that are still sufficient to convince me that LLM-based chatbots aren't conscious. 

Generally speaking, there are numerous arguments which essentially have the same shape to them. They consist of picking out some property that seems like it might be a necessary condition for consciousness, and then claiming that LLM-based... (read more)

I kinda feel like you have to be trolling with some of these?

The very first one, and then some of the later ones are basically "are you made of meat". This would discount human uploads for silly reasons. Like if I uploaded and was denied rights for lack of any of these things then I would be FUCKING PISSED OFF (from inside the sim where I was hanging out, and would be very very likely to feel like I had a body, depending on how the upload and sim worked, and whether they worked as I'd prefer). This is just "meat racism" I think?

Metabolism, Nociceptors, Hor

... (read more)

Enjoyable post; I'll be reading the rest of them. I especially appreciate the effort that went into warding off the numerous misinterpretations that one could easily have had (but I'm going to go ahead and ask something that may signal I have misinterpreted you anyhow). 

Perhaps this question reflects poor reading comprehension, but I'm wondering whether you are thinking of valence as being implemented by something specific at a neurobiological level or not? To try and make the question clearer (in my own head as much as anything), let me lay out two al... (read more)

1S Benfield
I don't know if I buy that valence is based on dopamine neurons, but I do believe valence is the delta between the current state and a possible future state. Very much like action potential or potential energy. If one possible outcome could grant you the world, then you will have a very high valence to do the actions needed. Likewise, if your life is on the line, that is very high valence. That turns anger to rage. Unfortunately, my model also says that too many positive thoughts lead to a race condition between dopamine generation and thought analysis, which can lead to mania/psychosis. When you want things too much (desire) or too little (doubt/despair), the valences can get too high. And even evaluation of innocuous things can lead you to form emotions or actions out of line with the current evaluation. That is, valence does not go to zero easily. And the valence of now informs the valence of later. And I believe it is more like a 1/x function, so when you get to extremes of valence, the desire to act or desire to not act gets really high and is hard to overcome. 
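One way to read the "valence as a delta" idea above (a rough gloss of the commenter's wording, in hypothetical notation of my own, and not the model in Steven's reply below):

```latex
% Rough gloss (hypothetical notation): valence as the gap between the value of
% the current state and the best outcome that currently seems reachable.
V(s_t) \;\propto\; \max_{s'} U(s') \;-\; U(s_t)
```

On this reading, the "1/x"-style remark would just be the claim that the felt pressure to act (or to refrain) grows steeply, rather than linearly, as that gap becomes extreme.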
5Steven Byrnes
Thanks! I think in much much simpler animals, valence is a literal specific signal in the brain, basically the collective spiking activity of a population of dopamine neurons. In mammals, that’s still sorta-close-to-true, but I would need to add a whole bunch of caveats and footnotes to that, for reasons hinted at in §1.5.6–1.5.7. (I have a bunch of idiosyncratic opinions about what exactly the basal ganglia is doing and how, but I don’t want to get into it here, sorry!) I reject both the “first” and the “second” thing you mention. I’m much closer to “valence is pretty straightforwardly encoded by spikes going down specific known axons”. Separately, I might or might not agree with “the neural bases of emotions are widely distributed”, depending on how we define the word “emotions” (and also how we define “neural bases”, I suppose!), see here.

In other words, you think that even in a world where the distribution of mathematical methods were very specific to subject areas, this methodology would have failed to show that? If so, I think I disagree (though I agree the evidence of the paper is suggestive, not conclusive). Can you explain in more detail why you think that? Just to be clear, I think the methodology of the paper is coarse, but not so coarse as to be unable to pick out general trends.

Perhaps to give you a chance to say something informative, what exactly did you have in mind by "united around methodology" when you made the original comment I quoted above? 

Ok, I do really like that move, and generally think of fields as being much more united around methodology than they are around subject-matter. So maybe I am just lacking a coherent pointer to the methodology of complex-systems people.


The extent to which fields are united around methodologies is an interesting question in its own right. While there are many ways we could break this question down which would probably return different results, a friend of mine recently analysed it with respect to mathematical formalisms (paper: https://link.springer.com/arti... (read more)

2habryka
Alas, I don't think that study really shows much. The result seems almost certainly caused by the measure of mathematical methods they used (something kind of like by-character-similarity of LaTeX equations), since they mostly failed to find any kind of structure. 
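For concreteness, here is a hypothetical sketch of the kind of measure being described (character-level similarity between LaTeX strings, here via character-bigram Jaccard similarity); this is an illustration, not the paper's actual code:

```python
# Hypothetical sketch of a character-level similarity between LaTeX equations
# (an illustration of the *kind* of measure discussed above, not the paper's
# actual method): character-bigram Jaccard similarity between two strings.
def char_ngrams(s: str, n: int = 2) -> set:
    s = "".join(s.split())  # ignore whitespace
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def similarity(eq1: str, eq2: str) -> float:
    a, b = char_ngrams(eq1), char_ngrams(eq2)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Superficially different notation scores as dissimilar even when the
# underlying mathematics is identical.
print(similarity(r"\frac{dx}{dt} = -kx", r"x'(t) = -k\,x(t)"))
```

A measure like this can rate superficially different notation as unrelated even when the underlying mathematics is the same, which is one way it could wash out genuine field-specific structure.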

I don't have an answer for your question about how you might become confident that something really doesn't exist (other than a generic 'reason well about social behaviour in general, taking all possible failure modes into account'). However, I would point out that the example you give is about your group of friends in particular, which is a very different case from society at large. Shapeshifting lizardmen are almost certainly not evenly distributed across friendship groups such that every group of a certain size has one, but rather clumped together as we would expect due to homophily.
 

Edit: I see this point was already addressed in Bezzi's response on filter bubbles.

Thanks for the response.

Personally I'm confident that whatever people are managing to refer to by "consciousness" is a process that runs on matter

I don't disagree that consciousness is a process that runs on matter, but that is a separate question from whether the typical referent of consciousness is that process. If it turned out my consciousness was being implemented on a bunch of grapes it wouldn't change what I am referring to when I speak of my own consciousness. The referents are the experiences themselves from a first-person perspective.

I asked peop

... (read more)
1[anonymous]
This is somewhat of a drive-by comment, but this post mostly captures the totality about the extrinsics of "consciousness" AFAICT. From my own informal discussions, most of the crux of disagreement seems to revolve around what's going on in the moment when we perform the judgement "I'm obviously conscious" or "Of course I exist." At the very least, disentangling that performative action from the gut-level value judgement and feeling tends to clear up a lot of my own internal confusion. Indeed, the part 1 categorization lines up with a smorgasbord of internal processes I also personally identify, but I also honestly don't know what to even look for when asked to observe or describe subjective consciousness. I feel like a discussion of "life essence" would have mostly a similar structure if the cultural zeitgeist in analytical philosophy got us interested in that. Sure, I agree that I'm alive, which might come out like "I have life essence" under a different linguistic ontology, but attempting to operationalize "life essence" doesn't seem like a fruitful exercise to me.

Really interesting stuff, thanks for sharing it! 

I'm afraid I'm sceptical that your methodology licenses the conclusions you draw. You state that you pushed people away from "using common near-synonyms like awareness or experience" and "asked them to instead describe the structure of the consciousness process, in terms of moving parts and/or subprocesses". You end up concluding, on the basis of people's radically divergent responses when so prompted, that they are referring to different things with the term 'consciousness'.

The problem I see is that the... (read more)

2ESRogs
Isn't this just the standard LessWrong-endorsed practice of tabooing words, and avoiding semantic stopsigns?
8Andrew_Critch
Thanks for raising this.  It's one of the reasons I spelled out my methodology, to the extent that I had one.  You're right that, as I said, my methodology explicitly asks people to pay attention to the internal structure of what they were experiencing in themselves and calling consciousness, and to describe it on a process level.  Personally I'm confident that whatever people are managing to refer to by "consciousness" is a process that runs on matter.  If you're not confident of that, then you shouldn't be confident in my conclusion, because my methodology was premised on that assumption.

Why do you say "of course" here?  It could have turned out that people were all referring to the same structure, and their subjective sense of its presence would have aligned.  That turned out not to be the case.

I disagree with this claim.  Consciousness is almost certainly a process that runs on matter, in the brain.  Moreover, the belief that "consciousness exists" — whatever that means — is almost always derived from some first-person sense of awareness of that process, whatever it is.  In my investigations, I asked people to attend to the process they were referring to, and describe it.  As far as I can tell, they usually described pretty coherent things that were (almost certainly) actually happening inside their minds.  This raises a question: why is the same word used to refer to these many different subjective experiences of processes that are almost certainly physically real, and distinct, in the brain?

The standard explanation is that they're all facets or failed descriptions of some other elusive "thing" called "consciousness", which is somehow perpetually elusive and hard for scientists to discover.  I'm rejecting that explanation, in favor of a simpler one: consciousness is a word that people use to refer to mental processes that they consider intrinsically valuable upon introspective observation, so they agree with each other when they say "consciousness is valuab

This seems like an important comment to me. Before the discovery of atoms, if you asked people to talk about "the thing stuff was made out of," in terms of moving parts and subprocesses, you'd probably get a lot of different confused responses, each focusing on different aspects.  However, that doesn't mean people are necessarily referring to different concepts - they just have different underlying models of the thing they're all pointing at.

The distinction is that without the initial 0-1 phase transition, none of the other stuff is possible. They are all instances of cumulative cultural accretion, whereas the transition constitutes entering the regime of cumulative cultural accretion (other biological organisms and extant AI systems are not in this regime). If I understand the author correctly, the creation of AGI will increase the pace of cumulative cultural accretion, but will not lead us (or them) to exit that regime (since, according to the point about universality, there is no further re... (read more)

2TekhneMakre
Ok. I think you're confused though; other things we've discussed are pretty much as 0 to 1 as cultural accumulation.

I have to say I agree that there is vagueness in the transition to universality. That is hardly surprising seeing as it is a confusing and contentious subject that involves integrating perspectives on a number of other confusing and contentious subjects (language, biological evolution, cultural evolution, collective intelligence etc...). However, despite the vagueness, I personally still see this transition, from being unable to accrete cultural innovations to being able to do so, as a special one, different in kind from particular technologies that have b... (read more)

2TekhneMakre
Innovations that unlock a broad swath of further abilities could be called "qualitatively more intelligent". But 1. things that seem "narrow", such as many math ideas, are qualitative increases in intelligence in this sense; and 2. there's a lot of innovations that sure seem to obviously be qualitative increases.
4TekhneMakre
No, I don't see a real distinction here. If you increase skull size, you increase the rate at which new abilities are invented and combined. If you come up with a mathematical idea, you advance a whole swath of ability-seeking searches. I listed some other things that increase meta-ability. What's the distinction between various things that hit back to the meta-level?

Okay, sure. If my impression of the original post is right, the author would not disagree with you, but would rather claim that there is an important distinction to be made among these innovations. Namely, one of them is the 0-1 transition to universality, and the others are not. So, do you disagree that such a distinction may be important at all, or merely that it is not a distinction that supports the argument made in the original post?

4TekhneMakre
It would be a large, broad increase in intelligence. There may be other large broad increases in intelligence. I think there are also other large narrow increases, and small broad increases. Jacob seems to be claiming that there aren't further large increases to be had. I think the transition to universality is pretty vague. Wouldn't increasing memory capacity also be a sort of increase in universality?

At the risk of going round in circles, you begin your post by saying you don't care which ones are special or qualitative, and end it by wondering why the author is confident certain kinds of transition are not "major". Is this term, like the others, just standing in for 'significant enough to play a certain kind of role in an "AI leads to doom" argument'? Or does it mean something else? 

I get the impression that you want to avoid too much wrangling over which labels should be applied to which kinds of thing, but then, you brought up the worry about the original post, so I don't quite know what your point is. 

4TekhneMakre
It just means specific innovations that have especially big increases in intelligence. But I think that lots of innovations, such as mathematical ideas, have big increases in intelligence.

I think this is partially a matter of ontological taste. I mean, you are obviously correct that many innovations coming after the transition the author is interested in seem to produce qualitative shifts in the collective intelligence of humanity. On the other hand, if you take the view that all of these are fundamentally enabled by that first transition, then it seems reasonable to treat that as special in a way that the other innovations are not. 

I suppose where the rubber meets the road, if one grants both the special status of the transition to un... (read more)

4TekhneMakre
I don't necessarily care too much about which ones are "special" or "qualitative", though I did say qualitative. The practical question at hand is how much more intelligence can you pack into given compute, and how quickly can you get there. If a mathematical insight allows you to write code that's shorter, and runs significantly faster and with less memory requirements, and gives outputs that are more effective, then we've answered most of the practical question. History seems chock full of such things. But yeah I also agree that there's other more "writ large" sorts of transitions. Nathan points out large scale / connective plasticity. Another one would be full reflectivity: introspection and self-reprogramming. Another one would be the ability copy chunks of code and A/B test them as they function in the whole agent. I don't get why Jacob is so confident that these sorts of things aren't major and/or that there aren't more of them than we've thought of.

One distinction I think is important to keep in mind here is between precision with respect to what software will do and precision with respect to the effect it will have. While traditional software engineering often (though not always) involves knowing exactly what software will do, it is very common that the real-world effects of deploying some software in a real-world environment are impossible to predict with perfect accuracy. This reduces the perceived novelty of unintended consequences (though obviously, a fully-fledged AGI would lead to significantly more novelty than anything that preceded it).

I don't want to cite anyone as your 'leading technical opposition'. My point is that many people who might be described as having 'coherent technical views' would not consider your arguments for what to expect from AGI to be 'technical' at all. Perhaps you can just say what you think it means for a view to be 'technical'?

As you say, readers can decide for themselves what to think about the merits of your position on intelligence versus Chollet's (I recommend this essay by Chollet for a deeper articulation of some of his views: https://arxiv.org/pdf/1911.01... (read more)

Yes, I've read it. Perhaps that does make it a little unfair of me to criticise lack of engagement in this case. I should be more precise: Kudos to Yudkowsky for engaging, but no kudos for coming to believe that someone having a very different view to the one he has arrived at must not have a 'coherent technical view'.

I'd consider myself to have easily struck down Chollet's wack ideas about the informal meaning of no-free-lunch theorems, which Scott Aaronson also singled out as wacky.  As such, citing him as my technical opposition doesn't seem good-faith; it's putting up a straw opponent without much in the way of argument and what there is I've already stricken down.  If you want to cite him as my leading technical opposition, I'm happy enough to point to our exchange and let any sensible reader decide who held the ball there; but I would consider it intellectually dishonest to promote him as my leading opposition.

Eliezer: Well, the person who actually holds a coherent technical view, who disagrees with me, is named Paul Christiano.

What does Yudkowsky mean by 'technical' here? I respect the enormous contribution Yudkowsky has made to these discussions over the years, but I find his ideas about who counts as a legitimate dissenter from his opinions utterly ludicrous. Are we really supposed to think that Francois Chollet, who created Keras, is the main contributor to TensorFlow, and designed the ARC dataset (demonstrating actual, operationalizable knowledge about the ... (read more)

3Lauro Langosco
Maybe Francois Chollet has coherent technical views on alignment that he hasn't published or shared anywhere (the blog post doesn't count, for reasons that are probably obvious if you read it), but it doesn't seem fair to expect Eliezer to know / mention them.

He wrote a whole essay responding specifically to Chollet! https://intelligence.org/2017/12/06/chollet/

Taleuntum1914

I upvoted, because these are important concerns overall, but this sentence stuck out to me:

The fact that Yudkowsky doesn't even know enough about Chollet to pronounce his name displays a troubling lack of effort to engage seriously with opposing views.

I'm not claiming that Yudkowsky does display a troubling lack of effort to engage seriously with opposing views or he does not display such, but surely this can be decided more accurately by looking at his written output online than at his ability to correctly pronounce names in languages he is not native in.... (read more)

This analogy is misleading because it pumps the intuition that we know how to generate the algorithmic innovations that would improve future performance, much as we know how to tie our shoelaces once we notice they are untied. This is not the case. Research programmes can and do stagnate for long periods because crucial insights are hard to come by and hard to implement correctly at scale. Predicting the timescale on which algorithmic innovations occur is a very different proposition from predicting the timescale on which it will be feasible to increase parameter count.

As some other commenters have said, the analogy with other species (flowers, ants, beavers, bears) seems flawed. Human beings are already (limited) generally intelligent agents. Part of what that means is that we have the ability to direct our cognitive powers to arbitrary problems in a way that other species do not (as far as we know!). To my mind, the way we carelessly destroy other species' environments and doom them to extinction is a function of both the disparity in power and the disparity in generality, not just the former. That is not to say t... (read more)