Comments

Good question! I would find it plausible that it would have changed, except maybe if the people you'd call were in their fifties or older.

Based on the link, it seems you follow the Theravada tradition. 

For what it's worth, I don't really follow any one tradition, though Culadasa does indeed have a Theravada background.

Yeah, some Buddhist traditions do make those claims. The teachers and practitioners who I'm most familiar with and trust the most tend to reject those models, sometimes quite strongly (e.g. Daniel Ingram here). Also, near the end of his life, Culadasa came to think that even though it might at one point have seemed like he had predominantly positive emotions in the way that some schools suggested, in reality he had just been repressing his negative emotions, with harmful consequences.

Culadasa: As a result of my practice, I had reached a point where emotions would arise but they really had no power over me, but I could choose to allow those emotions to express themselves if they served a purpose. Well, it’s sort of a downweighting of emotions – negative emotions were strongly downweighted, and positive emotions were not downweighted at all. So this was the place I was coming from as a meditation teacher. I just never really experienced anger; when something would cause some anger to arise, I’d notice it and let go of it, and, you know, it wasn’t there. Negative emotions in general were just not part of my life anymore. So it was a process of getting in touch with a lot of these emotions that, you know, I hadn’t been making space for because I saw them as unhealthy, unhelpful, so on and so forth.

Michael: So, in essence, you had bypassed them.

Culadasa: Yes, it’s a bypassing. I think it’s a very common bypassing, too, when somebody reaches this particular stage on the path. I mean, this is a bit of a digression, but I think it maybe helps to put the whole thing into perspective, the rest of our conversation into perspective…

Michael: Please digress.

Culadasa: Okay. So this is a stage at which the sense of being a separate self completely disappears. I mean, prior to that, at stream entry, you know, there’s no more attachment to the ego, the ego becomes transparent, but you still have this feeling that I’m a separate self; it still produces craving; you have to work through that in the next path, and so on and so forth. But this is a stage where that very primitive, that very primal sense of being a separate self falls away. Now, what I know about this from a neuroscience point of view is that there’s a part of the brainstem which was the earliest concentration of neurons that was brain-like in the evolution of brains, and there are nuclei there that were responsible for maintaining homeostasis of the body, and they still do that today. One of their major purposes is to regulate homeostasis in the body, blood pressure, heart rate, oxygenation of the blood, you name it, just every aspect of internal bodily maintenance. With the subsequent development of the emotional brain, the structures that are referred to as the limbic system, evolution provided a way to guide animals’ behaviors on the basis of emotions and so these same nuclei then created ascending fibers into this limbic system, from the brainstem into these new neural structures that constituted the emotional brain.

Michael: So this very old structure that regulated the body linked up with the new emotional structures.

Culadasa: Right. It linked up with it, and the result was a sense of self. Okay? You can see the enormous value of this to an animal, to an organism. A sense of self. My goodness. So now these emotions can operate in a way that serves to improve the survival, reproduction, everything else of this self, right? Great evolutionary advance. So now we have organisms with a sense of self. Then the further evolution of cerebral cortex, all of these other higher structures, then that same sense of self became integrated into that as well. So there we have the typical human being with this very strong, very primal sense that “I am me. I am a separate self.” We can create all kinds of mental constructs around this, but even cats and dogs and deer and mice and lizards and things like that have this sense of self. We elaborate an ego on top of it. So there’s these two aspects to self in a human being. One is the ego self, the mental construct that’s been built around this more primal sense of self. So this is a stage at which that primal sense of self disappears and what usually seems to happen is, at the same time, there is a temporary disappearance of all emotions. I think that we’ll probably eventually find out that the neural mechanism by which we bring about this shift, that these two things are linked, because the sense of self is – its passageway to the higher brain centers, which constitute the field of conscious awareness that we live in and all of the unconscious drives that we’re responding to, the limbic system, the emotional brain, is the link.

Michael: Yes.

Culadasa: So something happens that interrupts that link. The emotions come back online, but they come back online in a different way from that point. So instead of being overcome by fear, anger, lust, joy, whatever, these things arise and they’re something that you can either let go of or not. [laughs] That’s the place where I was.

Michael: They seem very ephemeral…

Culadasa: Yes, right. They’re very ephemeral, and very easy to deal with, and there is a tendency for other people to see you as less emotional and truly you are because you’ve downregulated a lot of more negative emotions. But you’re by no means nonemotional; you’re still human, you still have the full gamut of human emotions available to you. But you do get out of the habit of giving much leeway to certain kinds of emotions. And the work that I was doing with Doug pushed me in the direction of, “Let’s go ahead and let’s experience some of those emotions. Let’s see what it feels like to experience the dukkha of wanting things to be different than the way they are.” So that’s what we did. And I started getting in touch with these emotions and their relationship to my current life situation where I wasn’t fulfilling my greatest aspirations because I was doing a lot of things that – stuff that had to be done, but that I had no interest in, but I had to do it and that’s what occupied my time.

I'm guessing that something similar is what's actually happening in a lot of the schools claiming complete elimination of all negative feelings. Insight practices can be used in ways that end up bypassing or suppressing a lot of one's emotions, but the negative feelings are actually still having effects on the person; they just go unnoticed.

If you think about it, you can't be sad and not mind it. You can't be angry but not mind it. 

This disagrees with my experience, and with the experience of several other people I know.

The biggest question on my mind right now is, what does your friend think of this post now that you've written it? 

Agree. This connects to why I think that the standard argument for evolutionary misalignment is wrong: it's meaningless to say that evolution has failed to align humans with inclusive fitness, because fitness is not any one constant thing. Rather, what evolution can do is to align humans with drives that in specific circumstances promote fitness. And if we look at how well the drives we've actually been given generalize, we find that they have largely continued to generalize quite well, implying that while there's likely to still be a left turn, it may very well be much milder than is commonly implied.

Ending a relationship/marriage doesn't necessarily imply that you no longer love someone (I haven't been married but I do still love several of my ex-partners), it just implies that the arrangement didn't work out for one reason or another.

I would guess that getting space colonies to the kind of state where they could support significant human habitation would be a multi-decade project, even with superintelligence? Especially taking into account that they won't have much nature without significant terraforming efforts, and quite a few people would find any colony without forests etc. to be intrinsically dystopian.

hmm, I don't understand something, but we are closer to the crux :)

Yeah I think there's some mutual incomprehension going on :)

  1. To the question, "Would you update if this experiment is conducted and is successful?" you answer, "Well, it's already my default assumption that something like this would happen". 
  2. To the question, "Is it possible at all?" You answer 70%. 

So, you answer 99-ish% to the first question and 70% to the second question; this seems incoherent.

For me "the default assumption" is anything with more than 50% probability. In this case, my default assumption has around 70% probability.

It seems to me that you don't bite the bullet for the first question if you expect this to happen. Saying, "Looks like I was right," seems to me like you are dodging the question.

Sorry, I don't understand this. What question am I dodging? If you mean the question of "would I update", what update do you have in mind? (Of course, if I previously gave an event 70% probability and then it comes true, I'll update from 70% to ~100% probability of that event happening. But it seems pretty trivial to say that if an event happens then I will update to believing that the event has happened, so I assume you mean some more interesting update.)

Hum, it seems there is something I don't understand; I don't think this violates the law.

I may have misinterpreted you; I took you to be saying "if you expect to see this happening, then you might as well immediately update to what you'd believe after you saw it happen". Which would have directly contradicted "Equivalently, the mere expectation of encountering evidence—before you’ve actually seen it—should not shift your prior beliefs".

I agree I only gave the skim of the proof, it seems to me that if you can build the pyramid, brick by brick, then this solved the meta-problem.

for example, when I give the example of meta-cognition-brick, I say that there is a paper that already implements this in an LLM (and I don't find this mysterious because I know how I would approximately implement a database that would behave like this).

Okay. But that seems more like an intuition than even a sketch of a proof to me. After all, part of the standard argument for the hard problem is that even if you explained all of the observable functions of consciousness, the hard problem would remain. So just the fact that we can build individual bricks of the pyramid isn't significant by itself - a non-eliminativist might be perfectly willing to grant that yes, we can build the entire pyramid, while also holding that merely building the pyramid won't tell us anything about the hard problem nor the meta-problem. What would you say to them to convince them otherwise?

  1. Let's say we implement this simulation in 10 years and everything works the way I'm telling you now. Would you update?

Well, it's already my default assumption that something like this would happen, so the update would mostly just be something like "looks like I was right".

2. What is the probability that this simulation is possible at all? 

You mean one where AIs that were trained with no previous discussion of the concept of consciousness end up reinventing the hard problem on their own? 70% maybe.

If you expect to update in the future, just update now.  

That sounds like it would violate conservation of expected evidence:

... for every expectation of evidence, there is an equal and opposite expectation of counterevidence.

If you expect a strong probability of seeing weak evidence in one direction, it must be balanced by a weak expectation of seeing strong evidence in the other direction. If you’re very confident in your theory, and therefore anticipate seeing an outcome that matches your hypothesis, this can only provide a very small increment to your belief (it is already close to 1); but the unexpected failure of your prediction would (and must) deal your confidence a huge blow. On average, you must expect to be exactly as confident as when you started out. Equivalently, the mere expectation of encountering evidence—before you’ve actually seen it—should not shift your prior beliefs.

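To make the quoted principle concrete, here is a minimal Python sketch of how it cashes out for the 70% figure discussed above; the likelihood values are assumptions made up purely for illustration, since the discussion doesn't pin them down.

```python
# A minimal numeric sketch of conservation of expected evidence.
# The 70% prior is the figure from this thread; the likelihoods below are
# made-up assumptions purely for illustration.

p_h = 0.70              # prior: the proposed simulation/experiment is possible and works out
p_e_given_h = 0.95      # assumed chance of seeing the successful result if that's true
p_e_given_not_h = 0.05  # assumed chance of an apparent success even if it's false

# Total probability of seeing the evidence
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posteriors after seeing success vs. failure (Bayes' theorem)
posterior_if_e = p_e_given_h * p_h / p_e
posterior_if_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

# The expected posterior equals the prior: the likely small update upward is
# exactly balanced by the unlikely large update downward.
expected_posterior = p_e * posterior_if_e + (1 - p_e) * posterior_if_not_e

print(f"P(E) = {p_e:.3f}")
print(f"P(H|E) = {posterior_if_e:.3f}, P(H|not E) = {posterior_if_not_e:.3f}")
print(f"Expected posterior = {expected_posterior:.3f} (equals the prior {p_h:.2f})")
```

With these illustrative numbers, the likely success would nudge the 70% up only slightly, while the unlikely failure would drag it down a lot, and in expectation the two exactly cancel; that's why merely expecting the result isn't by itself a reason to update now.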

To me, this thought experiment solves the meta-problem and so dissolves the hard problem.

I don't see how it does? It just suggests a possible approach by which the meta-problem could be solved in the future.

Suppose you told me that you had figured out how to create a cheap and scalable source of fusion power. I'd say: oh wow, great! What's your answer? And you said that, well, you have this idea for a research program that might, in ten years, produce an explanation of how to create cheap and scalable fusion power.

I would then be disappointed, because I thought you had an explanation that would let me build fusion power right now. Instead, you're just proposing another research program that hopes to one day achieve fusion power. I would say that you don't actually have it figured out yet; you just think you have a promising lead.

Likewise, if you tell me that you have a solution to the meta-problem, then I would expect an explanation that lets me understand the solution to the meta-problem today. Not one that lets me do it ten years in the future, when we investigate the logs of the AIs to see what exactly it was that made them think the hard problem was a thing.

I also feel like this scenario is presupposing the conclusion - you feel that the right solution is an eliminativist one, so you say that once we examine the logs of the AIs, we will find out what exactly made them believe in the hard problem in a way that solves the problem. But a non-eliminativist might just as well claim that once we examine the logs of the AIs, we will eventually be forced to conclude that we can't find an answer there, and that the hard problem still remains mysterious.

Now personally I do lean toward thinking that examining the logs will probably give us an answer, but that's just my/your intuition against the non-eliminativist's intuition. Just having a strong intuition that a particular experiment will prove us right isn't the same as actually having the solution.

I quite liked the way that this post presented your intellectual history on the topic; it was interesting to see where you're coming from.

That said, I didn't quite understand your conclusion. Starting from Chap. 7, you seem to be saying something like, "everyone has a different definition for what consciousness is; if we stop treating consciousness as being a single thing and look at each individual definition that people have, then we can look at different systems and figure out whether those systems have those properties or not".

This makes sense, but - as I think you yourself said earlier in the post - the hard problem isn't about explaining every single definition of consciousness that people might have? Rather, it's about one specific question, namely:

The explanatory gap in the philosophy of mind, represented by the cross above, is the difficulty that physicalist theories seem to have in explaining how physical properties can give rise to a feeling, such as the perception of color or pain.

You cite Critch's list of definitions people have for consciousness, but none of the three examples that you quoted seem to be talking about this property, so I don't see how they're related or why you're bringing them up.

With regard to this part:

If they do reinvent the hard problem, it would be a big sign that the AIs in the simulation are “conscious” (in the reconstructed sense).

I assert that this experiment would solve the hard problem, because we could look at the logs,[4] and the entire causal history of the AI that utters the words "Hard pro-ble-m of Con-scious-ness" would be understandable. Everything would just be plainly understandable mechanistically, and David Chalmers would need to surrender.

This part seems to be quite a bit weaker than what I read you to be saying earlier. I interpreted most of the post to be saying "I have figured out the solution to the problem and will explain it to you". But this bit seems to be weakening it to "in the future, we will be able to create AIs that seem phenomenally conscious and solve the hard problem by looking at how they became that". Saying that we'll figure out an answer in the future when we have better data isn't actually giving an answer now.
