I think I assign close to zero probability to the first hypothesis. Brains are not that fast at thinking, and while sometimes your system 1 can make snap judgements, brains don't reevaluate huge piles of evidence in milliseconds. These kinds of things take time, and that means if you are dying, you will die before you get to finish your life review.
My guess is that our main crux lies somewhere around here. If I'd thought the life review experience involved tons and tons of "thinking", or otherwise some form of active cognitive processing, I would als...
I'm curious about your models of why people might experience these kinds of states.
One crucial aspect of my model is that these kinds of states get experienced when the psychological defense mechanisms that keep us dissociated get disarmed. If Alice and Bob are married, and Bob is having an affair with Carol, it's very common for Alice to filter out all the evidence that Bob is having an affair. When Alice finally confronts the reality of Bob's affair, the psychological motive for filtering out the evidence that Bob is having an affair gets rendered o...
While I am pretty skeptical of most variations on this hypothesis, I do think it makes sense to distinguish between at least two different hypotheses:
I thin...
@habryka, responding to your agreement with this claim:
a majority of the anecdata about reviewing the details of one's life from a broader vantage point are just culturally-mediated hallucinations, like alien abductions.
I think my real crux is that I've had experiences adjacent to near-death experiences on ayahuasca, during which I've directly experienced some aspects of phenomena reported in life reviews (like re-experiencing memories in relatively high-res from a place where my usual psychological defenses weren't around to help me dissociate...
Thanks a lot for sharing your thoughts! A couple of thoughts in response:
I suspect that the principles you describe around the "experience of tanha" go well beyond human or even mammalian psychology.
That's how I see it too. Buddhism says tanha is experienced by all non-enlightened beings, which probably includes some unicellular organisms. If I recall correctly, some active inference folk I've brainstormed with consider tanha a component of any self-evidencing process with counterfactual depth.
...Forgiveness (non-judgment?) may then need a c
I really like the directions that both of you are thinking in.
But I think the "We suffered and we forgive, why can't you?" is not the way to present the idea.
I agree. I think of it more as like "We suffered and we forgave and found inner peace in doing so, and you can too, as unthinkable as that may seem to you".
I think the turbo-charged version is "We suffered and we forgave, and we were ultimately grateful for the opportunity to do so, because it just so deeply nourishes our souls to know that we can inspire hope and inner peace in others goi...
Here's something possibly relevant I wrote in a draft of this post that I ended up cutting out, because people seemed to keep getting confused about what I was trying to say. I'm including this in the hopes that it will clarify rather than further confuse, but I will warn in advance that the latter may happen instead...
...The Goodness of Reality hypothesis is closely related to the Buddhist claim of non-self, which says that any fixed and unchanging sense of self we identify with is illusory; I partially interpret “illusory” to mean “causally downstream
Your section on "tanha" sounds roughly like projecting value into the world, and then mentally latching on to an attractive high-value fabricated option.
I would say that the core issue has more to do with the mental latching (or at least a particular flavor of it, which is what I'm claiming tanha refers to) than with projecting value into the world. I'm basically saying that any endorsed mental latching is downstream of an active blind spot, regardless of whether it's making the error of projecting value into the world.
I think this probably brings us...
I'm open to the hypothesis that the life review is basically not a real empirical phenomenon, although I don't currently find that very plausible. I do think it's probably true that a lot of the detailed characteristics ascribed to life reviews are not nearly as universal as some near-death experience researchers claim they are, but it seems pretty implausible to me that a majority of the anecdata about reviewing the details of one's life from a broader vantage point are just culturally-mediated hallucinations, like alien abductions. (That's what I'm under...
For what it's worth, I found myself pretty compelled by a theory someone told me years ago, that alien abductions are flashbacks to birth and/or diaper changes:
Regarding your second point, I'm leaving this comment as a placeholder to indicate my intention to give a proper response at some point. My views here have some subtlety that I want to make sure I unpack correctly, and it's getting late here!
In response to your third point, I want to echo ABlue's comment about the compatibility of the trapped prior view and the evopsych view. I also want to emphasize that my usage of "trapped prior" includes genetically pre-specified priors, like a fear of snakes, which I think can be overridden.
In any case, I don't see why priors that predispose us to e.g. adultery couldn't be similarly overridden. I wonder if our main source of disagreement has to do with the feasibility of overriding "hard-wired" evolutionary priors?
In response to your first point, I think of moral codes as being contextual more than I think of them as being subjective, but I do think of them as fundamentally being about pragmatism ("let's all agree to coordinate in ABC way to solve PQR problem in XYZ environment, and socially punish people who aren't willing to do so"). I also think religions often make the mistake of generalizing moral codes beyond the contexts in which they arose as helpful adaptations.
I think of decision theory as being the basis for morality -- see e.g. Critch's take here a...
I do draw a distinction between value and ethics. Although my current best guess is that decision theory does in some sense reduce ethics to a subset of value, I do think it's a subset worth distinguishing. For example, I still have a concept of evaluating how ethical someone is, based on how good they are at paying causal costs for larger acausal gains.
I think the Goodness of Reality principle is maybe a bit confusingly named, because it's not really a claim about the existence of some objective notion of Good that applies to reality per se, and is ...
Thanks a lot for sharing your experience! I would be very curious for you to further elaborate on this part:
Eventually this led to some moments of insight when I realized just how trapped by my own ontology I had become, and then found a way through to a new way of seeing the world. These happened almost instantly, like a dam breaking and releasing all the realizations that had been held back.
But in order for that to be plausible, you would need a reason why the almost-truths they found are so goddamn antimemetic that the most studied and followed people in history weren't able to make them stick.
A few thoughts:
There are important insights and claims from religious sources that seem to capture psychological and social truths that aren't yet fully captured by science. At least some of these phenomena might be formalizable via a better understanding of how the brain and the mind work, and to that end predictive processing (and other theories of that sort) could be useful to explain the phenomena in question.
Yes, I agree with this claim.
...You spoke of wanting formalization but I wonder if the main thing is really the creation of a science, though of cour
I'm not sure what you mean by that, but the claim "many interpretations of religious mystical traditions converge because they exploit the same human cognitive flaws" seems plausible to me. I mostly don't find such interpretations interesting, and don't think I'm interpreting religious mystical traditions in such a way.
If I change "i.e. the pluralist focus Alex mentions" to "e.g. the pluralist focus Alex mentions" does that work? I shouldn't have implied that all people who believe in heuristics recommended by many religions are pluralists (in your sense). But it does seem reasonable to say that pluralists (in your sense) believe in heuristics recommended by many religions, unless I'm misunderstanding you. (In the examples you listed these would be heuristics like "seek spiritual truth", "believe in (some version of) God", "learn from great healers", etc.)
If your main po...
So my overall position here is something like: we should use religions as a source of possible deep insights about human psychology and culture, to a greater extent than LessWrong historically has (and I'm grateful to Alex for highlighting this, especially given the social cost of doing so).
Thanks a lot for the kind words!
IMO this all remains true even if we focus on the heuristics recommended by many religions, i.e. the pluralistic focus Alex mentions.
I think we're interpreting "pluralism" differently. Here are some central illustrations of wh...
Perhaps these concerns would be addressed by examples of the kind of statement you have in mind.
I'm not sure exactly what you're asking -- I wonder how much my reply to Adam Shai addresses your concerns?
I will also mention this quote from the category theorist Lawvere, whose line of thinking I feel pretty aligned with:
...It is my belief that in the next decade and in the next century the technical advances forged by category theorists will be of value to dialectical philosophy, lending precise form with disputable mathematical models to ancient ph
I'm not sure how much this answers your question, but:
It's relevant that I think of the type signature of religious metaphysical claims as being more like "informal descriptions of the principles of consciousness / the inner world" (analogously to informal descriptions of the principles of the natural world) than like "ideology or narrative". Lots of cultures independently made observations about the natural world, and Newton's Laws in some sense could be thought of as a "Rosetta Stone" for these informal observations about the natural world.
Yeah, I also see broad similarities between my vision and that of the Meaning Alignment people. I'm not super familiar with the work they're doing, but I'm pretty positive on the little bits of it I've encountered. I'd say that our main difference is that I'm focusing on ungameable preference synthesis, which I think will be needed to robustly beat Moloch. I'm glad they're doing what they're doing, though, and I wouldn't be shocked if we ended up collaborating at some point.
Thanks for the elaboration. Your distinction about creating vs reconciling preferences seems to hinge on the distinction between "ur-want" and "proper want". I'm not really drawing a type-level distinction between "ur-want" and "proper want", and think of each flower as itself being a flowerbud that could further bloom. In my example of Alice wanting X, Bob wanting Y, and Carol proposing Z, I'd thought of X and Y as both "proper wants" and "ur-wants that bloomed into Z".
Thanks, this really warmed my heart to read :) I'm glad you appreciated all those details!
I don't really get how what you just said relates to creating vs reconciling preferences. Can you elaborate on that a bit more?
I'm not sure how you're interpreting the distinction between creating a preference vs reconciling a preference.
Suppose Alice wants X and Bob wants Y, and X and Y appear to conflict, but Carol shows up and proposes Z, which Alice and Bob both feel like addresses what they'd initially wanted from X and Y. Insofar as Alice and Bob both prefer Z over X and Y and hadn't even considered Z beforehand, in some sense Carol created this preference for them; but I also think of this preference for Z as reconciling their conflicting preferences X and Y.
People sometimes say that AGI will be like a second species; sometimes like electricity. The truth, we suspect, lies somewhere in between. Unless we have concepts which let us think clearly about that region between the two, we may have a difficult time preparing.
I just want to strongly endorse this remark made toward the end of the post. In my experience, the standard fears and narratives around AI doom invoke "second species" intuitions that I think stand on much shakier ground than is commonly acknowledged. (Things can still get pretty bad without a "se...
Thanks, Alex. Any connections between this and CTMU? (I'm in part trying to evaluate CTMU by looking at whether it has useful implications for an area that I'm relatively familiar with.)
No direct connections that I'm aware of (besides non-classical logics being generally helpful for understanding the sorts of claims the CTMU makes).
Re: point 7, I found Jessica Taylor's take on counterfactuals in terms of linear logic pretty compelling.
Good question! Yeah, there's nothing fundamentally quantum about this effect. But if the simulator wants to focus on universes with 1 & 2 fixed (e.g. if they're trying to calculate the distribution of superintelligences across Tegmark IV), the PRNG (along with the initial conditions of the universe) seems like a good place for a simulator to tweak things.
It is not clear to me that this would result in a lower Kolmogorov complexity at all. Such an algorithm could of course use a pseudo-random number generator for the vast majority of quantum events which do not affect p(ASI) (like the creation of CMB photons), but this is orthogonal to someone nudging the relevant quantum events towards ASI. For these relevant events, I am not sure that the description "just do whatever favors ASI" is actually shorter than just the sequence of events.
Hmm, I notice I may have been a bit unclear in my original post. When I'd sai...
This. Physics runs on falsifiable predictions. If 'consciousness can affect quantum outcomes' is any more true than the classic 'there is an invisible dragon in my garage', then discovering that fact would seem easy from an experimentalist standpoint. Sources of quantum randomness (e.g. weak source + detector) are readily available, so any claimant who thinks they can predict or affect their outcomes could probably be tested initially for a few hundred dollars.
Yes, I'm also bearish on consciousness affecting quantum outcomes in ways that are as overt and measurab...
I'll take a stab at this. Suppose we had strong a priori reasons for thinking it's in our logical past that we'll have created a superintelligence of some sort. Let's suppose that some particular quantum outcome in the future can get chaotically amplified, so that in one Everett branch humanity never builds any superintelligence because of some sort of global catastrophe (say with 99% probability, according to the Born rule), and in some other Everett branch humanity builds some kind of superintelligence (say with 1% probability, according to the Born rule...
If we performed a trillion 50/50 quantum coin flips, and found a program with K-complexity far less than a trillion that could explain these outcomes, that would be an example of evidence in favor of this hypothesis. (I don't think it's very likely that we'll be able to find a positive result if we run that particular experiment; I'm naming it more to illustrate the kind of thing that would serve as evidence.) (EDIT: This would only serve as evidence against quantum outcomes being truly random. In order for it to serve as evidence in favor of quantum outco...
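To make "K-complexity far less than the sequence length" slightly more concrete, here's a toy sketch (my own illustration, not part of the proposed experiment): since Kolmogorov complexity isn't computable, compressed size under a real compressor like zlib serves as a crude computable upper bound. A genuinely random string stays near its raw length under compression, while a highly structured one collapses.

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Crude computable upper bound on description length via zlib."""
    return len(zlib.compress(data, level=9))

n = 100_000  # a small stand-in for the "trillion coin flips" in the comment

# Random-looking flips: deflate can't compress these below ~raw length.
random_flips = os.urandom(n)

# A highly structured sequence: compresses to a tiny fraction of its length.
structured_flips = bytes(i % 2 for i in range(n))

assert compressed_size(random_flips) > 0.9 * n
assert compressed_size(structured_flips) < 0.01 * n
```

Finding that real quantum coin-flip data behaved like `structured_flips` rather than `random_flips` would be the surprising result; as noted above, even that would only be evidence against true randomness, not direct evidence about what's doing the choosing.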
Shortly after publishing this, I discovered something written by John Wheeler (whom Chris Langan cites) that feels thematically relevant. From Law Without Law:
I was hoping people other than Jessica would share some specific curated insights they got. Syndiffeonesis is in fact a good insight.
I finally wrote one up! It ballooned into a whole LessWrong post.
It seems if I only read the main text, the obvious interpretation is that points are events and the circles restrict which other events they can interact with.
This seems right to me, as far as I can tell, with the caveat that "restrict" (/ "filter") and "construct" are two sides of the same coin, as per constructive-filtrative duality.
From the diagram text, it seems he is instead saying that each circle represents entangled wavefunctions of some subset of objects that generated the circle.
I think each circle represents the entangled wavefunctions of ...
Great. Yes, I think that's the thing to do. Start small! I (and presumably others) would update a lot from a new piece of actual formal mathematics from Chris's work. Even if that work was, by itself, not very impressive.
(I would also want to check that that math had something to do with his earlier writings.)
I think we're on exactly the same page here.
...Please be prepared for the possibility that Chris is very smart and creative, and that he's had some interesting ideas (e.g. Syndiffeonesis), but that his framework is more of an interlocked collection
Except, I can already predict you're going to say that no piece of his framework can be understood without the whole. Not even by making a different smaller framework that exists just to showcase the well-ordering alternative. It's a little suspicious.
False! :P I think no part of his framework can be completely understood without the whole, but I think the big pictures of some core ideas can be understood in relative isolation. (Like syndiffeonesis, for example.) I think this is plausibly true for his alternatives to well-ordering as well.
...If you're g
I'd categorize this section as "not even wrong"; it isn't doing anything formal enough to have a mistake in it.
I think it's an attempt to gesture at something formal within the framework of the CTMU that I think you can only really understand if you grok enough of Chris's preliminary setup. (See also the first part of my comment here.)
(Perhaps you'd run into issues with making the sets well-ordered, but if so he's running headlong into the same issues.)
A big part of Chris's preliminary setup is around how to sidestep the issues around making the sets well-...
Thanks a lot for posting this, Jessica! A few comments:
It's an alternative ontology, conceiving of reality as a self-processing language, which avoids some problems of more mainstream theories, but has problems of its own, and seems quite underspecified in the document despite the use of formal notation.
I think this is a reasonable take. My own current best guess is that the contents of the document uniquely specifies a precise theory, but that it's very hard to understand what's being specified without grokking the details of all the arguments...
In particular, I think this manifests in part as an extreme lack of humility.
I just want to note that, based on my personal interactions with Chris, I experience Chris's "extreme lack of humility" similarly to how I experience Eliezer's "extreme lack of humility":
I agree with this.
I've spent 40+ hours talking with Chris directly, and for me, a huge part of the value also comes from seeing how Chris synthesizes all these ideas into what appears to be a coherent framework.
Here's my current understanding of what Scott meant by "just a little off".
I think exact Bayesian inference via Solomonoff induction doesn't run into the trapped prior problem. Unfortunately, bounded agents like us can't do exact Bayesian inference via Solomonoff induction, since we can only consider a finite set of hypotheses at any given point. I think we try to compensate for this by recognizing that this list of hypotheses is incomplete, and appending it with new hypotheses whenever it seems like our current hypotheses are doing a sufficiently te...
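Here's a toy sketch of the compensation move I'm describing (my own illustration; the coin-bias setup and the poor-fit threshold are made up for the example): a bounded agent does exact Bayesian updating over a finite hypothesis list, gets "trapped" on the least-bad hypothesis it happens to have, then notices the poor fit and appends a new hypothesis.

```python
import math

def log_lik(bias, heads, tails):
    """Log-likelihood of the observed coin flips under a given bias."""
    return heads * math.log(bias) + tails * math.log(1 - bias)

def posterior(hypotheses, heads, tails):
    """Exact Bayesian update over a finite hypothesis list (uniform prior)."""
    logs = [log_lik(b, heads, tails) for b in hypotheses]
    m = max(logs)                       # log-sum-exp trick for stability
    weights = [math.exp(l - m) for l in logs]
    z = sum(weights)
    return [w / z for w in weights]

heads, tails = 10, 90        # the data strongly suggest a bias near 0.1...
hypotheses = [0.5, 0.9]      # ...but the agent never considered that

p = posterior(hypotheses, heads, tails)
assert p[0] > 0.99           # "trapped": 0.5 wins by default, despite fitting badly

# The compensation move: notice that even the best available hypothesis
# explains the data far worse than the empirical frequency would, and append it.
empirical = heads / (heads + tails)
best = max(log_lik(b, heads, tails) for b in hypotheses)
if log_lik(empirical, heads, tails) - best > 5:   # ad-hoc poor-fit threshold
    hypotheses.append(empirical)

p = posterior(hypotheses, heads, tails)
assert p[-1] > 0.99          # the newly appended hypothesis now dominates
```

The point of the sketch is just that the update rule itself is exact Bayes throughout; the bounded-agent failure and its fix both live in the management of the hypothesis list.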
Yep! I addressed this point in footnote [3].
I just want to share another reason I find this n=1 anecdote so interesting -- I have a highly speculative inside view that the abstract concept of self provides a cognitive affordance for intertemporal coordination, resulting in a phase transition in agentiness only known to be accessible to humans.
Hmm, I'm not sure I understand what point you think I was trying to make. The only case I was trying to make here was that much of our subjective experience which may appear uniquely human might stem from our language abilities, which seems consistent with Helen Keller undergoing a phase transition in her subjective experience upon learning a single abstract concept. I'm not getting what age has to do with this.
Questions #2 and #3 seem positively correlated – if the thing that humans have is important, it's evidence that architectural changes matter a lot.
Not necessarily. For example, it may be that language ability is very important, but that most of the heavy lifting in our language ability comes from general learning abilities + having a culture that gives us good training data for learning language, rather than from architectural changes.
I remembered reading about this a while back and updating on it, but I'd forgotten about it. I definitely think this is relevant, so I'm glad you mentioned it -- thanks!
Isn't the more analogous argument "If I'm thinking about how to pick up tofu with a fork, and it feels good when I imagine doing that, then when I analogize to picking up feta with a fork, it would also feel good when I imagine that"? This does seem valid to me, and also seems more analogous to the argument you'd compared the counter-to-common-s...