All of zhukeepa's Comments + Replies

“If I’m thinking about how to pick up tofu with a fork, I might analogize to how I might pick up feta with a fork, and so if tofu is yummy then I’ll get a yummy vibe and I’ll wind up feeling that feta is yummy too.”

Isn't the more analogous argument "If I'm thinking about how to pick up tofu with a fork, and it feels good when I imagine doing that, then when I analogize to picking up feta with a fork, it would also feel good when I imagine that"? This does seem valid to me, and also seems more analogous to the argument you'd compared the counter-to-common-s... (read more)

Steven Byrnes
Hmm, maybe we should distinguish two things:

  • (A) I find the feeling of picking up the tofu with the fork to be intrinsically satisfying—it feels satisfying and empowering to feel the tines of the fork slide into the tofu.
  • (B) I don’t care at all about the feeling of the fork sliding into the tofu; instead I feel motivated to pick up tofu with the fork because I’m hungry and tofu is yummy.

For (A), the analogy to picking up feta is logically sound—this is legitimate evidence that picking up the feta will also feel intrinsically satisfying. And accordingly, my brain, having made the analogy, correctly feels motivated to pick up feta.

For (B), the analogy to picking up feta is irrelevant. The dimension along which I’m analogizing (how the fork slides in) is unrelated to the dimension which constitutes the source of my motivation (tofu being yummy). And accordingly, if I like the taste of tofu but dislike feta, then I will not feel motivated to pick up the feta, not even a little bit, let alone to the point where it’s determining my behavior.

The lesson here (I claim) is that our brain algorithms are sophisticated enough to not just note whether an analogy target has good or bad vibes, but rather whether the analogy target has good or bad vibes for reasons that legitimately transfer back to the real plan under consideration.

So circling back to empathy, if I were a sociopath, then “Ahmed getting punched” might still kinda remind me of “me getting punched”, but the reason I dislike “me getting punched” is that it’s painful, whereas “Ahmed getting punched” is not painful. So even if “me getting punched” momentarily popped into my sociopathic head, I would then immediately say to myself “ah, but that’s not something I need to worry about here”, and whistle a tune and carry on with my day.

Remember, empathy is a major force. People submit to torture and turn their lives upside down over feelings of empathy. If you want to talk about phenomena like “somethin
zhukeepa

I think I assign close to zero probability to the first hypothesis. Brains are not that fast at thinking, and while sometimes your system 1 can make snap judgements, brains don't reevaluate huge piles of evidence in milliseconds. These kinds of things take time, and that means if you are dying, you will die before you get to finish your life review.

My guess is that our main crux lies somewhere around here. If I'd thought the life review experience involved tons and tons of "thinking", or otherwise some form of active cognitive processing, I would als... (read more)

Ivan Vendrov
I know this isn't the central point of your life reviews section, but I'm curious whether your model has any lower bound on life review timing - if not minutes to hours, at least seconds? milliseconds? (1 ms being a rough lower bound on the time for a signal to travel between two adjacent neurons.) If it's at least milliseconds, it opens the strange metaphysical possibility of certain deaths (e.g. from very intense explosions) being exempt from life reviews.
Kaj_Sotala
That doesn't seem to match the account in the trip report you linked, though, which seems to involve processing a lot of things in a time-consuming linear fashion. E.g.:

I'm curious about your models of why people might experience these kinds of states.

One crucial aspect of my model is that these kinds of states get experienced when the psychological defense mechanisms that keep us dissociated get disarmed. If Alice and Bob are married, and Bob is having an affair with Carol, it's very common for Alice to filter out all the evidence that Bob is having an affair. When Alice finally confronts the reality of Bob's affair, the psychological motive for filtering out the evidence that Bob is having an affair gets rendered o... (read more)

habryka

While I am pretty skeptical of most variations on this hypothesis, I do think it makes sense to distinguish between at least two different hypotheses: 

  • People's brains meaningfully re-evaluate and propagate a huge amount of evidence in a "split second" when they are close to dying
  • When people get close to dying, their perspective shifts in a way that causes lasting psychological change, which they in retrospect interpret as a kind of life-review, but the actual cognitive processing going on here is happening over the course of minutes and maybe hours

I thin... (read more)

@habryka, responding to your agreement with this claim: 

a majority of the anecdata about reviewing the details of one's life from a broader vantage point are just culturally-mediated hallucinations, like alien abductions. 

I think my real crux is that I've had experiences adjacent to near-death experiences on ayahuasca, during which I've directly experienced some aspects of phenomena reported in life reviews (like re-experiencing memories in relatively high-res from a place where my usual psychological defenses weren't around to help me dissociate... (read more)

habryka
It wouldn't surprise me very much if there were psychological states that seem strongly stress-mediated or can be drug-induced which feel a lot like life-reviews, or which get remembered as something like life-reviews. The thing that I think is unlikely is that those states are particularly strongly correlated with someone actually almost dying (or like, I expect it to be a kind of specific subset of people dying, maybe indeed people who have heart attacks in particular). I am very skeptical that these experiences are in some sense 'caused' by almost dying in a way that your explanation would require:

Thanks a lot for sharing your thoughts! A couple of thoughts in response: 

I suspect that the principles you describe around the "experience of tanha" go well beyond human or even mammalian psychology. 

That's how I see it too. Buddhism says tanha is experienced by all non-enlightened beings, which probably includes some unicellular organisms. If I recall correctly, some active inference folk I've brainstormed with consider tanha a component of any self-evidencing process with counterfactual depth.

Forgiveness (non-judgment?) may then need a c

... (read more)

I really like the directions that both of you are thinking in. 

But I think the "We suffered and we forgive, why can't you?" is not the way to present the idea.

I agree. I think of it more as like "We suffered and we forgave and found inner peace in doing so, and you can too, as unthinkable as that may seem to you". 

I think the turbo-charged version is "We suffered and we forgave, and we were ultimately grateful for the opportunity to do so, because it just so deeply nourishes our souls to know that we can inspire hope and inner peace in others goi... (read more)

Here's something possibly relevant I wrote in a draft of this post that I ended up cutting out, because people seemed to keep getting confused about what I was trying to say. I'm including this in the hopes that it will clarify rather than further confuse, but I will warn in advance that the latter may happen instead...

The Goodness of Reality hypothesis is closely related to the Buddhist claim of non-self, which says that any fixed and unchanging sense of self we identify with is illusory; I partially interpret “illusory” to mean “causally downstream

... (read more)
Mateusz Bagiński
Can't one's terminal values be exactly (mechanistically implemented as) active blind spots? I predict that you would say something like "The difference is that active blind spots can be removed/healed/refactored 'just' by (some kind of) learning, so they're not unchanging as one's terminal values would be assumed to be."?

Your section on "tanha" sounds roughly like projecting value into the world, and then mentally latching on to an attractive high-value fabricated option.

I would say that the core issue has more to do with the mental latching (or at least a particular flavor of it, which is what I'm claiming tanha refers to) than with projecting value into the world. I'm basically saying that any endorsed mental latching is downstream of an active blind spot, regardless of whether it's making the error of projecting value into the world. 

I think this probably brings us... (read more)

zhukeepa
Here's something possibly relevant I wrote in a draft of this post that I ended up cutting out, because people seemed to keep getting confused about what I was trying to say. I'm including this in the hopes that it will clarify rather than further confuse, but I will warn in advance that the latter may happen instead...

I'm open to the hypothesis that the life review is basically not a real empirical phenomenon, although I don't currently find that very plausible. I do think it's probably true that a lot of the detailed characteristics ascribed to life reviews are not nearly as universal as some near-death experience researchers claim they are, but it seems pretty implausible to me that a majority of the anecdata about reviewing the details of one's life from a broader vantage point are just culturally-mediated hallucinations, like alien abductions. (That's what I'm under... (read more)

For what it's worth, I found myself pretty compelled by a theory someone told me years ago, that alien abductions are flashbacks to birth and/or diaper changes:

  • laid on a table, bare walls, bright lights you're staring up at (unnecessary, and unpleasant for a baby, but common in hospitals and some homes)
  • one or more figures crowded around you (parents and/or doctors)
  • these figures are empathetic & warm towards you (or are at worst kind of apathetic, not malevolent)
  • communicating telepathically (in a way you can't make sense of, perhaps wearing masks if doc
... (read more)
zhukeepa
@habryka, responding to your agreement with this claim: I think my real crux is that I've had experiences adjacent to near-death experiences on ayahuasca, during which I've directly experienced some aspects of phenomena reported in life reviews (like re-experiencing memories in relatively high-res from a place where my usual psychological defenses weren't around to help me dissociate, especially around empathizing with others' experiences), which significantly increases my credence on there being something going on in these life review accounts beyond just culturally-mediated hallucinations.

Abduction by literal physical aliens is obviously a culturally-mediated hallucination, but I suspect the general experience of "alien abductions" is an instantiation of an unexplained psychological phenomenon that's been reported throughout the ages. I don't feel comfortable dismissing this general psychological phenomenon as 100% chaff, given that I've had comparably strange experiences of "receiving teachings from a plant spirit" that nevertheless seem explainable within the scientific worldview.

In general, I think we have significantly different priors about the extent to which the things you dismiss as confabulations actually contain veridical content of a type signature that's still relatively foreign to the mainstream scientific worldview, in addition to chaff that's appropriate to dismiss. I'll concede that you're much better than I am at rejecting false positives, but I think my "epistemic risk-neutrality" makes me much better than you are at rejecting false negatives. 🙂

Regarding your second point, I'm leaving this comment as a placeholder to indicate my intention to give a proper response at some point. My views here have some subtlety that I want to make sure I unpack correctly, and it's getting late here!

In response to your third point, I want to echo ABlue's comment about the compatibility of the trapped prior view and the evopsych view. I also want to emphasize that my usage of "trapped prior" includes genetically pre-specified priors, like a fear of snakes, which I think can be overridden.

In any case, I don't see why priors that predispose us to e.g. adultery couldn't be similarly overridden. I wonder if our main source of disagreement has to do with the feasibility of overriding "hard-wired" evolutionary priors?

In response to your first point, I think of moral codes as being contextual more than I think of them as being subjective, but I do think of them as fundamentally being about pragmatism ("let's all agree to coordinate in ABC way to solve PQR problem in XYZ environment, and socially punish people who aren't willing to do so"). I also think religions often make the mistake of generalizing moral codes beyond the contexts in which they arose as helpful adaptations. 

I think of decision theory as being the basis for morality -- see e.g. Critch's take here a... (read more)

zhukeepa

I do draw a distinction between value and ethics. Although my current best guess is that decision theory does in some sense reduce ethics to a subset of value, I do think it's a subset worth distinguishing. For example, I still have a concept of evaluating how ethical someone is, based on how good they are at paying causal costs for larger acausal gains. 

I think the Goodness of Reality principle is maybe a bit confusingly named, because it's not really a claim about the existence of some objective notion of Good that applies to reality per se, and is ... (read more)

Thanks a lot for sharing your experience! I would be very curious for you to further elaborate on this part: 

Eventually this led to some moments of insight when I realized just how trapped by my own ontology I had become, and then found a way through to a new way of seeing the world. These happened almost instantly, like a dam breaking and releasing all the realizations that had been held back.

Gordon Seidoh Worley
Sure. This happened several times to me, each of which I interpret as a transition from one developmental level to the next, e.g. Kegan 3 -> 4 -> 5 -> Cook-Greuter 5/6 -> 6. Might help to talk about just one of these transitions.

In the Summer of 2015 I was thinking a lot about philosophy and trying to make sense of the world, and kept noticing that, no matter what I did, I'd always run into some kind of hidden assumption that acted as a free variable in my thinking that was not constrained by anything and thus couldn't be justified. I had been going in circles around this for a couple years at this point. I was also, coincidentally, trying to figure out how to manage the work of a growing engineering team and struggling because, to me, other people looked like black boxes that I only kind of understood.

In the midst of this I read The E-Myth on the recommendation of a coworker, and in the middle of it there was this line about how effective managers are neither always high nor low status, but change how they act based on the situation, and combined with a lot of other reading I was doing this caused a lot of things to click into place.

The phenomenology of it was the same as every time I've had one of these big insights. It felt like my mind stopped for several seconds while I hung out in an empty state, and then I came back online with a deeper understanding of the world. In this case, it was something like "I can believe anything I want", in the sense that there really were some unjustified assumptions being made in my thinking, this was unavoidable, and it was okay because there was no other choice. All I could do was pick the assumptions to be the ones that would be most likely to make me have a good map of the world.

It then took a couple years to really integrate this insight, and it wasn't until 2017 that I really started to grapple with the problems of the next one I would have.
zhukeepa

But in order for that to be plausible, you would need a reason why the almost-truths they found are so goddamn antimemetic that the most studied and followed people in history weren't able to make them stick.

A few thoughts: 

  1. I think many of the truths do stick (like "it's never too late to repent for your misdeeds"), but end up getting wrapped up in a bunch of garbage. 
  2. The geeks, mops, and sociopaths model feels very relevant, with the great spiritual leaders / people who were serious about doing inner work being the geeks. 
  3. In some sense, the
... (read more)
Christian Z R
I think you accidentally pointed the link about geeks, mops, and sociopaths to this article, so I googled the term instead. It does a really good job of explaining what happened in most religions in late antiquity. For evidence that Christianity actually was a better subculture than paganism back then, you just have to look at how envious the last pagan emperor, Julian the Apostate, was of the Christians' spontaneous altruism.

There are important insights and claims from religious sources that seem to capture psychological and social truths that aren't yet fully captured by science. At least some of these phenomena might be formalizable via a better understanding of how the brain and the mind work, and to that end predictive processing (and other theories of that sort) could be useful to explain the phenomena in question.

Yes, I agree with this claim. 

You spoke of wanting formalization but I wonder if the main thing is really the creation of a science, though of cour

... (read more)

I'm not sure what you mean by that, but the claim "many interpretations of religious mystical traditions converge because they exploit the same human cognitive flaws" seems plausible to me. I mostly don't find such interpretations interesting, and don't think I'm interpreting religious mystical traditions in such a way. 

romeostevensit
I'm saying it's difficult to distinguish causation

If I change "i.e. the pluralist focus Alex mentions" to "e.g. the pluralist focus Alex mentions" does that work? I shouldn't have implied that all people who believe in heuristics recommended by many religions are pluralists (in your sense). But it does seem reasonable to say that pluralists (in your sense) believe in heuristics recommended by many religions, unless I'm misunderstanding you. (In the examples you listed these would be heuristics like "seek spiritual truth", "believe in (some version of) God", "learn from great healers", etc.)

If your main po... (read more)

So my overall position here is something like: we should use religions as a source of possible deep insights about human psychology and culture, to a greater extent than LessWrong historically has (and I'm grateful to Alex for highlighting this, especially given the social cost of doing so).


Thanks a lot for the kind words! 

IMO this all remains true even if we focus on the heuristics recommended by many religions, i.e. the pluralistic focus Alex mentions. 

I think we're interpreting "pluralism" differently. Here are some central illustrations of wh... (read more)

Richard_Ngo
If I change "i.e. the pluralist focus Alex mentions" to "e.g. the pluralist focus Alex mentions" does that work? I shouldn't have implied that all people who believe in heuristics recommended by many religions are pluralists (in your sense). But it does seem reasonable to say that pluralists (in your sense) believe in heuristics recommended by many religions, unless I'm misunderstanding you. (In the examples you listed these would be heuristics like "seek spiritual truth", "believe in (some version of) God", "learn from great healers", etc.)

I personally don't have a great way of distinguishing between "trying to reach these people" and "trying to manipulate these people". In general I don't even think most people trying to do such outreach genuinely know whether their actual motivations are more about outreach or about manipulation. (E.g. I expect that most people who advocate for luxury beliefs sincerely believe that they're trying to help worse-off people understand the truth.) Because of this I'm skeptical of elite projects that have outreach as a major motivation, except when it comes to very clearly scientifically-grounded stuff.

Perhaps these concerns would be addressed by examples of the kind of statement you have in mind.

I'm not sure exactly what you're asking -- I wonder how much my reply to Adam Shai addresses your concerns? 

I will also mention this quote from the category theorist Lawvere, whose line of thinking I feel pretty aligned with: 

It is my belief that in the next decade and in the next century the technical advances forged by category theorists will be of value to dialectical philosophy, lending precise form with disputable mathematical models to ancient ph

... (read more)
Joel Burget
Very helpful, thank you.

I'm not sure how much this answers your question, but: 

  1. I actually think Buddhism's metaphysics is quite well-fleshed-out, and AFAIK has the most fleshed-out metaphysical system out of all the religious traditions. I think it would be sufficient for my goals to find a formalization of Buddhist metaphysics, which I think would be detailed and granular enough to transcend and include the metaphysics of other religious traditions. 
  2. I think a lot of Buddhist claims can be described in the predictive processing framework -- see e.g. this paper giving a
... (read more)
Adam Shai
Thanks, this was clarifying. I am wondering if you agree with the following (focusing on the predictive processing parts, since that's my background):

There are important insights and claims from religious sources that seem to capture psychological and social truths that aren't yet fully captured by science. At least some of these phenomena might be formalizable via a better understanding of how the brain and the mind work, and to that end predictive processing (and other theories of that sort) could be useful to explain the phenomena in question.

You spoke of wanting formalization, but I wonder if the main thing is really the creation of a science, though of course math is a very useful tool to do science with and to create a more complete understanding. At the end of the day we want our formalizations to comport to reality - whatever aspects of reality we are interested in understanding.

It's relevant that I think of the type signature of religious metaphysical claims as being more like "informal descriptions of the principles of consciousness / the inner world" (analogously to informal descriptions of the principles of the natural world) than like "ideology or narrative". Lots of cultures independently made observations about the natural world, and Newton's Laws in some sense could be thought of as a "Rosetta Stone" for these informal observations about the natural world.

Yeah, I also see broad similarities between my vision and that of the Meaning Alignment people. I'm not super familiar with the work they're doing, but I'm pretty positive on the little bits of it I've encountered. I'd say that our main difference is that I'm focusing on ungameable preference synthesis, which I think will be needed to robustly beat Moloch. I'm glad they're doing what they're doing, though, and I wouldn't be shocked if we ended up collaborating at some point.

Thanks for the elaboration. Your distinction about creating vs reconciling preferences seems to hinge on the distinction between "ur-want" and "proper want". I'm not really drawing a type-level distinction between "ur-want" and "proper want", and think of each flower as itself being a flowerbud that could further bloom. In my example of Alice wanting X, Bob wanting Y, and Carol proposing Z, I'd thought of X and Y as both "proper wants" and "ur-wants that bloomed into Z".

Thanks, this really warmed my heart to read :) I'm glad you appreciated all those details! 

I don't really get how what you just said relates to creating vs reconciling preferences. Can you elaborate on that a bit more? 

TsviBT
I'll try a bit, but it would take like 5000 words to fully elaborate, so I'd need more info on which part is unclear or not trueseeming.

One piece is thinking of individual humans vs collectives. If an individual can want in the fullest sense, then a collective is some sort of combination of wants from constituents--a reconciliation. If an individual can't want in the fullest sense, but a collective can, then: if you take several individuals with their ur-wants and create a collective with proper wants, then a proper want has been created de novo. The theogenic/theopoetic faculty points at creating collectives-with-wants, but it isn't a want itself. A flowerbud isn't a flower.

The picture is complicated of course. For example, individual humans can do this process on their own somewhat, with themselves. And sometimes you do have a want, and you don't understand the want clearly, and then later come to understand the want more clearly. But part of what I'm saying is that many episodes that you could retrospectively describe that way are not really like that; instead, you had a flowerbud, and then by asking for a flower you called the flowerbud to bloom.

I'm not sure how you're interpreting the distinction between creating a preference vs reconciling a preference. 

Suppose Alice wants X and Bob wants Y, and X and Y appear to conflict, but Carol shows up and proposes Z, which Alice and Bob both feel like addresses what they'd initially wanted from X and Y. Insofar as Alice and Bob both prefer Z over X and Y and hadn't even considered Z beforehand, in some sense Carol created this preference for them; but I also think of this preference for Z as reconciling their conflicting preferences X and Y. 

TsviBT
I'm saying that a religious way of being is one where the minimal [thing that can want, in the fullest sense] is a collective.
zhukeepa

People sometimes say that AGI will be like a second species; sometimes like electricity. The truth, we suspect, lies somewhere in between. Unless we have concepts which let us think clearly about that region between the two, we may have a difficult time preparing.

I just want to strongly endorse this remark made toward the end of the post. In my experience, the standard fears and narratives around AI doom invoke "second species" intuitions that I think stand on much shakier ground than is commonly acknowledged. (Things can still get pretty bad without a "se... (read more)

zhukeepa

Thanks, Alex. Any connections between this and CTMU? (I'm in part trying to evaluate CTMU by looking at whether it has useful implications for an area that I'm relatively familiar with.)

No direct connections that I'm aware of (besides non-classical logics being generally helpful for understanding the sorts of claims the CTMU makes). 

Wei Dai
Thanks, Alex. Any connections between this and CTMU? (I'm in part trying to evaluate CTMU by looking at whether it has useful implications for an area that I'm relatively familiar with.) BTW, @jessicata, do you still endorse this post, and what other posts should I read to get up to date on your current thinking about decision theory?
zhukeepa

Good question! Yeah, there's nothing fundamentally quantum about this effect. But if the simulator wants to focus on universes with 1 & 2 fixed (e.g. if they're trying to calculate the distribution of superintelligences across Tegmark IV), the PRNG (along with the initial conditions of the universe) seems like a good place for a simulator to tweak things.

zhukeepa

It is not clear to me that this would result in a lower Kolmogorov complexity at all. Such an algorithm could of course use a pseudo-random number generator for the vast majority of quantum events which do not affect p(ASI) (like the creation of CMB photons), but this is orthogonal to someone nudging the relevant quantum events towards ASI. For these relevant events, I am not sure that the description "just do whatever favors ASI" is actually shorter than just the sequence of events.

Hmm, I notice I may have been a bit unclear in my original post. When I'd sai... (read more)

zhukeepa

This. Physics runs on falsifiable predictions. If 'consciousness can affect quantum outcomes' is any more true than the classic 'there is an invisible dragon in my garage', then discovering that fact would seem easy from an experimentalist standpoint. Sources of quantum randomness (e.g. weak source + detector) are readily available, so any claimant who thinks they can predict or affect their outcomes could probably be tested initially for a few hundred dollars.

Yes, I'm also bearish on consciousness affecting quantum outcomes in ways that are as overt and measurab... (read more)

zhukeepa

I'll take a stab at this. Suppose we had strong a priori reasons for thinking it's in our logical past that we'll have created a superintelligence of some sort. Let's suppose that some particular quantum outcome in the future can get chaotically amplified, so that in one Everett branch humanity never builds any superintelligence because of some sort of global catastrophe (say with 99% probability, according to the Born rule), and in some other Everett branch humanity builds some kind of superintelligence (say with 1% probability, according to the Born rule... (read more)

zhukeepa

If we performed a trillion 50/50 quantum coin flips, and found a program with K-complexity far less than a trillion that could explain these outcomes, that would be an example of evidence in favor of this hypothesis. (I don't think it's very likely that we'll be able to find a positive result if we run that particular experiment; I'm naming it more to illustrate the kind of thing that would serve as evidence.) (EDIT: This would only serve as evidence against quantum outcomes being truly random. In order for it to serve as evidence in favor of quantum outco... (read more)
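To make the flavor of this hypothetical concrete, here is a toy sketch (my own illustration, not the experiment the comment proposes): Kolmogorov complexity itself is uncomputable, but the compressed length of a bitstring is a crude computable upper bound on it, and a genuinely random string should not compress while a structured one should. The scale here is a million bits rather than a trillion, and `os.urandom` stands in for quantum coin flips.

```python
import os
import zlib

# Toy scale: one million bits instead of a trillion.
N_BYTES = 1_000_000 // 8

# Stand-in for quantum coin flips (os.urandom is cryptographic, not quantum).
random_bits = os.urandom(N_BYTES)

# A maximally structured sequence of the same length, for contrast.
structured_bits = bytes(N_BYTES)  # all zeros

# Compressed length is a crude, computable upper bound on K-complexity.
random_k_bound = len(zlib.compress(random_bits, 9))
structured_k_bound = len(zlib.compress(structured_bits, 9))

print(f"random:     {random_k_bound} / {N_BYTES} bytes")  # barely shrinks, if at all
print(f"structured: {structured_k_bound} / {N_BYTES} bytes")  # collapses to almost nothing
```

If genuinely random flips ever compressed far below their raw length, that would be the kind of surprise described above; though a compressor failing to find structure is, of course, much weaker evidence than structure being provably absent.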

zhukeepa

Shortly after publishing this, I discovered something written by John Wheeler (whom Chris Langan cites) that feels thematically relevant. From Law Without Law

zhukeepa

I was hoping people other than Jessica would share some specific curated insights they got. Syndiffeonesis is in fact a good insight.

I finally wrote one up! It ballooned into a whole LessWrong post. 

It seems if I only read the main text, the obvious interpretation is that points are events and the circles restrict which other events they can interact with.

This seems right to me, as far as I can tell, with the caveat that "restrict" (/ "filter") and "construct" are two sides of the same coin, as per constructive-filtrative duality. 

From the diagram text, it seems he is instead saying that each circle represents entangled wavefunctions of some subset of objects that generated the circle.

I think each circle represents the entangled wavefunctions of ... (read more)

Great. Yes, I think that's the thing to do. Start small! I (and presumably others) would update a lot from a new piece of actual formal mathematics from Chris's work. Even if that work was, by itself, not very impressive.

(I would also want to check that that math had something to do with his earlier writings.)

I think we're on exactly the same page here. 

Please be prepared for the possibility that Chris is very smart and creative, and that he's had some interesting ideas (e.g. Syndiffeonesis), but that his framework is more of an interlocked collection

... (read more)

Except, I can already predict you're going to say that no piece of his framework can be understood without the whole. Not even by making a different smaller framework that exists just to showcase the well-ordering alternative. It's a little suspicious.

False! :P I think no part of his framework can be completely understood without the whole, but I think the big pictures of some core ideas can be understood in relative isolation. (Like syndiffeonesis, for example.) I think this is plausibly true for his alternatives to well-ordering as well. 

If you're g

... (read more)
justinpombrio
Great. Yes, I think that's the thing to do. Start small! I (and presumably others) would update a lot from a new piece of actual formal mathematics from Chris's work. Even if that work was, by itself, not very impressive. (I would also want to check that that math had something to do with his earlier writings.)

Uh oh. The "formal grammar" that I checked used formal language, but was not even close to giving a precise definition. So Chris either (i) doesn't realize that you need to be precise to communicate with mathematicians, or (ii) doesn't understand how to be precise.

Please be prepared for the possibility that Chris is very smart and creative, and that he's had some interesting ideas (e.g. Syndiffeonesis), but that his framework is more of an interlocked collection of ideas than anything mathematical (despite using terms from mathematics). Litany of Tarski and all that.

I'd categorize this section as "not even wrong"; it isn't doing anything formal enough to have a mistake in it.

I think it's an attempt to gesture at something formal within the framework of the CTMU that I think you can only really understand if you grok enough of Chris's preliminary setup. (See also the first part of my comment here.)

(Perhaps you'd run into issues with making the sets well-ordered, but if so he's running headlong into the same issues.)

A big part of Chris's preliminary setup is around how to sidestep the issues around making the sets well-... (read more)

justinpombrio
"gesture at something formal" -- not in the way of the "grammar" it isn't. I've seen rough mathematics and proof sketches, especially around formal grammars. This isn't that, and it isn't trying to be. There isn't even an attempt at a rough definition for which things the grammar derives.

Nonsense! If Chris has an alternative to well-ordering, that's of general mathematical interest! He would make a splash simply writing that up formally on its own, without dragging the rest of his framework along with it. Except, I can already predict you're going to say that no piece of his framework can be understood without the whole. Not even by making a different smaller framework that exists just to showcase the well-ordering alternative. It's a little suspicious.

If you're going to fund someone to do something, it should be to formalize Chris's work. That would not only serve as a BS check, it would make it vastly more approachable.

I was hoping people other than Jessica would share some specific curated insights they got. Syndiffeonesis is in fact a good insight.

Thanks a lot for posting this, Jessica! A few comments: 

It's an alternative ontology, conceiving of reality as a self-processing language, which avoids some problems of more mainstream theories, but has problems of its own, and seems quite underspecified in the document despite the use of formal notation. 

I think this is a reasonable take. My own current best guess is that the contents of the document uniquely specify a precise theory, but that it's very hard to understand what's being specified without grokking the details of all the arguments... (read more)

jessicata
Regarding quantum, I'd missed the bottom text. It seems if I only read the main text, the obvious interpretation is that points are events and the circles restrict which other events they can interact with. He says "At the same time, conspansion gives the quantum wave function of objects a new home: inside the conspanding objects themselves", which implies the wave function is somehow located in the objects. From the diagram text, it seems he is instead saying that each circle represents entangled wavefunctions of some subset of objects that generated the circle.

I still don't see how to get quantum non-locality from this. The wave function can be represented as a complex-valued function on configuration space; how could it be factored into a number of entanglements that only involve a small number of objects? In probability theory you can represent a probability measure as a factor graph, where each factor only involves a limited subset of variables, but (a) not all distributions can be efficiently factored this way, (b) generalizing this to quantum wave functions is additionally complicated due to how wave functions differ from probability distributions.
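For concreteness, the factor-graph point above can be illustrated with a toy example (a sketch added for illustration; the pairwise potentials are made up, not from the CTMU or the comment): a distribution over three binary variables written as a product of pairwise factors, each touching only neighboring variables.

```python
import itertools

# Toy factor graph: a joint distribution over three binary variables
# expressed as a product of pairwise factors, so each factor only
# involves a limited subset of the variables.

def factor(a, b):
    # Simple pairwise potential favoring agreement between neighbors.
    return 2.0 if a == b else 1.0

def joint(x1, x2, x3):
    # Unnormalized joint: a chain of two pairwise factors.
    return factor(x1, x2) * factor(x2, x3)

states = list(itertools.product([0, 1], repeat=3))
Z = sum(joint(*s) for s in states)             # normalizing constant
probs = {s: joint(*s) / Z for s in states}

assert abs(sum(probs.values()) - 1.0) < 1e-12
# Fully agreeing configurations get the highest probability:
assert probs[(0, 0, 0)] == max(probs.values())
```

The caveat in the comment is exactly that not every distribution decomposes this neatly into small factors, and that wave functions (complex amplitudes rather than probabilities) complicate the analogous quantum decomposition further.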

In particular, I think this manifests in part as an extreme lack of humility.

I just want to note that, based on my personal interactions with Chris, I experience Chris's "extreme lack of humility" similarly to how I experience Eliezer's "extreme lack of humility": 

  1. in both cases, I think they have plausibly calibrated beliefs about having identified certain philosophical questions that are of crucial importance to the future of humanity, that most of the world is not taking seriously,[1] leading them to feel a particular flavor of frustration that
... (read more)
YimbyGeorge
Thanks was looking for that link to his resolution of newcombs' paradox. Too funny! "You are "possessed" by Newcomb's Demon, and whatever self-interest remains to you will make you take the black box only. (Q.E.D.)"

I agree with this.

I've spent 40+ hours talking with Chris directly, and for me, a huge part of the value also comes from seeing how Chris synthesizes all these ideas into what appears to be a coherent framework. 

Here's my current understanding of what Scott meant by "just a little off". 

I think exact Bayesian inference via Solomonoff induction doesn't run into the trapped prior problem. Unfortunately, bounded agents like us can't do exact Bayesian inference via Solomonoff induction, since we can only consider a finite set of hypotheses at any given point. I think we try to compensate for this by recognizing that this list of hypotheses is incomplete, and appending it with new hypotheses whenever it seems like our current hypotheses are doing a sufficiently te... (read more)
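A minimal sketch of the kind of procedure described above (illustrative only; the hypotheses, coin example, and threshold are made up): exact Bayesian updating over a finite hypothesis list, with a crude surprise check that flags when every current hypothesis predicts the data badly, i.e. when it's time to append a new hypothesis.

```python
# Bayesian updating over a finite hypothesis list, plus a crude check
# that triggers appending a new hypothesis when all current hypotheses
# assign the observation low probability -- a bounded-agent stand-in
# for the open-ended hypothesis space of Solomonoff induction.

def normalize(weights):
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

def update(posterior, likelihoods, observation, surprise_threshold=1e-3):
    """One Bayesian update; returns (new_posterior, needs_new_hypothesis)."""
    evidence = sum(posterior[h] * likelihoods[h](observation) for h in posterior)
    needs_new_hypothesis = evidence < surprise_threshold
    new_posterior = normalize(
        {h: posterior[h] * likelihoods[h](observation) for h in posterior}
    )
    return new_posterior, needs_new_hypothesis

# Two hypotheses about a coin: fair, or heavily heads-biased.
likelihoods = {
    "fair":   lambda obs: 0.5,
    "biased": lambda obs: 0.9 if obs == "H" else 0.1,
}
posterior = {"fair": 0.5, "biased": 0.5}

for obs in "HHHHHH":
    posterior, surprised = update(posterior, likelihoods, obs)

assert posterior["biased"] > posterior["fair"]
```

The trapped-prior failure mode corresponds to never running the surprise check: if the true hypothesis isn't in the list, updating alone can't recover it.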

AnnaSalamon
I agree an algorithm could do as you describe. I don't think that's what's happening in me or other people. Or at least, I don't think it's a full description.

One reason I don't, is that after I've e.g. been camping for a long time, with a lot of room for quiet, it becomes easier than it has been to notice that I don't have to see things the way I've been seeing them. My priors become "less stuck", if you like. I don't see why that would be, on your (zhukeepa's) model.

Introspectively, I think it's more like, that sometimes facing an unknown hypothesis (or rather, a hypothesis that'll send the rest of my map into unknownness) is too scary to manage to see as a possibility at all.

Yep! I addressed this point in footnote [3].

Ankesh Anand
The raw neural network does use search during training, though; it's only during evaluation that it runs without relying on search.
zhukeepa

I just want to share another reason I find this n=1 anecdote so interesting -- I have a highly speculative inside view that the abstract concept of self provides a cognitive affordance for intertemporal coordination, resulting in a phase transition in agentiness only known to be accessible to humans.

zhukeepa

Hmm, I'm not sure I understand what point you think I was trying to make. The only case I was trying to make here was that much of our subjective experience which may appear uniquely human might stem from our language abilities, which seems consistent with Helen Keller undergoing a phase transition in her subjective experience upon learning a single abstract concept. I'm not getting what age has to do with this.

zhukeepa
Questions #2 and #3 seem positively correlated – if the thing that humans have is important, it's evidence that architectural changes matter a lot.

Not necessarily. For example, it may be that language ability is very important, but that most of the heavy lifting in our language ability comes from general learning abilities + having a culture that gives us good training data for learning language, rather than from architectural changes.

zhukeepa

I remembered reading about this a while back and updating on it, but I'd forgotten about it. I definitely think this is relevant, so I'm glad you mentioned it -- thanks!
