I was a co-founder of CFAR in 2012. I'd been actively trying to save the world for about a decade at that point. I left in 2018 to seriously purify my mind & being. I realized in 2020 that I'd been using the fear of the end of the world like an addictive drug and did my damnedest to quit cold-turkey. I'm now doing my best to embody an answer to the global flurry in a way that's something like a fusion of game theory and Buddhist Tantra.
Find my non-rationalist writing, social media, and projects at my Linktree.
Ah yeah, I think "gaining independence" is a better descriptor of (what I meant by) that solution type.
A few examples:
It makes me so angry. It's perfectly antithetical to the essence of math as I see it.
In broad strokes I agree with you. Here I was sharing my observation of four cases where a friend was involved this way. One case might have been miscommunication but it doesn't seem likely to me. The other three definitely weren't. In one of those I personally knew the guy; I liked him, but he was also emotionally very unstable and definitely not a safe father. I don't think the abuse was physical in any of those four cases.
! I'm genuinely impressed if you wrote this post without having a mental frame for the concepts drawn from LDT.
Thanks. :)
And thanks for explaining. I'm not sure what "quasi-Kantian" or "quasi-Rawlsian" mean, and I'm not sure which piece of Eliezer's material you're gesturing toward, so I think I'm missing some key steps of reasoning.
But on the whole, yeah, I mean defensive power rather than offensive. The offensive stuff is relevant only to the extent that it works for defense. At least that's how it seems to me! I haven't thought about it very carefully. But the whole point is, what could make me safe if a hostile telepath discovers a truth in me? The "build power" family of solutions is based on neutralizing the relevance of the "hostile" part.
I think you're saying something more sophisticated than this. I'm not entirely sure what it is. Like here you say:
Basically, you have to control things orthogonal to your position in the lineup, to robustly improve your algorithm for negotiating with others.
I'm not sure what "the lineup" refers to, so I don't know what it means for something to be orthogonal to my position in it.
I think I follow and agree with what you're saying if I just reason in terms of "setting up arms races is bad, all else being equal".
Or to be more precise, if I take the dangers of adaptive entropy seriously and I view "create adaptive entropy to get ahead" as a confused pseudo-solution. That might be what my LDT-like framework amounts to.
I like this way of expressing it. Thanks for sharing.
I think it's the same core thing I was pointing at in "We're already in AI takeoff", though it runs in the opposite direction with its metaphors. I was arguing that it's right to view memes as alive for the same reason we view trees and cats as alive. Grey seems to be arguing to set aside the question and just look at the function. Same intent, opposite approaches.
I think David Deutsch's article "The Evolution of Culture" is masterful at describing this approach to memetics.
(Though maybe I should say that the therapist needs to either experience unconditional positive regard toward the client, or successfully deceive themselves and the client into thinking that they do. Heh.)
I mean, technically they don't even need to deceive themselves. They can be consciously judgy as f**k as long as they can mask it effectively. Psychopaths might make for amazing therapists in this one way!
I think the word "power" might be creating some confusion here.
I mean something pretty specific and very practical. I'm not sure how to precisely define it, but here are some examples:
I'm not familiar with LDT. I can't comment on that part. Sorry if that means what I just said misses your point.
The fact that Bob has this policy in the first place is more likely when he's being self-deceptive.
I don't know if that's true. It might be. But some possible counterpoints:
…more often it will be the result of Bob noticing that he's the sort of person who might have something to hide.
Sure, that way of deciding doesn't work.
Likewise, if you're inclined to decide you're going to dig into possible sources of self-deception because you think it's unlikely that you have any, then you can't do this trick.
The hypothetical respect for any self-deception that might be there needs to be unconditional on its existence. Otherwise, for the reason you say, it doesn't work as well.
(…with some caveats about how people are imperfect telepaths, so some fuzz in implementation here is in practice fine.)
That said, I think you're right in that if Omega-C is looking only at the choice of whether to look or not, then yes, Omega-C would be right to take the choice as evidence of a deception.
But the whole point is that Omega-C can read what conscious processes you're using, and can see that you're deciding for a glomarizing reason.
That's why the reason behind your choice matters so much here, not just the choice itself.
It's a general rule that if E is strong evidence for X, then ~E is at least weak evidence for ~X.
Conservation of expected evidence is what makes looking relevant. It's not what makes deciding to look relevant.
If I decide to appease Omega-C by looking, and then I find that I'm self-deceiving, the fact that I chose to look gets filtered. The fact that this is possible is why not finding evidence can matter at all. Otherwise it'd just be a charade.
Relatedly: I have a coin in my pocket. I don't feel like checking it for bias. Does that make it more likely that the coin is biased? Maybe. But if I could magically show you that I'm not looking because I honestly do not care one way or the other and don't want to waste the effort, and it doesn't affect me whether it's biased or not… then you can't use my disinterest in checking the coin for bias as evidence of some kind of subconscious deception about the coin's bias. I'm just refusing to do things that would inform you of the coin's possible bias.
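The rule a couple of paragraphs up can be checked numerically. Here's a minimal sketch using the law of total probability; all the specific numbers (0.5, 0.3, 0.9) are illustrative assumptions of mine, not anything from the discussion:

```python
# Conservation of expected evidence: if E raises P(X), then ~E must
# lower it, because the posteriors must average back to the prior.

def posterior_given_not_e(p_x, p_e, p_x_given_e):
    """Recover P(X|~E) from the law of total probability:
    P(X) = P(X|E)*P(E) + P(X|~E)*P(~E)."""
    return (p_x - p_x_given_e * p_e) / (1 - p_e)

p_x = 0.5          # prior P(X), e.g. "there is self-deception here"
p_e = 0.3          # P(E), probability of seeing the evidence
p_x_given_e = 0.9  # E is strong evidence for X

p_x_given_not_e = posterior_given_not_e(p_x, p_e, p_x_given_e)
print(p_x_given_not_e)  # below the 0.5 prior, so ~E is evidence for ~X

# Sanity check: the expected posterior equals the prior.
expected = p_x_given_e * p_e + p_x_given_not_e * (1 - p_e)
assert abs(expected - p_x) < 1e-12
```

The asymmetry the coin example trades on is visible here too: how *strong* the ~E update is depends on the numbers, which is why "not looking" can be arbitrarily weak evidence when the refusal is uncorrelated with the truth.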
If this kind of reasoning weren't possible, then it seems to me that glomarization wouldn't be possible.
It's not very hard to detect when someone's deceiving them self…
A few notes:
…people should notice more and disincentivise that
Boy oh boy do I disagree.
If someone's only option for dealing with a hostile telepath is self-deception, and then you come in and punish them for using it, thou art a dick.
Like, do you think it helps the abused mothers I named if you punish them somehow for not acknowledging their partners' abuse? Does it even help the social circle around them?
Even if the "hostile telepath" model is wrong or doesn't apply in some cases, people self-deceive for some reason. If you don't dialogue with that reason at all and just create pain and misery for people who use it, you're making some situation you don't understand worse.
I agree that getting self-deception out of a culture is a great idea. I want less of it in general.
But we don't get there by disincentivizing it.
To me this is exciting. I deduced that the mental architecture you're describing should be possible. It's extremely cool to hear someone just name it as a lived experience. Like, what would a mind that's actually systematically free of Newcomblike self-deception have to be like, assuming the hostile telepaths problem is real? This is one possible solution. Assuming I haven't misunderstood what you're describing!