I was a co-founder of CFAR in 2012. I'd been actively trying to save the world for about a decade at that point. I left in 2018 to seriously purify my mind & being. I realized in 2020 that I'd been using the fear of the end of the world like an addictive drug and did my damnedest to quit cold turkey. I'm now doing my best to embody an answer to the global flurry in a way that's something like a fusion of game theory and Buddhist Tantra.
Find my non-rationalist writing, social media, and projects at my Linktree.
A few examples:
It makes me so angry. It's perfectly antithetical to the essence of math as I see it.
In broad strokes I agree with you. Here I was sharing my observation of four cases where a friend was involved this way. One case might have been miscommunication but it doesn't seem likely to me. The other three definitely weren't. In one of those I personally knew the guy; I liked him, but he was also emotionally very unstable and definitely not a safe father. I don't think the abuse was physical in any of those four cases.
! I'm genuinely impressed if you wrote this post without having a mental frame for the concepts drawn from LDT.
Thanks. :)
And thanks for explaining. I'm not sure what "quasi-Kantian" or "quasi-Rawlsian" mean, and I'm not sure which piece of Eliezer's material you're gesturing toward, so I think I'm missing some key steps of reasoning.
But on the whole, yeah, I mean defensive power rather than offensive. The offensive stuff is relevant only to the extent that it works for defense. At least that's how it seems to me! I haven't thought about it very carefully. But the whole point is, what could make me safe if a hostile telepath discovers a truth in me? The "build power" family of solutions is based on neutralizing the relevance of the "hostile" part.
I think you're saying something more sophisticated than this. I'm not entirely sure what it is. Like here you say:
Basically, you have to control things orthogonal to your position in the lineup, to robustly improve your algorithm for negotiating with others.
I'm not sure what "the lineup" refers to, so I don't know what it means for something to be orthogonal to my position in it.
I think I follow and agree with what you're saying if I just reason in terms of "setting up arms races is bad, all else being equal".
Or to be more precise, if I take the dangers of adaptive entropy seriously and I view "create adaptive entropy to get ahead" as a confused pseudo-solution. It might be that that's my LDT-like framework.
I like this way of expressing it. Thanks for sharing.
I think it's the same core thing I was pointing at in "We're already in AI takeoff", only it runs the metaphor in the opposite direction. I was arguing that it's right to view memes as alive for the same reason we view trees and cats as alive. Grey seems to be arguing to set aside the question and just look at the function. Same intent, opposite approaches.
I think David Deutsch's article "The Evolution of Culture" is masterful at describing this approach to memetics.
(Though maybe I should say that the therapist needs to either experience unconditional positive regard toward the client, or successfully deceive themselves and the client into thinking that they do. Heh.)
I mean, technically they don't even need to deceive themselves. They can be consciously judgy as f**k as long as they can mask it effectively. Psychopaths might make for amazing therapists in this one way!
I think the word "power" might be creating some confusion here.
I mean something pretty specific and very practical. I'm not sure how to precisely define it, but here are some examples:
I'm not familiar with LDT. I can't comment on that part. Sorry if that means what I just said misses your point.
The fact that Bob has this policy in the first place is more likely when he's being self-deceptive.
I don't know if that's true. It might be. But some possible counterpoints:
…more often it will be the result of Bob noticing that he's the sort of person who might have something to hide.
Sure, that way of deciding doesn't work.
Likewise, if you're inclined to dig into possible sources of self-deception only because you think it's unlikely that you have any, then you can't do this trick.
The hypothetical respect for any self-deception that might be there needs to be unconditional on its existence. Otherwise, for the reason you say, it doesn't work as well.
(…with some caveats about how people are imperfect telepaths, so some fuzz in implementation here is in practice fine.)
That said, I think you're right in that if Omega-C is looking only at the choice of whether to look or not, then yes, Omega-C would be right to take the choice as evidence of a deception.
But the whole point is that Omega-C can read what conscious processes you're using, and can see that you're deciding for a glomarizing reason.
That's why the reason you choose what you do matters so much here. Not just what you choose.
It's a general rule that if E is strong evidence for X, then ~E is at least weak evidence for ~X.
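To spell out the arithmetic behind that rule, here's a minimal sketch using conservation of expected evidence:

$$P(X) = P(X \mid E)\,P(E) + P(X \mid \lnot E)\,P(\lnot E)$$

Rearranging gives

$$\big(P(X \mid E) - P(X)\big)\,P(E) = \big(P(X) - P(X \mid \lnot E)\big)\,P(\lnot E)$$

so the two updates have to balance exactly: if observing E would raise your credence in X, then observing ~E has to lower it, though only slightly when E was improbable to begin with.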
Conservation of expected evidence is what makes looking relevant. It's not what makes deciding to look relevant.
If I decide to appease Omega-C by looking, and then I find that I'm self-deceiving, the fact that I chose to look gets filtered. The fact that this is possible is why not finding evidence can matter at all. Otherwise it'd just be a charade.
Relatedly: I have a coin in my pocket. I don't feel like checking it for bias. Does that make it more likely that the coin is biased? Maybe. But if I could magically show you that I'm not looking because I honestly do not care one way or the other and don't want to waste the effort, and it doesn't affect me whether it's biased or not… then you can't use my disinterest in checking the coin for bias as evidence of some kind of subconscious deception about the coin's bias. I'm just refusing to do things that would inform you of the coin's possible bias.
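To make that concrete, here's a minimal formalization (assuming, as stipulated, that my decision not to check really is independent of the coin's bias):

$$P(\text{biased} \mid \text{no check}) = \frac{P(\text{no check} \mid \text{biased})\,P(\text{biased})}{P(\text{no check})} = P(\text{biased})$$

since independence means $P(\text{no check} \mid \text{biased}) = P(\text{no check})$. The refusal closes off a channel you could have learned from; it doesn't itself move your posterior on the bias.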
If this kind of reasoning weren't possible, then it seems to me that glomarization wouldn't be possible.
It's not very hard to detect when someone's deceiving themself…
A few notes:
…people should notice more and disincentivise that
Boy oh boy do I disagree.
If someone's only option for dealing with a hostile telepath is self-deception, and then you come in and punish them for using it, thou art a dick.
Like, do you think it helps the abused mothers I named if you punish them somehow for not acknowledging their partners' abuse? Does it even help the social circle around them?
Even if the "hostile telepath" model is wrong or doesn't apply in some cases, people self-deceive for some reason. If you don't dialogue with that reason at all and just create pain and misery for people who use it, you're making some situation you don't understand worse.
I agree that getting self-deception out of a culture is a great idea. I want less of it in general.
But we don't get there by disincentivizing it.
…I went in the other direction: trying to self-deceive little, and instead be self-honest about my real motivations, even if they are "bad PR".
Yep. I'm not sure why you think this is a "very different" conclusion. I'd say the same thing about myself. The key question is how to handle the cases where becoming conscious of a "bad PR" motivation means it might get exposed.
And you answer that! In part at least. You divide people into three categories based on (a) whether you need occlumency with them at all and (b) whether you need to use occlumency on the fact that you're using occlumency.
I don't think of it in terms this explicit, but it's pretty close to what I do now. People get to see me to the extent that I trust them with what I show them. And that's conscious.
Am I misunderstanding you somehow?
Moreover, having an extremely difficult high-stakes problem is not just a strong reason to self-deceive less, it's also strong reason to become more truth-oriented as a community. This means that people with such a common cause should strive to put each other at least in category 2 above, tentatively moving towards 3 (with the caveat of watching out for bad actors trying to exploit that).
I both agree and partly disagree. I tagged your comment to show where.
Totally, yes, having a real and meaningful shared problem means we want a truth-seeking community. Strong agreement.
But I think how we "strive" to be truth-seeking might be extremely important. If it's a virtue instead of an engineering consideration, and if people are shamed or punished for having non-truth-seeking behaviors, then the collective "striving" being talked about will encourage individual self-deception and collective untalkaboutability. It's an example of inducing adaptive entropy.
Relatedly: mathematicians don't have truth-seeking collaboration because they're trying hard to be truth-seeking. They're trying to solve problems, and they can verify whether their proposed solutions actually solve the problems they're working on. That means truth-seeking is more useful for what they're doing than any alternatives are. There's no need for focusing on the Virtue of Seeking Truth as a culture.
Likewise, there's no Virtue of Using a Hammer in carpentry.
What puts someone in category 2 or 3 for me isn't something I can strive for. It's more like, I can be open to the possibility and be willing to look for how they and I interact. Then I discover how my trust of them shifts. If I try to trust people more than I do, I end up in more adaptive entropic confusion. I'm pretty sure this is lawful on par with thermodynamics.
This might be what you meant. If so, sorry to set up and take a swing at a strawman of what you were saying.
Ah yeah, I think "gaining independence" is a better descriptor of (what I meant by) that solution type.