Worried that I might already be a post-rationalist. I'm very interested in minimizing miscommunication, and helping people through the uncanny valley of rationality. Feel free to pm me about either of those things.
I agree that disguising oneself as "someone who cares about X" doesn't require being good at X, at least when the contact is short and contained.
I'm trying to emphasize that I don't think Cade has made any progress in learning to "say the right things". He has probably picked up individual words that are more frequent in a rationalist context than elsewhere (like the word "priors"), but it seems really unlikely that he's gotten any better at even the grammar of rationalist communication.
Like, I'd be mediumly surprised if he, when talking to a rat, said something like "so what's your priors on XYZ?" I'd be incredibly surprised if he said something like "there's clearly a large inferential distance between your world model and the public's world model, so maybe you could help point me towards what you think the cruxes might be for my next article?"
That last sentence seems like a very clear example of something that doesn't actually require understanding or caring about epistemology to utter, yet if I heard it I'd assume a certain orientation to epistemology, and someone could falsely get me to "let my guard down". I don't think Cade can do things like that. And based on Zack's convo and Vassar's convo with him, and the amount of time and exposure he's had to learn between the two, I don't think that's the sort of thing he's capable of.
I might be misunderstanding, but I understood the comment I was responding to as saying that Zack was helping Cade do a better job of disguising himself as someone who cared about good epistemics. Something like: "if Zack keeps talking, Cade will learn the surface-level features of a good convo about epistemology, and thus, even if he still doesn't know shit, he'll be able to trick more people into thinking he's someone worth talking to."
In response to that claim, I shared an older interview of Cade's to demonstrate that he's been exposed to people who talk about epistemology for a while, and he did not do a convincing job of pretending to be in good faith then. In this interview with Zack, I don't think he's doing any better a job of seeming like he's acting in good faith.
And while there can still be plenty of reasons not to talk to journalists, or to Cade in particular, I really don't think "you'll enable them to mimic us better" is remotely plausible.
I can visibly see you training him, via verbal conversation, how to outperform the vast majority of journalists at talking about epistemics.
Metz doesn't seem any better at appearing to care about, or think at all about, epistemics than he did in 2021.
Symbiotic would mean a mutually beneficial relationship. What I described is very clearly not that.
Yeah, the parasitic dynamic seems to set the stage for the scapegoating backup, such that I'd expect to often find the scapegoating move in parasitic ecosystems that have been running for a while.
Your comment seems like an expansion on who the fooled party is, and it also points out another purpose for the obfuscation. A defense of pre-truth would be a theory that shows how it's not deceptive and not a way to cover up a conflict. That being said, I agree that an investor who plays pre-truth does want founders to lie, and it seems very plausible that they orient to their language game as a "figure it out" initiation ritual.
I'm with you on the deficiency of the signalling frame when talking about human communication, and communication more generally. Skyrms and others who developed the signalling frame explicitly tried to avoid any notion of intentionality in order to explore questions like "how could the simplest things that still make sense to call 'communication' develop in systems that don't have human-level intelligence?", which means the model has a gaping hole when it comes to talking about what people do.
I wrote a post about the interplay between the intentional aspects of meaning and what you're calling the probabilistic information. It doesn't get too into the weeds, but it might provoke more ideas in you.
Not quite what you're looking for, but if you've got a default sense that coordination is hard, Jessica Taylor has an evocatively named post, "Coordination isn't hard".
I remember at some point finding a giant messy graph of all of The Sequences and the links between posts. I can't track down the link; does anyone remember this and have a lead?
This post is my current recommendation for practicing getting a felt sense for the range of emotions you can be experiencing.
https://drmaciver.substack.com/p/labelling-feelings
I'm curious if what you're describing is similar to what I'm describing in this post. When I started paying more attention to emotions, I'd often feel these impenetrable clouds of grey that I couldn't discern much content from. https://naturalhazard.xyz/then_and_now