but I think rationalists would benefit from more understanding of what purpose higher levels serve
I don't buy it; I think tons of great rationalists fully understand why people use language for coordination rather than for matching up with reality.
My first thoughts on skills for rationalist folk to get better at are (a) code-switching, i.e. realizing when someone's acting on a different level and being able to interface with them on that level, and (b) being able to enter an environment where people are primarily focused on higher simulacra levels, not go crazy, be sane when you come home, yet still be able to fight and win in that environment (without saying things that are false).
I think a substantial fraction of LWers have the (usually implicit—they may not have even read about simulacra) belief that higher levels are inherently morally problematic, and that engaging on those levels about an important topic is at best excusable under the kind of adversarial circumstances where direct lies are excusable. (There's the obvious selection effect where people who feel gross about higher levels feel more comfortable on LW than almost anywhere else.)
I think there need to be better public arguments against that viewpoint, not least because I'm not fully convinced it's wrong.
here's another practical idea for a Manifold hack-a-thon: Bayesian Truth Serum. (...) it's not obvious that there's a good way to market-ize it
Self-resolving prediction markets are a marketization of Bayesian Truth Serum.
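For readers unfamiliar with the mechanism: Bayesian Truth Serum (Prelec 2004) asks each respondent for an answer plus a prediction of how others will answer, then rewards answers that are "surprisingly common" relative to predictions. A minimal sketch of the scoring rule (the function name and the smoothing constant are my own; this is an illustration, not any platform's implementation):

```python
import math

def bts_scores(answers, predictions, alpha=1.0):
    """Bayesian Truth Serum scores, one per respondent.

    answers: chosen answer index per respondent, e.g. [0, 0, 1]
    predictions: per respondent, a predicted distribution over answers
    alpha: weight on the prediction-accuracy term

    Each score = information score  log(xbar_k / pbar_k)
               + alpha * sum_k xbar_k * log(y_rk / xbar_k)
    where xbar_k is the empirical frequency of answer k and
    pbar_k is the geometric mean of respondents' predicted frequencies.
    """
    n = len(answers)
    m = len(predictions[0])
    eps = 1e-9  # smoothing to avoid log(0)
    # empirical answer frequencies
    xbar = [(sum(1 for a in answers if a == k) + eps) / (n + m * eps)
            for k in range(m)]
    # geometric mean of predicted frequencies
    pbar = [math.exp(sum(math.log(max(p[k], eps)) for p in predictions) / n)
            for k in range(m)]
    scores = []
    for r in range(n):
        info = math.log(xbar[answers[r]] / pbar[answers[r]])
        pred = alpha * sum(xbar[k] * math.log(max(predictions[r][k], eps) / xbar[k])
                           for k in range(m))
        scores.append(info + pred)
    return scores
```

Answers that are more common than the crowd predicted get a positive information score, which is what makes truth-telling a Bayesian Nash equilibrium; the market-ization difficulty mentioned above is that this scoring is zero-sum-ish across respondents rather than resolving against an external outcome.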
I've been on this big kick talking about truthseeking in effective altruism. I started with vegan advocacy because it was the most legible, but now need to move on to the deeper problems. Unfortunately those problems are still not that legible, and I end up having to justify a lot of what I previously took as basic premises, and it's all kind of stuck.
Sorry to be cynical here, but I suspect that the root of this problem isn't so much disagreeing on basic premises, but rather, soldier mindsets.
This excerpt from The Scout Mindset comes to mind:
My path to this book began in 2009, after I quit graduate school and threw myself into a passion project that became a new career: helping people reason out tough questions in their personal and professional lives. At first I imagined that this would involve teaching people about things like probability, logic, and cognitive biases, and showing them how those subjects applied to everyday life. But after several years of running workshops, reading studies, doing consulting, and interviewing people, I finally came to accept that knowing how to reason wasn't the cure-all I thought it was.
Knowing that you should test your assumptions doesn't automatically improve your judgment, any more than knowing you should exercise automatically improves your health. Being able to rattle off a list of biases and fallacies doesn't help you unless you're willing to acknowledge those biases and fallacies in your own thinking. The biggest lesson I learned is something that's since been corroborated by researchers, as we'll see in this book: our judgment isn't limited by knowledge nearly as much as it's limited by attitude.
It reminds me of when couples (in a romantic partnership) fight. You can have two sane, reasonable, rational people engaged in a dispute where they always seem to talk past each other and fail to make any progress whatsoever. But then when they wake up the next morning in a calm and co-regulated state, they somehow just smile at each other and resolve the dispute within minutes.
(Note: I haven't followed the overarching conversation too closely. I'm somewhat confident in my prediction here but not super confident.)
The favored solutions to this right now are betting markets, retroactive funding, and impact certificates. I love these for the reasons you'd expect, but they've been everyone's favored solutions for years and haven't solved anything yet.
Have any of these really been tried? Manifold has been really cool, and I think has been having positive effects here. Nobody has really done any substantial retroactive funding (Jaan Tallinn has kind of done some retroactive ETH grants, but they haven't been very focused, though I also think they've been good), and there is no remotely functional impact certificate market.
there is no remotely functional impact certificate market.
this is my point. People have been praising ICs for a while and there have been several attempts, but there is no functioning market.
I don't think this is doomed: the latest attempts seem stronger than previous ones, even if they're not very good yet.
My sense is that right now IC markets are suffering from a lack of really compelling projects, because anything legibly compelling can get funding elsewhere. I've donated to projects I was otherwise insufficiently excited about on these platforms specifically to support the platforms, and I think that's worth doing, and possibly I should talk about it more.
ETA: tagging @habryka since I accidentally responded to Vaniver, and he won't get pinged.
I'm going to lay out a few threads in my mind, in the hopes that you will be curious about some of them and we can untangle from there.
All of these seem too abstract. I think that it would be better to focus on more concrete disagreements. Like the previously discussed question of whether veganism entails health-related tradeoffs.
I haven't followed that and other conversations too closely, so I hesitate to even write this comment. I guess just chalk it up to me registering a not-fully-informed prediction that pushing the conversation towards concrete things like this and away from more abstract questions is frequently going to be the better path forward.
double checking you know I'm the one who wrote the veganism post? Because I am very with you that looking at theory in isolation is a waste of time, but also believe that it's an important adjunct to object-level work.
Speaking of which, if anyone has an object-level concern with EA epistemics, you can submit it here. Anonymous submissions are allowed, although in practice they're not very useful unless you can point to public evidence.
double checking you know I'm the one who wrote the veganism post?
Yes. I'm just registering the prediction that continuing that sort of stuff would be better than pursuing abstract stuff.
I made three offers to Dialogue. One is in progress but may not publish, one is agreed to but we're trying to define the topic, and the third was turned down.