I think our collective HHS needs are less "clever policy ideas" and more "actively shoot ourselves in the foot slightly less often."
That's a good point about public discussions. It's not how I absorb information, but I can definitely see that.
I'm not sure where I'm proposing bureaucracy? The value is in making sure a conversation efficiently adds value for both parties: material that is much faster absorbed in advance doesn't eat up conversation time, and you avoid the friction of rehashing 101-level prerequisites. A very modest amount of groundwork beforehand maximizes the rate of insight in discussion.
I'm drawing in large part from personal experience. A significant part of my job is interviewing researchers, startup founders, investors, government officials, and assorted business people. Before I get on a call with these people, I look them (and their current and past employers, as needed) up on LinkedIn, Google Scholar, and their own webpages. I briefly familiarize myself with what they've worked on, what they know and care about, and how they think, as best I can anticipate, even if only for 15 minutes. And then when I get into a conversation, I adapt. I'm picking their brain to try to learn, so I adapt to their communication style and translate between their worldview and my own. If I go in with an idea of what questions I want answered, and those turn out not to be the important questions, or this turns out to be the wrong person to discuss them with, I change direction. Skipping this prep often leaves everyone involved frustrated at having wasted their time.
Also, should I be thinking of this as a debate? Because that's very different from a podcast or interview or discussion; these all have different goals. A podcast or interview is where I think the standard I have in mind is most appropriate. For a deep discussion it's insufficient, and you need to do more prep work or you'll never get into the meatiest parts of where you want to go. I do agree that if you're having a (public-facing) debate where the goal is to win, then sure, this is not strictly necessary. The history of e.g. "debates" in politics, or between creationists and biologists, shows that clearly. I'm not sure I'd consider that "meaningful" debate, though. Meaningful debate happens by seriously engaging with the other side's ideas, which requires understanding those ideas.
I can totally believe this. But I also think that responsibly wearing the scientist hat entails prep work before engaging in a four-hour public discussion with a domain expert in a field. At minimum that includes skimming the titles, and ideally the abstracts/outlines, of their key writings. Maybe ask Claude to summarize the highlights for you. If he'd done that, he'd have figured out the answers to many of these questions on his own, or gotten to them much faster during the discussion. He's too smart not to.
Otherwise, you're not actually ready to have a meaningful scientific discussion with that person on that topic.
If I'm understanding you correctly, then I strongly disagree about what ethics and meta-ethics are for, as well as what "individual selfishness" means. The questions I care about flow from "What do I care about, and why?" and "How much do I think others should or will care about these things, and why?" Moral realism and amoral nihilism are far from the only options, and neither is one I'm interested in accepting.
I'm not saying it improves decision making. I'm saying it's an argument for improving our decision making in general, if mundane decisions we wouldn't normally consider all that important turn out to have much larger and longer-lasting consequences. Each mundane decision affects a large number of lives that parts of me will experience, in addition to its effects on others.
I don't see #1 affecting decision making because it happens no matter what, and therefore shouldn't differ based on our own choices or values. I guess you could argue it implies an absurdly high discount rate if you see the resulting branches as sufficiently separate from one another, but if the resulting worlds are ones I care about, then the measure dilution is just the default baseline I start from in my reasoning. Unless there is some way we can or could meaningfully increase the multiplication rate in some sets of branches but not others? I don't think that's likely with any methods or tech I can foresee.
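To spell out the "it happens no matter what, so it cancels" step: here's a toy expected-value comparison, where the action weights $w_i(a)$, utilities $U_i$, and common branching factor $m > 0$ are my own illustrative notation, not anything standard:

$$\arg\max_a \sum_i m\, w_i(a)\, U_i \;=\; \arg\max_a \sum_i w_i(a)\, U_i$$

A factor $m$ that multiplies every branch of every action drops out of any comparison between actions, which is why dilution that happens regardless of what we choose can't favor one choice over another.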
#2 seems like an argument for improving ourselves to be more mindful in our choices, so that we're more coherent on average, and #3 an argument for improving our average decision making. The main difference I can think of in how measure affects things is maybe which features of the distribution of outcomes/probabilities across choices I care about.
It's not my field of expertise, so I have only vague impressions of what is going on, and I certainly wouldn't recommend anyone else use me as a source.
I'm not entirely sure how many of these I agree with, but I don't really think any of them could be considered heretical or even all that uncommon as opinions on LW?
All but #2 seem to me to be pretty well-represented ideas, even in the Sequences themselves (to the extent the ideas existed when the Sequences were written).
#2 seems to me to rely on the idea that the process of writing is central, or otherwise critical, to the process of learning about, and forming a take on, a topic. I have thought about this, and I think for some people it is true, but for me writing is often a process of translating an already-existing conceptual web into a linear approximation of itself. I'm not very good at writing in general, and having an LLM help me wordsmith concepts and workshop ideas as a dialogue partner is pretty helpful. I usually form my takes while reading and discussing and then thinking quietly, not so much during writing if I'm writing by myself.

Say I read a bunch of things or have some conversations, take notes on these, write an outline of the ideas/structure I want to convey, and share the notes and outline with an LLM. I ask it to write a draft that it and I then work on collaboratively. How is that meaningfully worse than writing alone, or writing with a human partner? Unless you meant literally "Ask an LLM for an essay on a topic and publish it," in which case yes, I agree.
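For concreteness, here's a minimal sketch of that drafting step, assuming the Anthropic Python SDK; the file names and model id are illustrative assumptions, not part of my actual setup:

```python
# Minimal sketch: turn existing notes + an outline into a first draft to revise together.
# Assumes ANTHROPIC_API_KEY is set in the environment; "notes.md", "outline.md", and the
# model id are illustrative placeholders.
from pathlib import Path

import anthropic

notes = Path("notes.md").read_text()
outline = Path("outline.md").read_text()

client = anthropic.Anthropic()
draft = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model id; use whichever model you prefer
    max_tokens=4000,
    messages=[{
        "role": "user",
        "content": (
            "My reading notes:\n\n" + notes
            + "\n\nThe outline of the ideas/structure I want to convey:\n\n" + outline
            + "\n\nWrite a first draft that follows my outline and sticks to my notes."
        ),
    }],
)
print(draft.content[0].text)  # a starting point for collaborative revision, not a finished essay
```

The point of the sketch is that the ideas and structure are already fixed before the model sees anything; it's doing the wordsmithing, not the thinking.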
This is true. But I don't think what we need is cleverness, except to the extent that it takes a clever way of communicating to help people understand why the current policies produce bad incentives, and to get them to agree to change them.