Edit to add: I think almost every concept we use in life is part metaphor, part not, and the difference is one of degree and not kind. I was definitely surprised to learn this, or at least to learn how deep the rabbit hole goes.
Almost all human thinking is part metaphor.
Words have uses, not meanings. Definitions are abstractions.
In other words, everything is (in part) a metaphor.
This is almost true. Fat is less dense than water, so a tablespoon of butter weighs something like 10% less than a half ounce. Not enough to matter in practice for most cooking. Your toast and your average chocolate chip cookie don't care. But, enough that professionals use weight not volume in most recipes. And enough that the difference in fat content between butters (as low as 80% in the US but more often 85+% in European or otherwise "premium" butters) can matter in more sensitive recipes, like pie crust and drop biscuits. I used to add 1-2 Tbsp of shortening to my pie crust. I stopped when I switched to Kerrygold butter - no longer needed.
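The density arithmetic behind that claim is easy to sanity-check. A quick sketch, using round figures I'm assuming rather than the comment's exact numbers (roughly 1.00 g/mL for water and 0.91 g/mL for butter, with butter's actual density varying by fat content):

```python
# Rough check: how much lighter is a tablespoon of butter than
# a tablespoon of water (which weighs about half an ounce)?
# Densities below are approximate assumptions, not measured values.

TBSP_ML = 14.79          # volume of 1 US tablespoon, in mL
WATER_G_PER_ML = 1.00    # water density, approximate
BUTTER_G_PER_ML = 0.91   # butter density, approximate; varies with fat content

water_g = TBSP_ML * WATER_G_PER_ML    # ~14.8 g, i.e. about half an ounce
butter_g = TBSP_ML * BUTTER_G_PER_ML  # ~13.5 g
shortfall = 1 - butter_g / water_g    # fractional difference, ~9%

print(f"1 Tbsp water  ≈ {water_g:.1f} g")
print(f"1 Tbsp butter ≈ {butter_g:.1f} g ({shortfall:.0%} lighter)")
```

Under these assumptions the gap comes out near 9%, consistent with the "something like 10% less" figure, and small enough per tablespoon that it only bites in recipes where fat ratios are sensitive.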
A classic example is that, at least in English, time is often described using distance metaphors.
I knew this was a metaphor, but until I took Mandarin in college I never realized that other languages/cultures/people used different spatial metaphors for time. Is the future in front of you, or behind? Are you moving towards it, or is it moving towards you? This has some practical applications, since apparently even in English people have different intuitions about what it means to push a meeting or event up/out/back/ahead.
I think it would be more accurate to say we should have multiple levels of dangerous capability tests, which reflect different levels of increased danger.
For example, someone who has never pipetted before might struggle to measure microliters precisely or contaminate the tip when touching a bottle. Acquiring these skills often takes months of learning from experienced scientists — something terrorists can’t easily do.
This seems like a very thin line of defense. I first worked as an intern in a bio lab and learned how to pipette when I was in high school. The physical lack of a fume hood for sterile technique seems like a slightly bigger barrier, but even then, it's not all that hard to find people with radical beliefs who have science or engineering degrees, and many of those will be able to get access to basic lab facilities. How many people have ever worked in a wet lab for even one month? In the proposed scenario, all of those people will have access to the knowledge and skills needed to make bioweapons within a year or two. And how many of them could be persuaded to teach a layperson the basic mechanical skills? Maybe pay them and claim they're teaching an introductory class for a job training program or something.
In other words:
you probably aren’t going to make perfect meringues the first time because everything about your kitchen — the humidity, the dimensions, and power of your oven, the exact timing of how long you whipped the egg whites — is a little bit different than the person who wrote the recipe
is true. But it's not so reassuring if I want to make sure no one successfully makes meringues without permission, when each would-be meringue-maker gets at least a handful of attempts on average, millions of people have done other kinds of cooking before, and, in addition to the recipe, they're allowed to watch every video and read every article available for free online demonstrating different aspects of the process.
This is true. But I don't think what we need is cleverness, except to the extent that it takes cleverness to communicate with people so they understand why the current policies produce bad incentives and agree to change them.
I think our collective HHS needs are less "clever policy ideas" and more "actively shoot ourselves in the foot slightly less often."
That's a good point about public discussions. It's not how I absorb information, but I can definitely see that.
I'm not sure where I'm proposing bureaucracy? The value is in making sure a conversation efficiently adds value for both parties: a very modest amount of groundwork beforehand avoids the friction of rehashing 101-level prerequisites that are absorbed much faster in advance, and maximizes the rate of insight in the discussion itself.
I'm drawing in large part from personal experience. A significant part of my job is interviewing researchers, startup founders, investors, government officials, and assorted business people. Before I get on a call with these people, I look them (and their current and past employers, as needed) up on LinkedIn and Google Scholar and their own webpages. I briefly familiarize myself with what they've worked on and what they know and care about and how they think, as best I can anticipate, even if it's only for 15 minutes. And then when I get into a conversation, I adapt. I'm picking their brain to try to learn, so I adapt to their communication style and translate between their worldview and my own. If I go in with an idea of what questions I want answered, and those turn out not to be the important questions, or this turns out to be the wrong person to discuss them with, I change direction. Not doing this often leaves everyone involved frustrated at having wasted their time.
Also, should I be thinking of this as a debate? Because that's very different from a podcast or interview or discussion; these all have different goals. A podcast or interview is where the standard I'm describing is most appropriate. If you want a deep discussion, it's insufficient, and you need to do more prep work or you'll never reach the meatiest parts of where you want to go. I do agree that if you're having a (public-facing) debate where the goal is to win, then sure, this is not strictly necessary. The history of e.g. "debates" in politics, or between creationists and biologists, shows that clearly. I'm not sure I'd consider that "meaningful" debate, though. Meaningful debates happen by seriously engaging with the other side's ideas, which requires understanding those ideas.
I can totally believe this. But I also think that responsibly wearing the scientist hat entails prep work before engaging in a four-hour public discussion with a domain expert. At minimum that includes skimming the titles, and ideally the abstracts or outlines, of their key writings. Maybe ask Claude to summarize the highlights for you. If he'd done that, he'd have figured out the answers to many of these questions on his own, or much faster during the discussion. He's too smart not to.
Otherwise, you're not actually ready to have a meaningful scientific discussion with that person on that topic.
I find this intuitively reasonable and in alignment with my own perceptions. A pet peeve of mine has long been that people say "sentient" instead of "sapient" - at minimum since I first read The Color of Magic and really thought about the difference. We've never collectively had to consider minds that were more clearly sapient than sentient, and our word-categories aren't up to it.
I think it's going to be very difficult to disentangle the degree to which LLMs experience vs imitate the more felt aspects/definitions of consciousness. Not least because even humans sometimes get mixed up about this within ourselves.
In the right person, a gradual-onset mix of physicalized depression symptoms, anhedonia, and alexithymia can go a long way towards making you, in some sense, not fully conscious in a way that is invisible to almost everyone around you. Ask me how I know: There's a near decade-long period of my life where, when I reminisce about it, I sometimes say things like, "Yeah, I wish I'd been there."