LessWrong dev & admin as of July 5th, 2022.
Why did the early computer vision scientists not succeed in writing a formal ruleset for recognizing birds, such that it ultimately took a messy kludge of inscrutable learned heuristics to solve that task?
I disapprove of Justice Potter Stewart in many respects, but "I know it when I see it" is indeed sometimes the only practical[1] way to carve reality.
(This is not meant to be a robust argument, just a couple of pointers at countervailing considerations.)
For humans.
And it seems like you forgot about them too by the time you wrote your comment.
It was not clear from your comment which particular catastrophic failures you meant (and in fact it's still not clear to me which things from your post you consider to be in that particular class of "catastrophic failures", for which of them you assign MIRI/CFAR at least partial responsibility, by what mechanisms/causal pathways, etc.).
ETA: "OpenAI existing at all" is an obvious one, granted. I do not think EY considers SBF to be his responsibility (reasonable, given SBF's intellectual inheritance from the parts of EA that were least downstream of EY's thoughts). You don't mention other grifters in your post.
FYI I am generally good at tracking inside baseball but I understand neither what specific failures[1] you would have wanted to see discussed in an open postmortem nor what things you'd consider to be "improvements" (and why the changes since 2022/04/01 don't qualify).
I'm sure there were many, but I have no idea what you consider to have been failures, and it seems like you must have an opinion because otherwise you wouldn't be confident that the changes over the last three years don't qualify as improvements.
People sometimes ask me what's good about glowfic, as a reader.
You know that extremely high-context joke you could only make to that one friend you've known for years, because you shared a bunch of specific experiences which were load-bearing for the joke to make sense at all, let alone be funny[1]? And you know how that joke is much funnier than the average low-context joke?
Well, reading glowfic is like that, but for fiction. You get to know a character as imagined by an author in much more depth than you'd get with traditional fiction, because the author writes many stories using the same character "template", where the character might be younger, older, a different species, a different gender... but still retains some recognizable, distinct "character". You get to know how the character deals with hardship, how they react to surprises, what principles they have (if any). You get to know Relationships between characters, similarly. You get to know Societies.
Ultimately, you get to know these things better than you know many people, maybe better than you know yourself.
Then, when the author starts a new story, and tosses a character you've seen ten variations of into a new situation, you already have _quite a lot of context_ for modeling how the character will deal with things. This is Fun. It's even more Fun when you know many characters by multiple authors like that, and get to watch them deal with each other. There's also an element of parasocial attachment and empathy, here. Knowing someone[2] like that makes everything they're going through more emotionally salient - victory or defeat, fear or jubilation, confidence or doubt.
Part of this is simply a function of word count. Most characters don't have millions of words[3] written featuring them. I think the effect of having the variation in character instances and their circumstances is substantial, though.
Probably I should've said this out loud, but I had a couple of pretty explicit updates in this direction over the past couple years: the first was when I heard about character.ai (and similar), and the second was when I saw all TPOTers talking about using Sonnet 3.5 as a therapist. The first is the same kind of bad idea as trying a new addictive substance; the second might be good for many people, but probably carries much larger risks than most people appreciate. (And if you decide to use an LLM as a therapist/rubber duck/etc, for the love of god don't use GPT-4o. Use Opus 3 if you have access to it. Maybe Gemini is fine? Almost certainly better than 4o. But you should consider using an empty Google Doc instead, if you don't want to or can't use a real person.)
I think using them as coding and research assistants is fine. I haven't customized them to be less annoying to me personally, so their outputs often are annoying. Then I have to skim over the output to find the relevant details, and don't absorb much of the puffery.
If we assume conservatively that a bee’s life is 10% as unpleasant as chicken life
This doesn't seem at all conservative based on your description of how honey bees are treated, which reads like it was selecting for the worst possible things you could find plausible citations for. In fact, very little of your description makes an argument about how much we should expect such bees to be suffering in an ongoing way day-to-day. What I know of how broiler chickens are treated makes suffering ratios like 0.1% (rather than 10%) seem reasonable to me. This also neglects the quantities that people are likely to consume, which could trivially vary by 3 OoM.
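To make the sensitivity concrete, here's a rough back-of-the-envelope sketch using only the figures mentioned in this thread: the post's assumed 10% suffering ratio, the 0.1% ratio that seems more reasonable to me, and a ~3 OoM possible spread in relative quantities consumed. These are illustrative numbers, not measurements; the point is just how far the bottom line can swing when both corrections compound.

```python
# Sensitivity sketch: how much the honey-vs-chicken comparison moves if you
# swap the post's assumed suffering ratio for a lower one and account for a
# large spread in relative consumption. All inputs are illustrative numbers
# taken from this thread, not empirical estimates.

posts_ratio = 0.10          # post's "conservative" bee suffering relative to a chicken's
my_ratio = 0.001            # ratio that seems more plausible to me
consumption_factor = 1_000  # plausible spread in relative quantities consumed (~3 OoM)

# The implied moral weight scales roughly as (suffering ratio) x (quantity
# consumed), so the two corrections multiply.
swing = (posts_ratio / my_ratio) * consumption_factor
print(f"Combined swing in the comparison: ~{swing:,.0f}x (about 5 orders of magnitude)")
```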
If you're a vegan, I think there are a bunch of good reasons not to make exceptions for honey. If you're trying to convince non-vegans who want to cheaply reduce their own contributions to animal suffering, I don't think they should find this post very convincing.
I agree it's more related than a randomly selected Nate post would be, but the comment itself did not seem particularly aimed at arguing that Nate's advice was bad or that following it would have undesirable consequences[1]. (I think the comments it was responding to were pretty borderline here.)
I think I am comfortable arguing that it would be bad if every post that Nate made on subjects like "how to communicate with people about AI x-risk" included people leaving comments with argument-free pointers to past Nate-drama.
The most recent post by Nate seemed good to me; I think its advice was more-than-sufficiently hedged and do not think that people moving in that direction on the margin would be bad for the world. If people think otherwise they should say so, and if they want to use Nate's interpersonal foibles as evidence that the advice is bad that's fine, though (obviously) I don't expect I'd find such arguments very convincing.
When keeping in mind its target audience.
I think it would be bad for every single post that Nate publishes on maybe-sorta-related subjects to turn into a platform for relitigating his past behavior[1]. This would predictably eat dozens of hours of time across a bunch of people. If you think Nate's advice is bad, maybe because you think that people following it risk behaving more like Nate (in the negative ways that you experienced), then I think you should make an argument to that effect directly, which seems more likely to accomplish (what I think is) your goal.
Which, not having previously expressed an opinion on it, I'll say once: it sounds bad to me.
I believe this post to be substantially motivated by Zack's disagreement with LessWrong moderators about appropriate norms on LessWrong. (Epistemic status: I am one of the moderators who spoke to Zack on the subject, as indicated[1] in the footer of his post.)
Sort of.