I agree it's more related than a randomly selected Nate post would be, but the comment itself did not seem particularly aimed at arguing that Nate's advice was bad or that following it would have undesirable consequences[1]. (I think the comments it was responding to were pretty borderline here.)
I think I am comfortable arguing that it would be bad if every post that Nate made on subjects like "how to communicate with people about AI x-risk" included people leaving comments with argument-free pointers to past Nate-drama.
The most recent post by Nate seemed good to me; I think its advice was more-than-sufficiently hedged and do not think that people moving in that direction on the margin would be bad for the world. If people think otherwise they should say so, and if they want to use Nate's interpersonal foibles as evidence that the advice is bad that's fine, though (obviously) I don't expect I'd find such arguments very convincing.
[1] When keeping in mind its target audience.
I think it would be bad for every single post that Nate publishes on maybe-sorta-related subjects to turn into a platform for relitigating his past behavior[1]. This would predictably eat dozens of hours of time across a bunch of people. If you think Nate's advice is bad, maybe because you think that people following it risk behaving more like Nate (in the negative ways that you experienced), then I think you should make an argument to that effect directly, which seems more likely to accomplish (what I think is) your goal.
[1] Which, not having previously expressed an opinion on it, I'll say once: sounds bad to me.
If the only work you notice on Aella is what was done on her lips, then I think that demonstrates the point well enough: you don't notice most "high quality" plastic surgery.
(Separately, even accepting for the sake of argument that you do notice most work done and have a negative reaction to it, that is not very strong counterevidence to the original claim.)
They imagine writing small and carefully locked-down infrastructure and allowing the AIs to interact with it.
That's surprising and concerning. As you say, if these companies expect their AIs to do end-to-end engineering and R&D tasks internally, it seems difficult to imagine how they could do that without those AIs having employee-level privileges. Any place where they don't is a place where humans turn into a bottleneck. I can imagine a few possible objections to this.
Like, to be clear, I would definitely prefer a world where these organizations wrote "small and carefully locked-down infrastructure" as the limited surface their AIs were allowed to interact with; I just don't expect that to actually happen in practice.
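For concreteness, here's a minimal sketch of the kind of locked-down surface I understand them to be imagining; everything here (the action types, the sandbox path, the `authorize` function) is invented for illustration, not any lab's actual design:

```ts
// Hypothetical agent-facing surface: every action the AI can take is an
// explicit allowlisted variant, validated before execution.
type AgentAction =
  | { kind: "readFile"; path: string }
  | { kind: "runTests"; suite: string };

const SANDBOX_PREFIX = "/srv/sandbox/";
const ALLOWED_SUITES = new Set(["unit", "lint"]);

function authorize(action: AgentAction): boolean {
  switch (action.kind) {
    case "readFile":
      // Reads only, confined to the sandbox; no writes, no network, no shell.
      return action.path.startsWith(SANDBOX_PREFIX) && !action.path.includes("..");
    case "runTests":
      return ALLOWED_SUITES.has(action.suite);
  }
}
```

Note that every capability such a surface omits (writes, deploys, network access) routes back through a human, which is exactly the bottleneck dynamic described above.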
This comment describes how the images for the "Best of LessWrong" (review winners) were generated. (The exact workflow has varied a lot over time, as image models have changed quite a lot, and LLMs didn't always exist, and we've built more tooling for ourselves, etc.)
The prompt usually asks for an aquarelle painting, often in the style of Thomas Schaller. (There are many other details, but I'm not usually the one doing the artwork, so I'm not best positioned to point to common threads.) And then there's a pretty huge amount of iteration and sometimes post-processing/tweaking.
Almost every comment rate limit stricter than "once per hour" is in fact conditional in some way on the user's karma, and above 500 karma you can't even be (automatically) restricted to less than one comment per day:
https://github.com/ForumMagnum/ForumMagnum/blob/master/packages/lesswrong/lib/rateLimits/constants.ts#L108
```ts
// 3 comments per day rate limits
{
  ...timeframe('3 Comments per 1 days'),
  appliesToOwnPosts: false,
  rateLimitType: "newUserDefault",
  isActive: user => (user.karma < 5),
  rateLimitMessage: `Users with less than 5 karma can write up to 3 comments a day.<br/>${lwDefaultMessage}`,
},
{
  ...timeframe('3 Comments per 1 days'), // semi-established users can make up to 20 posts/comments without getting upvoted, before hitting a 3/day comment rate limit
  appliesToOwnPosts: false,
  isActive: (user, features) => (
    user.karma < 2000 &&
    features.last20Karma < 1
  ), // requires 1 weak upvote from a 1000+ karma user, or two new user upvotes, but at 2000+ karma I trust you more to go on long conversations
  rateLimitMessage: `You've recently posted a lot without getting upvoted. Users are limited to 3 comments/day unless their last ${RECENT_CONTENT_COUNT} posts/comments have at least 2+ net-karma.<br/>${lwDefaultMessage}`,
},
// 1 comment per day rate limits
{
  ...timeframe('1 Comments per 1 days'),
  appliesToOwnPosts: false,
  isActive: user => (user.karma < -2),
  rateLimitMessage: `Users with less than -2 karma can write up to 1 comment per day.<br/>${lwDefaultMessage}`
},
{
  ...timeframe('1 Comments per 1 days'),
  appliesToOwnPosts: false,
  isActive: (user, features) => (
    features.last20Karma < -5 &&
    features.downvoterCount >= (user.karma < 2000 ? 4 : 7)
  ), // at 2000+ karma, I think your downvotes are more likely to be from people who disagree with you, rather than from people who think you're a troll
  rateLimitMessage: `Users with less than -5 karma on recent posts/comments can write up to 1 comment per day.<br/>${lwDefaultMessage}`
},
// 1 comment per 3 days rate limits
{
  ...timeframe('1 Comments per 3 days'),
  appliesToOwnPosts: false,
  isActive: (user, features) => (
    user.karma < 500 &&
    features.last20Karma < -15 &&
    features.downvoterCount >= 5
  ),
  rateLimitMessage: `Users with less than -15 karma on recent posts/comments can write up to 1 comment every 3 days. ${lwDefaultMessage}`
},
// 1 comment per week rate limits
{
  ...timeframe('1 Comments per 1 weeks'),
  appliesToOwnPosts: false,
  isActive: (user, features) => (
    user.karma < 0 &&
    features.last20Karma < -1 &&
    features.lastMonthDownvoterCount >= 5 &&
    features.lastMonthKarma <= -30
  ),
  // Added as a hedge against someone with positive karma coming back after some period of inactivity and immediately getting into an argument
  rateLimitMessage: `Users with -30 or less karma on recent posts/comments can write up to one comment per week. ${lwDefaultMessage}`
},
```
I think you could make an argument that being rate-limited to one comment per day is too strict given its conditions, but I don't particularly buy this as an argument against rate limiting long-term commenters in general.
But presumably you want long-term commenters with large net-positive karma to stick around, and not to be annoyed by the site UI by default.
A substantial design motivation behind the rate limits, beyond throttling newer users who haven't yet learned the ropes, was to reduce the incidence and blast radius of demon threads. There might be other ways of accomplishing this, but it does require somehow discouraging or preventing users (even older, high-karma users) from contributing to them. (I agree that it's reasonable to be annoyed by how the rate limits are currently communicated, which is a separate question from being annoyed at the rate limits existing at all.)
Hi Bharath, please read our policy on LLM writing before making future posts consisting almost entirely of LLM-written content.
In a lot of modern science, top-line research outputs often look like "intervention X caused a 7% change in metric Y, p < 0.03" (sometimes with confidence intervals that intersect 0%). This kind of relatively gears-free model can be pathological when it turns out that metric Y was actually caused by five different things, only one of which was responsive to intervention X, but with a very large effect size in that case. (A relatively well-known example is peptic ulcers, where common treatments would often have no effect because the ulcers were frequently caused by an H. pylori infection, which those treatments did nothing to address.)
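As a toy illustration of that dilution (my own sketch, not from the original discussion; all the numbers are invented), suppose one case in five has the cause that responds to intervention X:

```ts
// Toy model: metric Y has five causes; intervention X works on only one of them.
function averageEffect(n: number): number {
  let treated = 0;
  let control = 0;
  for (let i = 0; i < n; i++) {
    const hasResponsiveCause = Math.random() < 0.2; // 1 of 5 causes
    control += Math.random() < 0.3 ? 1 : 0; // 30% baseline recovery for everyone
    const pRecover = hasResponsiveCause ? 0.9 : 0.3; // large effect, but only in-subgroup
    treated += Math.random() < pRecover ? 1 : 0;
  }
  return (treated - control) / n;
}

// Expected top-line effect: 0.2 * (0.9 - 0.3) = 12%, hiding a 60-point subgroup effect.
console.log(averageEffect(100_000).toFixed(3));
```

An aggregate-only analysis reports the ~12% and stops; the subgroup structure is what actually matters for treatment.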
On the other end of the spectrum are individual trip reports and self-experiments. These too have their pathologies[1], but they are at least capable of providing the raw contact with reality that is necessary to narrow down the search space of plausible theories and discriminate between hypotheses.
With the caveat that I'm default-skeptical of how this generalizes (which the post also notes), such basic foundational science seems deeply undersupplied at this level of rigor. Curated.
[1] Taking psychedelic experiences at face value, for instance.
This doesn't seem at all conservative given your description of how honey bees are treated, which reads like it was selecting for the worst possible things you could find plausible citations for. In fact, very little of your description makes an argument about how much we should expect such bees to be suffering in an ongoing way, day-to-day. What I know of how broiler chickens are treated makes suffering ratios (of bees relative to chickens) closer to 0.1% than to 10% seem reasonable to me. This also neglects the quantities that people are likely to consume, which could trivially vary by 3 OoM.
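Spelling out how those two uncertainties compound (with made-up bounds, purely for illustration):

```ts
// Illustrative bounds only, not the post's numbers: the expected suffering
// contribution of a food scales as (suffering ratio) * (quantity consumed).
const ratioLow = 0.001; // 0.1% suffering ratio
const ratioHigh = 0.1;  // 10% suffering ratio (a 2 OoM spread)
const qtyLow = 1;       // relative quantity consumed
const qtyHigh = 1000;   // a 3 OoM spread

// The spreads multiply, so the bottom line can move by ~5 OoM:
const spread = (ratioHigh * qtyHigh) / (ratioLow * qtyLow);
console.log(spread); // 100000
```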
If you're a vegan, I think there are a bunch of good reasons not to make exceptions for honey. But if you're trying to convince non-vegans who want to cheaply reduce their own contributions to animal suffering, I don't think they should find this post very convincing.