I think he just objected to the phrasing. I do think it's a stretch to describe "set up a system where people can be banned by others whom Said does not instruct on who to ban" as "Said bans people from DSL."
I have generally found Said to mean the things he says quite literally and to expect others to do so as well. It's painful to read a conversation where one person keeps assigning subtext to another who quite clearly never intended to put it there.
Sidechannel note: Said wishes it to be known that he neither bans people from DSL nor customarily has the right to, the task being delegated to moderators rather than the sysop. ( https://share.obormot.net/textfiles/MINHjLX7 )
I'm very good friends with someone who is persistently critical, and it has imo largely improved my mental health, fwiw, by forcing me to construct a functioning and well-maintained ego, which I didn't really have before.
I think in my whole life I have seen exactly one person come back because another person left, and even they didn't stay long. Broadly speaking, I don't think this ever works.
I think this only works if your standards for posts are in sync with those of the outside world. Otherwise, you're operating under incompatible status models and cannot sustain your community standards against outside pressure: you will always be outcompeted by the outside world (which can pretty much always offer more status than you can, simply by volume) unless you can maintain the worth of your respect, and you cannot do that by copying outside appraisal.
I think you failed to establish that the long, well-written, and highly-upvoted critiques lived in the larger LW archipelago, so there's a hole in your existence proof. On that basis, I'd surmise that Said assumed, on priors, that you were referring to comments or on-site posts.
Sounds like you should create PokemonBench.
I don't understand it, but it does make me feel happy.
Haven't heard back yet...
edit: Heard back!
I think what is actually happening is "yes, all the benchmarks are inadequate." In humans, those benchmarks correlate with a particular kind of ability we might call 'able to navigate society and to improve it in some field.' Top-of-the-line AIs still routinely delete people's home dirs and cannot run a profitable business even when extensively hand-held. AIs have only really started this year to convincingly contribute to software projects beyond toys. There are still many software projects that could never be created even by a team of AIs all running in pro mode at 100x the cost of living of a human. Benchmarks are fundamentally an attempt to measure a known cognitive manifold by sampling it at points. What we have learnt in these years is that it is possible to build an intelligence whose cognitive manifold is much more fragmented than ours.
This is what I think is happening. Humans use maybe a dozen strong generalist strategies with diverse modalities, which are evaluated slowly and then cached. LLMs use one, backprop on token prediction, which is general enough to generate hundreds of more-or-less-shared subskills. But that means the main mechanism that gives an LLM a skill in the first place is not evaluated for over half its lifetime. As a consequence, LLMs are monkey's paws: they can become good at any skill that can be measured, and in doing so they demonstrate that the skill you actually wanted (the immeasurable one that you hoped the measurable one would provide evidence for) did not benefit nearly as much as you hoped.
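For concreteness, here is a minimal sketch of what "one strategy: backprop on token prediction" cashes out to in practice; the `model` and `optimizer` are hypothetical stand-ins for any autoregressive transformer and its optimizer, and the point is just that a single cross-entropy objective on the next token is the only learning signal, from which every downstream subskill has to emerge.

```python
import torch
import torch.nn.functional as F

def next_token_loss(model, tokens):
    # `model` maps token ids (batch, seq) to logits (batch, seq, vocab).
    logits = model(tokens[:, :-1])   # predict each next token from the prefix
    targets = tokens[:, 1:]          # the tokens actually observed
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
    )

def train_step(model, optimizer, tokens):
    # Every skill the model ends up with is carved out of this one objective;
    # the strategy itself is never re-evaluated once training stops.
    loss = next_token_loss(model, tokens)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```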
It's strange how things worked out. Decades of goalpost-shifting, and we have finally created a general, weakly superhuman intelligence that is specialized towards hitting marked goals and nothing else.