Jacob, or "Jisk" when there are too many Jacobs about and I need a nickname. Host of the North Oakland LW Meetups, every Tuesday.
Honestly pretty disappointed with the state of the modern LW site, but it's marginally better than other non-blogs so I'm still here.
It should be possible to easily find me from the username I use here, though not vice versa, for interview reasons.
I still prefer the ones I see there to what I see on LW. Lower quantity higher value.
Currently no great alternatives exist, because LW killed them. When LW was rebooted, the quality of the comment sections on SSC and most of the other rationalist blogs I was following got much worse (and several of those blogs died outright). Initially LW looked like an improvement, but over time its structural flaws killed that too.
I still see much better comments on individual blogs - Zvi, Sarah Constantin, Elizabeth vN, etc. - than on LessWrong. Some community Discords are pretty good, though they are small walled gardens; rationalist Tumblr has, surprisingly, gotten actively better over time, even as it shrank. All of these are low volume.
It's possible in theory that the volume of good comments on LessWrong is higher than in those places. I don't know, and in practical terms I don't care, because they're drowned out by junk, mostly highly-upvoted junk. I don't bother to look for good comments here at all; the odds are bad enough that it isn't worthwhile. I post here only for visibility, not for good feedback, because I know I won't get it; I only noticed this post at all because of a link from a Discord.
Groupthink is not a possible future, to be clear. It's already here in a huge way, and probably not fixable. If there was a chance of reversing the trend, it ended with Said being censured and censored for being stubbornly anti-groupthink to the point of rudeness; he was braver, or more stubborn, than I was, and kept trying for a couple of years after I gave up.
I see much more value in Lighthaven than in the rest of the activity of Lightcone.
I wish Lightcone would split into two (or even three) organizations. I would unequivocally endorse donating to Lighthaven and recommend it to others. By contrast, for LessWrong I'm not at all confident it's net positive over blogs and Substacks, and the grantmaking infrastructure and other meta work is highly uncertain in value and probably highly replaceable.
All of the analysis of the impact of new LessWrong is misleading at best: it assumes that volume on LessWrong is good in itself, which I do not believe. If similar volume is being taken from other places, e.g. commenters dropping away from blogs on the SSC blogroll and failing to create their own Substacks, which I think is very likely true, then this is of minimal benefit to the community and likely negative benefit to the world, as LW is less visible and influential than strong existing blogs or well-written new Substacks.
That's on top of my long-standing objections to the structure of LW, which is bad for community epistemics by encouraging groupthink, in a way that standard blogs are not. If you agree with my contention there, then even a large net increase in volume would still be, in expectation, significantly negative for the community and the world. Weighted voting delenda est; post-author moderation delenda est; in order to win the war of good group epistemics we must accept losing the battles of discouraging some marginal posts from the prospect of harsh, rude, and/or ignorant feedback.
That was true this week, but the first time I attended (the 12th) I believe it wasn't; I arrived at what I think was 6:20-6:25 and found everything had already started.
Based on my prior experience running meetups, a 15m gap between 'doors open' and starting the discussion is too short. 30m is the practical minimum; I prefer 45-60m because I optimize for low barrier to entry (as a means of being welcoming).
I also find this to be a significant barrier in participating myself, as targeting a fifteen-minute window for arrival is usually beyond my planning abilities unless I have something else with a hard end time within the previous half-hour.
The amount of empty space where the audience understands what's going on and nothing new or exciting is happening is much, much higher in 60s-70s film and TV. Pacing is an art, and that art has improved drastically in the last half-century.
Standards, also, were lower, though I'm more confident in this for television. In the 90s, to get kids to be interested in a science show you needed Bill Nye. In the 60s, doing ordinary high-school science projects with no showmanship whatsoever was wildly popular because it was on television and this was inherently novel and fascinating. (This show actually existed.)
A man who is always asking 'Is what I do worth while?' and 'Am I the right person to do it?' will always be ineffective himself and a discouragement to others.
-- G.H. Hardy, A Mathematician's Apology
a belief is only really worthwhile if you could, in principle, be persuaded to believe otherwise
There's a point to be made here about why 'unconditional love' is unsatisfying to the extent the description as 'unconditional' is accurate.
...Oh, my mistake, it looked like they were posted a lot later than that and the ~skipped one made that look confirmed. Usually-a-week ahead is plenty of time and I'm sorry I said anything.
I don't have much understanding of current AI discussions, and it's possible those are somewhat better, i.e. a less advanced case of the rot.
Those same psychological reasons indicate that anything which is actual dissent will be interpreted as incivility. This has happened here and is happening as we speak. It was one of the significant causes of SBF. It's significantly responsible for the rise of woo among rationalists, though my sense is that that has started to recede (years later). It's why EA as a movement seems to be mostly useless at this point, coasting on gathered momentum (mostly in the form of people who joined early and kept their principles).
I'm aware there is a tradeoff, but being committed to truthseeking demands that we pick one side of that tradeoff, and LessWrong the website has chosen to pick the other side instead. I predicted this would go poorly years before any of the things I named above happened.
I can't claim to have predicted the specifics, so I don't get many Bayes Points for any of them, but they're all within-model, especially EA's drift (mostly seeking PR and movement breadth). The earliest specific point where I observed this problem happening was 'Intentional Insights', where it was considered uncivil to observe that the man was a huckster faking community signals, and so it took several rounds of blatant hucksterism for him to finally be disavowed and forced out. If EA had learned this lesson then, it would be much smaller, but could probably (80%) have avoided involvement in FTX. LW-central rationalism is not as bad, yet, but it looks to be on the same path to me.