I mostly feel bad about LessWrong these days. I slightly dread logging on, I don't expect to find much that is insightful on the website, and I think the community has a lot of groupthink / other "ew" factors that are harder for me to pin down (although I think that's improved over the last year or two). I also feel some dread at posting this because it might burn social capital I have with the mods, but whatever.
(Also, most of this stuff is about the community and not directly in the purview of the mods anyways.)
Here are some rambling thoughts, though:
I expect there to be a bunch of responses which strike me as defensive, revisionist gaslighting, and I don't know if/when I'll reply.
I feel pretty frustrated at how rarely people actually bet or make quantitative predictions about existential risk from AI. [...]
I think that alignment "theorizing" is often a bunch of philosophizing and vibing in a way that protects itself from falsification (or even proof-of-work) via words like "pre-paradigmatic" and "deconfusion." I think it's not a coincidence that many of the "canonical alignment ideas" somehow don't make any testable predictions until AI takeoff has begun.
This sentiment resonates strongly with me.
Some personal background: I remember getting pretty heavily involved in AI alignment discussions on LessWrong in 2019. Back then I think there were a lot of assumptions people had about what "the problem" was that are, these days, often forgotten, brushed aside, or sometimes even deliberately minimized post-hoc in order to give the impression that the field has a better track record than it actually does. [ETA: but to be clear, I don't mean to say everyone made the same mistake I describe here]
This has been a bit shocking and disorienting to me, honestly, because at the time in 2019 I didn't get the strong impression that people were deliberately constr...
I wrote a fair amount about alignment from 2014-2020[1] which you can read here. So it's relatively easy to get a sense for what I believed.
Here are some summary notes about my views as reflected in that writing, though I'd encourage you to just judge for yourself[2] by browsing the archives:
Some thoughts on my journey in particular:
Suppose in 2024-2029, someone constructs an intelligent robot that is able to clean a room to a high level of satisfaction, consistent with the user’s intentions, without any major negative side effects or general issues of misspecification. It doesn’t break any vases while cleaning.
I remember explicit discussion about how solving this problem shouldn't even count as part of solving long-term / existential safety, for example:
"What I understand this as saying is that the approach is helpful for aligning housecleaning robots (using near extrapolations of current RL), but not obviously helpful for aligning superintelligence, and likely stops being helpful somewhere between the two. [...] There is a risk that a large body of safety literature which works for preventing today's systems from breaking vases but which fails badly for very intelligent systems actually worsens the AI safety problem" https://www.lesswrong.com/posts/H7KB44oKoSjSCkpzL/worrying-about-the-vase-whitelisting?commentId=rK9K3JebKDofvJA3x
...Why is it so hard to find people explicitly saying that this specific problem, and the examples illustrating it, were not meant to be seriously representative of the hard parts of
This matches my sense of how a lot of people seem to have... noticed that GPT-4 is fairly well aligned to what the OpenAI team wants it to be, in ways that Yudkowsky et al. said would be very hard, and still not view this as, at a minimum, a positive sign?
I.e., problems of the class 'I told the intelligence to get my mother out of the burning building and it blew her up so the dead body flew out the window, this is because I wasn't actually specific enough' just don't seem like they are a major worry anymore?
Usually when GPT-4 doesn't understand what I'm asking, I wouldn't be surprised if a human was confused also.
If I was misreading the blog post at the time, how come it seems like almost no one ever explicitly predicted that these particular problems would be trivial for systems at or below human-level intelligence?!?
Quoting the abstract of MIRI's "The Value Learning Problem" paper (emphasis added):
Autonomous AI systems’ programmed goals can easily fall short of programmers’ intentions. Even a machine intelligent enough to understand its designers’ intentions would not necessarily act as intended. We discuss early ideas on how one might design smarter-than-human AI systems that can inductively learn what to value from labeled training data, and highlight questions about the construction of systems that model and act upon their operators’ preferences.
And quoting from the first page of that paper:
...The novelty here is not that programs can exhibit incorrect or counter-intuitive behavior, but that software agents smart enough to understand natural language may still base their decisions on misrepresentations of their programmers’ intent. The idea of superintelligent agents monomaniacally pursuing “dumb”-seeming goals may sound odd, but it follows from the observation of Bostrom an
If that were to happen, I think an extremely natural reading of the situation is that a substantial part of what we thought "the problem" was in value alignment has been solved, from the perspective of this blog post from 2019. That is cause for updating our models, and for verbally recognizing that our models have updated in this way.
Yet, that's not how I think everyone on LessWrong would react to the development of such a robot. My impression is that a large fraction, perhaps a majority, of LessWrongers would not share my interpretation here, despite the plain language in the post explaining what they thought the problem was. Instead, I imagine many people would respond to this argument by basically saying the following:
"We never thought that was the hard bit of the problem. We always thought it would be easy to get a human-level robot to follow instructions reliably, do what users intend without major negative side effects, follow moral constraints including letting you shut it down, and respond appropriately given unusual moral dilemmas. The idea that we thought that was ever the problem is a misreading of what we wrote. The problem was always purely that alignment issues would arise after we far surpassed human intelligence, at which point entirely novel problems will arise."
For what it's worth I do remember lots of people around the MIRI-sphere complaining at the time that that kind of prosaic alignment work was kind of useless, because it missed the hard parts of aligning superintelligence.
Well, for instance, I watched Ryan Carey give a talk at CHAI about how Cooperative Inverse Reinforcement Learning didn't give you corrigibility. (That CIRL didn't tackle the hard part of the problem, despite seeming related on the surface.)
I think that's much more an example of
"Prosaic alignment work is kind of useless because it will actually be easy to get a roughly human-level machine to interpret our commands reliably, do what you want without significant negative side effects, and let you shut it down whenever you want etc. The hard part is doing this for superintelligence."
than of
"Prosaic alignment work is kind of useless because machine learning is natively not very transparent and alignable, and we should focus instead on creating alignable alternatives to ML, or building the conceptual foundations that would let us align powerful AIs."
"Sure, Rohin thought that was a major problem, but we [our organization/thought cluster/ideological group] never agreed with him."
Oh really? Did you ever explicitly highlight this particular disagreement at the time?
FWIW at the time I wasn't working on value learning and wasn't incredibly excited about work in that direction, despite the fact that that's what the rest of my lab was primarily focussed on. I also wrote a blog post in 2020, based off a conversation I had with Rohin in 2018, where I mention how important it is to work on inner alignment stuff and how those issues got brought up by the 'paranoid wing' of AI alignment. My guess is that my view was something like "stuff like reward learning from the state of the world doesn't seem super important to me because of inner alignment etc, but for all I know cool stuff will blossom out of it, so I'm happy to hear about your progress and try to offer constructive feedback", and that I expressed that to Rohin in person.
At this point I think there are a number of potential replies from people who still insist that the LW models of AI alignment were never wrong, which I (depending on the speaker) think can often border on gaslighting:
This is one of the main reasons I'm not excited about engaging with LessWrong. Why bother? It feels like nothing I say will matter. Apparently, no pre-takeoff experiments matter to some folk.[1] And even if I successfully dismantle some philosophical argument, there's a good chance they will use another argument to support their beliefs instead. Nothing changes.
So there we are. It doesn't matter what my experiments say, because (it is claimed) there are no testable predictions before The End. But also, everyone important already knew in advance that it'd be easy to get GPT-4 to interpret and execute your value-laden requests in a human-reasonable fashion. Even though ~no one said so ahead of time.
When talking with pre-2020 alignment folks about these issues, I feel gaslit quite often. You have no idea how many times I've been told things like "most people already understood that reward is not the optimization target"[2] and "maybe you had a lesson you needed ...
I get why you feel that way. I think there are a lot of us on LessWrong who are less vocal and more openminded, and less aligned with either optimistic network thinkers or pessimistic agent foundations thinkers. People newer to the discussion and otherwise less polarized are listening and changing their minds in large or small ways.
I'm sorry you're feeling so pessimistic about LessWrong. I think there is a breakdown in communication happening between the old guard and the new guard you exemplify. I don't think that's a product of venue, but of the sheer difficulty of the discussion, and of polarization between different viewpoints on alignment.
I think maintaining a good community falls on all of us. Formats and mods can help, but communities set their own standards.
I'm very, very interested to see a more thorough dialogue between you and similar thinkers, and MIRI-type thinkers. I think right now both sides feel frustrated that they're not listened to and understood better.
(I didn't follow this argument at the time, so I might be missing key context.)
The blog post "Reward is not the optimization target" gives the following summary of its thesis,
- Deep reinforcement learning agents will not come to intrinsically and primarily value their reward signal; reward is not the trained agent’s optimization target.
- Utility functions express the relative goodness of outcomes. Reward is not best understood as being a kind of utility function. Reward has the mechanistic effect of chiseling cognition into the agent's network. Therefore, properly understood, reward does not express relative goodness and is therefore not an optimization target at all.
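As a concrete, toy illustration of the mechanistic claim in the second bullet, here is a minimal REINFORCE sketch (assuming PyTorch and a made-up two-armed bandit; the environment and names are illustrative, not from the post). Note that the scalar reward only ever enters as a multiplier on the gradient update that reshapes the network's weights; the trained network never has to represent or pursue "reward".

```python
# Minimal REINFORCE sketch on a toy two-armed bandit (an assumption for illustration).
# The reward signal appears only inside the gradient update below; the policy
# network never receives reward as an input or stores it as an explicit objective.
import torch
import torch.nn as nn

torch.manual_seed(0)

policy = nn.Linear(1, 2)                      # tiny policy: logits over two arms
optimizer = torch.optim.Adam(policy.parameters(), lr=0.05)

def pull_arm(action: int) -> float:
    # Hypothetical environment: arm 1 pays off more on average.
    return 1.0 if action == 1 else 0.1

for step in range(500):
    logits = policy(torch.ones(1))            # constant dummy observation
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()
    reward = pull_arm(action.item())

    # Reward "chisels" the weights: it is just a scalar multiplier on the
    # log-probability gradient. Nothing stores it as a goal to be pursued.
    loss = -dist.log_prob(action) * reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(torch.softmax(policy(torch.ones(1)), dim=-1))  # probability mass shifts toward arm 1
```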
I hope it doesn't come across as revisionist to Alex, but I felt like both of these points were made by people at least as early as 2019, after the Mesa-Optimization sequence came out in mid-2019. As evidence, I'll point to my post from December 2019 that was partially based on a conversation with Rohin, who seemed to agree with me,
...consider a simple feedforward neural network trained by deep reinforcement learning to navigate my Chests and Keys environment. Since "go to the nearest key" i
Deep reinforcement learning agents will not come to intrinsically and primarily value their reward signal; reward is not the trained agent’s optimization target.
I have no stake in this debate, but how is this particular point any different from what Eliezer says when he makes the point about humans not optimizing for IGF? I think the entire mesa-optimization concern is built around this premise, no?
Thanks for the edit :)
As I mentioned elsewhere (not this website) I don't agree with "will reliably lead people to false beliefs", if we're talking about ML people rather than LW people (as was my audience for that blog post).
I do think that it's a reasonable hypothesis to have, and I assign it more likelihood than I would have a year ago (in large part from you pushing some ML people on this point, and them not getting it as fast as I would have expected).
It seems to me that often people rehearse fancy and cool-sounding reasons for believing roughly the same things they always believed, and comment threads don't often change important beliefs. Feels more like people defensively explaining why they aren't idiots, or why they don't have to change their mind. I mean, if so—I get it, sometimes I feel that way too. But it sucks and I think it happens a lot.
My sense is that this is an inevitable consequence of low-bandwidth communication. I have no idea whether you're referring to me or not, and I am really not saying you are doing so, but I think an interesting example (whether you're referring to it or not) are some of the threads recently where we've been discussing deceptive alignment. My sense is that neither of us have been very persuaded by those conversations, and I claim that's not very surprising, in a way that's epistemically defensible for both of us. I've spent literal years working through the topic myself in great detail, so it would be very surprising if my view was easily swayed by a short comment chain—and similarly I expect that the same thing is true of you, where you've spent much more time thinking about this and ...
My sense is that neither of us have been very persuaded by those conversations, and I claim that's not very surprising, in a way that's epistemically defensible for both of us. I've spent literal years working through the topic myself in great detail, so it would be very surprising if my view was easily swayed by a short comment chain—and similarly I expect that the same thing is true of you, where you've spent much more time thinking about this and have much more detailed thoughts than are easy to represent in a simple comment chain.
I've thought about this claim more over the last year. I now disagree. I think that this explanation makes us feel good but ultimately isn't true.
I can point to several times where I have quickly changed my mind on issues that I have spent months or years considering:
FWIW, LessWrong does seem—in at least one or two ways—saner than other communities of similar composition. I agree it's better than Twitter overall. But in many ways it seems worse than other communities. I don't know what to do about it, and to be honest I don't have much faith in e.g. the mods.[1]
Hopefully my comments do something anyways, though. I do have some hope because it seems like a good amount has improved over the last year or two.
Despite thinking that many of them are cool people.
Why are you so focused on Eliezer/MIRI yourself? If you think you (or events in general) have adequately shown that their specific concerns are not worth worrying about, maybe turn your attention elsewhere for a bit? For example you could look into other general concerns about AI risk, or my specific concerns about AIs based on shard theory. I don't think I've seen shard theory researchers address many of these yet.
I feel pretty frustrated at how rarely people actually bet or make quantitative predictions about existential risk from AI. EG my recent attempt to operationalize a bet with Nate went nowhere. Paul trying to get Eliezer to bet during the MIRI dialogues also went nowhere, or barely anywhere—I think they ended up making some random bet about how long an IMO challenge would take to be solved by AI. (feels pretty weak and unrelated to me. lame. but huge props to Paul for being so ready to bet, that made me take him a lot more seriously.)
For what it's worth, I would be up for a dialogue or some other context where I can make concrete predictions. I do think it's genuinely hard, since I do think there is a lot of masking of problems going on, and optimization pressure that makes problems harder to spot (both internally in AI systems and institutionally), so asking me to make predictions feels a bit like asking me to make predictions about FTX before it collapsed.
Like, yeah, I expect it to look great, until it explodes. Similarly I expect AI to look pretty great until it explodes. That seems like kind of a core part of the argument for difficulty for me.
I would nevertheless be happy to try to operationalize some bets, and still expect we would have lots of domains where we disagree, and would be happy to bet on those.
Like, yeah, I expect it to look great, until it explodes. Similarly I expect AI to look pretty great until it explodes. That seems like kind of a core part of the argument for difficulty for me.
If your hypothesis smears probability over a wider range of outcomes than mine, while I can more sharply predict events using my theory of how alignment works—that constitutes a Bayes-update towards my theory and away from yours. Right?
"Anything can happen before the explosion" is not a strength for a theory. It's a vulnerability. If probability is better-concentrated by any other theories which make claims about both the present and the future of AI, then the noncommittal theory gets dropped.
Sure, yeah, though like, I don't super understand. My model will probably make the same predictions as your model in the short term. So we both get equal Bayes points. The evidence that distinguishes our models seems further out, and in a territory where there is a decent chance that we will be dead, which sucks, but isn't in any way contradictory with Bayes' rule. I don't think I would have put that much probability on us being dead at this point, so I don't think I lose many Bayes points there. I agree that if we are still alive in 20-30 years, then that's definitely Bayes points, and I am happy to take that into account then, but I've never had timelines or models that predicted things to look that different from now (or like, where there were other world models that clearly predicted things much better).
My model will probably make the same predictions as your model in the short term.
No, I don't think so. The model(s) I use for AGI risk are an outgrowth of the model I use for normal AI research, and so they make tons of detailed predictions. That's why I have weekly fluctuations in my beliefs about alignment difficulty.
Overall question I'm interested in: What, if any, catastrophic risks are posed by advanced AI? By what mechanisms do they arise, and by what solutions can risks be addressed?
Making different predictions. The most extreme prediction of AI x-risk is that AI presents, well, an x-risk. But theories gain and lose points not just on their most extreme predictions, but on all their relevant predictions.
I have a bunch of uncertainty about how agentic/transformative systems will look, but I put at least 50% on "They'll be some scaffolding + natural outgrowth of LLMs." I'll focus on that portion of my uncertainty in order to avoid meta-discussions on what to think of unknown future systems.
I don't know what your model of AGI risk is, but I'm going to point to a cluster of adjacent models and memes which have been popular on LW and point out a bunch of predictions t...
This model naturally predicts things like "it's intractably hard/fragile to get GPT-4 to help people with stuff." Sure, the model doesn't predict this with probability 1, but it's definitely an obvious prediction.
Another point is that I think GPT-4 straightforwardly implies that various naive supervision techniques work pretty well. Let me explain.
From the perspective of 2019, it was plausible to me that getting GPT-4-level behavioral alignment would have been pretty hard, and might have needed something like AI safety via debate or other proposals that people had at the time. The claim here is not that we would never reach GPT-4-level alignment abilities before the end, but rather that a lot of conceptual and empirical work would be needed in order to get models to:
Well, to the surprise of my 2019-self, it turns out that naive RLHF with a cautious supervisor designing the reward model seems basically sufficient to do all of these things in a reas...
What did you think would happen, exactly? I'm curious to learn what your 2019-self was thinking would happen, that didn't happen.
I feel pretty frustrated at how rarely people actually bet or make quantitative predictions about existential risk from AI.
Without commenting on how often people do or don't bet, I think overall betting is great and I'd love to see more of it!
I'm also excited by how much of it I've seen since Manifold started gaining traction. So I'd like to give a shout-out to LessWrong users who are active on Manifold, in particular on AI questions. Some I've seen are:
Good job everyone for betting on your beliefs :)
There are definitely more folks than this: feel free to mention more folks in the comments who you want to give kudos to (though please don't dox anyone whose name on either platform is pseudonymous and doesn't match the other).
Yeah, I'm not really happy with the state of discourse on this matter either.
I think it's not a coincidence that many of the "canonical alignment ideas" somehow don't make any testable predictions until AI takeoff has begun. 🤔
As a proponent of an AI-risk model that does this, I acknowledge that this is an issue, and I indeed feel pretty defensive on this point. Mainly because, as @habryka pointed out and as I'd outlined before, I think there are legitimate reasons to expect no blatant evidence until it's too late, and indeed, that's the whole reason AI risk is such a problem. As was repeatedly stated.
So all these moves to demand immediate well-operationalized bets read a bit like tactical social attacks that are being unintentionally launched by people who ought to know better, which are effectively exploiting the territory-level insidious nature of the problem to undermine attempts to combat it, by painting the people pointing out the problem as blind believers. Like challenges that you're set up to lose if you take them on, but which make you look bad if you turn them down.
And the above, of course, may read exactly like a defense attempt a particularly self-aware blin...
I feel pretty frustrated at how rarely people actually bet or make quantitative predictions about existential risk from AI. EG my recent attempt to operationalize a bet with Nate went nowhere. Paul trying to get Eliezer to bet during the MIRI dialogues also went nowhere, or barely anywhere—I think they ended up making some random bet about how long an IMO challenge would take to be solved by AI. (feels pretty weak and unrelated to me. lame. but huge props to Paul for being so ready to bet, that made me take him a lot more seriously.)
This paragraph doesn't seem like an honest summary to me. Eliezer's position in the dialogue, as I understood it, was:
Your comments' points seem like further evidence for my position. That said, your comment appears to serve the function of complicating the conversation, and that happens to have the consequence of diffusing the impact of my point. I do not allege that you are doing so on purpose, but I think it's important to notice. I would have been more convinced by a reply of "no, you're wrong, here's the concrete bet(s) EY made or was willing to make but Paul balked."
I will here repeat a quote[1] which seems relevant:
[Christiano][12:29]
my desire to bet about "whatever you want" was driven in significant part by frustration with Eliezer repeatedly saying things like "people like Paul get surprised by reality" and me thinking that's nonsense
...
- The journey is a lot harder to predict than the destination. Cf. "it's easier to use physics arguments to predict that humans will one day send a probe to the Moon, than it is to predict when this will happen or what the specific capabilities of rockets five years from now will be". Eliezer isn't claiming to have secret insights about the detailed year-to-year or month-to-month changes in the field; if he thought that, he'd have been m
Thanks for your feedback. I certainly appreciate your articles and I share many of your views. Reading what you had to say, along with Quentin, Jacob Cannell, and Nora, was a very welcome alternative take that expanded my thinking and changed my mind. I have changed my mind a lot over the last year, from thinking AI was a long way off and Yud/Bostrom were basically right, to seeing that it's a lot closer and that theories without data are almost always wrong in many ways - e.g. SUSY was expected to be true for decades by most of the world's smartest physicists. Many alignment ideas before GPT-3.5 are either sufficiently wrong or irrelevant to do more harm than good.
Especially, I think, the over-dependence on analogy and evolution. Sure, when we had nothing to go on it was a start, but when data comes in, ideas based on analogies should be gone pretty fast if they disagree with hard data.
(Some background - I have read the site for over 10 years, have followed AI for my entire career, have an understanding of maths and psychology, and have built and deployed a very small NN model commercially. Also, as an aside, I remember distinctly being surprised that Yud was skeptical of NN/DL in the earlier days, when I considered it obvious that that was where AI progress would come from - I don't have references because I didn't think that would be disputed afterwards.)
I am not sure what the silent majority belief on this site is (by people, not karma)? Is Yud's worldview basically right or wrong?
Analogies based on evolution should be applied at the evolutionary scale: between competing organizations.
Hi there.
> (High confidence) I feel like the project of thinking more clearly has largely fallen by the wayside, and that we never did that great of a job at it anyways.
I'm new to this community. I've skimmed quite a few articles, and this sentence resonates with me for several reasons.
1) It's very difficult in general to find websites like LessWrong these days. And among the few that exist, I've found that the intellectuals on them are so incredibly doubtful of their own intellect. This creates a sort of Ouroboros phenomenon where intellects just eat themselves into oblivion. Like, maybe I'm wrong but this site's popularity seems to be going down?
2) At least from what I've noticed, when I compare articles from the last 2 months to ones from about a decade ago, there is an alarming truth in your sentence. A decade ago, there were questions left in the articles for commenters to answer, there was a willingness to change one's mind and to add/enhance ideas in a good-faith manner. Now, it seems that many have confused this website for LinkedIn, posting their own personal paper trails (which is largely in a tone that isn't unique anyways).
It's really unfortunate, since I was excited upon being greeted with much older articles. And then realising "Oh... that was from... holy! 10 years ago!?" To then be disappointed by our articles from today.
I think it's fine for there to be a status hierarchy surrounding "good alignment research". It's obviously bad if that becomes mismatched with reality, as it almost certainly is to some degree, but people getting prestige for making useful progress is essentially a precondition for the work being done at all.
I feel pretty frustrated at how rarely people actually bet or make quantitative predictions about existential risk from AI.
I think that might be a result of how the topic is, well, just really fucking grim. I think part of what allows discussion of it and thought about it for a lot of people (including myself) is a certain amount of detachment. "AI doomers" get often accused of being LARPers or not taking their own ideas seriously because they don't act like people who believe the world is ending in 10 years, but I'd flip that around - a person who beli...
I think there are some great points in this comment but I think it's overly negative about the LessWrong community. Sure, maybe there is a vocal and influential minority of individuals who are not receptive to or appreciative of your work and related work. But I think a better measure of the overall community's culture than opinions or personal interactions is upvotes and downvotes which are much more frequent and cheap actions and therefore more representative. For example, your posts such as Reward is not the optimization target have received hundreds of...
No disagreement here that this place does this. I also think we should attempt to change many of these things. However, I don't expect the lesswrong team to do anything sufficiently drastic to counter the hero-worship. Perhaps they could consider hiding usernames by default, hiding vote counts until things have been around for some period of time, etc.
Hmm, my sense is Eliezer very rarely comments, and the people who do comment a lot don't have a ton of hero worship going on (like maybe Wentworth?). So I don't super believe that hiding usernames would do much about this.
Somewhat relatedly, there have been a good number of times where it seems like I've persuaded someone of A and of A ⟹ B and they still don't believe B, and coincidentally B is unpopular.
Would you mind sharing some specific examples? (Not of people but of beliefs.)
LessWrong.com is my favorite website. I’ve tried having thoughts on other websites and it didn't work. Seriously, though—I feel very grateful for the effort you all have put in to making this an epistemically sane environment. I have personally benefited a huge amount from the intellectual output of LW—I feel smarter, saner, and more capable of positively affecting the world, not to mention all of the gears-level knowledge I’ve learned, and model building I’ve done as a result, which has really been a lot of fun :) And when I think about what the world would look like without LessWrong.com I mostly just shudder and then regret thinking of such dismal worlds.
Some other thoughts of varying import:
If there were one dial I’d want to experiment with turning on LW it would be writing quality, in the direction of more of it.
I'd like to highlight this. In general, I think fewer things should be promoted to the front page.
[edit, several days later]: https://www.lesswrong.com/posts/SiPX84DAeNKGZEfr5/do-websites-and-apps-actually-generally-get-worse-after is a prime example. This has nothing to do with rationality or AI alignment. This is the sort of off-topic chatter that belongs somewhere else on the Internet.
[edit, almost a year later]: https://www.le...
I’m a huge fan of agree/disagree voting. I think it’s an excellent example of a social media feature that nudges users towards truth, and I’d be excited to see more features like it.
I also enjoy the reacts way more than I expected! They feel aesthetically at home here, especially with reacts for specific parts of the text.
It seems like it would be useful to have it for top-level posts. I love disagree voting, and there are sometimes massive disparities between upvotes and agreement that show how useful it is in surfacing good arguments that are controversial.
I think I'm seeing some high-effort, topical and well-researched top-level posts die on the vine because of controversial takes that are probably being downvoted out of disagreement. This is not a complaint about my own posts sometimes dying; I've been watching others' posts with this hypothesis, and it fits.
I guess there's a reason for not having it on top-level posts, but I miss having it on top-level posts.
I'd like to like this more but I don't have a clear idea of when to up one, up the other, down one, down the other, or down one and up the other.
The EA Forum has this problem worse, but I've started to see it on LessWrong: it feels to me like we have a lot more newbies on the site who don't really get what LW-style rationality is about, and they make LessWrong a less fun place to write because they are regressing discussion norms back towards the mean.
Earlier this year I gave up on EAF because it regressed so far towards the mean that it became useless to me. LW has still been passable, but it feels like it's been ages since I really got into a good, long, deep thread with somebody on here. Partly that's because I'm busy, but it's also because I've been quicker to give up because my expectations of having a productive conversation here are now lower. :-(
Do you have any thoughts on what the most common issues you see are, or is it more that it's a different issue every time?
First of all, I appreciate all the work the LessWrong / Lightcone team does for this website.
Maybe there's a lot of boiling feelings out there about the site that never get voiced?
I tend to avoid giving negative feedback unless someone explicitly asks for it. So…here we go.
Over the last 1.5 years, I've been less excited about LessWrong than at any time since I discovered this website. I'm uncertain to what extent this is because I changed or because the community did. Probably a bit of both.
The most obvious change is the rise of AI Alignment writings on LessWrong. There are two things that bother me about AI Alignment writing.
I have hidden the "AI Alignment" tag from my homepage, but there is still a spillover effect. "Likes unfalsifiable political claims" is the opposite of the kind of community I want to be part of. I think adopting lc's POC || GTFO burden of proof would make AI Alignment dialogue productive, but I am pessimistic about that happening on a collective scale.
When I write about weird ideas, I get three kinds of responses.
Over the years, I feel like I've gotten fewer "yes and" comments and more "we don't want you to say that" comments. This might be because my writing has changed, but I think what's really going on is that this happens to every community as it gets older. What was once radical eventually congeals into dogma.
I used to post my weird ideas immediately to LessWrong. Now I don't, because I feel like the reception on LessWrong would bum me out.[1]
I wonder what fraction of the weirdest writers here feel the same way. I can't remember the last time I've read something on LessWrong and thought to myself, "What a strange, daring, radical idea. It might even be true. I'm scared of what the implications might be." I miss that.[2]
I have learned a lot from reading and writing on LessWrong. Eight months ago, I had an experience where I internalized something very deep about rationality. I felt like I graduated from Level 1 to Level 2.
According to Eliezer Yudkowsky, his target audience for the Sequences was 2nd grade. He missed and ended up hitting college-level. They weren't supposed to be comprehensive. They were supposed to be Level 1. But after that, nobody wrote a Level 2. (The postrats don't count.) I've been trying―for years―to write Level 2, but I feel like a sequence of blog posts is a suboptimal format in 2023. Yudkowsky started writing the Sequences in 2006, when YouTube was still a startup. That leads me to…
The other reason I've been posting less on LessWrong is that I feel like I'm hitting a soft ceiling with what I can accomplish here. I'm nowhere near my personal skill cap, of course. But there is a much larger potential audience (and therefore impact) if I shifted from writing essays to filming YouTube videos. I can't think of anything LessWrong is doing wrong here. The editor already allows embedded YouTube links.
Over the years, I feel like I've gotten fewer "yes and" comments and more "we don't want you to say that" comments. This might be because my writing has changed, but I think what's really going on is that this happens to every community as it gets older. What was once radical eventually congeals into dogma.
This is the part I'm most frustrated with. It used to be you could say some wild stuff on this site and people would take you seriously. Now there's a chorus of people who go "eww, gross" if you go too far past what they think should be acceptable. LessWrong culture originally had very high openness to wild ideas. At worst, if you reasoned well and people disagreed, they'd at least ignore you, but now you're more likely to get downvoted for saying controversial things because they are controversial and it feels bad.
This was always a problem, but feels like it's gotten worse.
Huh, I am surprised by this. I agree this is a thing in lots of the internet, but do you have any examples? I feel like we really still have a culture of pretty extreme openness and taking random ideas seriously (enough that sometimes I feel like wild sounding bad ideas get upvoted too much because people like being contrarian a bit too much).
Here's part of a comment on one of my posts. The comment negatively impacted my desire to post deviant ideas on LessWrong.
Bullshit. If your desire to censor something is due to an assessment of how much harm it does, then it doesn't matter how open-minded you are. It's not a variable that goes into the calculation.
I happen to not care that much about the object-level question anymore (at least as it pertains to LessWrong), but on a meta level, this kind of argument should be beneath LessWrong. It's actively framing any concern for unrestricted speech as poorly motivated, making it more difficult to have the object-level discussion.
The comment doesn't represent a fringe opinion. It has +29 karma and +18 agreement.
I think I'm less open to weird ideas on LW than I used to be, and more likely to go "seems wrong, okay, next". Probably this is partly a me thing, and I'm not sure it's bad - as I gain knowledge, wisdom and experience, surely we'd expect me to become better at discerning whether a thing is worth paying attention to? (Which doesn't mean I am better, but like. Just because I'm dismissing more ideas, doesn't mean I'm incorrectly dismissing more ideas.)
But my guess is it's also partly a LW thing. It seems to me that compared to 2013, there are more weird ideas on LW and they're less worth paying attention to on average.
In this particular case... when you talk about "We don’t want you to say that" comments, it sounds to me like those comments don't want you to say your ideas. It sounds like Habryka and other commenters interpreted it that way too.
But my read of the comment you're talking about here isn't that it's opposed to your ideas. Rather, it doesn't want you to use a particular style of argument, and I agree with it, and I endorse "we don't want bad arguments on LW". I downvoted that post of yours because it seemed to be arguing poorly. It's possible I missed something; I admi...
I wonder what fraction of the weirdest writers here feel the same way. I can't remember the last time I've read something on LessWrong and thought to myself, "What a strange, daring, radical idea. It might even be true. I'm scared of what the implications might be." I miss that.
I thought Genesmith's latest post fully qualified as that!
I totally didn't think adult gene editing was possible, and had dismissed it. It seems like a huge deal if true, and it's the kind of thing I don't expect would have been highlighted anywhere else.
I wonder what fraction of the weirdest writers here feel the same way. I can't remember the last time I've read something on LessWrong and thought to myself, "What a strange, daring, radical idea. It might even be true. I'm scared of what the implications might be." I miss that.
The post about not paying one's taxes was pretty out there and had plenty interesting discussion, but now it's been voted down to the negatives. I wish it was a bit higher (at 0-ish karma, say), which might've happened if people could disagree-vote on it.
But yes, overall this critic...
Another improvement I didn't notice until right now is the "respond to a part of the original post" feature. I feel like it nudges comments away from nitpicking.
The other reason I've been posting less on LessWrong is that I feel like I'm hitting a soft ceiling with what I can accomplish here. I'm nowhere near my personal skill cap, of course. But there is a much larger potential audience (and therefore impact) if I shifted from writing essays to filming YouTube videos.
There are also writers with a very large reach. A recommendation I saw was to post where most of the people and hence most of the potential readers are, i.e. on the biggest social media sites. If you're trying to have impact as a writer, the reachable audience on LW is much smaller. (Though of course there are other ways of having a bigger impact than just reaching more readers.)
I can't think of anything LessWrong is doing wrong here. The editor already allows embedded YouTube links.
One thing that could help is to be able to have automatic crossposting from your YouTube channel, like you can currently have from a blog. It would be even more powerful if it generated a transcript automatically (though that's currently difficult and expensive).
I wonder what fraction of the weirdest writers here feel the same way. I can't remember the last time I've read something on LessWrong and thought to myself, "What a strange, daring, radical idea. It might even be true. I'm scared of what the implications might be." I miss that.
Do you remember any examples from back in the day?
I enjoy your content here and would like to continue reading you as you grow into your next platforms.
YouTube grows your audience in the immediate term, among people who have the tech and time to consume videos. However, text is the lowest common denominator for human communication across longer time scales. Text handles copying and archiving in ways that I don't think we can promise for video on a scale of hundreds of years, let alone thousands. Text handles search with an ease that we can only approximate for video by transcribing it. Transcription is tr...
I just posted a big effortpost and it may have been consigned to total obscurity because I posted it at the wrong time of day. Unsure whether I actually want the recommendation algorithm to have flattened time-discounting over periods with less activity on the site, or if I should just post more strategically in the future.
I have found the dialogues to be generally low-quality to read. The good ones tend to be more interview-like - "I have something I want to talk about but writing a post is harder than talking to a curious interlocutor about it." I think this maybe suggests that I want to see dialogues rebranded to not say "dialogue."
(Note, I don't think it's because it was posted at the wrong time of day. I think it's because the opening doesn't make a clear case for why people should read it.
In my experience posts like this still get a decent amount of attention if they are good, but it takes a lot longer, since it spreads more by word of mouth. The initial attention burst of LW is pretty heavily determined by how much the opening paragraphs and title draw people in. I feel kind of sad about that, but also don't have a great alternative to the current HN-style algorithm that still does the other things we need the karma/frontpage-sorting algorithm to do)
I have found the dialogues to be generally low-quality to read.
I think overall I've found dialogues pretty good, I've found them useful for understanding people's specific positions and getting people's takes on areas I don't know that well.
My favorite one so far is AI Timelines, which I found useful for understanding the various pictures of how AI development will go in the near term. I liked How useful is mechanistic interpretability? and Speaking to Congressional staffers about AI risk for understanding people's takes on these areas.
AI content for specialists
There is a lot of AI content recently, and it is sometimes of the kind that requires specialized technical knowledge, which I (an ordinary software developer) do not have. Similarly, articles on decision theories are often written in a way that assumes a lot of background knowledge that I don't have. As a result there are many articles I don't even click at, and if I accidentally do, I just sigh and close them.
This is not necessarily a bad thing. As something develops, inferential distances increase. So maybe, as a community we are developing a new science, and I simply cannot keep up with it. -- Or maybe it is all crackpottery; I wouldn't know. (Would you? Are some of us upvoting content they are not sure about, just because they assume that it must be important? This could go horribly wrong.) Which is a bit of a problem for me, because now I can no longer recommend Less Wrong in good faith as a source of rational thinking. Not because I see obviously wrong things, but because there are many things where I have no idea whether they are right or wrong.
We had some AI content and decision theory here since the beginning. But those articles written back then by Eliezer were quite easy to understand, at least for me. For example, "How An Algorithm Feels From Inside" doesn't require anything beyond high-school knowledge. Compare it to "Hypothesis: gradient descent prefers general circuits". Probably something important, but I simply do not understand it.
Just like historically MIRI and CFAR split into two organizations, maybe Less Wrong should too.
Feeling of losing momentum
I miss the feeling that something important is happening right now (and I can be a part of it). Perhaps it was just an illusion, but in the first years of Less Wrong it felt like we were doing something important -- building the rationalist community, inventing the art of everyday rationality, with the prospect of raising the general sanity waterline.
It seems to me that we gave up on the sanity waterline first. The AI is near, we need to focus on the people who will make a difference (whom we could recruit for AI research); there is no time to care about the general population.
Although recently, this baton was taken over by the Rational Animations team!
Is the rationalist community still growing? Offline, I guess it depends on the country. In Bratislava, where I live, it seems that ~ no one cares about rationality. Or effective altruism. Or Astral Codex Ten. Having five people at a meetup is a big success. Nearby Vienna is doing better, but it is merely climbing back to pre-COVID levels, not growing. Perhaps it is better at some other parts of the world.
Online, new people are still coming. Good.
Also, big thanks to all people who keep this website running.
But it no longer feels to me like I am here to change the world. It is just another form of procrastination, albeit a very pleasant one. (Maybe because I do not understand the latest AI and decision theory articles; maybe all the exciting things are there.)
Etc.
Some dialogs were interesting, but most are meh.
My greatest personal pet peeve was solved: people no longer talk uncritically about Buddhism and meditation. (Instead of talking more critically they just stopped talking about it at all. Works for me, although I hoped for some rational conclusion.)
It is difficult for me to disentangle what happens in the rationalist community from what happens in my personal life. Since I have kids, I have less free time. If I had more free time, I would probably be recruiting for the local rationality (+adjacent) community, spend more time with other rationalists, maybe even write some articles... so it is possible that my overall impression would be quite different.
(Probably forgot something; I may add some points later.)
Is the rationalist community still growing? Offline, I guess it depends on the country. In Bratislava, where I live, it seems that ~ no one cares about rationality. Or effective altruism. Or Astral Codex Ten. Having five people at a meetup is a big success. Nearby Vienna is doing better, but it is merely climbing back to pre-COVID levels, not growing. Perhaps it is better at some other parts of the world.
I think that starting things that are hard forks of the lesswrong memeplex might be beneficial to being able to grow. Raising the sanity waterline woul...
I love LessWrong. I have better discussions here than anywhere else on the web.
I think I may have a slightly different experience with the site than the modal user because I am not very engaged in the alignment discourse.
I've found the discussions on the posts I've written to be of unusually high quality, especially the things I've written about fertility and polygenic embryo screening.
I concur with other comments that the ability to upvote and agree/disagree with a comment is a great feature, which I use all the time.
My number one requested feature continues to be the ability to see a retention graph on the posts I've written, i.e. where do people get bored and stop reading? After technical accuracy my number one goal is to write something interesting and engaging, but I lack any kind of direct feedback mechanism to optimize my writing in that way.
My number one requested feature continues to be the ability to see a retention graph on the posts I've written, i.e. where do people get bored and stop reading? After technical accuracy my number one goal is to write something interesting and engaging, but I lack any kind of direct feedback mechanism to optimize my writing in that way.
Yeah, I've been wanting something like this for a while. It would require capturing and processing a bunch more data than we have historically. Also distinguishing between someone skimming up and down a post and actua...
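For concreteness, here is a rough sketch of how a retention curve could be aggregated once per-reader scroll-depth events were being captured (the event format and numbers here are made up, not anything currently collected):

```python
# Rough sketch (assumed data shape): one record per reader giving the furthest
# fraction of the post they scrolled to, e.g. 0.35 = got 35% of the way down.
max_scroll_depths = [1.0, 0.9, 0.35, 0.1, 0.6, 1.0, 0.2, 0.75]  # hypothetical readers

def retention_curve(depths, buckets=10):
    """Fraction of readers who reached at least each decile of the post."""
    n = len(depths)
    curve = []
    for i in range(1, buckets + 1):
        threshold = i / buckets
        reached = sum(1 for d in depths if d >= threshold)
        curve.append((threshold, reached / n))
    return curve

# Print a simple text retention chart: where do readers drop off?
for threshold, frac in retention_curve(max_scroll_depths):
    bar = "#" * round(frac * 40)
    print(f"{threshold:>4.0%} {frac:>5.0%} {bar}")
```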
(low confidence, low context, just an intuition)
I feel as though the LessWrong team should experiment with even more new features, treating the project of maintaining a platform for collective truth-seeking like a tech startup. The design space for such a platform is huge (especially as LLMs get better).
From my understanding, the strategy that startups use to navigate huge design spaces is “iterate on features quickly and observe objective measures of feedback”, which I suspect LessWrong should lean into more. Although, I imagine creating better truth-seeking infrastructure doesn’t have as good of a feedback signal as “acquire more paying users” or “get another round of VC funding”.
This is basically what we do, capped by our team capacity. For most of the last ~2 years, we had ~4 people working full-time on LessWrong plus shared stuff we get from EA Forum team. Since the last few months, we reallocated people from elsewhere in the org and are at ~6 people, though several are newer to working on code. So pretty small startup. Dialogues has been the big focus of late (plus behind the scenes performance optimizations and code infrastructure).
All that to say, we could do more with more money and people. If you know skilled developers willing to live in the Berkeley area, please let us know!
Agreed! Cf. Proposal for improving the global online discourse through personalised comment ordering on all websites -- using LessWrong as the incubator for the first version of the proposed model would actually be critical.
I feel a mix of pleased and frustrated. The main draw for me is AI safety discussion. I dislike the feeling of group-think around stuff, and I value the people who speak up against the group-think with contrary views (e.g. TurnTrout), who post high quality technical content, or well-researched and thought-out posts (e.g. Steven Byrnes).
I feel frustrated that people don't always do a good job of voting comments up based on how valuable/coherent/high-effort the information content is, and then separately voting agree/disagree. I really like this feature, and I wish people gave it more respect. I am pleased that it does as well as it does, though.
I like the new emojis and the new dialogues. I'm excited for the site designers to keep trying new (optional) stuff.
The things I'd like more from the site would be if it could split into two: one which was even more in the direction of technical discussion of AI safety, and the other for rationality and philosophy stuff. And then I'd like the technical side to have features like jupyter notebook-based posts for dynamic code demonstrations. And people presenting recent important papers not their own (e.g. from arxiv), for the sake of highlighting/summarizing/sparking-discussion. The weakness of the technical discussion here is, in my opinion, related to the lack of engagement with the wider academic community and empirical evidence.
Ultimately, I don't think it matters much what we do with the site in the longer term because I think things are about to go hockey stick singularity crazy. That's the bet I'm making anyway.
Yeah. The threshold for "okay, you can submit to alignmentforum" is way, way, way too high, and as a result, lesswrong.com is the actual alignmentforum. Attempts to insist otherwise without appropriately intense structural change will be met with lesswrong.com going right on being the alignmentforum.
Ok, slightly off topic, but I just had a wacky notion for how to break up groupthink as a social phenomenon. You know the cool thing from Audrey Tang's ideas, Polis? What if we did that, but we found 'thought groups' of LessWrong users based on the agreement voting, and then posts/comments which were popular across thought groups, instead of just intensely within one thought group, got more weight?
Niclas Kupper tried a LessWrong Polis to gather our opinions a while back. https://www.lesswrong.com/posts/fXxa35TgNpqruikwg/lesswrong-poll-on-agi
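For what it's worth, here is a toy sketch of what that thought-group weighting could look like (the vote matrix, cluster count, and scoring rule are all stand-in assumptions, not how LessWrong actually works): cluster users by their agreement-vote patterns, then score each comment by how it does in its least-sympathetic cluster rather than overall.

```python
# Toy sketch of "thought-group" weighting. Assumptions: a synthetic vote matrix,
# k-means clustering, and worst-cluster scoring; none of this reflects LW's code.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Rows = users, columns = comments; entries are agreement votes in {-1, 0, +1}.
votes = rng.choice([-1, 0, 1], size=(200, 30))

# 1. Group users into "thought groups" by the similarity of their vote patterns.
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(votes)

# 2. Score each comment by its mean agreement within its least-sympathetic group,
#    so only comments with cross-group appeal score highly.
def cross_group_score(votes, groups):
    per_group_means = np.array([
        votes[groups == g].mean(axis=0) for g in np.unique(groups)
    ])
    return per_group_means.min(axis=0)

scores = cross_group_score(votes, groups)
print("Top cross-group comments:", np.argsort(scores)[::-1][:5])
```

Taking the minimum across groups is the strictest possible rule; a softer variant could average the per-group means or weight them by group size.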
I still like the site, though I had to set the AI tag to -100 this year. One thing I wish was a bit different is that I've posted a whole bunch of LW-site-relevant feedback in comments (my natural inclination is to post comprehensive feedback on whatever content I interact with), and for a good fraction of them I've received no official reaction whatsoever. I don't know if the comments got ignored because the LW team didn't see them, or didn't have the time to act on them, or whatever, but I still wish I'd gotten some kinds of reactions on them.
I'm not asking for my feedback to be implemented[1], but when I post feedback comments on site posts by LW team members, I do wish I got some kind of acknowledgement, even if it always turned out to be "we've seen this feedback, but we have bigger fish to fry".
As an example, here are all my unanswered feedback requests since 2023-08-01 (arbitrary cutoff from when I got bored browsing my comments history):
Finally, if someone on the LW team was interested, it could be neat to dialogue on a topic like "LW and open-source contributions".
Though it occasionally does get implemented: like fixes to this bug report on comment reactions, or to a report on the comments counter being bugged in discussions.
Just as an FYI: pinging us on Intercom is a much more reliable way of ensuring we see feature suggestions or bug reports than posting comments. Most feature suggestions won't be implemented[1]; bug reports are prioritized according to urgency/impact and don't always rise to the level of "will be addressed" (though I think >50% do).
At least not as a result of a single person suggesting them; we have, at times, made decisions that were influenced on the margin by suggestions from one or more LW users.
Genuine question: Why Intercom? What's so good about it?
Re: reliability & follow-ups:
Overall it feels like it's ok, but very frustrating, because it feels like it could be so much better. But I don't think this is mainly about the software of LW; it's about culture more broadly being in decay (or more precisely, all the methods of coordinating on visions having been corrupted, and new ones not gaining steam while defending boundaries).
A different thing: This is a problem for everyone, but: stuff gets lost. https://www.lesswrong.com/posts/DtW3sLuS6DaYqJyis/what-are-some-works-that-might-be-useful-but-are-difficult It's bad, and there's a worldwide problem of indexing the Global Archives.
I like LessWrong a lot.
I discovered the site nearly 2 years ago, and have sort of meandered through old and new posts enjoying them. Something I have observed is that, having now gone through most of the best of the "back-catalogue" (old stuff), I am visiting and reading less, because stuff I like is added at a given rate and I was consuming much faster than that rate. This is relevant because, without being careful, it creates the impression of reduced good content. So perhaps any feedback along the lines of "there used to be more good stuff" should be checked for this illusion.
I have filtered AI tagged stuff off completely. I come to LessWrong to read about some weird new idea someone has (eg. a crazy thought experiment, a seemingly-mad ethical claim or a weird fiction). Five posts on AI alignment was enough for me. I don't need to see more. I am really pleased that the filter system allows this to work so seamlessly - I just don't see AI stuff any more and sometimes kind of forget LessWrong is used as an "AI place" by some people.
I like the react symbols, and agree/disagree voting. I have not tried reading or participating in a dialogue, and am unlikely to do so. The dialogue format doesn't seem like the right frame for the kind of thing I like.
"I am really pleased that the filter system allows this to work so seamlessly - I just don't see AI stuff any more and sometimes kind of forget LessWrong is used as an 'AI place' by some people."
It works quite well; the one limitation is that the tag filter can only filter out posts that have been tagged correctly, which brand-new posts aren't necessarily. That said, I just checked the New Post editor, and there's now a section to apply tags from within the editor. So this UX change likely reduced the proportion of untagged posts.
Bullet points of things that come to mind:
And inspired by my thoughts on the positivity of feedback, let me say this: I still consider LessWrong a great website as websites go. Even if I don't nowadays find it as worldview-changing as when I first read it, there's still a bunch of great stuff here.
(Duncan Sabien has announced that he likely won't post on LessWrong anymore. [...] I feel like LessWrong is losing a lot here: Sabien is clearly a top rationality writer.)
I think that Duncan writing on his own blog, and us linking the good posts from LW, may be the best solution for both sides. (Duncan approves of being linked.)
I think the ratio of comments/post is too small.
I think there are a lot of old posts that don't get read. I'm most drawn to the Latest Posts because that's where the social interaction via commenting is. LessWrong is quite tolerant of comments on old posts, but they don't get as much engagement - it's too diffuse to be self-sustaining - and I feel like newcomers are missing out on that kind of interaction around the core material.
What can we do about that? Maybe someone else has a better idea, but I think I'd like to see an official community readthrough of the core sequence...
I feel pretty good about LessWrong. The amount of attention I give to LW tends to ebb and flow in phases; I'm currently in a phase where I've given it less attention (in large part due to the war in Israel), and now I'm probably going to enter a phase of giving it a lot of attention because of the 2022 review.
I think the team is doing a great job with the site, both in terms of features and moderation, and the site keeps getting better.
I do feel the warping effect of the AI topic on the site, and I'm ambivalent about it. On the one hand, I do think it's an important topic that should be discussed here, on the other hand, it does flood out everything else (I've changed my filters to deal with it) and a lot of it is low quality. I also see and feel the pressure to make everything related to AI somehow, which again, I'm ambivalent about. On the one hand if it's so significant and important, then it makes sense to connect many things to it, on the other hand, I'm not sure it does much good to the writing on the site.
I also wish the project to develop the art of rationality got more attention, as I think it is still important and there's a lot of progress to be made and work to be done. But I also wish that whatever attention it got would be of higher quality - there are very few rationality posts in the last few years that were on the level of the old essays from Eliezer and Scott.
Perhaps the problem is that good writers just don't stay on LessWrong, and prefer to go on their own platforms or to twitter where they can get more attention and make money from their writing. One idea I have to deal with that is to implement a gifting feature (with real money), perhaps using plural funding. I think it can incentivize people to write better things, and incentivize good writers to also post on LW. I definitely know it would motivate me, at least.
Another thing I would like, which would help deal with the fact that lots of writing that's relevant to LW isn't on LW, is to improve the way linkposts work. Currently, I come across a lot of writing that I want to share on LW, but it would be drowned out if I shared it in the open thread or a shortform post, and I don't want to share it as a linkpost because I don't want it to be displayed on my page as one of my posts (and drown out the rest of my posts). I also don't feel like I deserve all the karma it would get, so it feels a bit... dirty? Here's what I have in mind instead - have a clear distinction between crossposts and linkposts:
I think these two features would greatly help LW be a place where great writing can be found and discussed, and hopefully that disproportionately includes writing on the art of rationality.
The linkposts idea is interesting. I agree that it's weird to get karma for posting linkposts for other people.
In addition, there's also a problem where, no matter on which site you are (e.g. Reddit or Twitter or LW), native posts get much more engagement and upvotes than linkposts that require visiting an external site. But of course you also can't just copy the content from the external site, because that would be copyright infringement.
Anyway, as per elsewhere in this thread, your linkposts suggestion has a higher chance of being seen if you also make it on Intercom.
I like the agree-disagree vote and the design.
With the content and votes...
- my impression is that until ~1-2 years ago LW had a decent share of great content; I disliked the average voting "taste vector", which IMO represented somewhat confused taste in roughly the "dumbed-down MIRI views" direction. I liked many of the discourse norms
- not sure what exactly happened, but my impression is that LW is often just another battlefield in the 'magical egregore war zone'. (It's still way better than other online public spaces)
What I mean by that is that a lot of people seemingly moved from 'let's figure out how things are' to 'texts you write are elaborate battle moves in egregore warfare'. I don't feel excited about pointing to examples, but my impression is of a growing share of senior, top-ranking users who seem hard to convince of anything, can't be bothered to actually engage with arguments, and write either literal manifestos or in manifesto style.
I like LW, and think that it does a certain subset of things better than anywhere else on the internet.
In particular, in terms of "sane takes on what's going on", I can usually find them somewhere in the highly upvoted posts or comments.
I think in general my issue with LW is it just reflects the pitfalls of the rationalist worldview. In general the prevailing view conflates intelligence with wisdom, and therefore fails to grasp what is sacred on a moment to moment level that allows skillful action.
I think the fallout from SBF, the fact that rationalists and EAs keep building AI capabilities organizations, rationality-adjacent cults centered around obviously immoral worldviews, etc., are all predictable consequences of doing a thing where you try to intelligence hard enough that wisdom comes out.
I don't really expect this to change, and expect LW to continue to be a place that has the sanest takes on what's going on and then leads to incredible mistakes when trying to address that situation. And that previous sentence basically sums up how I feel about LW these days.
The thing that seems to me to have gotten worse is what gets upvoted. AI content is the big one here; it's far too easy to get a high karma post about an AI related topic even if the post really isn't very good, which I think has a ton of bad downstream consequences. Unfortunately, I think this extends even to the Alignment Forum.
I have no idea what to do about it though. Disagreement voting is good. Weighted voting is probably good (although you'd have to see who voted for what to really know). And the thing where mods don't let every post through is also good. I wish people would vote differently, but I don't have a solution.
Here are the Latest Posts I see on my front page and how I feel about them (if I read them: what I remember, liked, or disliked; if I didn't read them: my expectations and prejudices).
I think a pattern is that there is a lot of content on LessWrong that:
The devil may be in "legibly" here, e.g. maybe I'm getting a lot out of reading LW in diffuse ways that I can't pin down concretely, but I doubt it. I think I should spend less time consuming LessWrong, and maybe more time commenting, posting, or dialoguing here.
I think dialogues are a great feature, because:
ETA: I like the new emojis.
Since 2018 I used to visit every day and find one or two interesting articles to read on all kinds of topics.
For the past few months I've just read Zvi's stuff and any AI-related articles that aren't too technical.
Some Reddit forums dedicate certain days to certain topics. I don't know if having AI stuff only a few days a week would help restore the balance, haha.
One concrete complaint I have is that I feel a strong incentive toward timeliness, at the cost of timelessness. Commenting on a fresh, new post tends to get engagement. Commenting on something from more than two weeks ago will often get none, which makes effortful comments feel wasted.
I definitely feel like there is A Conversation, or A Discourse, and I'm either participating in it during the same week as everyone else, or I'm just talking to myself.
(Aside: I have a live hypothesis that this is tightly related to The Twitterization of Everything.)
I think this is a real problem (though I think it's more fundamental than your hypothesis would suggest; we could check commenting behaviour in the 2000s as a comparison).
We have some explorations underway addressing related issues (like maybe the frontpage should be more recommender-y and show you good old posts, while the All Posts page is used for people who care a lot about recency). I don't think we've concretely considered stuff that would show you good old posts with new comments, but that might well be worth exploring.
I love LessWrong. I love it for my professional work on alignment (in tandem with AF), and I love it for learning about rationality and the world.
There are problems with LessWrong, but I challenge anyone to name an alternative that's better.
I think the high standards of civility (dare I say niceness) and rigor are undervalued. Having less pollution from emotionally-focused and deeply flawed arguments is hugely important.
This question has started me thinking about a post titled "the case for LessWrong as a tool for rapid scientific progress".
Compared to working in academia for years, it's night and day, in favor of LessWrong.
"the case for LessWrong as a tool for rapid scientific progress"
If you’re interested in prior discussion of that, see list of posts tagged “Intellectual Progress via LessWrong”.
There was never a point in the past ~nine years of knowing about it when I viewed LessWrong as anything but the place the odd, annoying, mostly wrong AI safety people went, despite having participated in the community around it for about that long, mostly without reading much here. Eventually I decided I wanted to talk to them more. I generally think of LessWrong as a place to talk at people who won't listen but who need to hear another perspective - and I say that agreeing that the world is in dire straits and needs saving from superhuman agency (which I think is currently concentrated in human organizations that people here consistently underestimate). I see it as a scientific forum of middling quality that is related to the research topic I care about. I occasionally find something I like on it, try to share it, and get blowback from one group of friends that I'm browsing that site again. The upvote mechanism seems to vigorously promote groupthink, especially with the harsh effects of downvotes on newbies. I do think of it as one of the few spaces online that is consistently a conversation ground between progressive classical liberals and conservative classical liberals, so that's nice, I guess.
Got any better forums to point me to? I'll take a look and decide for myself how they compare to LessWrong.
What’s up with all this Dialogues stuff? It’s confusing…
I don't really like reading the dialogues, and I mostly skip them. Most of the ones I have read have felt like I'm just watching people sort out their ideas rather than reading some ideas that have already been sorted out.
The sidebar that shows all comments by author is incredibly useful (to me)!
I don't know how long ago it was put in, but when I noticed it, it made it waaaaay easier for me to parse through big conversation trees, get a sense for what people are thinking, and zero in on threads I want to read in detail.
Thanks to whoever had that idea and implemented it!
I used to comment a fair bit over the last decade or so, and post occasionally. After the exodus from LW 1.0 the site went downhill, but the current team managed to revive it somehow, and they deserve a lot of credit for that; most sites on a downward trajectory never recover.
It felt pretty decent for another few years, but eventually the rationality discourse got swamped by marginal-quality AI takes of all sorts. The MIRI work, prominently featured here, never amounted to anything, according to experts in ML, probability and other areas relevant to their research. CFAR also proved a flop, apparently. A number of recent scandals in various tightly or loosely affiliated orgs did not help matters. But mainly it's the dearth of insightful and lasting content that is sad. There is an occasional quality post, of course, but not like it used to be. The quality discourse happens on ACX and ACXD and elsewhere, but rarely here. To add insult to injury, the RSS feed stopped working, so I can no longer see the new posts on my offsite timeline.
My guess is that the bustling front disguises serious issues, and maybe the leadership could do what Eliezer called "Halt, melt, and catch fire". Clearly this place does not contribute to AI safety research in any way. The AI safety agitprop has undoubtedly been successful beyond its wildest dreams, but it seems like it has run its course, now that it has moved into the wider discourse. EA has its own place. What is left? I wish I knew. I would love to see LW 3.0 taking off.
"To add insult to injury, the RSS feed stopped working, so I can no longer see the new posts on my offsite timeline."
Check out GreaterWrong’s RSS feeds; you can click the “RSS” link at the top right of any page to get a feed for that view (frontpage, all, curated, whatever else).
Can you say more about the RSS feed not working? I just checked the basics and they still seem to work.
"To add insult to injury, the RSS feed stopped working, so I can no longer see the new posts on my offsite timeline."
Have you reported this on Intercom as a bug report?
I like it a lot. I'm mainly a tumblr user, and on tumblr we're all worried about the site being shut down because it doesn't make any money. I love having LessWrong as a place for writing up my thoughts more carefully than I would on tumblr, and it also feels like a sort of insurance policy if tumblr goes under, since LessWrong seems able to maintain performance and usability with a small team. The mods seem active enough that they frontpage my posts pretty quickly, which helps connect them with an audience that's not already familiar with me, whereas on tumblr I haven't gotten any readers through the tag system in years and I'm coasting on inertia from the followers I already have.
I feel incredibly fond of LessWrong. I've learned so much awesome stuff. And while it's not perfect, there's a community of people who more or less agree on and are familiar with various, er, "epistemic things", for lack of a better phrase. Like, it's nice to at least know that the person you're conversing with knows about and agrees on things like what counts as evidence and the map-territory distinction.
That said, I do share the impression that others here have expressed of it heading "downhill". Watered down. Lower standards. Less serious. Stuff like that. I find it a little annoying and disappointing, but nothing too crazy.
Personally I have posts with the AI, Existential Risk, and Death tags marked as "Hidden" (those topics make me unhappy). So my feed probably looks a lot different from yours. I've noticed a reduction in quality and quantity of content.
Things I'd really like to see:
"That said, I do share the impression that others here have expressed of it heading 'downhill'. Watered down. Lower standards. Less serious. Stuff like that. I find it a little annoying and disappointing, but nothing too crazy."
I don't know, looking back at older posts (especially on LW 1.0) current LW is less "schizo" and more rigorous/boring—though maybe that's because I sometimes see insanely long & detailed mechinterp posts?
I've proposed that LessWrong bootstrap [BetterDiscourse], bringing the value of Twitter's Community Notes (pol.is, viewpoints.xyz) to any popular content on the web, and likely improving sense-making on LessWrong itself; TurnTrout's and other people's comments confirm my own sense that LessWrong hasn't avoided groupthink culture.
How prepared is LW for an attack? Those who want AI research to proceed unimpeded have an incentive to sabotage those who want to slow it down or ban it and consequently have an incentive to DDoS LW.com or otherwise make the site hard to use. What kind of response could LW make against that?
Also, how likely is it that an adversary will manage to exploit security vulnerabilities to harvest all the PMs (private messages) stored on LW?
I think there's a lot of good content here, but there are definitely issues with it tilting so much towards AI Safety. I'm an AI Safety person myself, but I'm beginning to wonder if it is crowding out the other topics of conversation.
We need to make a decision: are we fine with Less Wrong basically becoming about AI because that's the most important topic or do we need to split the discussion somehow?
AI safety posts generally go over my head, although the last one I read seemed fantastically important and accessible.
AI-safety posts are probably the most valuable posts here, even if they crowd out other posts (both posts I think are valuable and posts I think are, at best, chaff).
So for the most part I'm really happy with it - I think it's got a great UI and a great feel. I haven't much used the Dialogues feature (not even reading them), but they don't interfere in any way with the rest of my experience.
One thing I think might need some tuning is the feature that limits the post rate based on the karma of your previous posts. I once found myself rate-limited due to it, and the cause was simply that my last 20 comments had been in not particularly lively discussions where they ended up staying at the default 2/0 score. Now I suppose you could construe that as "evidently you haven't said anything that was contributing particularly to the discussion", but is that enough to justify rate limiting? If someone were outright spamming, surely they'd be downvoted, so that's not the reason to do it. I'd say a pattern of consistent downvoting is a better basis for this. Afterwards I found myself trying to "pick up" my score for a while by commenting on posts that were already highly popular, to make sure my comment would be seen and upvoted enough to avoid this happening again, and that seems like something you don't particularly want to incentivize. It just reinforces posts on the basis of them being popular, not necessarily what one honestly considers most interesting.
The first time I tried to load this page it took >10 seconds before erroring out in a way that made me need to close the website and open it again.
Also, it seems like the site is spamming me to make dialogues. To get to this post in "recent comments" I had to scroll down past some people the site was suggesting I dialogue with. I clicked the "x" next to each of those entries to get them to go away. Then, when I had to re-load the page, two more suggested dialogue partners had spawned. This was after I turned off the notifications I had been subscribed to pinging me whenever anyone wanted to dialogue with me.
There's a bunch of interesting AI alignment content, more than I feel like I have the bandwidth or inclination to read. I also like that there's a trickle of new interesting users, e.g. it's cool that Maxwell Tabarrok is on my front page.
As is often said, I'd be interested in more "classic rationality" content relative to AI stuff. Like, I don't think we're by any means perfect on that axis, or past some point of diminishing return. Since it's apparently easier to write posts about AI, maybe someone should write up this paper and turn it into life advice. Alternatively, I think looking at how ancient people thought about logic could be cool (see e.g. the white horse paradox or Ibn Sina's development of a modal logic system or whatever).
I have the impression that lots of people find LW too conflict-y, but I think we avoid the worst excesses of being a total forum of everyone agreeing with each other about how great we all are, and that more disagreement would make that better, as long as it's with gentleness and respect, as they say.
Oh also, the pattern of what things of mine get upvoted vs downvoted feels pretty weird. E.g. I thought my post on a mistake I think people make in discussions about open-source AI was a good contribution, if perhaps poorly written, but it got fewer upvotes than a post that was literal SEO. I guess the latter introduced people to a cool thing that wasn't culture-war-y and explained it a bit, but I think the explanation I gave was pretty bad, because, as mentioned, I actually just wanted it to be SEO.
Oh, also I think the site wants me to care about the review or believe that it's important/valuable, but I don't really.
You should click the settings gear in the "Dialogues" section to hide suggested partners from you.
A little while ago I vented at my shortform on this topic, https://www.lesswrong.com/posts/pjCnAXMkXjbmLw3ii/nim-s-shortform?commentId=EczMSzhPMpRAEBhhj.
Since writing that, I still feel a widening gap between my views and those of the LW zeitgeist. I'm not convinced that AI is inevitably killing everybody if it gets smart enough, as the smartest people around here seem to believe.
Back when AI was a "someday" thing, I feel like people discussed its risks here with understanding of perspectives like mine, and I felt like my views were gradually converging toward those of the site as I read more. It felt like people who disagreed about x-risk were regarded as potential allies worth listening to, in a way that I don't experience from more recent content.
Since AI has become a "right now" thing, I feel like there's an attitude that if you aren't already sold on AI destroying everything then you're not worth discussing it with. This may be objectively correct: if someone with the power to help stop AI from destroying us and finite effort to exert spends their time considering ignorant/uninformed/unenlightened perspectives such as my own, diverting that effort from doing more important things may be directly detrimental to the survival of the species.
In short, I get how people smarter than I am are assigning high probability to us being in a timeline where LW needs to stop being the broader forum that I joined it for. I figure they're probably doing the right thing, and I'm probably in the wrong place for what LW is needing to become. Complaining about losing what LW was to make way for what it needs to be feels like complaining about factories transitioning from making luxury items to making essential supplies during a crisis.
And it feels like, if this whole experience were a fable, the moral would be about alignment and human cooperation in some way ;)
It's fun to come through and look for interesting threads to pull on. I skim past most stuff but there's plenty of good and relevant writing to keep me coming back. Yeah sure it doesn't do a super great job of living up to the grandiose ideals expressed in the Sequences but I don't really mind, I don't feel invested in ~the community~ that way so I'll gladly take this site for what it is. This is a good discussion forum and I'm glad it's here.
oh wait, major trivial irritation: it keeps forgetting I set the theme to dark mode! something about brave's improved privacy settings, perhaps? if dark mode could be stored serverside that would be grand
Yeah, I think we store theme settings in a cookie. You might just want to manually permit the cookie in the Brave settings (we intentionally don't do it server-side because many users want to have different settings for different devices, so doing it at the cookie level seems like the right call).
I am relatively new to the community, and was excited to join and learn more about the actual methods to address AI risks, and how to think scientifically generally.
However, after using it for a while, I am a bit disappointed. I realized I probably have to filter many things here.
Good:
Bad:
I think for now I will probably continue using it, but with many, many filters.
Seems fine - good.
I have enjoyed some dialogues though I think there is a lot of content.
For me, I'd like a focus on summarisation and consensus: what are the things we all agree on for a given topic?
Even on this thread, I think there could be a way to atomise ideas and see what the consensus is.
I find LessWrong really useful for learning things, but it's also become kind of overwhelming, especially because a lot of people don't start posts with a summary so I can't quickly filter. My RSS feed has about 500 unread LessWrong posts and I doubt I'll read more than 1/5th of them after summarizing.
I'm thinking of writing my own software to run posts through an LLM to get a one-paragraph summary. I'm tempted to filter my feed by tag, but that's too high-level (I can't follow in-the-weeds AI safety research, but I do want to read certain kinds of high-level technical posts).
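Roughly the kind of thing I have in mind is sketched below. The feed URL, tag names, and model name are placeholders/assumptions rather than anything LW provides, and a real version would need better tag handling and the full post text rather than the RSS snippet.

```python
# Personal-tooling sketch: pull the RSS feed, drop posts tagged with topics
# I skip, and ask an LLM for a one-paragraph summary of the rest.
import feedparser
from openai import OpenAI

FEED_URL = "https://www.lesswrong.com/feed.xml"   # assumed RSS endpoint
SKIP_TAGS = {"AI"}                                 # tag filter: too coarse, but a start
client = OpenAI()                                  # expects OPENAI_API_KEY in the environment

def summarize(text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",                       # placeholder model name
        messages=[{"role": "user",
                   "content": "Summarize this post in one paragraph:\n\n" + text}],
    )
    return resp.choices[0].message.content

for entry in feedparser.parse(FEED_URL).entries:
    tags = {t.term for t in getattr(entry, "tags", [])}
    if tags & SKIP_TAGS:
        continue
    body = getattr(entry, "summary", "")           # RSS body (often truncated HTML)
    print(entry.title, "\n", summarize(body), "\n", entry.link, "\n")
```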
LLM summaries aren't yet non-hallucinatory enough that we've felt comfortable putting them on the site, but we have run some internal experiments on this.
My opinion is a bit mixed on LessWrong at the moment. I'm usually looking for one of two types of content whenever I peruse the site:
- Familiar Ideas Under Other Names: Descriptions of concepts and techniques I already understand that use language more approachable to "normal" people than the highly-niche jargon I use myself, which help me discuss them with others more conveniently
- Unfamiliar or Forgotten Ideas: Descriptions of concepts and techniques I haven't thought of recently or at all, which can be used as components for future projects
I've only been using the site for a few months, but I found a large initial surge of Familiar Ideas Under Other Names, and now I have my filters mostly set up to fish for possible new catches over time. Given the complexity and scope of some of my favorite posts in this category, I'm still fairly satisfied with a post meeting these requirements only showing up once a month or so. Before coming to LW, I would seldom encounter such things, so I'm still enjoying an increased intake.
I've been having a much harder time finding Unfamiliar or Forgotten Ideas, but that category has always been a tricky one to pursue even at the best of times, so it's hard to speculate one way or another about whether the current state of the site is acceptable or not.
On a more general note, I'm not able to direct much interest towards much of the AI discussion because it rates very poorly on the "how important is this to me" scale. I've been having to invest some effort into adjusting my filters to compensate, but I notice that there's still a lot of content that is adjacent-but-not-directly AI that sneaks in anyway. However, I haven't had much time to fully exercise the filters, so I don't want to present this as some sort of major issue when it's currently just a bit tiresome.
As for my thoughts on LW generally, I both like and dislike the site pretty severely.
On the one hand, I do think it has some major positives compared to basically every other site. In particular, I explicitly like the fact that politics is very discouraged here, which allows for much more productive conversations, and more generally I think the moderation system is quite great, and I especially like the fact that they try to keep the garden well-kept.
I also like the fact that they try to separate "I disagree with this" from "this is a bad post/comment", splitting the roles of karma voting and agree/disagree voting.
I also agree with a lot of lsusr's "The Good" claims on LW.
If there's one reason I stay on LW, it's probably that the quality of the conversation doesn't get nearly as bad as on the rest of the internet, and is in fact quite good; while LW is overvalued, useful insights can be extracted if you're careful.
I mostly agree with TurnTrout's and lsusr's answers on what the problems are, with the sidenote that I suspect a lot of problems came from the influx of FiO readers into LW without any other grounding point. Niplav thankfully avoided this wave, and I buy that empirical circles like Ryan Greenblatt's aren't relying on fiction, but I'm worried that a lot of non-experts are so bad epistemically that they ended up essentially writing fanfic on AI doom and forgot to check whether the assumptions actually hold in reality.
(Idea drawn from TurnTrout and JDP in the discord, they both realized the implications of a FiO influx way before I did.)
"I suspect a lot of problems came from the influx of HPMOR and FiO readers into LW without any other grounding point. Niplav thankfully avoided this wave, and I buy that empirical circles like Ryan Greenblatt's aren't relying on fiction, but I'm worried that a lot of non-experts are so bad epistemically that they ended up essentially writing fanfic on AI doom and forgot to check whether the assumptions actually hold in reality."
This seems unlikely to me, since HPMOR peaked in popularity nearly a decade ago.
I love the react system, so much so that I usually use it as my favored mode of communication, because I get to point out stuff that is important without having to go through the process of writing a comment, which can be long.
I especially like the way reacts can be applied to individual pieces of text, and IMO the react system, especially the semantic reacts have been a good place where LW outperforms other websites.
Sadly, LW isn't a community that I would say I am a part of. I say that begrudgingly, as LW seems to 'have been', and still is, 'a decent place on the internet'.
The issue with being decent is that it doesn't work long-term, at least not for me.
Why did other people leave LW before? I'm not sure. Why do I want to leave? And what drew me here in the first place?
I came here looking for people with integrity: people thinking outside the box, highly intelligent, and willing both to pursue their individuality and to take/give feedback among equals/peers - with the intention of getting help, but also of supporting the growth of my own, as well as others', rationality/general intelligence/EQ/bigger goals, in a congruous, open-ended, honest, sincere and cooperative environment.
Taking ideas and concepts to their logical conclusion is something I care about, and I was hoping to find a community that is Congruous and Coherent according to its own explicit ideas and values, with enough discernment to make it work. This is a tall order, perhaps, but I was hoping, when I found this place, that it was closer to that ideal.
From what I've seen, there might be a slightly higher population of the kinds of people I'm looking for here, but on the other hand, there is a wide gulf between what those people want and need to thrive, and the kind of environment LW is providing.
I'm not the most articulate in writing, but I wrote about this gulf of who LW is for in some comments, and also in a post called "The LW crossroads of purpose".
And I see it as a very pressing matter, not only because laissez-faire seems to ruin subcultures, but because there are so many places on the internet where your average Joe can go, and so few where those who crave high-end personal, rational and emotional development can actually get support, and support each other.
A place where integrity, respect and cooperation are fundamental practices, and where things aren't solved through "democracy" but by finding the best way forward. A place that supports the creation of the very good/best, and not the decently/average+ good.
I don't know whether those who 'left' LW went somewhere more coherent in this regard. Substack seems to be a place, but is there a 'community' out there waiting? Not that I am aware of. So I would rather write this here, and on the off chance that the idea gets traction and LW ends up with a "serious dojo" for rationality - with a high bar to entry, in a high-trust environment that grows organically and slowly - I'll at least hear of it, and might even want to join.
I wouldn't even mind if it had a subscription fee of sorts, and some of the members got paid. Why sweat the small stuff.
For now, I'll stay in the shadows, and maybe look at older posts and see who was here before. Maybe some of them are people I want to talk to.
Kindly,
Caerulea-Lawrence
LessWrong is great, but the social groups that run it are all focussed on AI x-risk. The same goes for the high-karma users who get more upvote power and get to decide what reaches the homepage. My timelines and x-risk estimates are lower than theirs (15% ASI by 2030, 5% humanity dead by 2030), and hence I'd like to be able to discuss other topics.
I currently vaguely feel I will have to single-handedly lead such an effort if I want it to happen. Maybe you'll see more of me (but not anon) in the coming years.
LW and Astral Codex Ten are the best places on the internet. Lately LW tops the charts for me, perhaps because I've made it through Scott's canon but not LW's. As a result, my experience on LW is more about the content than the meta and community. Just coming here, I don't stumble across much evidence of conflict within this community - I only learned about it after friending various rationalists on FB, such as Duncan (by the way, I really like having rationalists in my FB feed, which does give me a sense of community and belongingness... perhaps there is something to having multiple forums).
On the slight negative side, I have long believed LW to be an AI doom echo chamber. This is partly due to my fibrotic intuitions, persisting despite reading Superintelligence and having friends in AI safety research, and only breaking free after ChatGPT. But part of it I still believe is true. The reasons include hero worship (as mentioned already on this thread), the community's epistemic landscape (as in, it is harder and riskier to defend a position of low vs high p(doom)), and perhaps even some hegemony of language.
In terms of the app: it is nice. From my own experiences building apps with social components, I would have never guessed that a separate "karma vote" and "agreement vote" would work. Only on LW!
LessWrong is mostly ok. Specific problems/new things I'd like:
NEW REACTION EMOJIS
TECHNICAL PROBLEMS
"HIDE USER NAMES" PROBLEMS
I often have an ugh feeling towards reading long comments.
Posts are usually well written, but long comments are usually rambly, even the highest karma ones. It takes a lot of effort to read the comments on top of reading the post, and the payoff is often small.
But for multiple reasons, I still feel an obligation to read at least some comments, and ugh.
For possible solutions:
1. This is my problem and I should find a way to stop feeling ugh
2. Have some ways to easily read a summary of long comments (AI or author generated)
3. People should write shorter comments on average
Pretty good overall. My favorite posts are about theories of the human mind that help me build a model of my own mind and the minds of others, especially of how it can go wrong (mental illness, ADHD, etc.).
The AI stuff is way over my head, to the point where my brain just bounces off the titles alone, but that's fine - not everything is for everyone. Also, reading the acronyms EDT and CDT always makes me think of the timezones, not the decision theories.
About the only complaint I have is that the comments can get pretty dense and recursively meta, which can be a bit hard to follow. Zvi will occasionally talk about a survey of AI safety experts giving predictions about stuff and it just feels like a person talking about people talking about predictions about risks associated with AI. But this is more of a me thing and probably people who can keep up find these things very useful.
I think it's still the best forum for discussing the most important thing happening in the world.
Suppose I start a dialog knowing I will never choose to publish it. Would the LW team welcome that or tend to consider it a waste of resources because nothing gets published?
If you get value out of it, we're happy for dialogues to be used that way, as long as it's clear to all participants what the expectations re: publishing/not publishing are, so that nobody has an unpleasant surprise at the end of the day. (Dialogues currently allow any participant to unilaterally publish, since most other options we could think of imposed a lot of friction on publishing.)
Why not make it so there is a box "ask me before allowing other participants to publish" that is unchecked by default?
I found this upsettingly contrary to my expectations. I thought that the way it would work would be that all participants would need to click 'publish' for it to publish. Not a huge deal, but you should make it clear that that is the case, and ideally allow for any of the participants to opt the dialogue out of 'unilateral publish' mode.
The LessWrong Review runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?
Hello! This is jacobjacob from the LessWrong / Lightcone team.
This is a meta thread for you to share any thoughts, feelings, feedback or other stuff about LessWrong, that's been on your mind.
Examples of things you might share:
...or anything else!
The point of this thread is to give you an affordance to share anything that's been on your mind, in a place where you know that a team member will be listening.
(We're a small team and have to prioritise what we work on, so I of course don't promise to action everything mentioned here. But I will at least listen to all of it!)
I haven't seen any public threads like this for a while. Maybe there's a lot of boiling feelings out there about the site that never get voiced? Or maybe y'all don't have more to share than what I find out from just reading normal comments, posts, metrics, and Intercom comments? Well, here's one way to find out! I'm really curious to ask and see how people feel about the site.
So, how do you feel about LessWrong these days? Feel free to leave your answers below.