I'm very sympathetic to this general case, but the post does raise a bunch of red flags. I've asked Claude to summarize how good the life of the typical bee is, and it presented a far less negative picture. I'm not sure I can trust this article more than that.
Although I don't super like honey, so I might stop eating it / eat less of it anyway.
This doesn't feel like it engages with anything I believe. None of the things you listed are things I object to. I don't object to how you wrote the Scaling Hypothesis post, I don't object to the Baby Eaters, I super don't object to decoupling, and I super extra don't object to 'politics is the mind-killer'. The only one I'd even have to think about is Crocker's Rules, but I don't think I have an issue with those, either. They're notably something you opt into.
I can definitely say on my own part that nothing of major value I have done as a writer online—whether it was popularizing Bitcoin or darknet markets or the embryo selection analysis or writing 'The Scaling Hypothesis'—would have been done if I had cared too much about "vibes" or how it made the reader feel. (Many of the things I have written definitely did make a lot of readers feel bad. And they should have. There is something wrong with you if you can read, say, 'Scaling Hypothesis' and not feel bad. I myself regularly feel bad about it! But that's not a bad thing.) Even my Wikipedia editing earned me doxes and death threats.
I claim that Said's post is bad because it can be rewritten into a post that fulfills the same function but doesn't feel as offensive.[1] Nothing analogous is true for the Scaling Hypothesis. And it's not just that you couldn't rewrite it to be less scary but convey the same ideas; rather, the whole comparison is a non-starter because I don't think that your post on the scaling hypothesis has bad vibes, at all. If memory serves (I didn't read your post in its entirety back then, but I read some of it and I have some memory of how I reacted), it sparked a kind of "holy shit this is happening and extremely scary ---(.Ó﹏Ò.)" reaction. This is, like, actively good. It's not in the same category as Said's comment in any way whatsoever.
[...] on LW, where it has long been a core tenet that one should not be bothered by messages like that.
I agree that it is better to not be bothered. My position is not "you should be more influenced by vibes"; it's something like "in the real world, vibes are about 80% of the causal factors behind most people's comments on LW and about 95% outside of LW, and taking this fact about how brains work into account in how you write is going to be good, not bad". In particular, as I described in my latest response to Zack, I claim that the comments that I actually end up leaving on this site are significantly less influenced by vibes than Said's, because recognizing what my brain does allows me to reject it if I want to. Someone who earnestly believes themselves to be vibe-blind while not being vibe-blind at all can't do that.
Someone once wrote, upon being newly arrived to LW, a good observation of the local culture about how this works [...]
This honestly just doesn't seem related, either. Status-blindness is more specific than vibe-blindness, and even if vibe-blindness were a thing, it wouldn't contradict anything I've argued for.
it is not identical in terms of content, as Zack pointed out, but here I'm using "function" in the sense of the good thing the comment achieves, which is to leave a strongly worded and valid criticism of the post. (In actual fact, I think my version is significantly more effective at doing that.) ↩︎
Okay -- I agree that the overall meaning of the comment is altered. If you have a categorical rule of "I want my meaning to be only this and exactly this, and anything that changes it is disqualified" then, yes, your objection is valid. So consider my updated position to be something like, "your standard (A) has no rational justification, and also (B) relies on a false model of how people write comments." I'll first argue (A), then (B).
Similar information, but not "exactly" the same information. Deleting the "very harmful false things" parenthetical omits the claim that the falsehoods promulgated by organized religion are very harmful. (That's significant because someone focused on harm rather than epistemics might be okay with picking up harmless false beliefs, but not very harmful false beliefs.) Changing "very quickly you descend" to "you can descend" alters the speed and certainty with which religious converts are claimed to descend into nebulous and vague anti-epistemology. (That's significant, because a potential convert being warned that they could descend into anti-epistemology might think, "Well, I'll be extra careful not to do that, then," whereas a warning that one very quickly will descend is less casually brushed off.)
It is logically coherent to have the () reactions. But do you think it's plausible? What would be your honest probability assessment that a religious person reads this and actually goes that route -- as in, they accept the claims of the comment but take the outs you describe in () -- whereas if they had read Said's original comment instead, they'd still accept the premises, and this time they'd be convinced?
Conversely, one could imagine that a religious person reads Said's version and doesn't engage with it because they feel offended, whereas the same person would have engaged with my version. (Which, obviously, I'd argue is more likely.)
At this point, my mental model of you responds with something like
You're probably correct on the consequential analysis (i.e., the softened version would be more likely to be persuasive)[1], but I don't think it follows that we as a community should therefore moderate vibes because [very eloquently argued case about censorship being bad that I won't try to replicate here]
To which I say, okay. Fine. I don't think there is a slippery slope here, but I think arguing this is a losing battle. So I'll stop with (A) here.
My case for (B) is that the algorithm which produced Said's message didn't take any of these details into account, so changing them doesn't censor or distort the intent behind the message. Said didn't run an assessment of exactly how harmful the consequences are, determine that they're most accurately described as "very harmful" rather than "harmful" or "extremely harmful", and then post it. Ditto with the other example.
I'm not sure how much evidence I need here to make this point, but here are some ways in which you can see that the above is true:
If you did consider the meaning to this level of detail, then you wouldn't write "very quickly you descend", because, well, you might not descend; it's not 100%, so you'd have to qualify this somehow.[2]
Thinking about the content of your messages this carefully takes a lot of time. Said doesn't take this much time for his comments, which is how he can respond so quickly.
If you thought about the actual merits of the proposal, then you'd scrap the entire second half of the comment, which is only tangentially relevant to the actual crux. You would be far more likely to point out that a good chunk of the post relies on this sentence
and to the extent anything that doesn't consider itself a religion provides these, it's because it's imitating the package of things that makes something a religion.
... which is not justified in the post at all. This would be a vastly more useful critique!
So, you're placing this extreme importance on the precise semantic meaning of Said's comment, when the comment wasn't that well thought-out in the first place. I'd be much more sympathetic to defending details of semantic meaning if those details had been carefully selected.
The thing that's frustrating to me -- not just about this particular point in this conversation but about the entire vibes debate -- and which I should probably have pointed out much earlier -- is that being more aware of vibes makes your messages less dependent on them, not more. Because noticing the influence allows you to adjust. If you realize a vibe is pushing you to write X, you can then be like, hold on, that's stupid; let me instead re-assess how whatever I'm responding to right now actually impacts the reasons why I believe the thing I believe. And then you'll probably notice that what you're pushed to write doesn't really hit the crux at all, scrap it, and write something else instead. (See the footnote[3] for examples in this category.)
To put it extremely bluntly, the thing that was actually causally upstream of the details in Said's message was not a careful consideration of the factual details; it was that he thinks religion is dumb and bad, which influenced a parameter sent to the language-generation module that output the message, which made it choose language that sounded more harsh. This is why it says "perfect example" and not "example", why the third paragraph sounds so dismissive, why the message contains no !s, why he said "very quickly you descend" rather than "you can descend", and so on. The vibe isn't an accidental by-product, it's the optimization target! Which you can clearly observe by the changes I've pointed out here.
... and on a very high level, to just give a sense of my actual views on this, the whole thing just seems ridiculously backwards in the sense that it doesn't engage with what our brains are actually doing. Like, I think it happens to be the case that not listening to vibes is often better (although this is a murky distinction because a lot of good thought relies on what are essentially vibes as well -- it's ultimately a form of computation), but the broader point is that, whatever you want to improve, more awareness of what's actually going on is going to be good. Knowledge is power and all that.
If you don't think this, then that would be a crux, but also I'd be very surprised and not sure how I'd continue the conversation; for now I'm not thinking too much about this. ↩︎
This is absurdly nit-picky, but so are the changes you pointed out. ↩︎
Alright, for example: the first thing I wrote when responding to your comment was about you quoting me saying "These two messages convey exactly the same information". I actually meant to refer only to the specific line I quoted, where this statement was more defensible. But I asked myself, "does this actually matter for the crux", and the answer was no, so I scrapped it. The same thing is true for me quoting Gordon's response and pointing out that it fits better with my model than yours, and a snide remark about how your () ascribes superhuman rationality powers to religious people in particular.
Now you may be like, well those are good things, but that's different from vibes. But it's not really, it's the same skill of, notice what your brain is actually doing, and if it's dumb, interfere and make it do something else. More introspection is good.
I guess the other difference is that I'm changing how I react here rather than how someone else reacts. I guess some people may view one as super good and the other as super bad (e.g., gwern's comment gave off that vibe to me). To me these are both good for the same reason. Deliberately inserting unhelpful vibes into your comment is like uploading a post with formatting that you know will break the editor and then being like "well, the editor only breaks because this part here is poorly programmed; if it were programmed better then it would do fine". In any other context this would pattern-match to obviously foolish behavior. ("I don't look before crossing the street because cars should stop.") It's only taken seriously because people are deluded about the degree to which vibes matter in practice.
Anyway, I think you get the point. In retrospect I should have probably structured a lot of my writing about this differently, but can't do that now. ↩︎
Do you mean this literally? Because that’s intensely ironic, if so! You see, it’s extremely obvious to me why that comment got downvoted.
I didn't say I don't get why it happened; I said I don't get why it should happen, meaning I don't see a reason I agree with -- I think the comment is fine. (And if it matters, I never thought about what I think would have happened or why with this comment, so I made neither a true nor a false prediction.)
I think that you are typical-minding very strongly. It seems as if you’re not capable of imagining that someone can fail to perceive the sort of thing we’re discussing as being some sort of social attack. This is causing you to both totally misunderstand my own perspective, and to have a mistaken belief about how “almost everyone on LessWrong” thinks. (I don’t know if you just haven’t spent much time around people of a certain mental make-up, or what.)
I think I'm capable of imagining that someone can fail to perceive this sort of thing. I know this because I did imagine this -- when you told me you don't care, and every comment I had read from you was in the same style, I (perhaps naively) just assumed that you were telling the truth.
But then you wrote this reply to me, which was significantly friendlier than any other post you've written to me. This came directly after I said this
BTW I think asking me what I mean by vibes is completely reasonable. Someone strong-downvoted your comment, I guess because it sounds pedantic, but I don't agree with this; I don't think this is a case where the concept is so obvious that you shouldn't ask for a definition. (I strong-upvoted back to 0.)
And then also your latest comment (the one I'm replying to) is the least friendly, except for the final paragraph, which is friendly again. So, when I did something unusually nice,[1] you were nice in response. When I was the most rude, in my previous comment, you were the most rude back. Your other comments in this thread that stand out as more nice are those in response to Ben Pace rather than habryka.
... so in summary, you're obviously just navigating social vibes like a normal person. I was willing to take your word that you're immune, but not if you're demonstrating otherwise! (A fun heuristic is just to look at {number of !}/{post length}. There are exceptions, but most of the time, !s soften the vibe.)
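To make that heuristic concrete, here's a minimal sketch (in Python; the function name, measuring length in characters, and the example scores are my own illustration, not anything established):

```python
def exclamation_density(text: str) -> float:
    """Toy heuristic: {number of !} / {post length}, length in characters.

    Higher usually means a softer vibe, though there are exceptions
    ("WHAT is wrong with you!!" softens nothing).
    """
    if not text:
        return 0.0
    return text.count("!") / len(text)

# Example: a friendlier-sounding comment scores higher than a flat one.
print(exclamation_density("Great point! I hadn't considered that!"))  # ~0.05
print(exclamation_density("This is wrong. See my earlier reply."))    # 0.0
```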
clarifying that this was not an intended trap; I just genuinely don't get why the particular comment asking me to define vibes should get downvoted. (Although I did deliberately not explain why I said I don't believe you; I wanted to see if you'd ask or just jump to a conclusion.) ↩︎
Strong disagree. Said's comment does several things that have almost no function except to make vibes worse, which means you can just take those out, which will make the comment shorter. I will in fact add in a little bit of hedging and it will still be shorter overall because the hedging will require fewer words than the unnecessary rudeness.
Here's Said's comment. Here's a not-unnecessarily-rude-but-still-completely-candid-version-that's-actually-166-characters-shorter-than-the-original-and-that-I-genuinely-think-achieves-the-same-thing-and-if-not-I'd-like-to-hear-why-not:
I think it's bad to participate in organized religion because it exposes you to intense social pressure to believe false things.
You can find religions you can practice without being asked to give up your honest search for truth with no need to even pretend to have already written the bottom line.
This may formally be true, i.e., you may not be officially asked to believe false things. But if your social context consists of people who all believe approximately the same false things, and the social context is organized around those beliefs, and the social context valorizes those beliefs, then the social pressure will be intense nonetheless. And some of these false beliefs are fairly subtle! (Eliezer discusses this in the sequences.)
I also got asked about how I feel about religions and truth seeking. My answer is that you shouldn’t think of religions as being about the truth as rationalists typically think of it because religions are doing something orthogonal.
...which I think is just an example of damage done by a religion. The claim that “you shouldn’t think of religions as being about the truth as rationalists typically think of it” seems like typical anti-epistemology.
Religion is about “the truth as rationalists typically think of it”. There is nothing but “the truth as rationalists typically think of it”, because there’s just “the truth”, and then there are things which aren’t truth claims at all, of any kind (like preferences, etc.). But get into religion, start relaxing your epistemic standards just a bit, and you can descend into this sort of nebulous “well there’s different things which are ‘true’ in different ways, and what even is ‘truth’, anyway”, etc. And then your ability to know what’s true and what’s false is gone, and nothing is left but “vibes”.
This takes it down from about an 8/10 rudeness to maybe a 4 or 5. Is anyone going to tell me that this is not sufficiently blunt or direct? Will non-savvy readers have to read between the lines to figure out that this is a rebuttal of the core idea? I think the answer is clearly no; if people see this comment, they will immediately view it as a rebuttal of the post's thesis.
The original uses phrases like
And here we have a perfect example of the damage done by religion.
This is not any more direct than saying
I think this is just an example of the damage done by religion.
These two messages convey exactly the same information, the first just has an additional layer of derision/mockery which the second doesn't. (And again, the second is shorter.) And I know you know this difference because you navigate it in your own writing, which is why I'm somewhat irritated that you're talking as if Said's comments were just innocently minimalistic/direct.
And now consider how many of them stop being true if we quantify “everyone” not over the population of the Earth, but over the commentariat of this forum.
I was actually already thinking about just people on LessWrong when I wrote that. I think it's almost everyone on LessWrong.
What makes one better at rationality is behaving as if one does not have said reaction
We’re talking about having an emotional reaction which interferes with your ability to consider what is being said to you. The ability to not suffer that detrimental effect is not “numbness”.
then this person is simply not ready to participate in any kind of serious discussion, period.
Not being bothered by this sort of thing is good (cf. the earlier parts of this comment); being bothered by it is bad.
Right, I mean, you're repeatedly and categorically framing the problem as lying solely with the person who gets bothered. You've done the same in the previous post, where I opted out of the discussion.
It's not my view at all. I think a community will achieve much better outcomes if being bothered by the example message is considered normal and acceptable, and writing the example message is considered bad.
I don't know how to proceed from here. Note that I'm not trying to convince you; I'm only responding. What I can say is: if you are trying to convince me, you have to do something other than what you did in this comment, because I felt like you primarily told me things that I already understood from the other comment thread (where I truncated the discussion). In particular, there are a lot of times where you're just stating something as if you expect me to agree with it (like all the instances I quoted), but I don't -- and again, I feel like I already knew that you think this from the other comment.
For completeness:
#1-#2
This argues that the pain thing is different; I agree it's different; it doesn't mean that self-modification (or selection) is desirable here.
it’s very easy to not be bothered by this sort of thing (#3)
I already said that I think ~everyone is bothered by it, so obviously, disagree. (I don't even believe that you're not bothered by this kind of thing;[1] I think you are and it does change your conduct as well, although I totally believe that you believe you're not bothered.)
Not being bothered by this sort of thing is good
Actually I technically do agree with this -- in the sense that, if you could flip a switch where you're not bothered by it but you still notice the vibe, that would be good -- but I think it's not practically achievable so it doesn't really matter.
This is something I usually wouldn't say out of politeness/vibe protection, but since you don't think I should be doing that, saying it kind of feels more respectful, idk. ↩︎
On the contrary, it’s highly correlated with a variety of qualities which we have excellent reason to care about (and which we might collectively summarize as “likelihood of usefully contributing to advancement of rationality and the accomplishment of useful goals”).
I definitely don't agree with this. Especially in this particular case, I think almost everyone will have the same reaction, and I don't think people who don't have this reaction are meaningfully better at rationality. (In general, I don't think the way to improve your rationality is to make yourself as numb as possible.)
these are in fact two different things, not one thing phrased in two different ways!
That's because I phrased it poorly. I was trying to gesture at the same feeling with both; I just don't know what to call it. Like, the feeling that the situation you're in has become adversarial. I think it's a weaker version of what you'd feel if you were in a group conversation and suddenly one person insults someone else, or something like that.
They are amenable to volitional alteration
I completely agree with this, but "you can theoretically train yourself to not be bothered by it" is true for a lot of things, and no one thinks that we should therefore give people a free pass to do them. You can train yourself to have equanimity to physical pain; presumably this wouldn't make it okay for me to inflict physical pain on you. You need more pieces to argue that we should ask people to self-modify to not have the reaction, rather than avoid triggering the reaction.
In this case, that strikes me as not reasonable. This particular reaction (i.e., having this adversarial feeling that I failed to describe well in response to the line I quoted) seems both very hard to get rid of and probably not desirable to get rid of. There's a very good evolutionary reason why we have it, to detect conflict, and it still seems pretty valuable today. I think I'm unusually sensitive to this vibe, and I think this is pretty useful for navigating social situations. Spotting potential conflict early is useful; this stuff is relevant information.
Basically the sum of what a post or comment will make the reader feel. (This is not the actual definition because the actual definition would require me to explain what I think a vibe is at the level of the brain, but it's good enough.)
Technically this is a two-place function of post and reader, because two different people can feel very different things from reading the same thing, so strictly speaking it doesn't make sense to say that a comment has bad vibes. But in practice, different readers' reactions are highly correlated. So when I say this comment has bad vibes, it's short for "it will have bad vibes for most readers", which I guess is in turn short for "most people who read this will feel things that are detrimental to having a good discussion".
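As a toy illustration of that move (the type alias, the -1..+1 scale, and the averaging are all my own invention, just to pin down the shape of the claim):

```python
from typing import Callable, Sequence

Vibe = float  # invented scale: -1 = adversarial, +1 = friendly

def overall_vibe(comment: str,
                 readers: Sequence[str],
                 vibe: Callable[[str, str], Vibe]) -> Vibe:
    """Collapse the two-place function vibe(comment, reader) to one number.

    "This comment has bad vibes" then just means the average over a reader
    population is low -- which only works because readers are correlated.
    """
    return sum(vibe(comment, r) for r in readers) / len(readers)

# Hypothetical usage: everyone in this toy population reads it as combative.
print(overall_vibe("This is very straightforwardly a bad thing.",
                   readers=["alice", "bob"],
                   vibe=lambda c, r: -0.8))  # -0.8
```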
To give the most obvious example in the specific comment, the sentence
This is very straightforwardly a bad thing.
sounds very combative (i.e., will generally evoke adversarial feelings). And tbc, this will also be true for people who aren't the author, because we've evolved to simulate how others feel; that's why you can feel awkward watching an awkward scene in a movie.
BTW I think asking me what I mean by vibes is completely reasonable. Someone strong-downvoted your comment, I guess because it sounds pedantic, but I don't agree with this; I don't think this is a case where the concept is so obvious that you shouldn't ask for a definition. (I strong-upvoted back to 0.)
Oh. Oh. So you agree with me that the details weren't that well thought out (or at least you didn't bother arguing against this), and ditto about the net effects, but you don't think it matters (or at any rate, that it isn't the important point) because you're not trying to optimize positive effects, but just honest communication...?
This is not what I thought your position was, but I guess it makes sense if I try to retroactively fit it. This means most (all?) of my objections don't apply anymore. Like, yeah, if you terminally value authentically representing the author's emotional state of mind, then of course deliberately adjusting vibes is a net negative for your values.
(I think this completely misses the point I was trying to make, which is that "I will do X which I know will have bad effects, but I'll do it anyway because the reason it has bad effects is that other people are making mistakes, so it's not me who should change X, but other people who should change" is recognized as dumb for almost all values of X, especially on LW -- but I also think this doesn't matter anymore, either, because the argument is again about consequences, which you just demoted as the optimization target. If you agree that it doesn't matter anymore, then no need to discuss this more.)
I guess now I have a few questions