There's a major challenge in all of this in that I see any norms you introduce as being additional tools that can be abused to win -- just selectively call out your opponents for alleged violations to discredit them.

I think this is usually done subconsciously -- people are more motivated to find issues with arguments they disagree with.

It seems like you wanted me to respond to this comment, so I'll write a quick reply.

Now for the rub: I think anyone working on AI alignment (or any technical question of comparable difficulty) mustn't exhibit this attitude with respect to [the thing they're working on]. If you have a problem where you're not able to achieve high confidence in your own models of something (relative to competing ambient models), you're not going to be able to follow your own thoughts far enough to do good work--not without being interrupted by thoughts like "But if I multiply the probability of this assumption being true, by the probability of that assumption being true, by the probability of that assumption being true..." and "But [insert smart person here] thinks this assumption is unlikely to be true, so what probability should I assign to it really?"

This doesn't seem true for me. I think through details of exotic hypotheticals all the time.

Maybe others are different. But it seems like maybe you're proposing that people self-deceive in order to get themselves confident enough to explore the ramifications of a particular hypothesis. I think we should be a bit skeptical of intentional self-deception. And if self-deception is really necessary, let's make it a temporary suspension-of-disbelief sort of thing, as opposed to a life belief that leads you to not talk to those with other views.

It's been a while since I read Inadequate Equilibria. But I remember the message of the book being fairly nuanced. For example, it seems pretty likely to me that there's no specific passage which contradicts the statement "foxes make better predictions on average than hedgehogs".

I support people trying to figure things out for themselves, and I apologize if I discouraged anyone from doing that -- it wasn't my intention. I also think people consider learning from disagreement to be virtuous for a good reason, not just due to "epistemic learned helplessness". Also, learning from disagreement seems importantly different from generic deference -- especially if you took the time to learn about their views and found yourself unpersuaded. Basically, I think people should account for both known unknowns (in the form of people who disagree whose views you don't understand) and unknown unknowns, but it seems OK not to defer to the masses / to authorities if you have a solid grasp of how they came to their conclusions (this is my attempt to restate the thesis of Inadequate Equilibria as I remember it).

I don't deny that learning from disagreement has costs. Probably some people do it too much. I encouraged MIRI to do it more on the margin, but it could be that my guess about their current margin is incorrect, who knows.

Separately, I don't think the MIRI/CFAR associated social circle is a cult.

Nor do I. (I've donated money to at least one of those organizations.) [Edit: I think they might be too tribal for their own good -- many groups are -- but the word "cult" seems too strong.]

I do think MIRI/CFAR is to some degree an "internet tribe". You've probably noticed that those can be pathological.

Anyway, you're writing a lot of words here. There's plenty of space to propose or cite a specific norm, explain why you think it's a generally good norm, and explain why Ilya violated it. I think if you did that, and left off the rest of the rhetoric, it would read as more transparent and less manipulative to me. A norm against "people [comparing] each other to characters from Rick and Morty" seems suspiciously specific to this case (and also not necessarily a great norm in general).

Basically I'm getting more of an "ostracize him!" vibe than a "how can we keep the garden clean?" vibe -- you were pretending to do the second one in your earlier comment, but I think the cursing here makes it clear that your true intention is more like the first. I don't like mob justice, even if the person is guilty. (BTW, proposing specific norms also helps keep you honest, e.g. if your proposed norm was "don't be crass", cursing would violate that norm.)

(It sounds like you view statements like the above as an expression of "aggressive conformism". I could go on about how I disagree with that, but instead I'll simply note that under a slight swap of priors, one could easily make the argument that it was the original comment by Ilya that's an example of "aggressive conformism". And yet I note that for some reason your perception of aggressive conformism was only triggered in response to a comment attacking a position with which you happen to agree, rather than by the initial comment itself. I think it's quite fair to call this a worrisome flag--by your own standards, no less.)

Ilya's position is not one I agree with.

I'm annoyed by aggressive conformism wherever I see it. When it comes to MIRI/CFAR, my instinct is to defend them in venues where everyone criticizes them, and criticize them in venues where everyone defends them.

I'll let you have the last word in this thread. Hopefully that will cut down on unwanted meta-level discussion.

It's not obvious to me that Ilya meant his comment as aggressively as you took it. We're all primates and it can be useful to be reminded of that, even if we're primates that go to space sometimes. Asking yourself "would I be responding the way I'm responding now if I were, in fact, in a cult?" seems potentially useful. It's also worth remembering that people coded as good aren't always good.

Your comment was less crass than Ilya's, but it felt like you were slipping "we all agree my opponent is a clear norm violator" into a larger argument without providing any supporting evidence. I was triggered by a perception of manipulativeness and aggressive conformism, which put me in a more factionalistic mindset.

It is an invitation to turn the comments section into something like a factionalized battleground

If you want to avoid letting a comments section descend into a factionalized battleground, you also might want to avoid saying that people "would not much be missed" if they are banned. From my perspective, you're now at about Ilya's level, but with a lot more words (and a lot more people in your faction).

I am interested in the fact that you find the comment so cult-y though, because I didn't pick that up.

It's a fairly incoherent comment which argues that we shouldn't work to overcome our biases or engage with people outside our group, with strawmanning that seems really flimsy... and it has a bunch of upvotes. Seems like curiosity, argument, and humility are out, and hubris is in.

Thanks, this is encouraging.

I think mostly everyone agrees with this, and has tried, and in practice, we keep hitting "inferential distance" shaped walls, and become discouraged, and (partially) give up.

I've found that an unexpected benefit of trying to explain my thinking and overcome the inferential distance is that I think of arguments which change my mind. Just having another person to bounce ideas off of causes me to look at things differently, which sometimes produces new insights. See also the book passage I quoted here.

which in turn I fundamentally see as a consequence of epistemic learned helplessness run rampant

Not sure I follow. It seems to me that the position you're pushing, that learning from people who disagree is prohibitively costly, is the one that goes with learned helplessness. ("We've tried it before, we encountered inferential distances, we gave up.")

Suppose there are two execs at an org on the verge of building AGI. One says "MIRI seems wrong for many reasons, but we should try and talk to them anyways to see what we learn." The other says "Nah, that's epistemic learned helplessness, and the costs are prohibitive. Turn this baby on." Which exec do you agree with?

This isn't exactly hypothetical: I know someone at a top AGI org (I believe they "take seriously the idea that they are a computation/algorithm") who reached out to MIRI and was basically ignored. It seems plausible to me that MIRI is alienating a lot of people this way, in fact. I really don't get the impression they are spending excessive resources engaging people with different worldviews.


Anyway, one way to think about it is that talking to people who disagree is just a much more efficient way to increase the accuracy of your beliefs. Suppose the population as a whole is 50/50 pro-Skub and anti-Skub. Suppose you learn that someone is pro-Skub. This should cause you to update in the direction that they've been exposed to more evidence for the pro-Skub position than the anti-Skub position. If they're trying to learn facts about the world as quickly as possible, their time is much better spent reading an anti-Skub book than a pro-Skub book, since the pro-Skub book will have more facts they already know. An anti-Skub book also has more decision-relevant info. If they read a pro-Skub book, they'll probably still be pro-Skub afterwards. If they read an anti-Skub book, they might change their position and therefore change their actions.

Talking to an informed anti-Skub in person is even more efficient, since the anti-Skub person can present the very most relevant/persuasive evidence that is the very most likely to change their actions.

Applying this thinking to yourself, if you've got a particular position you hold, that's evidence you've been disproportionately exposed to facts that favor that position. If you want to get accurate beliefs quickly you should look for the strongest disconfirming evidence you can find.
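To make that intuition concrete, here's a minimal toy simulation (my own illustrative sketch, not something from the original discussion -- the fact counts, book sizes, and the "lean toward whichever side you've seen more facts for" rule are all made-up assumptions). Each simulated person sees a random subset of facts, picks a side accordingly, and then we ask what a pro-Skub person gets out of a one-sided book from each side:

    # Toy model of the Skub argument above (illustrative sketch; all numbers are arbitrary).
    import random

    random.seed(0)

    PRO_FACTS = set(range(50))         # facts favoring the pro-Skub position
    ANTI_FACTS = set(range(50, 100))   # facts favoring the anti-Skub position
    ALL_FACTS = sorted(PRO_FACTS | ANTI_FACTS)
    FACTS_SEEN = 30                    # facts each person has happened to encounter
    BOOK_SIZE = 20                     # facts each one-sided book presents

    def sample_person():
        """A person who has seen a random subset of facts and leans toward
        whichever side they've seen more facts for."""
        seen = set(random.sample(ALL_FACTS, FACTS_SEEN))
        stance = "pro" if len(seen & PRO_FACTS) >= len(seen & ANTI_FACTS) else "anti"
        return seen, stance

    # Collect a population of pro-Skub people.
    pro_people = []
    while len(pro_people) < 10_000:
        seen, stance = sample_person()
        if stance == "pro":
            pro_people.append(seen)
    n = len(pro_people)

    # Learning someone is pro-Skub tells you they've probably seen more pro evidence.
    avg_pro = sum(len(s & PRO_FACTS) for s in pro_people) / n
    avg_anti = sum(len(s & ANTI_FACTS) for s in pro_people) / n
    print(f"pro-Skub people have seen {avg_pro:.1f} pro facts vs {avg_anti:.1f} anti facts on average")

    # Each book presents BOOK_SIZE facts drawn from its own side's pool.
    pro_book = set(random.sample(sorted(PRO_FACTS), BOOK_SIZE))
    anti_book = set(random.sample(sorted(ANTI_FACTS), BOOK_SIZE))

    new_from_pro = sum(len(pro_book - s) for s in pro_people) / n
    new_from_anti = sum(len(anti_book - s) for s in pro_people) / n
    print(f"new facts for a pro-Skub reader: {new_from_pro:.1f} (pro book) vs {new_from_anti:.1f} (anti book)")

    # Only the anti book can flip a pro-Skub reader (the pro book never can, by construction).
    flipped = sum(
        1 for s in pro_people
        if len((s | anti_book) & ANTI_FACTS) > len((s | anti_book) & PRO_FACTS)
    ) / n
    print(f"fraction of pro-Skub readers flipped by the anti book: {flipped:.2f}")

The exact numbers don't matter; the point is just that, conditional on someone already holding a position, the book from the other side contains more facts that are new to them, and it's the only book with a chance of changing their position and therefore their actions.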

None of this discussion even accounts for confirmation bias, groupthink, or information cascades! I'm getting a scary "because we read a website that's nominally about biases, we're pretty much immune to bias" vibe from your comment. Knowing about a bias and having implemented an effective, evidence-based debiasing intervention for it are very different.

BTW this is probably the comment that updated me the most in the direction that LW will become / already is a cult.

I'm not sure I agree with Jessica's interpretation of Eliezer's tweets, but I do think they illustrate an important point about MIRI: MIRI can't seem to decide if it's an advocacy org or a research org.

"if you actually knew how deep neural networks were solving your important mission-critical problems, you'd never stop screaming" is frankly evidence-free hyperbole, of the same sort activist groups use (e.g. "taxation is theft"). People like Chris Olah have studied how neural nets solve problems a lot, and I've never heard of them screaming about what they discovered.

Suppose there was a libertarian advocacy group with a bombastic leader who liked to tweet things like "if you realized how bad taxation is for the economy, you'd never stop screaming". After a few years of advocacy, the group decides they want to switch to being a think tank. Suppose they hire some unusually honest economists, who study taxation and notice things in the data that kinda suggest taxation might actually be good for the economy sometimes. Imagine you're one of those economists and you're gonna ask your boss about looking into this more. You might have second thoughts like: Will my boss scream at me? Will they fire me? The organizational incentives don't seem to favor truthseeking.

Another issue with advocacy is you can get so caught up in convincing people that the problem needs to be solved that you forget to solve it, or even take actions that are counterproductive for solving it. For AI safety advocacy, you want to convince everyone that the problem is super difficult and requires more attention and resources. But for AI safety research, you want to make the problem easy, and solve it with the attention and resources you have.

In The Algorithm Design Manual, Steven Skiena writes:

In any group brainstorming session, the most useful person in the room is the one who keeps asking “Why can’t we do it this way?”; not the nitpicker who keeps telling them why. Because he or she will eventually stumble on an approach that can’t be shot down... The correct answer to “Can I do it this way?” is never “no,” but “no, because....” By clearly articulating your reasoning as to why something doesn’t work, you can check whether you have glossed over a possibility that you didn’t think hard enough about. It is amazing how often the reason you can’t find a convincing explanation for something is because your conclusion is wrong.

Being an advocacy org means you're less likely to hire people who continually ask "Why can’t we do it this way?", and those who are hired will be discouraged from this behavior if it's implied that a leader might scream if they dislike the proposed solution. The activist mindset tends to favor evidence-free hyperbole over carefully checking if you glossed over a possibility, or wondering if an inability to convince others means your conclusion is wrong.

I dunno if there's an easy solution to this -- I would like to see both advocacy work and research work regarding AI safety. But having them in the same org seems potentially suboptimal.

The most natural shared interest for a group united by "taking seriously the idea that you are a computation" seems like computational neuroscience, but that's not on your list, nor do I recall it being covered in the sequences. If we were to tell 5 random philosophically inclined STEM PhD students to write a lit review on "taking seriously the idea that you are a computation" (giving them that phrase and nothing else), I'm quite doubtful we would see any sort of convergence towards the set of topics you allude to (Haskell, anthropics, mathematical logic).

As a way to quickly sample the sequences, I went to Eliezer's userpage, sorted by score, and checked the first 5 sequence posts:

IMO very little of the content of these 5 posts fits strongly into the theme of "taking seriously the idea that you are a computation". I think this might be another one of these rarity narrative things (computers have been a popular metaphor for the brain for decades, but we're the only ones who take this seriously, same way we're the only ones who are actually trying).

the sequences pipeline is largely creating a selection for this philosophical stance

I think the vast majority of people who bounce off the sequences do so either because they find them too longwinded or because they don't like Eliezer's writing style. I predict that if you ask someone involved in trying to popularize the sequences, they will agree.

In this post Eliezer wrote:

I've written about how "science" is inherently public...

But that's only one vision of the future. In another vision, the knowledge we now call "science" is taken out of the public domain—the books and journals hidden away, guarded by mystic cults of gurus wearing robes, requiring fearsome initiation rituals for access—so that more people will actually study it.

I assume this has motivated a lot of the stylistic choices in the sequences and Eliezer's other writing: the 12 virtues of rationality, the litany of Gendlin/Tarski/Hodgell, parables and fables, Jeffreyssai and his robes/masks/rituals.

I find the sequences to be longwinded and repetitive. I think Eliezer is a smart guy with interesting ideas, but if I wanted to learn quantum mechanics (or any other academic topic the sequences cover), I would learn it from someone who has devoted their life to understanding the subject and is widely recognized as a subject matter expert.

From my perspective, the question is how anyone gets through all 1800+ pages of the sequences. My answer is that the post I linked is right. The mystical presentation, where Eliezer plays the role of your sensei who throws you to the mat out of nowhere if you forgot to keep your center of gravity low, really resonates with some people (and really doesn't resonate with others). By the time someone gets through all 1800+ pages, they've invested a significant chunk of their ego in Eliezer and his ideas.
