+1 for pointing out an important problem, -1 for relaxing the norm against politics on LW. This sounds like a case of "we should try it, but try it somewhere far away where we won't accidentally light anything important on fire".
I'm not very familiar with the rationalist diaspora, but I wonder whether there were or are spaces within it where political discussions are allowed or welcome, how things turned out, and what lessons we can learn from their history to inform future experiments.
I do know about the weekly culture war threads on TheMotte and the "EA discuss politics" Facebook group, but haven't observed them long enough to draw any strong conclusions. Also, for my tastes, they seem a little too far removed from LW, both culturally and in terms of overlapping membership, because they both spawned from LW-adjacent groups rather than LW itself.
I think for this and other reasons, it may be time to relax the norm against discussing object-level political issues around here. There are definitely risks and costs involved in doing that, but I think we can come up with various safeguards to minimize the risks and costs, and if things do go badly wrong anyway, we can be prepared to reinstitute the norm. I won't fully defend that here, as I mainly want to talk about "premature abstraction" in this post, but feel free to voice your objections to the proposal in the comments if you wish to do so.
Apologies in advance for only engaging with the part of this post you said you least wanted to defend, but I just wanted to register strong disagreement here. Personally, I would like LessWrong to be a place where I can talk about AI safety and existential risk without being implicitly associated with lots of other political content that I may or may not agree with. If LessWrong becomes a place for lots of political discussion, people will form such associations regardless of whether or not such associations are accurate. Given that that's the world we live in—and the importance imo of having a space for AI safety and existential risk discussions—I think having a strong norm against political discussions is quite a good thing.
Personally, I would like LessWrong to be a place where I can talk about AI safety and existential risk without being implicitly associated with lots of other political content that I may or may not agree with.
Good point, I agree this is probably a dealbreaker for a lot of people (maybe even me) unless we can think of some way to avoid it. I can't help but think that we have to find a solution besides "just don't talk about politics" though, because x-risk is inherently political and as the movement gets bigger it's going to inevitably come into conflict with other people's politics. (See here for an example of it starting to happen already.) If by the time that happens in full force, we're still mostly political naïfs with little understanding of how politics works in general or what drives particular political ideas, how is that going to work out well? (ETA: This is not an entirely rhetorical question, BTW. If anyone can see how things work out well in the end despite LW never getting rid of the "don't talk about politics" norm, I really want to hear that so I can maybe work in that direction instead.)
I can't help but think that we have to find a solution besides "just don't talk about politics" though, because x-risk is inherently political and as the movement gets bigger it's going to inevitably come into conflict with other people's politics.
My preferred solution to this problem continues to be just taking political discussions offline. I recognize that this is difficult for people not situated somewhere like the Bay Area, where there are lots of other rationalist/effective altruist people around to talk to, but nevertheless I still think it's the best solution.
EDITS:
See here for an example of it starting to happen already.
I also agree with Weyl's point here that another very effective thing to do is to talk loudly and publicly about racism, sexism, etc.—though obviously, as Eliezer points out, that's not always possible, as not every important subject necessarily has such a component.
This is not an entirely rhetorical question, BTW. If anyone can see how things work out well in the end despite LW never getting rid of the "don't talk about politics" norm, I really want to hear that so I can maybe work in that direction instead.
My answer would be that we figure out how to engage with politics, but we do it offline rather than using a public forum like LW.
How much of an efficiency hit do you think taking all discussion of a subject offline ("in-person") involves? For example if all discussions about AI safety could only be done in person (no forums, journals, conferences, blogs, etc.), how much would that slow down progress?
How much of an efficiency hit do you think taking all discussion of a subject offline ("in-person") involves?
Probably a good deal for anything academic (like AI safety), but not at all for politics. I think discussions focused on persuasion/debate/argument/etc. are pretty universally bad (e.g. not truth-tracking), and that online discussion lends itself particularly well to falling into such discussions. It is sometimes possible to avoid this failure mode, but imo basically only if the conversations are kept highly academic and away from any hot-button issues (e.g. as in some online AI safety discussions, though not all). I think this is basically impossible for politics, so I suspect that not having the ability to talk about politics online won't be much of a problem (and might even be quite helpful, since I suspect it would overall raise the level of political discourse).
anything academic (like AI safety), but not at all for politics [...] away from any hot-button issues
"Politics" isn't a separate magisterium, though; what counts as a "hot-button issue" is a function of the particular socio-psychological forces operative in the culture of a particular place and time. Groups of humans (including such groups as "corportations" or "governments") are real things in the real physical universe and it should be possible to build predictive models of their behavior using the same general laws of cognition that apply to everything else.
To this one might reply, "Oh, sure, I'm not objecting to the study of sociology, social psychology, economics, history, &c., just politics." This sort of works if you define "political" as "of or concerning any topic that seems likely to trigger motivated reasoning and coalition-formation among the given participants." But I don't see how you can make that kind of clean separation in a principled way, and that matters if you care about getting the right answer to questions that have been infused with "political" connotations in the local culture of the particular place and time in which you happen to live.
Put it this way: astronomy is not a "political" topic in Berkeley 2019. In Rome 1632, it was. The individual cognitive algorithms and collective "discourse algorithms" that can't just get the right answer to questions that seem "political" in Berkeley 2019 would have also failed to get the right answer on heliocentrism in Rome 1632—and I really doubt they're adequate to solve AGI alignment in Berkeley 2039.
This sort of existence argument is reasonable for hypothetical superhuman AIs, but real-world human cognition is extremely sensitive to the structure we can find or make up in the world. Sure, just saying "politics" does not provide a clear reference class, so it would be helpful to understand what you want to avoid about politics and engineer around it. My hunch is that avoiding your highly technical definition of bad discourse, which you are using to replace "politics", just leads to a lot of time spent on that analysis, with approximately the same topics ending up avoided as under the very simple rule of thumb.
I stopped associating with or mentioning LW in real life largely because of the political (and maybe partly cultural) baggage of several years ago. Not even because I had any particular problem with the debate on the site or the opinions of everyone in aggregate, but because there was just too much stuff to cherry-pick from in our world of guilt by association. Too many mixed signals for people to judge me by.
—and I really doubt they're adequate to solve AGI alignment in Berkeley 2039.
Is this because you think technical alignment work will be a political issue in 2039?
It is sometimes possible to avoid this failure mode, but imo basically only if the conversations are kept highly academic and away from any hot-button issues (e.g. as in some online AI safety discussions, though not all). I think this is basically impossible for politics
I disagree, and think LW can actually do ok, and probably even better with some additional safeguards around political discussions. You weren't around yet when we had the big 2009 political debate that I referenced in the OP, but I think that one worked out pretty well in the end. And I note that (at least from my perspective) a lot of progress in that debate was made online as opposed to in person, even though presumably many parallel offline discussions were also happening.
so I suspect that not having the ability to talk about politics online won’t be much of a problem (and might even be quite helpful, since I suspect it would overall raise the level of political discourse).
Do you think just talking about politics in person is good enough for making enough intellectual progress and disseminating that widely enough to eventually solve the political problems around AI safety and x-risks? Even if I didn't think there's an efficiency hit relative to current ways of discussing politics online, I would be quite worried about that and trying to find ways to move beyond just talking in person...
I disagree, and think LW can actually do ok, and probably even better with some additional safeguards around political discussions. You weren't around yet when we had the big 2009 political debate that I referenced in the OP, but I think that one worked out pretty well in the end.
Do you think having that debate online was something that needed to happen for AI safety/x-risk? Do you think it benefited AI safety at all? I'm genuinely curious. My bet would be the opposite—that it caused AI safety to become more associated with political drama, further tainting it.
I think it was bad in the short term (it was at least a distraction, and maybe tainted AI safety by association although I don't have any personal knowledge of that), but probably good in the long run, because it gave people a good understanding of one political phenomenon (i.e., the giving and taking of offense) which let them better navigate similar situations in the future. In other words, if the debate hadn't happened online and the resulting understanding widely propagated through this community, there probably would have been more political drama over time because people wouldn't have had a good understanding of the how and why of avoiding offense.
But I do agree that "taint by association" is a big problem going forward, and I'm not sure what to do about that yet. By mentioning the 2009 debate I was mainly trying to establish that if that problem could be solved or ameliorated to a large degree, then online political discussions seem to be worth having because they can be pretty productive.
You said: "If by the time that happens in full force, we're still mostly political naïfs with little understanding of how politics works in general or what drives particular political ideas, how is that going to work out well?"
I think the debate here might rely on an unnecessary dichotomy: either I discuss politics on LW/in the rationalist community, or I will have little (or no) understanding.
Another solution would be to think of spaces to discuss politics which one can join.
I believe that we won't get a better understanding of politics by discussing it here, as it's more of a form of empirical knowledge you acquire:
Some preliminary thoughts on how to learn it outside LW or the rationalist community:
Other forms of learning more about politics which wouldn't be political by the definition above:
Might add things later. I also have a few other ideas I can share in PM.
Another solution would be to think of spaces to discuss politics which one can join.
There are spaces I can join (and have joined) to do politics or observe politics but not so much to discuss politics, because the people there lack the rationality skills or background knowledge (e.g., the basics of Bayesian epistemology, or an understanding of game theory in general and signaling in particular) to do so.
I believe that we won't get a better understanding of politics by discussing it here, as it's more of a form of empirical knowledge you acquire:
I think we need both, because after observing "politics in the wild", I need to systemize the patterns I observed, understand why things happened the way they did, predict whether the patterns/trends I saw are likely to continue, etc. And it's much easier to do that with other people's help than to do it alone.
I have noticed myself updating towards this for Local Politics (i.e. when I do a bunch of thinking about an issue among nearby EA / X-risk / Rationality orgs or communities).
In particular, I've noticed a fair amount of talking past each other when resolving some disagreement. Alice and Bob disagree, they talk a bit. Alice concludes it's because Bob doesn't understand Principle X. Alice writes an effortpost on Principle X.
And... well, the effortpost is usually pretty useful. Principle X is legitimately important and it's good to have it written up somewhere if it wasn't already.
But, Principle X usually wasn't the crux between Alice and Bob.
(Hmm, I notice that this comment is doing literally the thing we're talking about here. I don't feel like digging up the details, but will note that people I've seen doing this include Duncan, Ben Hoffman, maybe Jessica Taylor, maybe Ruby and me?)
((It's less obvious to me when I do this because of the illusion of transparency. I suppose it's possible my entire doublecrux sequence is an instance of this, although in that case I don't think I was expecting it to be the missing piece so much as "I wanted to make sure there was common knowledge of some foundational stuff."))
I actually think these posts often update my thinking towards that person's point of view even though it's not the crux. You think rock is important, I think hard place is important. You make a post about rock, but it updates me that rock was even more important than I thought.
I think I've found that for Ben/Jessica/Zack posts, but not for Duncan posts (where instead I'm like "hmm. umm, so, that's like literally the same post I would have written to support my point.")
I agree that this is a real limitation of exclusively meta-level political discussion.
However, I'm going to voice my strong opposition to any sort of object-level political discussion on LW. The main reason is that my model of the present political climate is that it consumes everything valuable that it comes into contact with. Having any sort of object-level discussion of politics could attract the attention of actors with a substantial amount of power who have an interest in controlling the conversation.
I would even go so far as to say that the combination of "politics is the mindkiller", EY's terrible PR, and the fact that "lesswrong cult" is still the second result after typing "lesswrong" into Google has done us a huge favor. Together, they've ensured that this site has never had any strategic importance whatsoever to anyone trying to advance their political agenda.
That being said, I think it would be a good idea to have a rat-adjacent space for discussing these topics. For now, the closest thing I can think of is r/themotte on reddit. If we set up a space for this, then it should be on a separate website with a separate domain and separate usernames that can't be easily traced back to us on LW. That way, we can break all ties with it/nuke it from orbit if things go south.
A few days ago romeostevensit wrote in response to me asking about downvotes on a post:
And I replied:
Since writing that, I've had the thought (because of this conversation) that only talking about political issues at a meta level has another downside: premature abstraction. That is, it takes work to find the right abstraction for any issue or problem, and forcing people to move to the meta level right away means that we can't all participate in doing that work, and any errors or suboptimal choices in the abstraction can't be detected and fixed by the community, leading to avoidable frustrations and wasted efforts down the line.
As an example, consider a big political debate on LW back in 2009, when "a portion of comments here were found to be offensive by some members of this community, while others denied their offensive nature or professed to be puzzled by why they are considered offensive." By the time I took my shot at finding the right abstraction for thinking about this problem, three other veteran LWers had already tried to do the same thing. Now imagine if the object-level issue had been hidden from everyone except a few people. How would we have been able to make the intellectual progress necessary to settle upon the right abstraction in that case?
One problem that exacerbates premature abstraction is that people are often motivated to talk about a political issue because they have a strong intuitive position on it, and when they find what they think is the right abstraction for thinking about it, they'll rationalize an argument for their position within that abstraction, such that accepting the abstract argument implies accepting or moving towards their object-level position. When the object-level issue is hidden, it becomes much harder for others to detect such a rationalization. If the abstraction they created is actually wrong or incomplete (i.e., doesn't capture some important element of the object-level issue), their explicit abstract argument is even more likely to have little or nothing to do with what actually drives their intuition.
Making any kind of progress that would help resolve the underlying object-level issue becomes extremely difficult or impossible in those circumstances, as the meta discussion is likely to become bogged down and frustrating to everyone involved as one side tries to defend an argument that they feel strongly about (because they have a strong intuition about the object-level issue and think their abstract argument explains their intuition) but may actually be quite weak due to the abstraction itself being wrong. And this can happen even if their object-level position is actually correct!
To put it more simply, common sense says hidden agendas are bad, but by having a norm for only discussing political issues at a meta level, we're directly encouraging that.
(I think for this and other reasons, it may be time to relax the norm against discussing object-level political issues around here. There are definitely risks and costs involved in doing that, but I think we can come up with various safeguards to minimize the risks and costs, and if things do go badly wrong anyway, we can be prepared to reinstitute the norm. I won't fully defend that here, as I mainly want to talk about "premature abstraction" in this post, but feel free to voice your objections to the proposal in the comments if you wish to do so.)