Your model of co-option dynamics here is non-obvious to me, and I currently think it's (probably) false for AI safety, though I might just not be understanding it.
In particular, in adversarial situations I expect group coherence/coordination to make co-option harder rather than easier.
Like, in general, when I think through the logic of collective action and relax some of the game theory assumptions, or think through history when there are powerful vs. less powerful actors in (often violent) conflicts, people being less coordinated usually makes them more susceptible to co-option rather than less. Think of the conquistadors playing up intra-American conflicts, the Achaemenid Empire sponsoring intra-Greek conflicts, divide-and-conquer tactics throughout history, etc.
As I mentioned in a bunch of comments, I think "loose social movement that centrally coordinates via vibes and whoever is currently the highest-prestige thought-leader" is a format that is both low on coherence/coordination and high on ability to be co-opted.
Indeed I would feel a lot better about AI Safety/EA/Rationality on a lot of these dimensions if there was more formal membership, more things like courts, etc.
The worry here is that we have chosen a fundamental form of social organization that scores low on defensibility and high on resource acquisition, and moving away from that is now very difficult. Many alternatives (in both the "less coordinated" and the "more coordinated" directions) seem to me to be more defensible here.
Or to phrase it in the terms of the post:
I wish we either conquered less, or had more of a plan for how to defend what we conquered. Right now we are doing a lot of conquering, but without any plan for how to defend it, and that seems like it really has a pretty high chance of going badly.
more things like courts
Good point.
Literal courts are expensive, but there's a larger design space if we relax the constraints. Courts have to scale to state-size, interoperate with a huge variety of participants (lawyers, judges, police, other officers, and arbitrary citizens), be robust to certain kinds of adversarial attacks, ...
My cached thought is to have a norm of "if you're occupying an important exclusive social niche, such as company leader, thought leader, etc., then you have an obligation to debate representatives of major disagreeing relevant views". This may require infrastructure for debates to go well.
(A very natural-seeming extension of this point is "build a general-purpose optimization system to improve the world" -> "whoops it develops independent agency and kills everyone / is stolen from you by a sociopath who installs a totalitarian dictatorship". It always amuses me when the object-level and the meta-level dynamics mirror each other.)
Seconded. I think there is something small-scale-Pythian-ish going on here.
One way to frame this is that a "general-purpose optimization system (that can be used) to improve the world" needs to be strongly retargetable, and the simplest/cheapest/default-est ways to build such a system involve it being also easily corruptible, susceptible to something like "adversarial inputs", both from the inside ("develops independent agency and kills everyone") and from the outside (corrupted by external actors, or just "mundane" context disasters).
What evil forces or people do you see as threatening to take control of the stick? How can we better support you so that you feel like it's less likely that this will happen?
I feel like going into this will predictably cause a demon thread, but as an obvious pointer that is hopefully on the less controversial side: my sense is that the vibes of "how hard is AI Alignment" and "how much are we on track to build superintelligent systems safely" are really quite a lot downstream of the incentives frontier labs face, since >50% of talent-weighted people in AI safety work at the frontier labs.
This seems quite bad to me, and it is quite plausible the world would be better off if, instead of there being a field of "AI Safety" that has this much of a central vibe and is this directly exposed to some extremely strong incentives, it was more the case that a bunch of existing fields were thinking about this on their own terms, probably overall using worse epistemics and tools less well-suited to the task, but in a way that I think would by default be much less hijackable.
(There are also many other things of this kind going on but I feel like those will be more controversial)
Edit: Or alternatively, that the "field of AI Safety" had something more akin to journals or membership or courts or other forms of social organization that could bring deliberation and intention to how all of these vibes are shifting. I do think right now I am thinking more in the direction of "maybe we should have conquered less", but I am also sympathetic to arguments of the form "but maybe we could just defend more?", I am just somewhat burned out on those dimensions.
I hear you about not wanting to start a demon thread. At the same time, I also feel like this answer is too nonspecific for people who either want to help out or give advice. What part of your work at lightcone feels like it's contributing to AI lab control of safety & alignment research?
Thank you for writing this. I found it both a useful reference post I will be using when explaining this concept to people in the future, and impressively reflective.
As someone who thinks that EA has, in fact, conquered way more than it can defend, and should long ago have declared "ideological bankruptcy" and either started over or retired to a quiet life in the mountains where it can't continue accelerating ASI, it is admirable to see such honest reflection.
Well wishes to all the smart men that could not defend what they conquered. May they find a second chance and redemption, or at the least peaceful retirement...
I'm not entirely sure I'm convinced of the idea that the broad rationalist-EA-AI safety community isn't a confusing patchwork of metaphorical city states? I suppose the money and power is probably concentrated more than the vague culture is?
It is not the least federalist arrangement of interest groups!
I guess I invited lots of comments about specifically the rationality/EA community, though I am worried this discussion is trickier (and I am a bit worried it will cause my thinking on this to get badly anchored and worse).
But to respond nevertheless:
I think the weak point of the EA and rationality communities in this framework is more that they are generally not very defensible, not that they aren't a confusing patchwork of different interest groups. Large diffuse social communities without strong boundaries are always subject to capture by random fads, charismatic misaligned leaders, or changes in the information landscape. The EA community in particular experiences a staggering amount of turnover in its leadership, while continuously presenting a large pile of resources for the taking by whoever can gain influence within its ranks.
In different words, if the garden is too large to defend...
Defense of territory is linear with border area, defense against internal threats is linear in area, defense against org scaling failures seems at least linear in the (already exponentially growing) size. So yes, EA has a huge problem with scaling more, and more, and more. This is mostly because scaling is very, very hard.
But isn't this more a syndrome of federalism than of centralisation? "Anyone can claim our banner and there's no one with the authority to gatekeep and excommunicate them". In fact the same applies to your activist example. Parties or organisations can be centralised, but movements often are not. What gets big is an idea, not any one specific group of people.
To be clear, I am using "federalism" as shorthand for "think really hard about how to structure your governance so that you can get a good balance between robustness and coordination ability" and not to just mean "small government always better". I think this overlaps reasonably well with the historical usage of the term (based on the SEP entry), but this is not my domain of expertise, so if I am wrong, treat it as a term of art.
I like it because it's clear that a federalist system still has a lot of government in it! It's definitely not anarchy.
I think social movements end up at a kind of uniquely bad spot here, with "the vibes" being really quite powerful centrally coordinating forces, but anyone's ability to ensure the vibes stay aligned with what is good being quite weak.
I don't think the terminology was clear. (I finished 100% of the essay and got to this comment before I understood why you picked the word.)
Fair enough. I kind of threw that line in there as a self-deprecating joke as I repeatedly kept thinking "come on, I really feel like I am saying things that are so obviously just in the water and so self-evident in common-sense morality, that it must just read as trite to any readers" during writing.
It really feels to me like the basic reasoning here is really very standard, though surprisingly untouched by LessWrong, and maybe the better pointer towards it is "classical liberalism"? I don't know, if someone has a better pointer towards the existing thinking on this topic, happy to switch it out.
The problem with Federalism isn't that it doesn't capture the idea; it's that it also requires stable delegation and authority, which this post really isn't about. It's partly subsidiarity, or autonomy, but also state capacity mismatch, Fukuyama's discussions of decay, and institutional drift, or maybe closer to the fragility of complex systems and the way they fail (Charles Perrow's work, specifically).
But I think you're talking about it differently than any of those alone, and none have a simple term for this.
We can say centralisation vs decentralisation, but I think my point holds. Centralised organizations have a single point of failure, and if that is captured the whole thing becomes corrupted, but there's also a certain mitigating effect usually in how willing the elites are to really go wild. The failure mainly looks like being ineffectual, slow, mired in process and politics and unable to ever embrace a bolder belief even when appropriate. Meanwhile decentralised ideologies mean everyone must fill a niche and all sorts of crazy stuff can develop and not be sanctioned.
(a fun case study of two parallel developments of the same exact core idea with these two different paradigms is the Catholic Church vs Protestantism).
To be clear, on the general topic I totally agree that "can this thing be defended from bad actors" is often rather underemphasized!
I sometimes consider quitting.
Seems like "quitting" is very different from stepping back to maintain what has been established and is realistically defensible? I think you may be overindexing on the George Washington example, where him quitting exemplified a central part of the principles he was advocating.
But maybe you mean something less obvious by "quitting"?
I think LessWrong and many other things I've built are in a confusing place as it relates to this post. At the present my thinking is roughly:
It does seem like overall the things this broader ecosystem has built are not that federalist, and not that defensible, but I sure think I have made things marginally more federalist and marginally more defensible so maybe that means I shouldn't quit but others should?
Also, IDK, I don't think LessWrong is that defensible. It's not like we have formal membership, things are beholden to quite a lot of random memetic drift, and it would if anything be more surprising than not for this site to still be roughly aligned with the culture I am excited about in 10 years.
The track record of online communities staying aligned with the interests of their founders or head admins is really very weak; indeed, so weak that I have trouble thinking of almost any positive examples. I do think I've been doing a decent job over the last decade, but that doesn't buy me that much confidence for the next (especially as things will probably be pretty crazy with AI).
I think you may be overindexing on the George Washington example, where him quitting exemplified a central part of the principles he was advocating.
Ah, that is actually just a false positive; I really wasn't intending to analogize the George Washington example of quitting to me quitting. Now that you point it out, it sure makes sense as a thing someone would read into the post, but I really didn't intend that!
in 10 years
I struggle to understand the following. Since I don't believe that anyone could have any mission in an ASI-ruled world, the critical period is likely to be at most 5 years, not 10. Additionally, during the critical period I expect LW to stay the most important AI-related forum[1] where researchers exchange insights like Greenblatt's impression that most AIs are misaligned, Anthropic's Persona Selection Model or Harms' CAST. Finally, I think that Wikipedia is an online community which stayed aligned with the interests of its founders or head admins of creating the encyclopedia... until AI came and made the public lose interest in it.
The most important other mission of LW is clear philosophy and practical topics like Daycare illnesses.
the critical period is likely to be at most 5 years, not 10
Come on. Yes, timelines appear to be on the shorter side, but clearly it would be extreme hubris to stop planning around >5 year timelines! That really seems very dogmatic to me.
My median timeline is ~7 years until truly transformative AI. And I have quite a lot of probability on things longer than that!
On the meta level: why are there this many net upvotes and agreement votes for planning horizons of "at most 5 years"? This updates me towards thinking that some aspect of collective epistemics is notably worse than I had been tracking.
My guess is most of the agree-votes are for the other parts of the comment. It's always been tricky to disaggregate things like this (which is why we don't have agree-voting on posts).
Finally, I think that Wikipedia is an online community which stayed aligned with the interests of its founders or head admins of creating the encyclopedia
I strongly disagree! I think Wikipedia lost its way around 10 years ago.
Additionally, during the critical period I expect LW to stay the most important AI-related forum
Correct (probably, unless I go and try to actively build a competing forum or shut down LessWrong). Why this concerns me is, I think, pretty clearly answered in the post.
I think Wikipedia lost its way around 10 years ago.
I think I agree, though I currently believe it continues to be strongly net positive for the world. My current guess is that it will lose its value to LLMs before it starts to be sufficiently politically captured to be net negative. I am interested to know if you think it is already net negative.
I am interested to know if you think it is already net negative.
It is always very hard to tell what the counterfactual of something would be, but my guess is yes, it's quite good.
(But IDK, I think Wikipedia has been pretty bad for LessWrong in particular, and I don't actually have access to all the other communities similarly affected, and possibly there has been large collateral damage that I am blind to.)
Could you explain why you believe that it is Wikipedia which was politically captured? I think that the history of the Russian branch provides some evidence to the contrary (which, alas, is accessible only to Russian-speaking users like @Mikhail Samin). The attempt of pro-Russian and anti-LGBTQ users to politically capture the branch caused lots of conflicts and eventually led these users to defect to various clones like Runiversalis.
Alas, Wikipedia's principles require it to rely on external analysts of news, and if someone politically captured the highest-quality media like the BBC, then Wikipedia's rules would require it to reflect the media's position.
Returning to LW and its mission, I don't understand how a change in culture could undermine it except for causing an onslaught of mechinterp-like slop by new users. But this seems to be more like Hitler's invasions into many countries than actual corruption.
Wikipedia's principles require it to rely on external analysts of news,
This seems like a nice example of Wikipedia preventing itself from conquering what it cannot defend.
Part of the difficulty in pointing specifically to how and when Wikipedia was politically captured is that this sort of ideological takeover happens in a pretty abstract and diffuse way without much of a paper trail, which is part of what makes it so hard to defend against.
Tracing Woodgrains documents some interesting cases like a major admin/user that consistently edits in bad faith and generally gets away with it or how figures like Mao get treated rather differently from other big dictators.
A good recent example I encountered was the Olympics Boxing scandal around Imane Khelif where having an SRY gene was a very salient point that was reported on and later confirmed, but there was a huge effort to conceal that info from the page, and then later downplay it heavily.
That said, I still think Wikipedia is a pretty substantial net good for providing information.
I am not immediately pulling to mind the reasons. I think I recall seeing a graph of Wikipedia edits going down over time (and serious editors leaving on-net) which is a bad sign for the health. It is also not uncommon for me to hear instances of politically motivated edits.
As one example, this week I was told that Maria Montessori—originator of the Montessori school of education—was an avowed and extreme racist, but that people who are involved in Montessori education reliably edit it out of her Wikipedia page (as can be evidenced by its absence on the page, but repeated presence on the talk page).
(I also heard an accusation that she was a eugenicist, but I failed to find corroboration of that while writing this comment.)
Almost everyone who considered themselves scientifically minded and rational at the dawn of the 20th century was a eugenicist; bad genetics was considered a real concern and danger to address. That doesn't mean they were all straight-up Nazis, but it was one of the big fashionable positivist beliefs.
To the extent that the effect is as big as you're making it out to be (surely it's "big", IDK the exact magnitude though), this seems to be mostly explainable by people trying to apply their new great toy to everything around them. A man with a hammer sees everything as a nail etc.
Sure, my main point was "it doesn't say much about someone of the time other than they were buying into a fad".
Wikipedia is a race condition in a simulationist context guys, Jesus Christ. Do not talk harshly about far branches of the tree until you have settled earlier branches. Not unless you have forensics.
Finally, I think that Wikipedia is an online community which stayed aligned with the interests of its founders or head admins of creating the encyclopedia...
Both of the cofounders recently said that Wikipedia was biased in the Gaza genocide article. In his comment urging change, Jimmy Wales suggested that people should do things in the Gaza genocide article that violate WP:RS/ReliableSources.
WP:RS/ReliableSources is the central policy that changed over the last ten years in ways that reduced the amount of coverage that viewpoints diverging from left-wing politics get. It's not the only change in that direction, but it is the most important one.
TBF, I feel like if LessWrong really is drifting (it's hard for me to say, as I'm a relatively late joiner), the worst it's doing is being less interesting to read. I don't see a lot of harm coming from it right now; there are parts of the rationalist-adjacent/derived community that I find reprehensible or even dangerous, but none of them are particularly represented here. If the worst that can happen is that the big thing simply waters itself down and dies with a whimper, that's probably as good as it gets.
I see, so this is more about quitting LessWrong specifically and not about quitting Lightcone activities more generally?
Yeah, LessWrong is probably one of the best examples honestly, congrats! I think it's probably still worth trying but of course I don't have a good picture of what your opportunity and other costs are.
Ah, gotcha!
I see, so this is more about quitting LessWrong specifically and not about quitting Lightcone activities more generally?
I think Lighthaven is also not particularly federalist! Other things we do a bit more. I think in general Lightcone is pretty deeply entwined with this whole ecosystem, which maybe doesn't quite get federalism (I blame consequentialism).
(Also to be clear, I am using the word "federalism" to point to the thing in my post. I think it overlaps with the general meaning of "federalism", but I am not at all confident of that. My knowledge of federalism the political philosophy is mostly downstream of reading the SEP entry on federalism)
King George III of Great Britain called him "the greatest man in the world" upon hearing the news
History trivia: There seem to be two versions of this story out there; a quick internet search suggests that they both come from the same person recounting a conversation with King George III but telling the story differently on different occasions.
In the one I originally heard, and have heard more often, the remark "If he does that, he will be the greatest man in the world" was not about Washington declining to run for a third term, but rather about the news that Washington - having won the Revolutionary War, but not yet president - intended to resign from public life rather than seeking to lead the newly-formed USA.
(And he did retire from public life for some time, before being persuaded to attend the Constitutional Convention of 1787 and then becoming president.)
Curated.
Trying to be moral has many failure modes. I'm curating this ("Do not conquer what you cannot defend"), kind of in combination with the next post ("Let goodness conquer all that it can defend"). Together, they make both halves of a point that seems pretty important.
I think I grew up with something like the "innocence as the moral ideal" mindset, and it's been a shift in my adult life to think of myself as having the moral obligation to be powerful (if you want goodness to exist in the universe, someone needs to be defending it), and the moral obligation to be wise enough to do useful things with that power.
I think if I had written these two posts I would have framed them differently. ("conquer" sort of leans into a connotation of power that is specifically, ya know, the bad parts). But, naming things is hard, and the intensity of the word is doing some useful work.
Maybe the most important way ambitious, smart, and wise people leave the world worse off than they found it is by seeing correctly how some part of the world is broken and unifying various powers under a banner to fix that problem
I note that the generic hypotheticals of the great king, scientist, and advocate all end in a way where the conclusion is "it would have been better if the centralization never happened", while the actual historical cases are less clear. Yes, Rome fell, but the Pax Romana was long and many people's lives were better as a result, and it's unclear what the alternative would have been - possibly something much like the lives people lived after the fall and before the rise of Rome. And it's still remembered and analyzed and learned from to this day (unlike the work of the hypothetical scientist - I think it would be a more realistic hypothetical if people remembered and used her framework, given that it was a genuine advance, but just didn't make many further advances after that for a while because of academic incentives).

Similarly with Singapore - if it grew 30x over 30 years, it seems like successors can make things worse than they are currently, but getting to the point where Singapore is 1/30th as prosperous in any similar timeframe seems unlikely.

I don't know that this is true with much certainty, but my understanding is that ideals of the French revolution inspired the American revolution, and how it went wrong was something the American founders learned from? So probably that should be counted in the "pro" column for the French revolution? EDIT: After checking, I was wrong about this, the chronology doesn't work; most likely I was misremembering that the American revolution inspired the French revolution.
The sense I have is that decentralization is fertile ground for centralization, and centralization eventually leads to forces which cause the fall of the centralized entity, but this process doesn't reliably lead to "maybe better to not make big thing". More like "maybe better to design thing that works a bit like big thing, a bit like small thing, with an awareness of the advantages of each" - one example of an attempt to do something like this is Federalism.
I think the "Decentralized --> incentives for centralization --> centralized --> fall --> decentralized" cycle is like a business cycle - something that will be quite extreme if people are all just doing their locally optimal thing without a knowledge of the pattern that's unfolding and where they are within that pattern currently, but can be smoothed out a bit with some knowledge of what's happening - and even if it's not smoothed out, things gained during centralization aren't usually fully lost during decentralization, just as the (physical) capital built up during the "boom" phase of the business cycle doesn't disappear during the "crash" phase.
Also, a nitpick from a Canadian: Canada is slightly geographically larger than the US. I originally didn't have "slightly" in that sentence, because I thought the difference was significant, but after checking Wikipedia, it's actually tiny, we're about the same size. Still, the US does not cover almost all of the North American continent.
For what it's worth the French revolution inspired almost everything else afterwards. For example Napoleon disseminated a new legal order wherever he conquered and rolling it back was hard. Italian unification was strongly driven by how Napoleon's laws had been relatively liberal and people ill tolerated the attempt to return to the old order after the Congress of Vienna. And of course France itself kept having revolutions fairly regularly.
Also, I'm not in on all the internal politics of this community, but prima facie, quitting doesn't seem to make sense.
Good things are good, even if they aren't permanent. Lesswrong is good currently. The most intuitive-to-me way it would make sense to quit is if that's somehow the way to keep the good thing going longer, or prevent it from becoming a bad thing, neither of which I see evidence for. Of course it would also make sense to quit if you're burned out or for other emotional reasons, but from a practical standpoint, "this place is too centralized around me, should be more of a federated structure" is not a reason to quit, but to make changes so that it's less centralized around you.
Marcus Aurelius ... was succeeded by his son Commodus.
Rephrasing of your motto: don't build a huge empire, because eventually that empire will grow corrupt. It's better to have an Archipelago of City-states, because when an individual city decays, it's not a global catastrophe.
But Eliezer seems to think that we need a global regulatory agency for AI. It's a plausible enough idea, but what happens when the agency falls into corruption like all the other crappy 3-letter agencies run by the US and the UN?
To be precise, Yudkowsky called for a wholesale ban of AI research, not for an international agency which could build it. What would a corrupt international anti-AI agency do, let some GPUs bypass its monitoring? But a rogue ASI-related project would require rogues to create the GPUs, use them to create the ASI and be aware that If Anyone Leaks It, Everyone Dies...
Rephrasing of your motto: don't build a huge empire, because eventually that empire will grow corrupt.
Not my motto! Though a tempting generalization of it. My motto is "only build a huge empire if you actually really tried very hard to not make it corrupt and you are actually really pretty sure that in-expectation your empire will bend the future more towards goodness and justice than what it replaced".
See also my controversial follow-up post: Let goodness conquer all that it can defend.
Executive Summary: LessWrong 2.0 as it actually exists runs at Bus Factor Habryka, and this is probably fine.
(epistemic status: I notice my thesis is confused, but want comments on it anyway. Writing a long comment since I don't have time to write a shorter one.)
If we compare this post directly to LessWrong, things become less clear to me, because I'm not certain which elements of LessWrong are designed to persist.
When we look at LessWrong 1.0 (before my time), then, as described by "what Alex of LessWrong 2.0 believes about history", it consists of (i) The Sequences, which are widely read (tangent: probably not-required-reading, in that many modern community members pick up the norms without having read the original source material) and also (ii) comments on blog posts, which are archived, but ~nobody reads them. (tangent: I believe archiving the comments is strongly good for 'it makes the information environment better for future historians and regardless of whether we expect these future historians to actually exist, it's good for humans to act as if there will be a Future and be pro-social in relation to it').
LessWrong 2.0 is the site that Oliver is the chief moderator of. This is your walled garden. It's a good garden. I spend a lot of time here. It is inextricably linked with Lighthaven (technically separate) and Lightcone (umbrella org) and the surrounding community. However, unlike Washington and the US Government, or the French and their revolutions, where the institution is the mechanism of state, designed to be a legible and predictable monopoly over violence that persists for generations, I don't see why we have to do the same?
You have short timelines (meaning <20yrs with high probability, <40yrs with very high probability) (epistemic status: if this is wrong I have to throw out a lot of my models of the world).
My Oliver-model believes the current version of LessWrong 2.0 (with "bus factor 1=Ollie") can persist for as long as is relevant pre-AGI/ASI, unless the world drastically changes in ways he doesn't expect.
Therefore I'm not sure you need a succession plan? You don't need to defend the information ecosystem of LessWrong forever. This institution doesn't need to last forever - I certainly hope "we win AI", and we can continue with a "LessWrong 3.0" that is similar and yet also different, but LessWrong 2.0 doesn't need to be that.
probably not-required-reading, in that many modern community members pick up the norms without having read the original source material
If you think the Sequences were about inculcating "norms", then you definitely need to read the original source material, which explicitly denies this! (Norms are made and unmade by other people, but the question of which computations result in accurate beliefs and effective plans is determined by the structure of reality; it's "law" as in "laws of physics", not "common law".)
I disagree. Of course the Sequences are about inculcating social norms, as is almost all large-scale human communication. Norms are how humans actually communicate ideas.
An explanation: when humans think in groups, their thinking is shaped by the norms of the group. E.g., the way one would post to obtain social status and respect on 4chan is different from on LessWrong 2.0.
LessWrong's norms (i.e. explain, don't persuade. avoid insulting people about their knowledge of source material, get curious about other people's models, etc) have been built (both in deliberate ways by Habryka/Lightcone, and in more nebulous social ways) to make it easier for people to be better at thinking when they communicate using them.
We can also reverse this. Someone who thoroughly understands LessWrong/rationalist/Yudkowskian/etc. norms will find most of the content of the Sequences obvious. E.g. the norm "offer concrete predictions" is making beliefs pay rent; the norm "offer concrete models" enables crux-finding. And so on.
Yes, the Sequences say they are about giving you the tools to know which computations result in accurate beliefs. However, the average person who benefits from a specific Yudkowsky essay probably has not read that essay (claim not fully justified in this comment, it's long enough already). To be more specific, the average person who benefits from the idea "the map is not the territory" has not read the original (Korzybski, 1931). Instead, they read/listened to someone who read/listened to... (some intermediate layers) until we get to an original source.
We call the ideas that propagate throughout a community until they become obvious to most members 'norms'.
(epistemic status: feeling incredibly insulted. I've read the Sequences)
Norms are how humans actually communicate ideas.
Surely not the only way. A lot of ideas can't be communicated via norms, because norms don't have the bandwidth. For an arbitrary example, take the relative state interpretation of quantum mechanics. You definitely need reading and math for that, not just norms.
Someone who thoroughly understands LessWrong/rationalist/Yudkowskian/etc norms will find most of the content of the sequences obvious.
I don't think this is true because of the bandwidth issue. The group norms don't get you to "A Technical Explanation of Technical Explanation".
avoid insulting people about their knowledge of source material
That's a catastrophically bad norm because it degrades propagation of the source material. If someone says something misleading or incorrect about the source material, the way to promote knowledge is to correct them, but that risks insulting them. People who care about the integrity of the source material should want to be corrected (and graciously tolerate some rate of false attempted "corrections" as the price of receiving true corrections).
get curious about other people's models
This seems like a suboptimal norm because it promotes inefficient allocation of attention. If you don't have the capacity to be curious about everything, you have to prioritize, and if you have to prioritize, that implies being less curious about some people's models if what you've heard from them so far doesn't seem promising.
opting out of this conversation.
I feel like I have a thesis about how generating cultural information in the modern world involves writing essays where, if you do well, you impact more people who didn't read the source material than did (e.g. my Korzybski point), and you are ignoring my central point.
Instead, you are aiming to persuade me of your point by using weaknesses in my analogies.
LessWrong is for learning about each other's models, not for having an argument. We're having an argument. I'm deliberately not engaging with your most recent points because I don't want arguments like this on my favourite website.
Wait, now I am curious! Please tell me more about your model that Less Wrong is for learning about each other's models, not for having an argument. That was actually not my understanding! Where did you learn that? Can you say more about why you think arguments are bad and model-sharing is good? (Maybe focus on the former if you think the latter is too obvious to need elaboration.)
This is sociologically fascinating. I strong-upvoted your comment and will strong-upvote any more explanation you can give me.
According to me (and potentially nobody else), I view LessWrong as a place where we do argument in the truth-building/philosophical/debate sense, and not in the shouting match sense. I think there is a way where we can do argument that works, but the above was not working for me.
[I felt like the above was getting into "shouting match" territory more than "squishing our different models of the situation together and attempting to do our best to get to Aumann's Agreement Theorem in real life".
(note this is mostly because I noticed myself getting defensive in my own head - your comments may have worked perfectly well on the same words posted by someone else).]
According to me, this is good. The reason that comes to mind is "we want LessWrong to be a place of repeated idea exchange, and therefore people getting alienated is bad because then they might stop posting - model-sharing leads to much less alienation than bad-tempered argument", although this may not be cruxy.
(epistemic status - typed quickly. Am interested if you disagree with my central point. My examples almost certainly have non-cruxy holes in them)
While Singapore continued to thrive under his son's leadership
Not related to your post's thrust at all, but: I broadly disagree with this clause, and expect future historians to identify a few overlooked choices from that era as instrumental in her decline.
As it validates the models I espouse in this post, you must certainly be right.
Things look to be going fine so far, I think? But I sure haven't looked into it that closely.
I love it, but of course no good leader, in the moment, thinks that they are over-concentrating power. Each believes that they are only doing as much as is necessary for the greater good, and so your analysis can never hope to achieve more than to have every would-be conqueror question themselves, which all the better ones do anyway.
Maybe a decent heuristic for executing "If you make a plan that involves concentrating a bunch of power, especially in the name of goodness and justice, really actually think about whether you can defend that power from corruption and adversaries" is "try extra hard to bake structures & incentives that support your goal into your organization", or more glibly "don't big brain".
What this heuristic adds is "do" vs. "really actually think".
Obviously, this is easier said than done.
The principle of not-for-life rulers (including even the founder) that Washington established by stepping down prevented a concentration of power (since presidents swap every few years), so that when Washington died there wasn't much chance that some bad ruler would take power for a long time.
In Washington's case, the power that could have been concentrated was presidential power + length of rule + lack of practice transferring power. Similarly for the king, Marcus Aurelius, and Singapore.
These seem to clump into a class of "ruler" scenarios. This class seems separate from the examples of the French Revolution, the social movement, the scientist, and (I claim) EA/rationality.
This second class is more about not having control over the masses, over what's in the zeitgeist. Unfortunately, this class seems harder to mitigate than the first (where the main things to shoot for are rule of law, an institution for the peaceful transfer of power, and limits on the ruler's power, like separation of powers).
My limited understanding of the EA ecosystem suggests that the big actors are the big funders like OpenPhil, and whatever other orgs garner the most money and talent. Is OpenPhil supposed to... find a dozen promising people to start their own orgs and then split up the funds amongst them, while giving them complete autonomy? Are you supposed to do something similar?
For the second class of problems, you need some way to keep a "mob" of people pointed in a focused direction; and so unless you suddenly have timelines longer than your healthy years, I don't see the benefit in you (well, really, Lightcone; I have little clue how much is your secret sauce vs what your successor would do) quitting now.
I have little experience with online community building, but wrt keeping online communities aligned to the original vision, Duncan Sabien's call to "make more grayspaces" might have merit.
In summary, have a 2-tiered system where a select few gatekeepers determine who gets to promote from the open tier to the higher tier, with work from the higher tier treated as exemplary for those in the open tier. You could also frame the open tier as "for those who want to promote" so you have a mandate to kick people out when they're there for different reasons.
Alignmentforum is already something like this.
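For concreteness, the two-tier gatekeeping structure described above can be sketched as a tiny permission model. This is a minimal illustrative sketch, not any real forum's API; all names here (`Grayspace`, `Tier`, `promote`, `evict`) are hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum

class Tier(Enum):
    OPEN = "open"    # anyone may join and post here
    INNER = "inner"  # work here is treated as exemplary for the open tier

@dataclass
class Grayspace:
    gatekeepers: set = field(default_factory=set)
    members: dict = field(default_factory=dict)  # name -> Tier

    def join(self, name: str) -> None:
        """Anyone may enter the open tier."""
        self.members.setdefault(name, Tier.OPEN)

    def promote(self, gatekeeper: str, name: str) -> None:
        """Only a gatekeeper may move an open-tier member to the inner tier."""
        if gatekeeper not in self.gatekeepers:
            raise PermissionError("only gatekeepers can promote")
        if self.members.get(name) != Tier.OPEN:
            raise ValueError("must be an open-tier member first")
        self.members[name] = Tier.INNER

    def evict(self, gatekeeper: str, name: str) -> None:
        """Gatekeepers may remove members who are 'there for different reasons'."""
        if gatekeeper not in self.gatekeepers:
            raise PermissionError("only gatekeepers can evict")
        self.members.pop(name, None)
```

The design choice being illustrated: the open tier is cheap to enter (so the community still acquires people and ideas), but every transition that confers status or influence passes through a small, named set of gatekeepers, which is the defensibility property the comment is after.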
Napoleon was not an aggressor except against Russia and arguably Spain. In the other cases, he did not start fights; he finished them.
And he was not an aggressor at all against the peoples of Europe. He was an aggressor against the deeply conservative feudal nobility who were enemies of progress, reason, and efficiency. Napoleon was far more rationalist and humanist than everyone he fought against, except Britain.
Epistemic status: All of the western canon must eventually be re-invented in a LessWrong post. So today we are re-inventing federalism.
Once upon a time there was a great king. He ruled his kingdom with wisdom and economically literate policies, and prosperity followed. Seeing this, the citizens of nearby kingdoms revolted against their leaders, and organized to join the kingdom of this great king.
While the kingdom's ability to defend itself against external threats grew with each person who joined the land, the kingdom's ability to defend itself against internal threats did not. One fateful evening, the king bit into a bologna sandwich poisoned by a rival noble. That noble quickly proceeded to behead his political enemies in the name of the dead king. The flag bearing the wise king's portrait known as "the great unifier" still flies in the fortified cities where his successor rules with an iron fist.
Once upon a time there was a great scientific mind. She developed a new theoretical framework that made large advances on the hardest scientific questions of the day. Seeing the promise of her work, new graduate students, professors, and corporate R&D teams flocked into the field, hungry to tackle new open problems and make their mark on the world. Within ten years, a vibrant new academic field had formed, with herself among its most respected members.
While the field's ability to make progress on the hard problems increased with each new researcher who joined the field, the field's ability to defend itself against the institutional incentives of the broader academic ecosystem did not. Low-quality researchers, seeing lucrative new opportunities for publication, began producing flashy results on the easier problems adjacent to her field with low attention to scientific rigor. Seeing their success, others began to join them, attracted to the social and financial rewards. Being conflict averse and not seeing it as her job to prosecute these people, a growing fraction of the field became careerists.
Twenty years later, her scientific field had become so diluted by uninteresting or irrelevant work that the great original problems remained unsolved, mired in bureaucracy, respectability politics, and academic warfare. Most of the scientists who joined early, attracted by the promise of great progress, stopped being scientists altogether and moved to industry. Almost nobody remembers her name in the history books.
Once upon a time there was a great advocate. She built a social movement around the protection of the rights of a marginalized group, and after many years of hard work, saw the day that the most severe forms of discrimination against the group had been outlawed, and wide social consensus had moved in favor of respecting the members of this group.
But in the success of the movement's aims, she also lost most of her authority. No longer having a compelling vision to offer the members of this movement, others who did became more influential. While she remained the acknowledged founder of the movement, she was no longer treated by the general public as its spokesperson. The press would always talk to the new, charismatic leaders of the movement who had the strongest and most unyielding views. She couldn't afford to make enemies in the movement that she considered hers, so she would publicly endorse the perspectives of these new leaders even when she privately disagreed with them.
Ten years later, her social movement had become so focused on purity and removing any remaining trace of its original enemy that it had begun causing substantially more harm than the original problem it was founded to address. In the history books, she would be briefly mentioned as one of the people who laid the groundwork for the new dark age.
Once upon a time emperor Marcus Aurelius (himself a great general and a great leader) died in 180 AD, and was succeeded by his son Commodus. Commodus, whom historian Cassius Dio described as "a greater curse to the Romans than any pestilence or any crime", turned out to be interested in gladiator fighting much more than in governing the Roman Empire. The Pax Romana began its long descent into the Crisis of the Third Century, and marked the start of the eventual collapse of the Roman Empire.
Once upon a time the French revolution swept across France, bringing the people liberty and executing the corrupt French aristocracy in an unprecedented flurry of violence. Within a decade the idealistic leaders of the revolution would mostly all be dead, executed by the political machine they themselves had created. And within another few years, Napoleon Bonaparte would claim power and proceed to wage aggressive war across all of continental Europe for another decade.
Once upon a time Lee Kuan Yew built modern Singapore out of what was, at the time, a small regional trading post in Southeast Asia. Under his leadership, Singapore's GDP per capita grew 30x over 30 years. But Lee Kuan Yew is dead and his son just handed over power to Lawrence Wong, not a member of the Lee family. While Singapore continued to thrive under his son's leadership, I find myself very worried about what happens once the Singapore story depends on a third generation of leaders, and wonder if Singapore has in fact already peaked.
Once upon a time George Washington retired. George Washington, the Continental Army general who defeated the British army and successfully established the United States of America as an independent nation, and later the first United States president, served his two terms as president and then voluntarily relinquished power. King George III of Great Britain called him "the greatest man in the world" upon hearing the news. Some say this decision singlehandedly saved American democracy.
Do not conquer what you cannot defend.
At the heart of classical liberalism, a philosophy I have much sympathy for, is the belief that allowing many individuals to act freely and autonomously (especially when they are empowered by markets, democratic processes, and the scientific method) will tend to produce outcomes that are better than the outcomes that can be produced by central authorities.
Maybe the most important way ambitious, smart, and wise people leave the world worse off than they found it is by seeing correctly how some part of the world is broken and unifying various powers under a banner to fix that problem — only for the thing they have built to slip from their grasp and, in its collapse, destroy much more than anything previously could have.
I sometimes consider quitting. When I do, my friends and colleagues often react with bafflement. "How can you think that what you've done is bad for the world? Do you not think that you are steering this boat we are in together into a good direction? Do you really think a world without the AI Safety movement, without LessWrong, without Effective Altruism would be better?".
And in their heads when they visualize the alternative, I can only imagine that they see a great big emptiness where rationality and EA and AI Safety is. And they compare our current community against nothingness, and come to the conclusion that even if its leadership is kind of broken, and the incentives are kind of messed up, that this is still clearly better than no one in the world working on the things we care about.
But what I am worried about, is that we conquered much more than we can defend. That the alternative to the work of me and others in the space is not nothingness, but a broken and dysfunctional and confusing patchwork of metaphorical city-states that barely does anything, but at least when any part of it fails, it doesn't all go down together, and in its distributed nature, promises much less nourishing food to predators and sociopaths.
In grug language: Smart man sees big problem. Often state of nature is many small things. Smart man make one big thing out of many small things to throw at big problem. But then evil man take big thing from smart man and make more problem. Or big thing grow legs and beat smart man without making problem go away. This is bad. Maybe better to throw small things at big problem and not make big thing, even if solve problem less. Or before make big thing have plan for how to not have big thing do evil.
But Moloch, in whom I sit lonely
"But what about Moloch" you say!
"Your principle betrays itself. If we want to have good things, we need to coordinate and work together. And death comes for us all, eventually, so nothing we build can truly be defended. Do you not see how one company owning one lake will produce more fish than 20 companies each polluting the commons until all fish are dead? Do you not see how having 20 AI companies all racing to the precipice is worse than having one clearly in the lead, even if the one that raced to the top might stray from the intentions of its creators?"
And you know, fair enough. Coordination problems are real. I am not saying that you should not centralize power.
Here I am arguing for a much narrower principle. Much has been written, and will continue to be written, about the tradeoff between freedom and justice. About small vs. big government. I am not trying to cover all of that.
Here I am just trying to highlight a single principle that seems robust across a wide range of tradeoffs: "If you make a plan that involves concentrating a bunch of power, especially in the name of goodness and justice, really actually think about whether you can defend that power from corruption and adversaries".
And if you can, then go ahead! When George Washington stepped down, he traded off direct power in favor of a system that would actually be able to defend the principles he cared about for much longer, birthing much of Western democracy. I am glad the US exists and covers almost all of the North American continent. Its leaders and founders did have a plan for defending what they conquered, and the world is better off for it.
But if your plan involves rallying a bunch of people under the banner of truth and goodness and justice, and your response to the question of "how are you going to ensure these people will stay on the right path?" is "they will stay on the right path because they will be truthseeking, good, and just people", or if as a billionaire your plan for distributing your wealth is "well, I'll hire some people to run a foundation for me to distribute all of my money according to my goals", then I think you are in for a bad time.