If you want it to be default, LW should enable it by default with a checkbox for "Hide score"
I guess I should review this post, given that I noticed the unit conversion error in the original. How did I do that? It was really nothing special: OP explicitly said they were confused about what the strange unit "ppm*hr" meant, so I thought about what it could mean, cross-referenced, and it turned out the implied concentration was lower than expected. Clear writing is super important, the skill of tracking orders of magnitude and units will be familiar to anyone who does Fermi estimates regularly, and it probably also helped that I read OP's own epistemic spot check blog posts as a baby rationalist.
This is one of the best April Fool's jokes ever on this platform. It's well executed, still extremely funny, and illustrates the folly (from the alignment community perspective, anyway) of doing capabilities research while not really thinking about whether your safety plan makes sense. The only way it could be better is if it had started a conversation in the media or generated broad agreement or something, which it doesn't appear to have (e.g. Matthew Barnett doesn't agree). But this is a super high bar, so I still think it deserves 4.
This gets 9 points from me. I think it's the first I had heard of the Jones Act, and the post's anti-Jones-Act stance is one that I am proud to still hold. It's so distortionary that shipping between US ports costs more than twice as much as equivalent international shipping, and for very dubious strategic benefit. Imagine if the law were instead that 50% of the volume of all ships between US ports must be filled with rubber ducks. The Jones Act is actually WORSE than this in many respects, because not only does it more than double the price, it removes flexibility from supply chains and surge capacity during disasters.
The post also touches on how special interest groups control American politics beyond just "big oil" etc., all the ways a market economy should make its citizens' lives better and how many of them go through shipping, and the failure of American shipbuilding [1]. It predates Abundance, which was 4 months later in March 2025, and is certainly an abundance idea.
As for downsides, it's somewhat long-winded and I'm a bit skeptical that repeal is actually feasible (some of the commenters point out the large number of people who would actually need to be compensated, and I don't think a government at our current competence level could do this).
[1] This last topic is getting more relevant, as the US Navy recently canceled the Constellation program, which marks its third straight failed frigate program.
I'm giving this -4 points because it seems anti-helpful on net, but not all bad.
This is not the type of post that fits a top 50 list, but Nanosystems was still relevant in 2024 and remains relevant in 2025. The nanosystems of 2060 will not look exactly like those in Drexler's books, but we are heading for a nanotechnology future about which Drexler was very prescient. The online version is very usable and fast.
Just for context, the reason we might not report something like today's time horizon metric is that we don't have enough tasks beyond 8 hours. We're actively working on several ways to extend this, but there's always a chance none of them will work out and we won't have enough confidence to report a number by the end of 2026.
I don't think those would count enough as foreign soil to get around the Jones Act, for the same reason that you don't pay tariffs when receiving goods at the US Embassy in Beijing. We would need to actually cede the land, maybe to Japan in exchange for buying their shipyards. That could also circumvent the Jones Act in the long term, provided that buying 51% ownership of their shipbuilding companies, having the ships be US crewed, etc. doesn't ruin everything.
This is clearly one of the most important posts of 2024, so I'm giving it 9 points.
The only negative (other than that it could read better to progressives) is that it doesn't seem to have had much impact on Wikipedia. When I pull up the Wikipedia page on LessWrong, I find sections on Roko's Basilisk and neoreaction and a link to TESCREAL, but nothing about the ideas rationalists actually like, that LW has become the main hub for AI safety discussion, that it's run by Lightcone, or other objectively more important info ChatGPT could tell you.
This casts some doubt on the thesis, though I don't know whether it's because Gerard is still influential, because non-corrupt Wikipedia editors also think the negative aspersions are justified/informative, because the procedural issue of reliable sources and history of negative press dictate the article's focus, or something else.
I'm spending about 1/4 of my time thinking about how to best get data on this and predict whether we're heading for a software intelligence explosion. For now, one thought is that the inference scaling curve is more likely to be a power law, because it's scale-free and consistent with a world where AIs are prone to get stuck when doing harder tasks, but get stuck less and less as their capability increases.
My current guess is still something like the independent-steps model which has a power law.
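To make the scale-free point concrete, here's a minimal sketch (with an illustrative exponent, not any fitted value): for a power-law survival curve P(T > t) = t^(-alpha), the chance of lasting k times longer, conditional on having lasted to t, is k^(-alpha) regardless of t, i.e. "getting stuck" behaves the same at every scale.

```python
# Hedged sketch: scale-free property of a power-law tail.
# alpha = 1.0 is an arbitrary illustrative exponent, not a fitted value.
alpha = 1.0

def survival(t):
    """P(T > t) for a Pareto-style power law with minimum 1."""
    return t ** (-alpha)

# Conditional survival P(T > 2t | T > t) is the same at every scale t;
# an exponential curve, by contrast, would give a ratio that shrinks as
# t grows (constant hazard per unit time, not per doubling).
for t in [1.0, 10.0, 100.0]:
    ratio = survival(2 * t) / survival(t)
    print(t, ratio)  # the ratio is 2**(-alpha) = 0.5 at every t
```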