Well, that's at least a completely different kind of regulatory failure than the one that was proposed on Twitter. But this is probably motivated reasoning on Microsoft's part. Kernel access is only necessary for IDS because of Microsoft's design choices. If Microsoft wanted, they could have exported a user-mode API for IDS services, which is a project they are working on now. macOS already has this! And Microsoft would never have done as good a job on their own if they hadn't faced competition from other companies, which is why everyone uses CrowdStrike in the first place.
I have more than once noticed Gell-Mann amnesia (either in myself or others) about standard LessWrong takes on regulation. I think this community has a bias toward thinking regulations are stupider and responsible for more scarcity than they actually are. I would be skeptical of any particular story someone here tells you about how regulations are making things worse unless they can point to the specific rules involved.
For example: there is a persistent meme here and in the broader rat-blogosphere that the FDA is what's causing the food you make at home to be so much less expensive than the food you order out. But anyone who has managed or owned a restaurant will tell you that the two biggest things actually making your hamburger expensive are labor and real estate, not complying with food service codes. People don't spend as much money cooking at home because they're getting both the kitchen and the labor for free (or at least paying for them in other ways), and this would remain true even if it were legal to sell the food you're making on the street without a license.
Another example that's more specific and in my particular trade: Back in July, when the CrowdStrike outage happened, people were posting wild takes on Twitter and in my Signal group chats about how CrowdStrike is only used everywhere because government regulators subject you to copious extra red tape if you try to switch to something else.
I cannot for the life of me imagine which regulators people were talking about. First of all, a large portion of cybersecurity regulation, like SOC 2, is self-imposed by the industry; second, anyone who's ever had to go through something more unusual like ISO 27001 or FedRAMP knows that they do not give a rat's ass which particular software vendor you use for anything. At most your auditor will ask whether you use an endpoint defense product, and then require you to upload some sort of log file regularly to make sure you're actually using it. Which is a different kind of regulatory failure, I suppose, but it's not what caused the CrowdStrike outage.
As the name suggests, Leela Queen Odds is trained specifically to play without a queen, which is of course an absolutely bonkers disadvantage against 2000+ Elo players. One interesting wrinkle is the time constraint. AIs are better at fast chess (obviously), and apparently no one who's tried has yet been able to beat it consistently at 3+0 (3 minutes with no increment).
Epstein was an amateur rapist, not a pro rapist. His cabal - the parts of it that are actually confirmed and not just baselessly speculated about - seems extremely limited in scope compared to the kinds of industrial conspiracies that people propose about child sex work. Most of Epstein's victims only ever had sex with Epstein, and only one of them - Virginia Giuffre - appears ever to have publicly claimed being passed around to many of Epstein's friends.
What I am referring to are claims of an underworld industry for exploiting children whose primary purpose is making money. For example, in the movie Sound of Freedom, a large part of the plot hinges on the idea that there are professionals who literally bring kidnapped children from South America into the United States so that pedophiles here can have sex with them. I submit that this industry in particular does not exist, or at least would be a terrible way to make money on a risk-adjusted basis compared to drug dealing.
I think P(doom) is fairly high (maybe 60%), and that working on AI research or independently accelerating AI race dynamics is one of the worst things you can do. I do not endorse improving the capabilities of frontier models, and think humanity would benefit if you worked on other things instead.
That said, I hope Anthropic retains a market lead, ceteris paribus. I think there are a lot of ambiguous parts of the standard AI risk thesis, and a strong possibility we get reasonable-ish alignment with a few quick creative techniques at the finish, like faithful CoT. If that happens, I expect it might be because Anthropic researchers decided to pull back and use their leverage to coordinate a pause. I also do not see what Anthropic could do on the research front at this point that would make race dynamics even worse than they already are, short of splitting up the company. And I do not want to live in a world entirely controlled by Sam Altman, which I think could be worse than death.
So one of the themes of the Sequences is that deliberate self-deception or thought censorship - deciding to prevent yourself from "knowing" or learning things you would otherwise learn - is almost always irrational. Reality is what it is, regardless of your state of mind, and at the end of the day, whatever action you're deciding to take - for example, not talking about dragons - you could also take if you knew the truth. So when you say:
But if I decided to look into it I might instead find myself convinced that dragons do exist. In addition to this being bad news about the world, I would be in an awkward position personally. If I wrote up what I found I would be in some highly unsavory company. Instead of being known as someone who writes about a range of things of varying levels of seriousness and applicability, I would quickly become primarily known as one of those dragon advocates. Given the taboos around dragon-belief, I could face strong professional and social consequences.
It's not a reason not to investigate. You could continue to avoid the consequences you speak of by not writing about dragons, regardless of the results of your investigation. One possibility is that what you're also avoiding is the guilt/discomfort that might come from knowing the truth and remaining silent. But through your decision not to investigate, the world is going to carry the burden of that silence either way.
Another theme of the Sequences is that self-deception, deliberate agnosticism, and motivated reasoning are a source of surprising amounts of human suffering. Richard explains one way it goes horribly wrong here. Whatever subject you're talking about, I'm sure there are a lot of other people in your position who have chosen not to look into it for the same reasons. But if all of those people had looked into it, and squarely faced whatever conclusions resulted, you yourself might not be in the position of having to face a harmful taboo in the first place. So the form of information hiding you endorse in this post is self-perpetuating, and is part of what keeps the taboo strong.
I think the entire point of rationalism is that you don't do things like this.
I plan to cross-post to LessWrong but to not read or reply to comments (with a few planned exceptions).
:( why not?
Why hardware bugs in particular?