DanArmak

It's impossible to prove that an arbitrary program, which someone else gave you, is correct. That's equivalent to the halting problem (or, more generally, Rice's theorem).

Yes, we can prove various properties of programs we carefully write to be provable, but the context here is that a black-box executable Crowdstrike submits to Microsoft cannot be proven reliable by Microsoft.

There are definitely improvements we can make. Counting just the ones made in some other (bits of) operating systems, we could:

  • Rewrite in a memory-safe language like Rust
  • Move more stuff to userspace. Drivers for e.g. USB devices can and should be written in userspace, using something like libusb (see the sketch after this list). This goes for every device that doesn't need performance-critical code or device-side DMA management, which still leaves a bunch of things, but it's a start.
  • Sandbox more kinds of drivers in a recoverable way, so they can efficiently access the hardware they need, but are still prevented from touching the rest of the kernel and userspace, and can crash safely. For example, Windows can recover from crashes in graphics drivers specifically - which is an amazing accomplishment! Linux eBPF likewise can't access anything it shouldn't.
  • Expose more kernel features via APIs, so people don't have to write kernel drivers to do things that aren't literally driving a piece of hardware. Then even if Crowdstrike has super-duper permissions, a crash in Crowdstrike's own code doesn't bring down the rest of the system; it would have to do that intentionally.
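To illustrate the userspace-driver point, here's a minimal sketch using pyusb, a Python wrapper over libusb (the vendor/product IDs and endpoint address are made up for the example). If this code crashes, it takes down one process, not the kernel.

```python
import usb.core

# Find the device by (hypothetical) vendor/product IDs. libusb talks to the
# generic USB stack from userspace; no custom kernel driver is involved.
dev = usb.core.find(idVendor=0x1234, idProduct=0x5678)
if dev is None:
    raise RuntimeError("device not found")

dev.set_configuration()

# Read 64 bytes from a (hypothetical) bulk IN endpoint 0x81.
data = dev.read(0x81, 64, timeout=1000)
print(bytes(data))
```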

Of course any such changes both cost a lot and take years or decades to become ubiquitous. Windows in particular has an incredible backwards compatibility story, which in practice means backwards compatibility with every past bug they ever had. But this is a really valuable feature for many users who have old apps and, yes, drivers that rely on those bugs!

DanArmak

Addendum 2: this particular quoted comment is very wrong, and I expect this is indicative of the quality of the quoted discussion, i.e. these people do not know what they are talking about.

Luke Parrish: Microsoft designed their OS to run driver files without even a checksum and you say they aren’t responsible? They literally tried to execute a string of zeroes!

Luke Parrish: CrowdStrike is absolutely to blame, but so is Microsoft. Microsoft’s software, Windows, is failing to do extremely basic basic checks on driver files before trying to load them and give them full root access to see and do everything on your computer.

The reports I have seen (of attempted reverse-engineering of the Crowdstrike driver's segfault) say it did not attempt to execute the zeroes from the file as code; the crash was unrelated to that, and likely happened while trying to parse the file. Context: the original workaround for the problem was to delete a file which contains only zeroes (at least on some machines; reports are inconsistent), but there's no direct reason to think the driver was trying to execute this file as code.

And: Windows does not run drivers "without a checksum"! Drivers have to be signed by Microsoft, and drivers with early-loading permissions have to be super-duper-signed in a way you probably can't get just by paying them a few thousand dollars.
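To illustrate the difference: a checksum only detects accidental corruption, while a cryptographic signature ties the file to a key the signer controls, so it can't be satisfied by just recomputing a hash. A toy sketch (this is not Windows's actual code-signing machinery; the file names and keys are made up):

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

driver_bytes = open("example_driver.sys", "rb").read()  # hypothetical file

# A bare checksum catches corruption, but anyone can recompute it, so it
# proves nothing about who produced the file.
checksum = hashlib.sha256(driver_bytes).hexdigest()

# A signature check: only the holder of the private signing key could have
# produced a signature that verifies against the trusted public key.
public_key = serialization.load_pem_public_key(open("signer_pub.pem", "rb").read())
signature = open("example_driver.sys.sig", "rb").read()
try:
    public_key.verify(signature, driver_bytes, padding.PKCS1v15(), hashes.SHA256())
    print("signature OK, sha256 =", checksum)
except InvalidSignature:
    print("refusing to load: bad signature")
```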

But it's impossible to truly review or test a compiled binary for which you have no source code or even debug symbols, and which is deliberately obfuscated in many ways (as people have been reporting when they looked at this driver crash) because it's trying to defend itself against reverse-engineers designing attacks. And of course it's impossible to formally prove that such a program is correct. And of course it's written in a memory-unsafe language, i.e. C++, because every single OS kernel and its drivers are written in such a language.

Furthermore, the Crowdstrike product relies on very quickly pushing out updates to (everyone else's) production to counter new threats / vulnerabilities being exploited. Microsoft can't test anything that quickly. Whether Crowdstrike can test anything that quickly, and whether you should allow live updates to be pushed to your production system, is a different question.

Anyway, who's supposed to pay Microsoft for extensive testing of Crowdstrike's driver? They're paid to sign off on the fact that Crowdstrike are who they say they are, and at best that they're not a deliberately malicious actor (as far as we know they aren't). Third party Windows drivers have bugs and security vulnerabilities all the time, just like most software.

Finally, Crowdstrike to an extent competes with Microsoft's own security products (i.e. Microsoft Defender and whatever the relevant enterprise-product branding is); we can't expect Microsoft to invest too much in finding bugs in Crowdstrike!

Addendum: Crowdstrike also has MacOS and Linux products, and those are a useful comparison in the matter of whether we should be blaming Microsoft.

On MacOS they don't have a kernel module (called a kext on MacOS), for two reasons: first, kexts are now disabled by default (I think you have to go into recovery mode to turn them on?), and second, the kernel provides APIs to accomplish most things without having to write a kext. So Crowdstrike doesn't need to (hypothetically) guard against malicious kexts, because those are nowhere near the threat that malicious or plain buggy kernel drivers are on Windows.

One reason why this works well is that MacOS only supports a small first-party set of hardware, so they don't need to allow a bunch of third party vendor drivers like Windows does. Microsoft can't forbid third party kernel drivers, there are probably tens of thousands of legitimate ones that can't be replaced easily or at all, even if someone was available to port old code to hypothetical new userland APIs. (Although Microsoft could provide much better userland APIs for new code; e.g. WinUSB seems to be very limited.)

(Note: I am not a Mac user and this part is not based on personal expertise.)

On Linux, Crowdstrike uses eBPF, which is a (relatively novel) domain-specific language for writing code that is loaded into the Linux kernel and executed at runtime. eBPF is sandboxed in the kernel: while an eBPF program can (I think) still crash the kernel, it cannot e.g. access arbitrary memory. And so you can't use eBPF to guard against malicious Linux kernel modules.
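To give a flavor of what that looks like in practice, here's a minimal sketch using bcc, the common Python front-end for eBPF (the restricted-C snippet is compiled to eBPF bytecode and must pass the kernel's verifier before it runs). It just logs execve calls, roughly the kind of hook a security product builds on:

```python
from bcc import BPF

# Restricted C, compiled to eBPF; the in-kernel verifier rejects it if it
# tries anything unsafe (unbounded loops, arbitrary memory access, etc.).
prog = r"""
int trace_execve(struct pt_regs *ctx) {
    bpf_trace_printk("execve observed\n");
    return 0;
}
"""

b = BPF(text=prog)
# Attach to the execve syscall entry point (the symbol name varies by
# kernel/architecture; get_syscall_fnname resolves it).
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_execve")
print("Tracing execve... Ctrl-C to stop")
b.trace_print()
```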

This is indeed a superior approach, but it's hard to blame Microsoft for not having in place an innovation that nobody had ten years ago, and that hasn't exactly replaced most preexisting drivers even on Linux. And removing support for custom kernel drivers entirely on Windows would probably stop it from working on most of the hardware out there.

Then again, most Linux systems aren't running a hardened configuration, and getting userspace root access is game over anyway - the attacker can just install a new kernel for the next boot, if nothing else. To a first approximation, Linux systems are secure by configuration, not by architecture.

ETA: I'm seeing posts [0] that say Crowdstrike updates broke Linux installations, multiple times over the years. I don't know how they did that specifically, but it doesn't require a kernel module to make a machine crash or become unbootable... But I have not checked those particular reports.

Either way, no-one is up in arms saying to blame Linux for having a vulnerable kernel design!

[0] https://news.ycombinator.com/item?id=41005936, and others.

DanArmak

Did Microsoft massively screw up by not guarding against this particular failure mode? Oh, absolutely, everyone agrees on that.

I'm sorry, this is wrong, and that everyone thinks so is also wrong - some people got this right.

Normal Windows kernel drivers are sandboxed to some extent. If a driver segfaults, it will be disabled on the next boot and the user informed; if that fails for some reason, you can tell the computer to boot into 'safe mode', and if that fails, there is recovery mode. None of these options require the manual, tedious, error-prone recovery procedure that the Crowdstrike bug does.

This is because the Crowdstrike driver wants to protect you from malware in other drivers (i.e. malicious kernel modules). (ETA: And it wants to inspect and potentially block syscalls from user-space applications too.) So it runs in early-loading mode, before any other drivers, and it has special elevated privileges normal drivers do not get, in order to be able to inspect other drivers loaded later.

(ETA: I've seen claims that Crowdstrike also adds code to UEFI and maybe elsewhere to prevent anyone from disabling it and to reenable it; which is another reason the normal Windows failsafes for crashing drivers wouldn't work. UEFI is, roughly, the system code running before your OS and bootloader.)

The fact that you cannot boot Windows if Crowdstrike does not approve it is not a bug in either Windows or Crowdstrike. It's by design, and it's an advertised feature! It's supposed to make sure your boot process is safe, and if that fails, you're not supposed to be able to boot!

Crowdstrike are to be blamed for releasing this bug - at all, without testing, without a rolling release, without a better fallback/recovery procedure, etc. Crowdstrike users who run mission-critical stuff like emergency response are also to blame for using it; as many people have pointed out, that choice is usually pure box-ticking security theater for the individuals who make it. And Crowdstrike is absolutely not at the quality level of the core Windows kernel, which is no big surprise to experts.

But Crowdstrike - the product, as advertised - cannot be built without the ability to render your computer unbootable, at least not without a radically different (micro)kernel design that no mainstream OS kernel in the world actually has. (A more reasonable approach these days would be to use a hypervisor, which can be much smaller and simpler and so easier to vet for bugs, as the recovery mechanism.)

About the impossibility result, if I understand correctly, that paper says two things (I'm simplifying and eliding a great deal):

  1. You can take a recognizable, possibly watermarked output of one LLM, use a different LLM to paraphrase it, and not be able to detect the second LLM's output as coming from (transforming) the first LLM.

  2. In the limit, any classifier that tries to detect LLM output can be beaten by an LLM that is sufficiently good at generating human-like output. There's evidence that LLMs can soon become that good. And since emulating human output is an LLM's main job, capabilities researchers and model developers will make them that good.

The second point is true but not directly relevant: OpenAI et al are committing not to make models whose output is indistinguishable from humans.

The first point is true, BUT the companies have not committed themselves to defeating it. Their own models' output is clearly watermarked, and they will provide reliable tools to identify those watermarks. If someone else then provides a model that is good enough at paraphrasing to remove that watermark, that is that someone else's fault, and they are effectively not abiding by this industry agreement.
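For intuition about what a watermark and its detection tool can look like, here's a toy sketch of a 'green list' statistical watermark (in the spirit of the Kirchenbauer et al. scheme - not any vendor's actual tool). The generator biases sampling toward 'green' tokens; the detector only needs the seeding rule, not the model; and paraphrasing re-rolls the tokens and erases the bias, which is exactly the attack from point 1:

```python
import hashlib
import math

def is_green(prev_token: str, token: str) -> bool:
    # Pseudorandomly assign ~half the vocabulary to a "green list" seeded by
    # the previous token. The generator nudges sampling toward green tokens.
    digest = hashlib.sha256((prev_token + "|" + token).encode()).digest()
    return digest[0] % 2 == 0

def watermark_zscore(tokens: list) -> float:
    # Unwatermarked (human) text lands on the green list ~50% of the time by
    # chance; watermarked text has a significantly higher green fraction.
    n = len(tokens) - 1
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)

# A z-score of, say, >4 over a few hundred tokens is strong evidence of the
# watermark; after a good paraphrase it drops back toward 0.
print(watermark_zscore("some example text to score for the watermark".split()))
```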

If open source / widely available non-API-gated models become good enough at this to render the watermarks useless, then the commitment scheme will have failed. This is not surprising; if ungated models become good enough at anything contravening this scheme, it will have failed.

There are tacit but very necessary assumptions in this approach and it will fail if any of them break:

  1. The ungated models released so far (e.g. Llama) don't contain forbidden capabilities - including output and/or paraphrasing that's indistinguishable from human, but also, of course, anything in the not-killing-everyone category - and won't be improved to include them by 'open source' tinkering that doesn't come from large industry players
  2. No-one worldwide will release new more capable models, or sell ungated access to them, disobeying this industry agreement; and if they do, it will be enforced (somehow)
  3. The inevitable use by some governments, militaries, etc. of more capable models - models that would be illegal to release publicly - will not result in the public release of those capabilities; and their inevitable use of e.g. indistinguishable-from-human output will not cause such (public) problems that this commitment not to let private actors do the same becomes meaningless

Charles P. Steinmetz saw a two-hour working day on the horizon—he was the scientist who made giant power possible

What is giant power? I can't figure this out.

So we can imagine AI occupying the most "cushy" subset of former human territory

We can definitely imagine it - this is a salience argument - but why is it at all likely? Also, this argument is subject to reference class tennis: humans have colonized much more and more diverse territory than other apes, or even all other primates.

Once AI can flourish without ongoing human support (building and running machines, generating electricity, reacting to novel environmental challenges), what would plausibly limit AI to human territory, let alone "cushy" human territory? Computers and robots can survive in any environment humans can, and in some where we at present can't.

Also: the main determinant of human territory is inter-human social dynamics. We are far from colonizing everywhere our technology allows, or (relatedly) breeding to the greatest number we can sustain. We don't know what the main determinant of AI expansion will be; we don't even know yet how many different and/or separate AI entities there are likely to be, and how they will cooperate, trade or conflict with each other.

Answer by DanArmak

Nuclear power has the highest chance of The People suddenly demanding it be turned off twenty years later for no good reason. Baseload shouldn't be hostage to popular whim.

Thanks for pointing this out!

A few corollaries and alternative conclusions to the same premises:

  1. There are two distinct interesting things here: a magic cross-domain property that can be learned, and an inner architecture that can learn it.
  2. There may be several small efficient architectures. The ones in human brains may not be like the ones in language models. We have plausibly found one efficient architecture; this is not much evidence about unrelated implementations.
  3. Since the learning is transferable to other domains, it's not language specific. Large language models are just where we happened to first build good enough models. You quote discussion of the special properties of natural language statistics but, by assumption, there are similar statistical properties in other domains. The more a property is specific to language, or necessary because of the special properties of language, the less it's likely to be a universal property that transfers to other domains.