kpreid comments on Open Thread, Jul. 20 - Jul. 26, 2015 - Less Wrong Discussion

4 Post author: MrMind 20 July 2015 06:55AM

Comment author: solipsist 23 July 2015 04:00:34AM *  9 points [-]

I am not close to an expert in security, but my reading of those who are is that yes, the NSA et al. can get into any system they want to, even if it is air-gapped.

Dilettanting:

  • It is really, really hard to produce code without bugs. (I don't know a good analogy -- writing laws without any loopholes, where all conceivable case law has to be thought of in advance?)
  • The market doesn't support secure software. The expensive part isn't writing the software -- it's inspecting it for defects meticulously enough to become confident that the defects which remain are sufficiently rare. If a firm were to go through the expense of producing highly secure software, how could it credibly demonstrate to customers the absence of bugs? It's a market for lemons.
  • Computer systems comprise hundreds of software components and are only as secure as the weakest one. The marginal returns from securing any individual software component fall sharply -- there isn't much reason to make any component of the system much more secure than the average component. The security of most consumer components is very weak. So unless there's an entire secret ecosystem of secured software out there, "secure" systems are built on a stack with insecure, consumer components.
  • Security in the real world is helped enormously by the fact that criminals must move physically near their target with their unique human bodies. Criminals thus put themselves at great risk when committing crimes, both of leaking personally identifying information (their face, their fingerprints) and of being physically apprehended. On the internet, nobody knows you're a dog, and if your victim recognizes your thievery in progress, you just disconnect. It is thus easier for a hacker to make multiple incursion attempts and hone his craft.
  • Edward Snowden was, like, just some guy. He wasn't trained by the KGB. He didn't have spying advisors to guide him. Yet he stole who-knows-how-many thousands of top-secret documents in what is claimed to be (but I doubt was) the biggest security breach in US history. But Snowden was trying to get it in the news. He stole thousands of secret documents, and then yelled through a megaphone "hey everyone, I just stole thousands of secret documents". Most thieves do not work that way.
  • Intelligence organizations have budgets larger than, for example, the gross box office receipts of the entire movie industry. You can buy a lot for that kind of money.
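The weakest-link point above can be put in rough numbers. A minimal sketch (the probabilities and the helper name are invented for illustration, and the components' exploitability is assumed independent): the system is compromised if any one component is, so hardening a single component far beyond the rest barely moves the total.

```python
# Illustrative only: if each of n components has an independent chance
# p_i of being exploitable, the system survives only if every component
# survives -- so improving one component well past the average helps little.

def compromise_probability(exploit_probs):
    """P(at least one component is exploitable) = 1 - prod(1 - p_i)."""
    safe = 1.0
    for p in exploit_probs:
        safe *= 1.0 - p
    return 1.0 - safe

# Ten mediocre components, each 10% likely to be exploitable:
baseline = compromise_probability([0.10] * 10)             # ~0.651

# Making one of them essentially perfect helps only marginally:
one_hardened = compromise_probability([0.0] + [0.10] * 9)  # ~0.613
```

With invented numbers like these, perfecting one component out of ten buys only a few percentage points, which is the "marginal returns fall sharply" claim in miniature.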
Comment author: kpreid 01 September 2015 01:30:10AM *  0 points [-]

Computer systems comprise hundreds of software components and are only as secure as the weakest one.

This is not a fundamental fact about computation. Rather, it arises from operating-system architectures (isolation per "user") that made some sense back when people mostly ran programs they wrote or could reasonably trust, on data they supplied, but that don't fit today's world of networked computers.

If interactions between components are limited to the interfaces those components deliberately expose to each other, then the attacker's problem is no longer to find one broken component and win, but to find a path of exploitability through the graph of components that reaches the valuable one.
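This reframing can be sketched as a graph search (the component names and helper function are hypothetical, purely for illustration): the attacker starts from a foothold and can advance only across an interface that is both deliberately exposed and actually exploitable, so reaching the valuable component takes a chain of bugs rather than a single one.

```python
from collections import deque

def attacker_wins(exposed, exploitable, start, target):
    """BFS from the attacker's foothold along exploitable exposed interfaces.

    exposed:     dict mapping component -> components it deliberately exposes
                 an interface to
    exploitable: set of (src, dst) interfaces the attacker has a bug for
    """
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in exposed.get(node, ()):
            if (node, nxt) in exploitable and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Invented example topology: network -> parser -> app -> keystore.
exposed = {"net": ["parser"], "parser": ["app"], "app": ["keystore"]}

# A bug in the parser alone is not enough to reach the keystore...
assert not attacker_wins(exposed, {("net", "parser")}, "net", "keystore")

# ...the attacker needs exploitable interfaces along the whole path.
assert attacker_wins(
    exposed,
    {("net", "parser"), ("parser", "app"), ("app", "keystore")},
    "net", "keystore",
)
```

Under the conventional ambient-authority model, every edge is effectively exposed to every component, which collapses this search back to "find one broken component and win".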

With proper design, this limiting does not require the tedious writing and maintenance of allow/deny policies that some approaches (firewalls, SELinux, etc.) demand.