
hg00 comments on Open Thread, Jul. 20 - Jul. 26, 2015 - Less Wrong Discussion

4 Post author: MrMind 20 July 2015 06:55AM




Comment author: solipsist 23 July 2015 04:00:34AM 9 points

I am not close to being an expert in security, but my reading of expert opinion is that yes, the NSA et al. can get into any system they want to, even if it is air gapped.

Dilettanting:

  • It is really, really hard to produce code without bugs. (I don't know a good analogy for this -- writing laws without any loopholes, where all conceivable case law has to be thought of in advance?)
  • The market doesn't support secure software. The expensive part isn't writing the software -- it's inspecting it for defects meticulously until you become confident enough that the defects which remain are sufficiently rare. If a firm were to go through the expense of producing highly secure software, how could it credibly demonstrate to customers the absence of bugs? It's a market for lemons.
  • Computer systems comprise hundreds of software components and are only as secure as the weakest one. The marginal returns from securing any individual software component fall sharply -- there isn't much reason to make any component of the system much more secure than the average component. The security of most consumer components is very weak. So unless there's an entire secret ecosystem of secured software out there, "secure" systems are using a stack built on insecure consumer components.
  • Security in the real world is helped enormously by the fact that criminals must move physically near their target with their unique human bodies. Criminals thus put themselves at great risk when committing crimes, both of leaking personally identifying information (their face, their fingerprints) and of being physically apprehended. On the internet, nobody knows you're a dog, and if your victim recognizes your thievery in progress, you just disconnect. It is thus easier for a hacker to make multiple incursion attempts and hone his craft.
  • Edward Snowden was, like, just some guy. He wasn't trained by the KGB. He didn't have spying advisors to guide him. Yet he stole who-knows-how-many thousands of top-secret documents in what is claimed to be (but I doubt was) the biggest security breach in US history. And Snowden was trying to get it in the news: he stole thousands of secret documents and then yelled through a megaphone, "hey everyone, I just stole thousands of secret documents". Most thieves do not work that way.
  • Intelligence organizations have budgets larger than, for example, the gross box office receipts of the entire movie industry. You can buy a lot for that kind of money.
Comment author: hg00 26 July 2015 02:10:42AM 2 points

Great info... but even air-gapped stuff? Really?

Comment author: solipsist 26 July 2015 04:11:21AM 0 points

My understanding is that a Snowden-leaked 2008 NSA internal catalog contains air-gap-hopping exploits by the dozen, and that the existence of successful attacks on air-gapped networks (like Stuxnet) is documented and not controversial.

This understanding comes in large measure from a casual reading of Bruce Schneier's blog. I am not a security expert, and my "you don't understand what you're talking about" reflexes are firing.

But moving to an area where I know more: I think that if I tried writing a program to take as input the sounds of someone typing and output the letters they typed, I'd have a decent chance of success.
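As a toy sketch of what such a program might look like: extract an acoustic feature vector per keystroke, average labeled recordings into per-key centroids, then classify new keystrokes by nearest centroid. Everything below is invented for illustration -- the four-key alphabet, the hand-picked feature vectors, and the noise level all stand in for real spectral features extracted from actual audio.

```python
import random

random.seed(0)

# Hypothetical model: each key's press produces a characteristic
# acoustic feature vector (4 synthetic "spectral" features per key).
# These profiles are invented; a real attack would derive features
# (e.g. frequency-band energies) from recorded audio.
KEYS = "abcd"
TRUE_PROFILE = {
    "a": [1.0, 0.0, 0.0, 0.0],
    "b": [0.0, 1.0, 0.0, 0.0],
    "c": [0.0, 0.0, 1.0, 0.0],
    "d": [0.0, 0.0, 0.0, 1.0],
}

def record_keystroke(key, noise=0.05):
    """Simulate the features extracted from one keystroke's audio."""
    return [v + random.gauss(0.0, noise) for v in TRUE_PROFILE[key]]

def train(samples_per_key=20):
    """Average many labeled recordings into a centroid per key."""
    centroids = {}
    for k in KEYS:
        recs = [record_keystroke(k) for _ in range(samples_per_key)]
        centroids[k] = [sum(col) / len(recs) for col in zip(*recs)]
    return centroids

def classify(features, centroids):
    """Output the key whose centroid is nearest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda k: dist(centroids[k]))

centroids = train()
typed = "badcab"
decoded = "".join(classify(record_keystroke(ch), centroids)
                  for ch in typed)
print(decoded)  # prints "badcab"
```

The hard parts in practice -- segmenting individual keystrokes out of a continuous recording and getting labeled training data (or bootstrapping labels from language statistics) -- are exactly what this sketch assumes away.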