Nanashi comments on A pair of free information security tools I wrote - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Yeah... Ummmm..... There's a lot wrong with this.
Don't get me wrong. I appreciate the need for constant vigilance, but this type of knee-jerk reaction is what prevents the wider scale adoption of good crypto practices.
Edit- for posterity's sake: I accidentally down voted your post when I meant to upvote it. I wasn't just being snide when I said "I appreciate the need for constant vigilance", and it definitely resulted in a good discussion. I updated my vote.
Actually, there is still a small danger in executing this via a non-SSL-encrypted web site, even if I trust that you have no malicious intent, that your site has not been compromised, and that the script runs client-side. The danger is a man-in-the-middle attack, in which an attacker intercepts my http request for the script and replaces your script (in the response) with a version that captures my private key and sends it to a server controlled by the attacker.
I realize that most browsers won't let client-side javascript send requests to hosts other than the original host from which the javascript was loaded, but that fact won't solve the issue; the modified version of the script could send the private key to a URL apparently on your host, and the man-in-the-middle daemon could intercept the request and forward it to the attacker's host.
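To make the same-origin point concrete, here is a hypothetical sketch (the "/analytics" path and the function name are invented for illustration, not taken from the actual tool) of how a tampered copy of the script could phrase its exfiltration request so that the browser's same-origin policy is still satisfied:

```javascript
// Hypothetical sketch: a tampered script builds a request that LOOKS
// same-origin, so the browser allows it. A man-in-the-middle proxy on the
// connection recognizes the invented "/analytics" path and forwards the
// body to the attacker instead of the real server.
function buildExfilRequest(origin, privateKey) {
  return {
    url: origin + "/analytics", // appears to target the original host
    method: "POST",
    body: privateKey,           // the stolen key rides in the request body
  };
}

// e.g. fetch(req.url, { method: req.method, body: req.body })
const req = buildExfilRequest(
  "https://example.com",
  "-----BEGIN PGP PRIVATE KEY BLOCK-----..."
);
```

The point is that no cross-origin request is needed at all; the interception happens on the wire, which is exactly what SSL prevents.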
Wouldn't this be circumvented by performing Step 5 followed immediately by Step 4 before running the script? (Of course, the level of inspection necessary to determine what's going on in the script may be high enough that you may as well write your own script by that point.)
Yes, presumably this danger could be mitigated through a combination of code inspection and sandboxing. But one of Nanashi's stated motivations for developing this was that "I had yet to find an easy way to do this that didn't involve downloading command-line based software". I doubt that anyone who is averse to running a command-line program would be very excited about inspecting a javascript application for security vulnerabilities and/or sandboxing it each time he or she wants to sign a message.
As much as I dislike blindly following rules in most cases, I think that ChristianKI is correct that any security-related web application ought to be secured via SSL.
I think what I could have been more clear about was the use case here: this tool in its current form is sort of "pre-Alpha". Previously the script was just sitting locally on my computer and I thought it could be useful to others. If I ever try to deploy this on any sort of larger scale, absolutely it will be done on an SSL server. But for right now, it's just being shared amongst a few friends and the LW community.
If it turns out that people are using the tool frequently, I'll probably go ahead and pay to upgrade my hosting plan to SSL. But for a free tool not meant for wide distribution, which takes about 7 seconds to right-click and hit "Save As", I just didn't see it as necessary yet.
Understood. Obtaining an SSL cert is a hassle (and an expense if you obtain it from a cert authority) that may not be warranted for a pre-alpha release. As long as your users use discretion before signing using their "real" private keys, I don't see any issue.
Thanks for making these available; the Decoy app sounds particularly innovative.
The scenario in which everyone runs Step 5 and Step 4 every time they run the script is unrealistic.
It wouldn't be every time you run the script; you would just need to vet it the first time. I expect that anyone using this for serious security purposes would just save the script locally. The point of this is that it's browser-based, not cloud-based. Saving an HTML + Javascript file with a (admittedly rudimentary) GUI is infinitely easier than downloading a command-line based program.
I'm not certain that it's impossible to hide a private PGP key in the PGP signature. Are you?
I don't really understand the question. Why would someone want to hide their private PGP key in their public PGP signature?
You assume that the script can't leak the key if it's sandboxed.
For that to be true, it has to be impossible to hide the information from the private PGP key in the signature.
I did ask on security.stackexchange, and according to the answers there it is possible to steal the key.
5) doesn't guarantee security.
My own thinking on security is strongly influenced by CCC hacker thinking: seeing someone on stage give a lecture on how he tracked Taiwanese money cards during the few weeks he was there, because the Taiwanese had simply been too careless to implement proper security. There are a lot of cases where bad security has failed, and that is where the justification for thinking through security implications comes from.
On the other hand you are right that the usability that comes out of that paradigm is lacking.
Now that I understand what you are asking, yes, it is all but impossible to hide a private PGP key in the PGP signature which would successfully verify.
The "answer" described in that Stack Exchange post doesn't work. If you attempted that, the signature would not verify.
How do you know?
A signed PGP message has three parts, and thus only three places where additional information could be hidden:
1. The header
2. The message itself
3. The signature
The header is standardized. Any changes to the header itself (especially something as blatant as inserting a private key) would be enormously obvious, and would most likely result in a message that would fail to verify due to formatting issues.
The message itself can be verified by the author of the message. If anything shows up on this field that does not exactly match up with what he or she wrote, it will also be extremely obvious.
The signature itself, firstly, must be reproduced with 100% accuracy in order for the message to verify successfully. Any after-the-fact changes to either the message or the signature will result in a message that does not verify successfully. (This is, of course, the entire purpose of a digital signature.) Furthermore, the signature is generated algorithmically and cannot be manipulated by user input. The only way to change the signature would be to change the message prior to signing. However, as indicated above, this would be extremely obvious to the author.
https://tools.ietf.org/html/rfc4880#section-5.2.3.1 has a list of several subpackets that can be included in a signature. How many people check to make sure the order of preferred algorithms isn't tweaked to leak bits? Not to mention just repeating/fudging subpackets to blatantly leak binary data in subpackets that look "legitimate" to someone who hasn't read and understood the whole RFC.
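For illustration, here is a sketch of the kind of covert channel being described (the function name and the algorithm list are invented): with n preferred-algorithm entries there are n! possible orderings, so the ordering alone can smuggle about log2(n!) bits while still looking like a legitimate preference list.

```javascript
// Sketch: encode a small secret integer into the ORDER of an otherwise
// legitimate-looking preference list (factorial-number-system style).
// With 4 entries there are 4! = 24 orderings, i.e. ~4.6 bits per list.
function encodeInPreferenceOrder(algorithms, secret) {
  const pool = algorithms.slice(); // don't mutate the caller's list
  const order = [];
  let n = secret;
  for (let radix = pool.length; radix > 0; radix--) {
    const i = n % radix;           // pick the i-th remaining algorithm
    n = Math.floor(n / radix);
    order.push(pool.splice(i, 1)[0]);
  }
  return order;
}

// secret = 0 yields the "natural" order; other values permute it.
const leaked = encodeInPreferenceOrder(["AES256", "AES192", "AES128", "3DES"], 5);
```

A few bits per message is slow, but a 4096-bit key signed across many messages leaks eventually, and nothing about the result looks malformed.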
Remember that I did not invent the PGP protocol. I wrote a tool that uses that protocol. So, I don't know if what you are suggesting is possible or not. But I can make an educated guess.
If what you are suggesting is possible, it would render the entire protocol (which has been around for something like 20 years) broken, invalid and insecure. It would undermine the integrity of vast untold quantities of data. Such a vulnerability would absolutely be newsworthy. And yet I've read no news about it. So of the possible explanations, what is most probable?
1. Such an obvious and easy-to-exploit vulnerability has existed for roughly 20 years, undiscovered and unexposed, until one person on LW pointed it out.
2. The proposed security flaw sounds like it might work, but doesn't.

I'd say #2 is more probable by several orders of magnitude.
It's not a vulnerability. I trust gnupg not to leak my private key, not the OpenPGP standard. I also trust gnupg not to delete all the files on my hard disk, etc. There's a difference between trusting software to securely implement a standard and trusting the standard itself.
For an even simpler "vulnerability" in OpenPGP look up section 13.1.1 in RFC4880; encoding a message before signing. Just replace the pseudo-random padding with bits from the private key. Decoding (section 13.1.2) does not make any requirements on the content of PS.
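The encoding in §13.1.1 is easy to sketch. EMSA-PKCS1-v1_5 produces the block 0x00 0x01 PS 0x00 T, where PS is specified as 0xFF octets; a malicious signer could put private-key bytes there instead, and a decoder that merely scans to the 0x00 separator would accept the result. A minimal structural sketch (function name invented; this illustrates the block layout, it is not a working exploit):

```javascript
// Sketch of an RFC 4880 §13.1.1 EMSA-PKCS1-v1_5 encoded block:
//   EM = 0x00 || 0x01 || PS || 0x00 || T
// PS is supposed to be all-0xFF octets. A malicious implementation could
// stuff private-key bytes into PS instead; a decoder that just scans for
// the 0x00 separator (as §13.1.2 permits) would never notice.
function emsaEncode(digestInfoT, emLen, psFill) {
  const psLen = emLen - digestInfoT.length - 3;
  if (psFill.length < psLen) throw new Error("not enough padding bytes");
  return Buffer.concat([
    Buffer.from([0x00, 0x01]),
    psFill.subarray(0, psLen), // SHOULD be 0xFF bytes; here: anything at all
    Buffer.from([0x00]),
    digestInfoT,               // DigestInfo prefix + hash value ("T")
  ]);
}
```

Whether a given verifier actually checks that PS is all 0xFF is exactly the kind of implementation detail you are trusting when you trust the software rather than the standard.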
NOTE: lesswrong eats blank quoted lines. Insert a blank line after "Hash: SHA1" and "Version: GnuPG v1".
Output of gpg --verify:
Output of gpg -vv --verify:
I ran the exported (unencrypted) private key through
I ran the exported (unencrypted) private key through tr '\n' '|' to get a single line of text, and created the signature with:

Let me know if your OpenPGP software of choice makes it any more clear that the signature is leaking the private key without some sort of verbose display.
I've never seen it stated as a requirement of the PGP protocol that it is impossible to hide extra information in a signature. In an ordinary use case this is not a security risk; it's only a problem when the implementation is untrusted. I have as much disrespect as anyone towards people who think they can easily achieve what experts who spent years thinking about it can't, but that's not what is going on here.
"Algorithmically" doesn't mean that there is exactly one way to create a valid signature. Hash functions quite often have collisions.
I'm downvoting this comment because it's misleading.
First of all, no one has ever found an SHA-2 hash collision yet. Second of all, the chances of two SHA-2 hashes colliding is about 1 in 1 quattuorvigintillion. It's so big I had to look up what the number name was. It's 1 with 77 zeroes after it. We're talking universe-goes-into-heat-death-before-it-happens type odds. Only under the most absurd definition of "quite often" could anyone ever reasonably claim that a cryptographic hash function like SHA-2 "quite often" has collisions.
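As a sanity check on the size of that number, here is a one-liner counting the decimal digits of 2^256, the size of SHA-256's output space:

```javascript
// SHA-256 has 2^256 possible outputs; count its decimal digits exactly
// using BigInt arithmetic.
const outputs = 2n ** 256n;
console.log(outputs.toString().length); // 78 digits, i.e. ~1.16 × 10^77
```

So the random-collision probability for any single pair of inputs is on the order of 1 in 10^77 (and even a birthday-style attack would need on the order of 2^128 hashes).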
Not that I disagree with your general point, but... 77 isn't a multiple of 3.
Don't set the bar lower; encourage competence. I'll quote this in order to further explain:
Why not use command-line software? This is an important question to which I have a cached answer, but I often find my own answer unsatisfactory. We should be living in a tell culture, so I'll tell you: in my experience, there's some sort of dichotomy between CLI and GUI, and that gap usually separates experienced people on one hand from inexperienced people on the other.
I don't have anything to say against the experienced people, but I will say that the inexperienced ones, that seemingly always prefer the GUI also tend to suffer from learned helplessness, and more directly, baby duck syndrome.
That's not to say that they aren't right in a certain way of thinking - they want things to be simple - but I often wonder whether this is my own optimism about people in general, rather than the inexperienced people's refusal to learn and adopt a factually better way. Many a nerd/geek/enthusiast is baffled and infuriated when their attempts to actually better the world around them simply bounce off, despite their indisputable intent to help those around them. (This is what it feels like, by the way: http://www.coding2learn.org/blog/2013/07/29/kids-cant-use-computers/)
But because they're inexperienced, wouldn't that mean that in due time they're going to run into problems? If you have a problem you don't know how to adequately solve, and someone offers you a solution, does that mean your problem is solved? Not at all - not unless you understand your problem well enough to say that the solution will solve it. The inexperienced are at the mercy of anyone who offers them a solution - and wouldn't you say it's a rationality failure to not correctly solve your problems?
I think it was the "shut up and multiply" page on the wiki that says a whole human life is simply too significant to be weighed against a certain fear, or a problem of unknown complexity. Or rather the opposite: fears, problems, and other demotivating things that actively delay or even stunt growth are simply too insignificant next to a whole human life (and that could be your life as well!) for stopping at them to be an option; it is a negative-consequence option no matter the situation.
Now, can you tell me why we - the people who actively try to improve our surroundings, who care about our environment, who are consistently shunned despite the unmistakable, undeniable, and unbelievable effort we have put in, are putting in, and will put in - deserve to be completely, unforgivingly, and absolutely pushed aside, once again, as anything other than just another plus one on the statistical curve? And secondly, why is it such a horrendous thought - and worse, so seldom a suggestion - that people pick up a few books, read a few articles here and there, and, more bottom-line than that, learn some helpfulness, shut up and multiply, and grow out of their baby duck syndrome?
And as a closing paragraph, I'd like to say that it's my belief that as an adult you are responsible for, and indeed must work at, improving yourself, and by extension everything and everyone around you. (Anyone here playing cops and robbers need not apply.) I had more to add, but it slipped my mind. Now, can I please have some answers?
(HAPPY WATCHLIST FOR ME)
If you are trying to make the world a better place and find yourself pushed aside, shunned, bounced off, etc., you are doing it wrong. Stop blaming the people in the world for your inability to change them.
People can be stupid and stubborn. There are two ways around the problem. You can either convince them to stop being stupid and stubborn, in which case you are a salesperson. Or you can develop a solution that works around the problem, in which case you are an engineer. If you do neither and instead complain about how stubborn and stupid people are, then you are a whiner.
With this issue, the constraint is simple: people use GUIs, not CLIs. It doesn't matter which one is better; it matters which one people use. If you are taking the sales approach to the problem, you can try to convince people to use a CLI instead of a GUI. If you are taking the engineering approach to the problem, you can try to build a better GUI. If you are taking the whiner's approach to the problem, you can tell people who build GUIs that CLIs are better.