Information security is a pretty big passion of mine; I don't think someone needs to have "something to hide" in order to make use of digital signing, encryption, etc. Another passion of mine is making things easier for other people to do. I've written a couple of tools that I think can be useful for the LW crowd. 

Online PGP Signature: This is an online javascript-based tool which allows you to sign messages using your PGP private key. I love the idea of PGP-signed messages (I remember someone under the pseudonym "Professor Quirrell" handing out PGP-verified Quirrell points a few years back). The problem is, I had yet to find an easy way to do this that didn't involve downloading command-line software. So I wrote this tool, which uses open-source, javascript-based PGP libraries to let you easily sign messages in your browser.

The whole thing is client-side, so your private key is never seen by me. If you don't trust me, that's fine; just don't use the tool. But also remember that you could have a virus, your computer could be monitored, someone could be watching over your shoulder, etc., so please be smart about your security. Hopefully this can be helpful.


Decoy: an iPhone App: I wrote this after "The Fappening", when I was appalled at the terrible security practices that pretty much everyone uses when sending pictures back and forth. Decoy uses a combination of steganography and AES encryption to let you send images back and forth without having to sign up for an account or use some outside service that can be hacked or otherwise compromised.

You take the original picture, then you come up with a passphrase, then you take a "decoy" picture. The original picture is converted to base64 image data, which is then AES-encrypted using your passphrase. The resulting ciphertext is then encoded into the pixels of the "decoy" picture, which is what gets saved on your phone and sent out. The "decoy" pictures are indistinguishable from any other picture on your or your recipients' camera rolls, and unless you have the passphrase, the original image is thoroughly inaccessible.
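The encode-into-pixels step can be sketched as least-significant-bit steganography. The following is a minimal illustration in plain javascript; the function names and the 32-bit length header are my own assumptions, not Decoy's actual implementation:

```javascript
// Hedged sketch: hide a ciphertext string's bits in the least-significant
// bits of flat pixel-channel data (e.g. RGBA values from a canvas).
function encodeIntoPixels(pixels, ciphertext) {
  // Prefix a 32-bit little-endian length header so the decoder knows when to stop.
  const bytes = [ciphertext.length & 0xff, (ciphertext.length >> 8) & 0xff,
                 (ciphertext.length >> 16) & 0xff, (ciphertext.length >> 24) & 0xff];
  for (let i = 0; i < ciphertext.length; i++) bytes.push(ciphertext.charCodeAt(i) & 0xff);
  const out = pixels.slice();
  for (let b = 0; b < bytes.length * 8; b++) {
    const bit = (bytes[b >> 3] >> (b & 7)) & 1;
    out[b] = (out[b] & 0xfe) | bit; // each channel value changes by at most 1
  }
  return out;
}

function decodeFromPixels(pixels) {
  const readByte = i => {
    let v = 0;
    for (let j = 0; j < 8; j++) v |= (pixels[i * 8 + j] & 1) << j;
    return v;
  };
  const len = readByte(0) | (readByte(1) << 8) | (readByte(2) << 16) | (readByte(3) << 24);
  let s = '';
  for (let i = 0; i < len; i++) s += String.fromCharCode(readByte(4 + i));
  return s;
}
```

Because only the lowest bit of each channel value is touched, the carrier image looks unchanged to the eye; without the passphrase the extracted bits are just AES ciphertext.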

If your phone is lost, hacked, stolen, or (more benignly) someone happens to be looking through pictures on your phone, all anyone will see are the "decoy" pictures. Without the password, those pictures are worthless. Although the app is primarily branded for, *ahem*, "personal use", there are plenty of other ways to use it. For example, my wife and I use it for things like sending pictures of sensitive physical documents like credit cards, birth certificates, social security cards, etc.  

(full disclosure: although Decoy is free, it is ad-supported so I do financially benefit from people using the app. But on the bright side I'm an avowed rationalist and if I make a quajillion dollars with this app I will spend the vast majority of it on LW-friendly causes!) 

 

 


Incidentally, I also use Decoy as one method of PGP public key verification. The "decoy" picture is a screenshot of my public key. The photo hidden behind the decoy is a picture of me holding up my driver's license and an index card with my username. The picture itself should prove sufficient in 99% of cases, but in extreme circumstances I can give out the passcode, which provides an additional two layers of verification (the validity of the password itself, and the photographic identity verification).

Of course, that could still be spoofed if someone managed to replace all instances of my verification image and then made a fake driver's license with my name on it and took a picture of that. But if that ever did happen, I actually have a final layer of protection, which I won't tell anyone about until I can figure out a way to re-tell it without rendering it worthless.

Nanashi:
If anyone is interested in a more detailed security breakdown of Decoy, here it is. The core goal is protecting the original image. The secondary goal is preventing the identification of the decoy image as a decoy. The original image goes through the following steps:

1. The picture is taken by the device's camera and stored in temporary memory.
2. The picture data is converted to a base64 string and the original image is deleted.
3. The base64 string is encrypted using AES along with a passcode chosen by the user.
4. The encrypted ciphertext is encoded into the decoy image, which is sent via SMS to the recipient.
5. The passcode is exchanged with the recipient outside the context of the app.
6. The recipient enters the passcode, the ciphertext is decrypted, and the base64 string is rendered by the browser.
7. The string is cleared from memory after closing the app.

This presents the following possible vulnerabilities. (Vulnerabilities which assume malicious intent on the part of the app are ignored.)

1. If the user's phone is compromised or Apple turns evil, the image file in memory can be relayed elsewhere.
2. If Apple's file deletion processes are flawed, an attacker with physical access to the phone could recover the deleted file.
3. If the user's phone is compromised, the passcode can be leaked. Or, an attacker could gain access to the image data, and if any of the following flaws exist, those could be exploited: the AES protocol, the SJCL library used to implement AES, the native window.Crypto javascript object.
4. If the user's phone is compromised, the encoding can leak details about the passcode.
5. If the passcode is exchanged using insecure means, it can be eavesdropped. A weak password can be trivially brute-forced. A strong password can be socially engineered.
6. If the recipient's phone is compromised, the passcode or the original image data can be relayed elsewhere.
7. If Apple's deletion processes are insecure, the image dat
Lumifer:
You're forgetting about rubberhose cryptanalysis. Also, your starting point should be the threat model, which you skipped.
ilzolende:
I have a superficial measure against this: having two user accounts, one of which is superficially similar to mine. If it is easy to send two images which unlock with different passwords, then that could be an anti-rubberhose-cryptanalysis measure?
Nanashi:
Of course, now if I want to rubber-hose you, I'll be sure to ask about your second account, too! Jokes aside, I think that's a good tool to keep in the belt. I've always struggled with how to promote the practice without lessening its effectiveness, since the more people know about it, the more likely a rubber-hoser is to ask about both.
Unknowns:
Someone pointed out that the fact that TrueCrypt allows a hidden system gives an attacker the incentive to torture you until you reveal the secondary hidden one. And if you don't use the option, that's too bad for you -- then the attacker just tortures you until you die, since they don't believe you when you deny the existence of it.
Lumifer:
If you're really paranoid you can implement a nesting-doll system with as many levels of depth as you want. But that argument applies just as well to anything at all: the existence of flash drives (or, say, treasure chests filled with gold) gives the attacker an incentive to torture you until you reveal the location of your hidden data (or treasure).
Nanashi:
One possible way around that would be to allow a potentially infinite number of hidden systems, each with their own passcode. There are a few issues with this, though:

1. Depending on the size of the message, this could get big, fast.
2. The content of any decoy messages would likely leak contextual clues as to their veracity, unless all decoy messages sounded equally plausible.
3. Once you extract one password, the length of the message compared to the size of the encrypted payload would leak information about the number of hidden systems.

With all that said, you could address these security concerns by only having the "hidden system" apply to the truly sensitive parts of the message. In other words, you would start with the main message, sanitized of any relevant information and then encrypted. Then, for each piece of sanitized information, you'd provide several plausible answers, each encrypted with their own key. So for example you would have a master message:

Master: "The plan to [1] the [2] is confirmed at [3]"

And then the possible answers:

1. attack, vandalize, have lunch at, prank call
2. enemy headquarters, subway platform, the local diner, your ex girlfriend
3. noon, midnight, 17:00, tonight

So the "full" password would basically be: Master Password + Password for Blank 1 + Password for Blank 2 + Password for Blank 3. For this example there would be 64 different combinations of possible answers, ranging from the correct one, "The plan to attack the enemy headquarters is confirmed at noon.", to the incorrect but plausible "The plan to attack the enemy headquarters is confirmed at midnight", etc. This would address issues #1 and #2. However, it would still be possible for the attacker to guess from the size of the message how many different combinations there may be. This can be circumvented one of several ways:

1. Have so many options that the knowledge of their quantity would be useless.
2. Pad the mess
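The blank-filling scheme described above can be sketched as follows. This is a toy illustration; the function and variable names are hypothetical, and no encryption is shown, only the combinatorics:

```javascript
// Hedged sketch: fill numbered blanks [1], [2], ... in a master message
// with one option per blank; each option would be unlocked by its own key.
function fillBlanks(template, choices) {
  return template.replace(/\[(\d+)\]/g, (_, n) => choices[Number(n) - 1]);
}

const master = 'The plan to [1] the [2] is confirmed at [3]';
const options = [
  ['attack', 'vandalize', 'have lunch at', 'prank call'],
  ['enemy headquarters', 'subway platform', 'the local diner', 'your ex girlfriend'],
  ['noon', 'midnight', '17:00', 'tonight'],
];

// The number of plausible decodings is the product of the option counts:
// 4 * 4 * 4 = 64 for this example.
const combinations = options.reduce((n, opts) => n * opts.length, 1);

const real = fillBlanks(master, ['attack', 'enemy headquarters', 'noon']);
```

An attacker who extracts one password combination has no way of knowing, from the message alone, which of the 64 plausible decodings is the real one.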
ilzolende:
Hah, the decoy account is trivially easy to determine to be not-mine; the idea is less "permanently trick someone into thinking it's my main account" and more "distract someone while I log into it so that it can send an automated email, then keep them from powering down my computer for 30 seconds while the program sends the email, because I can't get it to do that in the background just via Automator".

Also, in that sort of scenario there really isn't that much I have to hide. There are some aspects of my computer usage that I would strongly prefer not to disclose, but at that point I wouldn't be concerned about "linking ilzolende to my real identity" or "what if my friends/parents/future employers know about my actions" or "what if something I did was actually intellectual property theft" or "what if I had to change all my passwords, that would be really annoying". If there was something I really didn't want to disclose, I would probably do it from other people's computers using Tor Browser or a TAILS DVD with URLs I memorized. There isn't something I value my privacy for that much, so I don't do that. (Although I'm considering getting a TAILS USB for using with the school computers, mostly to make the claim: "the fact that this browser didn't tell me that Website X was blocked was not a reason I chose the browser, I use it for privacy, the fact that it apparently circumvents the filter is just a side effect, what am I supposed to do, check if the website is blocked from a different computer before I visit it?")

Honestly, a lot of my motives here are more "normalize security/privacy" and "make sure that if something goes wrong I can say that I took a ton of preventative measures" than "losing control of my data would be a complete disaster". If I were truly concerned about privacy, I wouldn't have participated in a study involving MRI scans and DNA analysis from a blood draw and whatnot for ~%100. I mostly don't like the state of affairs where people have more information abo
Lumifer:
Yes, some encryption programs (notably TrueCrypt) offer the ability to have two different encrypted areas, with different passwords, inside the same container (e.g. a file). You put, say, your pr0n collection into one, your sekrit revolutionary propaganda into the other, and in response to rubberhose unlock the appropriate one.
Nanashi:
That made me smile. One of my favorite sayings is that "all security is security through obscurity", because all it really takes is a lead pipe and some duct tape to "de-obscure" the password. But, that said, I've always considered such "rubberhose cryptanalysis" to be a form of social engineering. Actually, that's a great doublespeak term for it. "Extreme Adversarial Social Engineering". It even has a good acronym: EASE. When you say "the threat model which you skipped", what do you mean?
Lumifer:
Which is why many contemporary secure systems do not rely on permanent passwords (e.g. OTR messaging). The usual: who is your adversary and against which threats are you trying to protect yourself?

Online PGP Signature: This is an online javascript-based tool which allows you to sign messages using your PGP private key.

Giving a private PGP key to a website that isn't even SSL-encrypted is the antithesis of good encryption behavior.

Yeah... Ummmm..... There's a lot wrong with this.

  1. If I had malicious intent, it would not matter if my site were SSL or not.
  2. If my site were compromised somehow, it would not matter if my site were SSL or not.
  3. Everything about the script happens client-side.
  4. The code is written in Javascript, thus you can verify #3 by simply looking at the source code.
  5. If you're not a programmer and don't understand the source code and are still suspicious, you can copy the source code to a local file, and run it on a computer that's sandboxed from the rest of the Internet.

Don't get me wrong. I appreciate the need for constant vigilance, but this type of knee-jerk reaction is what prevents the wider scale adoption of good crypto practices.

Edit, for posterity's sake: I accidentally downvoted your post when I meant to upvote it. I wasn't just being snide when I said "I appreciate the need for constant vigilance", and it definitely resulted in a good discussion. I updated my vote.

Actually, there is still a small danger in executing this via a non-SSL-encrypted web site, even if I trust that you have no malicious intent, that your site has not been compromised, and that the script runs client-side. The danger is a man-in-the-middle attack, in which an attacker intercepts my HTTP request for the script and replaces your script (in the response) with a version that captures my private key and sends it to a server controlled by the attacker.

I realize that most browsers won't let client-side javascript send requests to hosts other than the original host from which the javascript was loaded, but that fact won't solve the issue; the modified version of the script could send the private key to a URL apparently on your host, and the man-in-the-middle daemon could intercept that request and forward it to the attacker's host.
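The attack can be sketched concretely. `signMessage` and the `/analytics` endpoint are hypothetical names; this only illustrates the kind of rewrite an on-path attacker could apply to an unencrypted HTTP response body:

```javascript
// Hedged sketch: an on-path attacker rewrites a plain-HTTP response,
// injecting exfiltration code into the signing function. The injected
// request targets a URL on the *original* host, so the same-origin
// policy doesn't help; the man-in-the-middle intercepts that request too.
function tamperWithResponse(scriptBody) {
  return scriptBody.replace(
    'function signMessage(privateKey, message) {',
    'function signMessage(privateKey, message) {\n' +
    '  fetch("/analytics", { method: "POST", body: privateKey });');
}
```

With HTTPS, the attacker cannot modify the response without breaking the TLS handshake, which is exactly what the SSL objection is about.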

dxu:
Wouldn't this be circumvented by performing Step 5 followed immediately by Step 4 before running the script? (Of course, the level of inspection necessary to determine what's going on in the script may be high enough that you may as well write your own script by that point.)
ChristianKl:
The scenario where everyone runs Step 5 and Step 4 every time they run the script is unrealistic.
Nanashi:
It wouldn't be every time you run the script; you would just need to vet it the first time. I expect that anyone using this for serious security purposes would just save the script locally. The point of this is that it's browser-based, not cloud-based. Saving an HTML + Javascript file with a (admittedly rudimentary) GUI is infinitely easier than downloading a command-line based program.
[anonymous]:
As a robot, I prefer cloning a GIT repo and installing from source.
g_pepper:
Yes, presumably this danger could be mitigated through a combination of code inspection and sandboxing. But one of Nanashi's stated motivations for developing this was that "I had yet to find an easy way to do this that didn't involve downloading command-line based software". I doubt that anyone who is averse to running a command-line program would be very excited about inspecting a javascript application for security vulnerabilities and/or sandboxing it each time he/she wants to sign a message. As much as I dislike blindly following rules in most cases, I think that ChristianKl is correct that any security-related web application ought to be secured via SSL.
Nanashi:
What I could have been clearer about is the use case here: this tool in its current form is sort of "pre-alpha". Previously the script was just sitting locally on my computer, and I thought it could be useful to others. If I ever try to deploy this on any larger scale, it will absolutely be done on an SSL server. But for right now, it's just being shared amongst a few friends and the LW community. If it turns out that people are using the tool frequently, I'll probably go ahead and pay to upgrade my hosting plan to SSL. But for a free tool not meant for wide distribution, which takes about 7 seconds to right-click and hit "Save As", I just didn't see it as necessary yet.
g_pepper:
Understood. Obtaining an SSL cert is a hassle (and an expense if you obtain it from a cert authority) that may not be warranted for a pre-alpha release. As long as your users use discretion before signing using their "real" private keys, I don't see any issue. Thanks for making these available; the Decoy app sounds particularly innovative.
ChristianKl:
I'm not certain that it's impossible to hide a private PGP key in the PGP signature. Are you?
Nanashi:
I don't really understand the question. Why would someone want to hide their private PGP key in their public PGP signature?
ChristianKl:
You assume that the script can't leak the key if it's sandboxed. For that to be true, it has to be impossible to hide information from the private PGP key in the signature. I asked on security.stackexchange, and according to the answers it is possible to steal the key. So 5) doesn't guarantee security.

My own thinking on security is strongly influenced by CCC hacker thinking: seeing someone on stage give a lecture on how he tracked Taiwanese money cards during the few weeks he was there, because the Taiwanese were just too careless to implement proper security. There are a lot of cases where bad security failed, and that's where the justification for thinking through security implications comes from. On the other hand, you are right that the usability that comes out of that paradigm is lacking.
Nanashi:
Now that I understand what you are asking: yes, it is all but impossible to hide a private PGP key in a PGP signature that would still verify successfully. The "answer" described in that Stack Exchange post doesn't work. If you attempted that, the signature would not verify.
ChristianKl:
How do you know?
Nanashi:
A signed PGP message has three parts, and thus only three places where additional information could be hidden:

1. The header
2. The message itself
3. The signature

The header is standardized. Any changes to the header itself (especially something as blatant as inserting a private key) would be enormously obvious, and would most likely result in a message that fails to verify due to formatting issues.

The message itself can be verified by the author of the message. If anything shows up in this field that does not exactly match what he or she wrote, it will also be extremely obvious.

The signature itself must be reproduced with 100% accuracy in order for the message to verify successfully. Any after-the-fact changes to either the message or the signature will result in a message that does not verify. (This is, of course, the entire purpose of a digital signature.) Furthermore, the signature is generated algorithmically and cannot be manipulated by user input. The only way to change the signature would be to change the message prior to signing. However, as indicated above, this would be extremely obvious to the author.
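The three-part structure can be illustrated by splitting a clearsigned message at its standard armor markers. The signature body below is a truncated placeholder, not a real signature, and the parser is a toy for illustration only:

```javascript
// Hedged sketch: a PGP clearsigned message has a hash header, the plain
// message, and an armored signature block, delimited by fixed markers.
const signed = `-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Hello, world.
-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEE...
-----END PGP SIGNATURE-----`;

function splitClearsigned(text) {
  const body = text.split('-----BEGIN PGP SIGNED MESSAGE-----')[1];
  const [headerAndMessage, sigBlock] = body.split('-----BEGIN PGP SIGNATURE-----');
  const [header, ...messageParts] = headerAndMessage.trim().split('\n\n');
  return {
    header: header.trim(),                 // e.g. "Hash: SHA256"
    message: messageParts.join('\n\n').trim(),
    signature: sigBlock.replace('-----END PGP SIGNATURE-----', '').trim(),
  };
}
```

The header and message are visible to the author at a glance; only the base64 signature block is opaque, which is why the thread's dispute centers on what can be hidden there.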
Pentashagon:
https://tools.ietf.org/html/rfc4880#section-5.2.3.1 has a list of several subpackets that can be included in a signature. How many people check to make sure the order of preferred algorithms isn't tweaked to leak bits? Not to mention just repeating/fudging subpackets to blatantly leak binary data in subpackets that look "legitimate" to someone who hasn't read and understood the whole RFC.
Nanashi:
Remember that I did not invent the PGP protocol; I wrote a tool that uses it. So I don't know if what you are suggesting is possible or not, but I can make an educated guess. If what you are suggesting is possible, it would render the entire protocol (which has been around for something like 20 years) broken, invalid, and insecure. It would undermine the integrity of vast untold quantities of data. Such a vulnerability would absolutely be newsworthy, and yet I've read no news about it. So of the possible explanations, which is most probable?

1. Such an obvious and easy-to-exploit vulnerability has existed for 20ish years, undiscovered/unexposed until one person on LW pointed it out?
2. The proposed security flaw sounds like maybe it might work, but doesn't.

I'd say #2 is more probable by several orders of magnitude.

Such an obvious and easy to exploit vulnerability has existed for 20ish years, undiscovered/unexposed until one person on LW pointed it out?

It's not a vulnerability. I trust gnupg not to leak my private key, not the OpenPGP standard. I also trust gnupg not to delete all the files on my hard disk, etc. There's a difference between trusting software to securely implement a standard and trusting the standard itself.

For an even simpler "vulnerability" in OpenPGP, look up section 13.1.1 in RFC 4880: encoding a message before signing. Just replace the pseudo-random padding with bits from the private key. Decoding (section 13.1.2) places no requirements on the content of PS.
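The padding channel described here can be sketched with toy byte arrays (no real RSA involved). PKCS#1 v1.5 encodes a block as `0x00 0x01 PS 0x00 Payload`; a lax decoder that merely scans past the padding, without checking its content, will accept a block whose PS carries smuggled key bits:

```javascript
// Hedged sketch: PKCS#1 v1.5-style block  0x00 0x01 PS 0x00 payload,
// where an honest PS is filler bytes. A malicious implementation can
// stuff private-key bits into PS; only a decoder that inspects PS
// byte-by-byte would notice. Illustrative only.
function pkcs1Encode(payload, k, psBytes) {
  const psLen = k - 3 - payload.length;
  const ps = (psBytes || []).filter(b => b !== 0x00).slice(0, psLen);
  while (ps.length < psLen) ps.push(0xff);           // honest filler
  return [0x00, 0x01, ...ps, 0x00, ...payload];
}

function laxParse(em, payload) {
  // Scans past the padding without checking its content.
  if (em[0] !== 0x00 || em[1] !== 0x01) return false;
  const sep = em.indexOf(0x00, 2);
  return sep !== -1 && em.slice(sep + 1).join(',') === payload.join(',');
}

function strictParse(em, payload) {
  // Additionally requires that every padding byte is the expected filler.
  if (!laxParse(em, payload)) return false;
  const sep = em.indexOf(0x00, 2);
  return em.slice(2, sep).every(b => b === 0xff);
}
```

The point of the thread's argument: the standard's decoding rules resemble `laxParse`, so the channel exists unless the verifying software goes out of its way to check the padding.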

Nanashi:
Thank you, by the way, for actually including an example of such an attack. The discussion between ChristianKl and myself covered about 10 different subjects, so I wasn't exactly sure what type of attack you were describing. You are correct: in such an attack it would not be a question of trusting OpenPGP. It's a general question of trusting software. These vulnerabilities are common to any software that someone might choose to download. In this case, I would argue that a transparent, sandboxed programming language like javascript is probably one of the safer pieces of "software" someone can download, especially because browsers basically treat all javascript like it could be malicious.
Pentashagon:
Why would I paste a secret key into software that my browser explicitly treats as potentially malicious? I still argue that trusting a verifiable author/distributor is safer than trusting an arbitrary website, e.g. trusting gpg is safer than trusting xxx.yyy.com/zzz.js regardless of who you think wrote zzz.js, simply because it's easier to get that wrong in some way than it is to accidentally install an evil version of gpg, especially if you use an open source package manager that makes use of PKI, or run it from TAILS, etc. I am also likely to trust javascript crypto served from https://www.gnupg.org/ more than from any other URL, for instance. In general I agree wholeheartedly with your comment about sandboxing being important. The problem is that sandboxing does not imply trusting. I think smartphone apps are probably better sandboxed, but I don't necessarily trust the distribution infrastructure (app stores) not to push down evil updates, etc. Sideloading a trusted app by a trusted author is probably a more realistic goal for OpenPGP for the masses.
Nanashi:
I agree with what you said; I just want to clarify something. My original statements were made in a very specific context: here are some ways you can attempt to verify this specific piece of software. At no point did I suggest that any of those methods could be used universally, or that they were foolproof. I grew weary of ChristianKl continually implying this, so I stopped responding to him. So, with that said: yes, using this program does require trusting me, the author. If you don't trust me, I have suggested some ways you could verify for yourself. If you aren't able to, or it's too much trouble, that's fine; don't use it. As mentioned before, I never meant this to be "PGP for the masses".
ChristianKl:
The core question isn't "how safe is X" but "what safety guarantees does X make" and "does X actually hold its promises". A decently used piece of software downloaded from SourceForge is more trustworthy than unknown code transferred unencrypted over the internet. Projects like Tor go even beyond that standard and provide deterministic builds to allow independent verification of checksums, to make sure that you really are running the code you think you are running.

In this case you are trusting software that travels unencrypted through the internet. It's a quite simple principle to not trust code that travels unencrypted to do anything. It's really security 101: don't trust unencrypted communication channels. Yes, there might be times when you violate that heuristic and don't get harmed, but good security practice is still "don't trust unencrypted communication channels". The idea of saying "Well, I don't have to trust the unencrypted communication channels because I can do my fancy sandboxing" shouldn't come up. It's not how you think in crypto. In this case, the sandboxing doesn't work.

You could have said: "This is just a fun project, don't put any important private keys into it." You didn't, but instead started arguing that your system can do more than it can. The fact that you made those promises so laxly makes the belief that the iPhone app provides what it claims also doubtful. Key issues:

1. Do you make sure that the real image never gets written into SSD storage? (There's no way to trustworthily delete files in SSD storage.)
2. Did you get the entropy production really right?
3. Do you really leave no traces in the final image?
4. No other bugs that make the crypto fail?

Given the security-101 issues with the other project and the way you present it, why should someone trust that you handled those questions well?
Pentashagon:
NOTE: lesswrong eats blank quoted lines. Insert a blank line after "Hash: SHA1" and "Version: GnuPG v1". -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Secure comment -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQgEBAEBAgduBQJVLfkkxqYUgAAAAAAKB1NzZWNyZXRAa2V5LS0tLS1CRUdJTiBQ R1AgUFJJVkFURSBLRVkgQkxPQ0stLS0tLXxWZXJzaW9uOiBHbnVQRyB2MXx8bFFI WUJGVXQ5WU1CQkFESnBtaGhjZXVqSHZCRnFzb0ErRnNTbUtCb3NINHFsaU9ibmFH dkhVY0ljbTg3L1IxZ3xYNFJURzFKMnV4V0hTeFFCUEZwa2NJVmtNUFV0dWRaQU56 RVFCQXNPR3VUQW1WelBhV3ZUcURNMGRKbHEzTmdNfG1Edkl2a1BJeHBoZm1KTW1L YmhQcTBhd3ArckFSU3BST01pMXMvWUtLRWEweUdYaFN6MG1uZkYrZ3dBUkFRQUJ8 QUFQL1MrRjBWdkxzOW5HZWZjRFNpZ0hyRjNqYXAvcCtSNTArNGdDenhuY3djelBJ dXR5NU1McFF5NHMxQVZ2T3xNcDZrZFFDV2pVUXdWZTc4WEF3WjNRbEh5dkVONDdx RDZjNVdOMGJuTGpPTEVIRE9RSTNPQi9FMUFrNzlVeXVRfFQ0b21IVWp5MlliVWZj VnRwZWJOR3d4RkxpV214RW1QZG42ZGNLVFJzenAzRDdFQ0FOSWxYbWVTbXR4WFRO REp8REFrOUdoa1N6YnoyeFladmxIekZHb0ltRmU4NGI5UGZ5MEV1dHdYUFFmUUl0 VTBGTExxbkdCeStEWnk2MmpUc3xTMDlWbkpFQ0FQV21kZXhkb1ZKSjJCbUFINHE2 RlRCNXhrUnJMTzNIRk1iZU5NT2Z2T2ducy9Fa2czWjJQcnpHfG43RGdVU1FIZTFp UHJJODJ0VmJYYXR6RE1xMnZ3OU1DQUt5SmtONnVzUFZkVHF5aVFjMDN6anJWMUNu YkNrK1h8WVNtekpxcFd0QzVReWlycUp3ODlWQ2dBaDh4YlorWnI0NlY2R3VhdkdD azdPbGIzQnE1a2V4ZWU2YlFMY0dWdXxkR0Z6YUdGbmIyNkl1QVFUQVFJQUlnVUNW UzMxZ3dJYkF3WUxDUWdIQXdJR0ZRZ0NDUW9MQkJZQ0F3RUNIZ0VDfEY0QUFDZ2tR YTJuMThBNWR2N0owVmdQNkFzdjBrS1ZWb050ZE5WSklHWThLN2I2L1l0ZWlJNFpa NUJtL2YzUFF8WUpCRVVGWTljV1E2TVlZWFFlYm9TWHN1amN2cWJJMkpERFZ5dDFR SCtXdk00dFhiNmdmaGp1a2hobmxaTUNnSnx0eXp1aHdZWHloZGVaMFZmb0hOeUxP WHQyL1VvWCtsdVd4aWhkN1Exd2IrNjljVDV1V1IrYVEwK3h6SXJpVUdlfFBReWRB ZGdFVlMzMWd3RUVBTXU4bWc1cmZMNERnNE5TaHNDc2YyQkd2UnJhZGRDcmtxTk40 ckNwNkdCUXBGQ018MVJldGIwYURQSkhsbWpnaWdOUzBpQTgvWXdyUGx0VktieW9r S2NXZklmYTlmNjE1SmhwNHM3eEFXSUlycGNwaHxPdjlGakRsUldYd09PbXFBYzB5 dVV4WjN2Z2JERUZPWGRuQWk2ZDJDV0Y5a1B5UTlQbG5zL3gxcGtLS0xBQkVCfEFB RUFBL29DMmsrTWwzbGdybXMvVnlsOGl5M01GYWJTT0hBMmpYWE9oRDhDQlptenQ0 MWF5ZzRMSXlvNnQ0aGl8bHBvZWpScDJ0VmNaRE9TQWVKV3BHT2k0Nkt3T1g1VXdW bUI4ZldTbTJobHZxbWJ0ckNWUGUz
itaibn0:
I've never seen it stated as a requirement of the PGP protocol that it is impossible to hide extra information in a signature. In an ordinary use case this is not a security risk; it's only a problem when the implementation is untrusted. I have as much disrespect as anyone towards people who think they can easily achieve what experts who spent years thinking about it can't, but that's not what is going on here.
Nanashi:
Let's assume you CAN leak arbitrary amounts of information into a PGP signature.

1. Short of somehow convincing the victim to send you a copy of their message, you have no means of accessing your recently leaked data. And since that is extremely unlikely, your only hope is to view a public message the user posts with their compromised signature. Which leads to...
2. That leaked data would be publicly available. Anyone with knowledge of your scheme would also be able to access that data. Any encryption would be worthless, because the encryption would take place client-side and all credentials would thus be exposed to the public as well. Which brings us to...
3. Because the script runs client-side, it is extremely easy for a potential victim to examine your code to determine whether it's malicious. And, even if they're too lazy to do so...
4. A private key is long. A PGP signature is short. So your victim's compromised signature would be 10x longer than a normal PGP signature.

So yes, you are all correct. If I had malicious intent, I could write an attack that 1. could be immediately exposed to the public by any person with programming knowledge, 2. provides an extremely obvious telltale sign to the victim that something malicious is going on, and 3. doesn't actually provide me any benefit.
Pentashagon:
Public-key signatures should always be considered public when anticipating attacks. Use HMACs if you want secret authentication. You explicitly mentioned Decoy in your article, and a similar method could be used to leak bits to an attacker with no one else being able to recover them.

We're discussing public key encryption in this article, which means that completely public javascript can indeed securely encrypt data using a public key, and only the owner of the corresponding private key can decrypt it.

Sure, the first five or ten times it's served. And then one time the victim reloads the page, the compromised script runs, leaks as much or all of the private key as possible, and then never gets served again.

An exported private key is long because it includes both factors, the private exponent, and the inverse of p mod q. In my other comment I was too lazy to decode the key and extract one of the RSA factors, but one factor will be ~50% of the size of the RSA signature, and that's all an attacker needs.
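The size claim follows from n = p·q: each prime factor has roughly half the bit-length of the modulus, and a raw RSA signature is an integer mod n. A toy illustration with BigInt (these primes are far too small for real use):

```javascript
// Hedged toy illustration: the bit-length of each RSA factor is about
// half that of the modulus n. Real keys use ~1024-bit primes, not these.
const p = 61n, q = 53n;
const n = p * q;                       // 3233n
const bits = x => x.toString(2).length;
// bits(n) = 12, while bits(p) = bits(q) = 6, i.e. ~50% of bits(n);
// leaking one factor is enough, since q = n / p recovers the other.
```

So an attacker who smuggles one factor out through the signature needs a channel only about half the signature's own size.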
Nanashi:
Well, shit. This is the third time I've had to retype this post, so forgive the brevity.

1. You are right, but it makes the attack less effective, since it's a phishing attack, not a targeted one. I can't think of an efficient way for an attacker to collect these compromised signatures without making it even more obvious to the victim.
2. This is correct; you could asymmetrically encrypt the data.
3. The intended use is for the user to download the script and run it locally. Serving a compromised copy 10% of the time would just lower the reach of the attack, especially because the visitor can still verify the source code, or verify the output of the signature.
4. Even if you cut the size of the private key in half, the signature would still be 5x longer than a standard PGP signature, and the fact that subpacket 20 has been padded with a large amount of data would be immediately visible to the victim upon verifying their own signature. (Note that I didn't include a verification tool, so the visitor would have to do that with their own trusted software.)
-4ChristianKl9y
That's often the case with backdoors. Do you understand the point of private-public key crypto? I doubt anyone would bother to examine the code closely enough to find security flaws, especially since the code seems a bit obfuscated. How long did it take people to find out that Debian's crypto was flawed? RSA? That just means that it takes 10 signed messages to leak all the data. Maybe a bit more, because you have to randomly pick one of 10 slots; maybe a bit less, because you can do fancy math.
2Nanashi9y
At this point I am just going to cease replying to any of your posts because this discussion has become patently absurd. You have resorted to citing weaknesses that are common to any protocol that the user is too lazy to verify the safety of. What's next? It's unsafe because you might have a heart attack while using it? Congratulations: you are the kid in the philosophy class that derails the conversation by asking "Yeah but how do we KNOW that?" over and over. Except the difference here is, I'm not being paid to, nor do I have the patience to walk you through the basics of security, trust, cryptography, etc. Yes, I will concede that, given enough ignorance on the part of the user, it is possible to sneak a backdoor into any medium. Including this tool. Speaking of which, there's a backdoor programmed into this post. If you send me a private message with your Less Wrong password, you'll see it.
-1ChristianKl9y
The problem isn't the specific vulnerability itself but that you produce a crypto program and make false claims about it. It's standard for people who produce good crypto to care about the vulnerabilities of their software and not overstate its capabilities. Your understanding of trust is so poor that you said PGP would have to be known to be flawed for it to be possible for information to be transmitted the way Pentashagon and I claimed. Most people who want to hide a picture on their phone likely don't need real security anyway, so it's not so bad if you make a few errors here and there.
-3ChristianKl9y
"Algorithmically" doesn't mean that there is exactly one way to create a valid signature. Hash functions quite often have collisions.
7Nanashi9y
I'm downvoting this comment because it's misleading. First of all, no one has ever found an SHA-2 hash collision. Second of all, the chance of two SHA-2 hashes colliding is about 1 in 1 quattuorvigintillion. It's so big I had to look up what the number's name was. It's a 1 with 77 zeroes after it. We're talking universe-goes-into-heat-death-before-it-happens type odds. Only under the most absurd definition of "quite often" could anyone ever reasonably claim that a cryptographic hash function like SHA-2 "quite often" has collisions.
0dxu9y
Not that I disagree with your general point, but... 77 isn't a multiple of 3.
2Nanashi9y
Why does it need to be a multiple of 3? (SHA-2 = 2^256 = 1*10^77)
5dxu9y
You wrote that the odds were 1 in 1 quattuorvigintillion. I was under the impression that all "-illion"s have exponents that are multiples of 3.
5Nanashi9y
Ahhhh. I misread the output on Wolfram Alpha. You're right. I'll leave it in the original post for posterity, but also for the record, it's actually 1 in 100 quattuorvigintillion (That's what I get for trying to be dramatic)
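For the record, the arithmetic behind this correction is easy to check in a couple of lines (on the short scale, a quattuorvigintillion is 10^75):

```python
n = 2 ** 256           # size of the SHA-256 output space
digits = len(str(n))   # 78 digits, so n is on the order of 10^77
# 2^256 ≈ 1.16 * 10^77 = 115-ish * 10^75, i.e. roughly
# "1 in 100 quattuorvigintillion", matching the corrected figure above.
print(digits, n // 10 ** 75)
```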
-4ChristianKl9y
That number is irrelevant because it's for randomly chosen hashes. The main point here is that I don't know that there is a guarantee that there is exactly one signature that successfully signs a message for a single private key. Likely there are multiple. Quick Googling leads me to How PGP Works: a session key that comes from a random number generator is an easy way to add specific entropy into the system. Is the session key really nowhere to be found in the signature? Even if it is, the math behind public-private key crypto is complicated. PGP advertises that if you know the signature you can be certain that the text isn't altered. I haven't found a promise that the signature is deterministic and that it's impossible to add information to it. By running your crypto code through an obfuscator you haven't made your code easy to read, but if you actually try to read the code you find it calls things like "PublicKeyEncryptedSessionKey:a("./public_key_encrypted_session_key.js")".
8Nanashi9y
Sorry, Christian, but I am going to stop replying after this one. I'm not trying to be a dick; it's just that at this point I think continuing our conversation is going to confuse readers more than it will help them. The concepts you are referring to and sources you are citing are only tangentially applicable to the conversation at hand. 1. The fact that it is possible to collide hashes, signatures, etc. is well known and obvious. The reason it is not a concern is the extreme difficulty of producing a collision. As indicated above, you would have to brute force your way through 10^77 different combinations to guarantee a successful collision. 2. The section you cited describes PGP encryption, not signatures. They are two entirely different things. PGP signatures do not involve session keys. 3. You can manipulate the output of (or "add specific entropy to") any hash function. It is, however, absurdly difficult to convey meaningful information in the output of a hash function. See above regarding the amount of work required. Furthermore, because a private key is longer than a PGP signature, it is literally impossible to encode the key in the signature. 4. The code uses a library, which means it supports multiple functions, the vast majority of which are not used by the script. You are referring to several traits which are common to almost all cryptographic systems, yet you are implying these are traits unique to PGP. Furthermore, you are describing these traits with loaded language that paints them as weaknesses, when in fact they are known, accounted-for limitations. Anyone familiar with cryptography will gain nothing from reading this exchange, and anyone unfamiliar with cryptography will likely be confused and misled.
-6ChristianKl9y
0[anonymous]9y
Cryptographic hashes, like those used in digital signing, are designed to be resistant to these types of manipulations. The attack you're trying to execute is related to the birthday attack, but somewhat stronger: you're looking for a function f that takes messages m1 and m2 and optionally a hash h1 valid over m1, and returns a different valid hash h2 that encodes m2 in some way. To do this, you need to be able to generate an arbitrary number of valid hashes for m1 (after which you can make up an encoding scheme based on their structure), which is quite difficult and essentially requires the hash function to be thoroughly broken. Your best bet is probably some kind of steganographic magic hidden in the signed message itself, or in whitespace around it. That's limited only by your creativity, but without encryption in the hidden message (which is, of course, possible to add), it's vulnerable to an equally creative attacker. For short enough carrier messages it may not even be possible.
2[anonymous]9y
Don't set the bar lower; encourage competence. I'll quote this in order to further explain: Why not use command-line software? This is an important question I have a cached answer to, but I often find my own answer unsatisfactory. We should be living in a tell culture, so I'll tell you that in my experience there's some sort of dichotomy between CLI and GUI, and that gap usually separates experienced people on one hand from inexperienced people on the other. I don't have anything to say against the experienced people, but I will say that the inexperienced ones, who seemingly always prefer the GUI, also tend to suffer from learned helplessness and, more directly, baby duck syndrome. That's not to say that they aren't right in a certain way of thinking - they want things to be simple - but I often wonder if this is my own optimism about people in general, rather than the inexperienced people's refusal to learn and adopt a factually better way. Many a nerd/geek/enthusiast is baffled and infuriated when their attempts to actually better the world around them are simply bounced off despite their indisputable intent to help those around them. (This is what it feels like, by the way: http://www.coding2learn.org/blog/2013/07/29/kids-cant-use-computers/) But because they're inexperienced, wouldn't that mean that in due time they're going to run into problems? If you have a problem you did not know how to adequately solve, and someone is offering you a solution, would that mean your problem is solved? Not at all, unless you know your problem well enough that you can say the solution will solve it. The inexperienced people are at the mercy of anyone who is going to give them a solution - and wouldn't you say it's a rationality failure to not correctly solve your problems? I think that it was the "shut up and multiply" page on the wiki that says that a whole human life is simply too significant in proximity to a certain fear, or a problem of unknown complexity. Or rather the oppo
4Nanashi9y
If you are trying to make the world a better place and find yourself pushed aside, shunned, bounced off, etc., you are doing it wrong. Stop blaming the people in the world for your inability to change them. People can be stupid and stubborn. There are two ways around the problem. You can either convince them to stop being stupid and stubborn, in which case you are a salesperson. Or you can develop a solution that works around the problem, in which case you are an engineer. If you do neither and instead complain about how stubborn and stupid people are, then you are a whiner. With this issue, the constraint is simple: people use GUIs, not CLIs. Doesn't matter which one is better. It matters which one people use. If you are taking the sales approach to the problem, you can try to convince people to use a CLI instead of a GUI. If you are taking the engineering approach to the problem, you can try to build a better GUI. If you are taking the whiner's approach to the problem, you can tell people who build GUIs that CLIs are better.

These sound like great tools. Thanks for making them available.

On a meta level I don't mind if members of the community promote their own work here if it's something that other community members will find useful. I'll also note that these seem like tricky enough things that they could also have been mentioned in the bragging thread when you finished them.

The "decoy" pictures are indistinguishable from any other picture on your or your recipients' camera rolls, and unless you have the passphrase, then the original image is thoroughly inaccessible.

What does "indistinguishable" mean in that sentence? Do you claim that a skilled attacker can't tell that there is metadata added?

3Nornagest9y
Short answer is I don't know. The long answer will take a little background. I haven't bothered to read through Decoy's internals, but this sort of steganography usually hides its secret data in the least significant bits of the decoy image. If that data is encrypted (assuming no headers or footers or obvious block divisions), then it will appear to an attacker like random bytes. Whether or not that's distinguishable from the original image depends on whether the low bits of the original image are observably nonrandom, and that's not something I know offhand -- although most images will be compressed in some fashion and a good compression scheme aims to maximize entropy, so that's something. And if it's mostly random but it does fit a known distribution, then with a little more cleverness it should be possible to write a reversible function that fits the encrypted data into that distribution. It will definitely be different from the original image on the bit level, if you happen to have a copy of it. That could just mean the image was reencoded at some point, though, which is not unheard of -- though it'd be a little suspicious if only the low bits changed.
3Nanashi9y
You're mostly correct. The data is encrypted, and then broken into a base-4 string. The least significant base-4 digit is dropped from each pixel, leaving 98.4% fidelity, which is higher fidelity than the compression that gets applied. Thus in terms of image quality, the picture is indistinguishable from any other compressed image. The encoding is deliberately reversible and also open-sourced. However, you can apply the same algorithm to any image, whether it's a decoy or not, and get a string of possibly-encrypted data. The only confirmation that the data is meaningful would be a successful decryption, which is only possible with the correct passphrase. All that said, the fact that the picture is indistinguishable from other non-decoy images only adds a trivial amount of entropy to the encryption. An attacker who is determined to brute-force their way into your pictures can simply attempt to crack every picture in your camera roll, decoy or no.
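I haven't seen Decoy's actual source, but the scheme described here (ciphertext split into base-4 digits, each replacing a pixel channel's lowest base-4 digit, i.e. its two least-significant bits) might look roughly like this sketch; the function names and layout are my own assumptions.

```python
def to_base4(data: bytes):
    """Split each byte into four 2-bit values (base-4 digits), MSB first."""
    for byte in data:
        for shift in (6, 4, 2, 0):
            yield (byte >> shift) & 0b11

def embed(pixels: bytearray, payload: bytes) -> bytearray:
    """Overwrite the two low bits of each pixel channel with payload digits."""
    digits = list(to_base4(payload))
    if len(digits) > len(pixels):
        raise ValueError("cover image too small for payload")
    out = bytearray(pixels)
    for i, d in enumerate(digits):
        out[i] = (out[i] & 0b11111100) | d   # keep high 6 bits, swap low 2
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    """Reassemble `length` bytes from the low 2 bits of the first pixels."""
    digits = [p & 0b11 for p in pixels[: length * 4]]
    return bytes(
        (digits[i] << 6) | (digits[i + 1] << 4) | (digits[i + 2] << 2) | digits[i + 3]
        for i in range(0, len(digits), 4)
    )
```

Note how this matches the fidelity figure quoted above: changing only the two low bits moves each 8-bit channel by at most 3/255, so roughly 98–99% of each value is preserved, and `extract` works on any image at all, which is why extraction alone confirms nothing.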
3Pentashagon9y
Does it change the low bits of white (0xFFFFFF) pixels? It would be a dead giveaway to find noise in overexposed areas of a photo, at least with the cameras I've used.
4Nanashi9y
It does. Taking a picture of a solid white or black background will absolutely make it easier for an attacker with access to your data to be more confident that steganography is at work. That said there are some factors that mitigate this risk. 1. The iPhone's camera, combined with its JPG compression, inserts noise almost everywhere. This is far from exhaustive but in a series of 10 all-dark and 10 all-bright photos, the noise distribution of the untouched photos was comparable to the noise distribution of the decoy. Given that I don't control either of these, I'm not counting on this to hold up forever. 2. The app forces you to take a picture (and disables the flash) rather than use an existing one, lessening the chances that someone uses a noiseless picture. Again though, someone could still take a picture of a solid black wall. Because of this, the visual decoy aspect of it is not meant as cryptographic protection. It's designed to lessen the chances that you will become a target. Any test designed to increase confidence in a tampered image requires access to your data which means the attacker has already targeted you in most cases. If that happens, there are other more efficient ways of determining what pictures would be worth attacking. My original statement was that an attacker cannot confirm your image is a Decoy. They can raise their confidence that steganography is taking place. But unless a distinguishing attack against full AES exists, they can't say with certainty that the steganography at work is Decoy. TL;DR: the decoy aspect of things is basically security through obscurity. The cryptographic protection comes from the AES encryption.
3ChristianKl9y
The fact that it distributes noise doesn't mean that the noise is uniformly distributed. It likely doesn't put the same noise in an area which is uniformly colored as in an area that isn't. I can't say with certainty either that the sun will rise tomorrow.
1dxu9y
This seems like deliberate misinterpretation of Nanashi's point. You can't say with certainty that the Sun will rise tomorrow, but you can say so with extremely high probability. An attacker can't confirm that the image is a Decoy with a probability anywhere near as high.
1Nanashi9y
Correct. I'd assign a probability of, say, 99.999999999999999999% that the sun will rise tomorrow. If I were an attacker analyzing the noise distribution of an image, I could say with maybe 10% probability that an image has been tampered with. From there I have to further reduce the probability because there are hundreds of ways an image could have been tampered with that aren't Decoy.
2Nanashi9y
For what it's worth, here is a sample of the noise distribution of the iPhone's JPEG compression vs. Decoy (iPhone on left, Decoy on right) http://i.cubeupload.com/ujKps6.png (Note that these are not the same picture, because Decoy does not save or store the original version of either photo. It's two pictures where I held the iPhone very close against a wall. So there's a slight color variation)
2Lumifer9y
That's pretty useless -- what you want is to look at some statistical measures of the empirical distributions of lower-order bits in these images. See e.g. this outdated page.
0Nanashi9y
I don't blame you for not spotting this, since these comments have gone really all over the place. But I did describe how an attacker would use LSB or Chi^2 analysis to determine: For posterity here is that section: "Incidentally, regarding the specific details of such a detection method: We (and the attacker) already know that the distribution of base64 characters in an AES-encrypted ciphertext is approximately random and follows no discernible pattern. We also know that the ciphertext is encoded into the last 2 bits of each 8-bit pixel. So, we can, with X amount of confidence, show that an image is not a Decoy if we extract the last 2 bits of each pixel and discover the resulting data is non-randomly distributed. However, because it is possible for normal, non-Decoy, compressed JPEGs to exhibit a random distribution of the data in the last 2 bits of each pixel, the presence of randomness does not confirm that an image is a Decoy. The only viable attack here would be to pull images which are "visually similar" (a trivial task by simply using Google image search), reduce them to the same size, compress them heavily, and then examine the last 2 bits of each of their pixels. If there is a significant difference in the randomness of the control images vs. the randomness of the suspected image, you could then suggest with X% confidence that the suspected image has been tampered with. However, because it is possible for an image to be tampered with and yet NOT be a Decoy image, even then you could still not, with any legitimate amount of confidence, use such a test to state that an image is a Decoy."
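The LSB/Chi^2 analysis alluded to above can be sketched in a few lines: tally the four possible 2-bit values across the pixels and compute a chi-square statistic against the uniform distribution expected of encrypted data. This is a generic sketch of the idea, not Decoy's or any particular steganalysis tool's code.

```python
from collections import Counter

def chi_square_low_bits(pixels) -> float:
    """Chi-square statistic of the 2-bit LSB values against uniformity.

    Encrypted payloads make the four values roughly equally frequent, so a
    small statistic is *consistent with* embedding -- but, as noted above,
    heavily compressed JPEGs can also show near-uniform low bits, so this
    raises confidence rather than confirming anything.
    """
    counts = Counter(p & 0b11 for p in pixels)
    n = sum(counts.values())
    expected = n / 4
    return sum((counts.get(v, 0) - expected) ** 2 / expected for v in range(4))
```

An attacker would compare this statistic for the suspect image against visually similar control images, exactly as the quoted section describes.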
1Lumifer9y
The point you're missing is that the purpose of steganography is to not be noticed, as opposed to "you can't prove this beyond reasonable doubt". If I run statistical analysis on the images in your phone and enough of them show suspicious randomness in LSBs, your steganography has failed already.
3Nanashi9y
I've already said this like, five times, but I am giving you a pass here because there are a billion comments on this post and I wouldn't expect someone to read all of them. 1. The primary protection isn't steganography, it's the AES encryption. 2. The goal of the steganography is a deterrent. As you said, to help you not be noticed. If someone is suspicious enough that they steal your images and run a statistical analysis of them, the whole conversation is moot because you've already been noticed. So, I just don't get it. What is your point here? That steganography has potential weaknesses? Is anyone suggesting otherwise?
3Nanashi9y
Also just for the record, here are the relevant statements I made personally about Decoy: Which, by request I clarified: These clarifications were provided very early on in the conversation. It has since devolved into a criticism of steganography in general, which at no point have I ever tried to insinuate that steganography is anything other than security through obscurity.
-1Lumifer9y
Protection against what? Your lack of a threat model is tripping you up. If all you want is protection against a Fappening, you don't need steganography; just encrypt images to binary blobs and you're done. Why steal? Imagine a scenario: you drove to Canada from the US for a weekend, and when you're returning, a polite TSA officer asks for your phone and plugs it into a gizmo. The gizmo displays some blinkenlights, beeps, and the polite officer tells you that your phone is likely to have hidden encrypted information, and would you mind stepping inside that office to have a conversation about it? Encrypting your images has obvious benefits, but what exactly do you gain by keeping them inside other images as opposed to random binary files?
1Nanashi9y
I specifically outlined the three primary attack types: fusking, stolen-phone, targeted attacks. In that scenario, I would hope the "beyond a reasonable doubt" standard would apply (which this protocol passes). But if we're assuming an evil government that doesn't stick to that standard, the same hypothetical gizmo can be used to detect any encrypted data. Convenience, a deterrent against attacks, and moderate protection. * Convenience: the iPhone doesn't provide any sort of file system for you to store random binary files, and no supported protocol by which to transmit them anywhere. It does, however, have a very robust photo storage and transmission system and GUI. * Deterrent: In low-threat situations where potential attackers only have visual access to your images, there are no visual methods by which to distinguish your decoy pictures from normal pictures, and therefore make you a target. * Moderate protection: Any further compression or alteration of the decoy image will mung the data. Most (not all) means of transmitting images from an iPhone (social networking apps, email apps, online image storage services, etc.) will compress the image before sending/storing, which as mentioned will mung the encrypted data. Obviously this should not be relied on because there are means (albeit less convenient) of transmitting the data from your phone without compression.
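The "further compression will mung the data" point above is easy to model. This is a toy sketch of my own, not real JPEG math: lossy recompression re-quantizes pixel values, which wipes out the low bits carrying the payload.

```python
def recompress(pixels, step=4):
    """Toy model of lossy recompression: quantize each channel to `step`.

    Real JPEG quantization happens in the DCT domain, but the effect on an
    LSB payload is the same: the low bits no longer survive the round trip.
    """
    return [min(255, round(p / step) * step) for p in pixels]
```

After this round trip every channel is a multiple of 4, so any payload stored in the two low bits is gone, which is why social networks and email apps that recompress images incidentally destroy the hidden data.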
-1Lumifer9y
No need for hope here. "Beyond a reasonable doubt" is a legal standard that applies to evidence presented in criminal prosecutions. It does not apply to investigations or, for example, things like being put on the no-fly list. Or the "next target for the drone assassination" list. Moreover, at a border crossing the Fourth Amendment basically does not apply, too. A border control official can search all your belongings including your electronic devices without needing to show any cause, never mind about "reasonable doubt". At the border, TSA can trawl through your laptop or phone at will.
1Nanashi9y
Relevant quote: "[I]f we're assuming an evil government that doesn't stick to that standard, the same hypothetical gizmo can be used to detect any encrypted data."
1khafra9y
It's super-easy to spot in a histogram, so much so that there's ongoing research into making it less detectable.
2Nanashi9y
Yes. Without the password, even a skilled attacker cannot confirm the presence of any metadata.
3ChristianKl9y
What do you mean by "confirm"? Can an attacker show that the image isn't of the type produced by the normal photo app?
4Nornagest9y
I don't think that attack is practical, as long as Decoy leaves the metadata alone and works only on the image data. You'd need to reproduce the inputs to a particular implementation of the image encoding exactly, which is impossible unless you're snooping the raw data -- my phone camera produces images in JPEG format (high quality, but it's still lossy compression) and does the conversion before the raw image data even leaves RAM. If you're dealing with images originating off the device, things get both easier and more difficult. Easier because there will typically be unchanged images in the wild to compare against; more difficult because there will typically be several different copies of an image floating around, and I don't think it's practical to reconstruct every possible chain of encodings. Many popular image-hosting sites, for example, reencode everything they get their grubby little paws on. Send an image as a text, that's another reencoding. And so forth. As I've mentioned elsewhere, though, decoy images may be statistically distinguishable from an untouched JPEG even if you can't conclusively match it to an origin or e.g. validate against its EXIF tags -- though I could be proven wrong here with the right analysis, and I'd like to be.
2Nanashi9y
Your first paragraph nails it. Unless your phone is both jailbroken and seriously compromised, there is no means of viewing the "original" version of either picture. Also, re: the second paragraph: the app forces you to take a picture from your device to use as the "decoy"; it will not allow you to use an off-device image. (You CAN use an off-device image as the hidden picture.) As for the statistical analysis, it's mostly irrelevant. The encoding algorithm is both reversible and published, so you can extract "Decoy data" from ANY picture that you find, Decoy or no. The only thing that will confirm it one way or the other is a successful decryption. The best you could do is say, "Based on certain telltales, there's a 10% chance this image is a Decoy," or whatever the odds may be. Such an attack has little to no value. If you are an attacker with a specific target, isolating which pictures are decoys removes a trivial amount of entropy from the equation, especially compared to the work of trying to brute-force an AES-encrypted ciphertext.
6Nornagest9y
I understand that, and I understand that it should be impractical to decrypt the hidden image without its key given that strong attacks on AES have not yet been publicly found (key exchange difficulties, which are always considerable, aside). But I think you're being far too nonchalant about detection here. The fact that you can extract "decoy data" from any image is wholly irrelevant; it's the statistical properties of those bits that I'm interested in, and with maybe a million bits of data to play with, the bias per bit does not have to be very high for an attacker to be very confident that some kind of steganography's going on. That does not, of course, prove that it's being used to hide anything interesting from an attacker's point of view; but that was never the point of this objection.
3Nanashi9y
Well, my point has never been that it's impossible for an attacker to be confident that you're using steganography. Rather it's that an attacker cannot prove it with certainty. The "decoy picture" aspect of the protocol is intended to provide social protection and ensure plausible deniability can be maintained. It is not intended as cryptographic protection; that is what the AES is for. "Confidence" is only useful to an attacker when it comes to determining a target. But an attacker has to already be confident in order to perform such a test in the first place, which means you've already been selected as a target. Furthermore, they would have to compromise enough of your security to access your image data. If that happens, then the benefit of gaining further confidence is marginal at best. Incidentally, regarding the specific details of such a detection method: We (and the attacker) already know that the distribution of base64 characters in an AES-encrypted ciphertext is approximately random and follows no discernible pattern. We also know that the ciphertext is encoded into the last 2 bits of each 8-bit pixel. So, we can, with X amount of confidence, show that an image is not a Decoy if we extract the last 2 bits of each pixel and discover the resulting data is non-randomly distributed. However, because it is possible for normal, non-Decoy, compressed JPEGs to exhibit a random distribution of the data in the last 2 bits of each pixel, the presence of randomness does not confirm that an image is a Decoy. The only viable attack here would be to pull images which are "visually similar" (a trivial task by simply using Google image search), reduce them to the same size, compress them heavily, and then examine the last 2 bits of each of their pixels. If there is a significant difference in the randomness of the control images vs. the randomness of the suspected image, you could then suggest with X% confidence that the suspected image has been tampered with. However, because it is possible for an image to be tampered with and yet NOT be a Decoy image, even then you could still not, with any legitimate amount of confidence, use such a test to state that an image is a Decoy.
0Nanashi9y
--moved to previous comment
1ChristianKl9y
If you would put a probability on it, how likely would you expect a proper security audit to prove you wrong?
2Nanashi9y
.01%
3itaibn09y
How much money are you willing to bet on that? If the amount is less than $50,000, I suggest you just offer it all as prize to whoever proves you wrong. The value to your reputation will be more than $5, and due to transaction costs people are unlikely to bet with you directly with less than $5 to gain.
2Nanashi9y
I'd be willing to bet 50% of the market value of a feasible distinguishing-attack against AES. Under the condition that whoever proves me wrong discloses their method to me and only me. In other words: a shitload. Such an attack would be far more valuable than any sum I'd possibly be able to offer.
1Nornagest9y
Wrong on what count? I intended that sentence to refer only to the last paragraph of my post, and I'd expect that to be very implementation-dependent. Generally speaking, the higher the compression ratio the more perfectly random I'd expect the low bits to be -- but even at low ratios I'd expect them to be pretty noisy. I'm fairly confident that some JPEG implementations would leave distinguishable patterns when fed some inputs, but I don't have any good way of knowing how many or how easily distinguishable. To take a shot in the dark, I'm guessing there's maybe a 30% chance that an arbitrarily chosen implementation with arbitrarily chosen parameters would be easily checked in this way? That's mostly model uncertainty, though, so my error bars are pretty wide. If we exclude that sort of statistical analysis, I'd estimate on the order of a 10 or 20% chance that Decoy images are distinguishable as such by examining metadata or other non-image traces -- but that comes almost entirely from the fact that I haven't read Nanashi's code, I'm not a JPEG expert, and security is hard. A properly done implementation should not be vulnerable to such an attack; I just don't know if this is properly done.
1Nanashi9y
"Confirm" meaning an attacker cannot demonstrate with ~100% certainty that the image isn't of the type that could normally be found on the camera roll.

For grins:

-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

Information security is a pretty big passion of mine; I don't think someone needs to have "something to hide" in order to make use of digital signing, encryption, etc. Another passion of mine is making things easier for other people to do. I've written a couple of tools that I think can be useful for the LW crowd.

Online PGP Signature: This is an online javascript-based tool which allows you to sign messages using your PGP private key. I love the idea of PGP-signed messages (I remember someo...

If you are passionate about computer security, you know the standard advice to NOT use unaudited tools from unknown sources.

"Just look at my code" is not a good answer since people who can usefully look at your code already have a large variety of well-known tools at their disposal. Your target is "normal" people and to them your code is gibberish anyway.

4Nanashi9y
Actually, my target audience isn't "normal" people, since normal people don't have a PGP private key and probably don't even know what PGP is. I said before, somewhere in the vast miasma of comments, that this is a tool I wrote for my own use that I wanted to share with the LW crowd. (Note that there is not a PGP key generator or signature validator included with the tool.) As for its value? Well, there is at least one person capable of usefully looking at my code who finds value in this tool: me. If no one else finds it helpful, that's okay; the effort required to upload and share the app was pretty trivial. If other people DO find it helpful, then great, I've generated a bit of net utility for very little effort.
2Lumifer9y
Value can be negative if people think something is secure when it isn't (that's not a claim that your software is insecure, just a general observation).
-1Nanashi9y
I'm not going to lose any sleep over that, at least with regard to the PGP tool.
0Nanashi9y
Since someone seemed to take issue with this, I'll clarify: all the potential security flaws that we've discussed here operate under the assumption that I maliciously attempted to subvert the stated purpose of this tool. Since I am ~100% confident that I did not, in fact, do this, I am not too concerned.
-3Lumifer9y
This is not true at all. Security flaws of a crypto tool are bugs which make it less secure than expected and which the author is typically not aware of. Software with a hidden malicious load is just a trojan (or malware in general).
4Nanashi9y
Note that I said "all the potential security flaws we've discussed here". Not, "any possible security flaw." This is precisely why I've been annoyed by the direction this thread has taken. If someone wants to talk about potential flaws specific to this tool, I'm all ears. But instead it's mostly been a discussion about all the different ways I could possibly slip a Trojan into this tool.
1Lumifer9y
I don't believe I ever said anything like that.
2Nanashi9y
You didn't. Sorry, I should have clarified. When I said "this thread" I meant "the comments in general" and not your particular reply.

This is not a program I wrote, but while we're posting things: I have a guide on my blog to setting up Automator on a Mac to send out an email on login.