ike comments on Open thread, Feb. 9 - Feb. 15, 2015 - Less Wrong

Post author: MrMind 09 February 2015 09:12AM


Comment author: gwern 10 February 2015 07:20:37PM * 8 points

> That's both wasteful of bandwidth, since everything is downloaded twice, and anything not public has to be done manually with cookies.

But it's dead-simple and robust compared to some sort of in-browser extension which saves the rendered DOM in the background.

> He also can't prove that the content came from a site, even if it used HTTPS.

I've never needed to prove that. My concern is usually having a copy at all, and the IA is trusted enough that it's a de facto proof.

But it's possible:

  1. Find a download tool which will save the raw bit-level TCP/IP packet stream when you download a page, writing it to an appropriate format like a PCAP file (maybe Wireshark supports this, or it could be hooked into something like wget?); this preserves the HTTPS encryption and allows proving that the content came signed with the domain's key (not that this means much), since the crypto can be checked for validity and the stream replayed.
  2. This doesn't prove the content came from the domain at a particular time, but you can get timestamping from a trusted timestamping service such as the Bitcoin blockchain: as soon as the PCAP file is closed and the webpage downloaded, take its hash and send a satoshi to the corresponding Bitcoin address. Transaction fees mean this could get expensive to do per web page, but there are a lot of ways to optimize (such as batching an entire day's worth of downloads into a single tarball archive and timestamping that; big savings, but a coarser timestamp; there are other schemes). Now you can prove the content came from someone holding a particular HTTPS public key, and that you downloaded it on or before a particular hour and date (roughly when the Bitcoin block containing your transaction was mined).
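The hashing half of step 2 is trivial; a sketch (the tarball path is a placeholder, and the step that actually commits the digest to the Bitcoin blockchain is left out):

```python
import hashlib

def archive_digest(path):
    """Compute the SHA-256 digest of an archive file, streamed in
    1 MiB chunks so arbitrarily large tarballs don't blow up memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# e.g. digest = archive_digest("2015-02-10-downloads.tar")
# ...then derive a Bitcoin address from the digest and send it a satoshi.
```

Anyone re-verifying later just recomputes the digest over the same tarball and checks it against the address the satoshi went to.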

If anyone doubts you, they can take the relevant tarball, verify the hash and timestamp to when you say it was, then extract the relevant PCAP, verify the encryption, and then the displayed content.

Good luck.

Comment author: ike 10 February 2015 09:23:35PM 1 point

> But it's simple and robust compared to some sort of in-browser extension which saves the rendered DOM in the background.

It's not robust for saving things like private chats, or anything else you need to be logged in to see. If I read your page correctly, you'd need to do each of those manually, unless the program can automatically take the cookies from a browser; and even then, not everything gets saved. I'd want something that saves every element that gets downloaded to my computer.

Also, if the page gets deleted fast, your program may miss it.
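At minimum I'd want the cookie part scripted; something like this sketch, which reuses cookies exported from the browser (assuming a Netscape-format cookies.txt export; the URL here is a placeholder):

```python
import urllib.request
from http.cookiejar import MozillaCookieJar

def fetch_with_browser_cookies(url, cookies_path="cookies.txt"):
    """Fetch a page while reusing cookies exported from the browser
    (Netscape/Mozilla cookies.txt format), so logged-in pages work."""
    jar = MozillaCookieJar(cookies_path)
    jar.load(ignore_discard=True, ignore_expires=True)
    opener = urllib.request.build_opener(
        urllib.request.HTTPCookieProcessor(jar))
    return opener.open(url).read()

# e.g. html = fetch_with_browser_cookies("https://example.com/private/thread")
```

But this still only saves what one request returns, not every element the browser actually rendered, which is the real gap.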

> I've never needed to prove that. My concern is usually having a copy at all, and the IA is trusted enough that it's a de facto proof.

I anticipate needing that, or at least finding it useful. Did you know the Internet Archive will delete archives at a website's request? I'm dealing with a specific domain where people are forging screenshots to prove their side, and something like this would come in handy, I think. Some of the sites are also deleting posts fast, which makes it hard to archive on a schedule.

> find a download tool which will save the raw bit-level TCP/IP stream when you download a page and save it to an appropriate format like a PCAP file (maybe Wireshark supports this or it could be hooked into something like wget?); this preserves the HTTPS encryption and allows proving that the content came signed with the domain's key (not that this means much), since the crypto can be checked to be valid and the stream replayed.

Why doesn't it mean much?

For timestamping, doesn't the TCP protocol have timestamps in it, or are those not signed or something? Also, many pages have timestamps embedded in them.

We do have different uses for this, obviously.

Would a proxy be easier to set up, and if so, how would I do that to cache all results?

If there was a program that functioned like I wanted it to, would you prefer it over your solution?

Comment author: gwern 11 February 2015 04:48:57AM 2 points

> It's not robust for saving things like private chats, or anything else you need to be logged in to see. If I read your page correctly you'd need to do each of those manually.

There are always edge-cases. A simple version of my solution can be coded up and fully implemented in an hour or less by a normal programmer (the hardest part is writing the SQL line for pulling out URLs from Firefox); the full version (a bot or daemon) could probably be done in only a few hours more.
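For the curious, that SQL line is roughly the following (a sketch: Firefox stores visit history in places.sqlite, table moz_places, with last_visit_date in microseconds since the Unix epoch; the surrounding daemon logic is left out):

```python
import sqlite3

def recent_urls(places_db, since_us):
    """Pull visited URLs out of Firefox's places.sqlite history,
    newest-last. since_us is microseconds since the Unix epoch,
    the unit moz_places uses for last_visit_date."""
    con = sqlite3.connect(places_db)
    try:
        rows = con.execute(
            "SELECT url FROM moz_places "
            "WHERE last_visit_date > ? ORDER BY last_visit_date",
            (since_us,))
        return [url for (url,) in rows]
    finally:
        con.close()
```

A cron job feeding those URLs to wget (or an archiving service) is the whole "simple version".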

Your desired solution, on the other hand, requires intimate familiarity with browser extensions and internals (if you want to save dynamic content and fancy things like Javascript-based chats, so much for trying to leverage existing solutions like the Mozilla Archive Format extension!).

Pareto.

> Did you know the Internet Archive will delete based on a request by the website?

My understanding is that in all cases, these deletions are really more 'marking private', and if it's done via robots.txt, well, one day that robots.txt may be gone.

> Some of the sites are also deleting posts fast, which makes it hard to archive on a schedule.

Note the on-demand archiving services used by my archive bot, discussed on my page...

> For timestamping, doesn't the TCP protocol have timestamps in it, or are those not signed or something?

I'm not sure. It's possible that the packet headers carry timestamps while the encrypted content does not, in which case you don't get provable timestamping: the HTTPS encryption can be verified, but someone could have modified the packets to show any timestamp they please, because the timestamps sit 'around' the encryption, not 'in' it. If the encrypted content does include timestamps, then maybe you don't need explicit trusted timestamping; but if it doesn't (or you want to handle other data sources which don't happen to have timestamps built in just right), then the Bitcoin solution would work.

> Also, many pages have timestamps embedded in them.

Now who's satisficing.

> If there was a program that functioned like I wanted it to, would you prefer it over your solution?

I would consider it, but I would be somewhat reluctant to switch because I wouldn't trust the tool to not break horribly at some point.