When it comes to metadata and plain-text extraction, it's worth noting that metadata can be used both to verify documents and to expose whistleblowers. If a journalist can verify the authenticity of emails because they have access to the metadata, that's useful.
- guidelines for the latest hard-to-censor social media platforms
- to publish the torrent link, maybe the raw docs, and social media discussions
- guidelines must be per-country and include legal considerations; always use social media based in a country different from the country where the leak happened.
The Session messenger is probably better than country-specific social media.
country-wise torrents (not sure if this is needed)
- torrents proposed above are illegal in all countries; instead we can legally circulate country A's secrets via torrent within country B and legally circulate country B's secrets via torrent within country A. Only getting the info past the border is illegal; for that we again need SecureDrop or a hard-disk dead drop if any country or org seals its geographic borders.
The US does have the First Amendment. That currently means that all the relevant information about AI labs is legal to share. It's possible to have a legal regime where sharing AI model weights gets legally restricted, but for the sake of AI safety I don't think we want OpenAI researchers to leak the model weights of powerful models.
The main category of information that's currently forbidden from being shared legally in the US is child pornography, but whistleblowing is not about intentionally sharing child pornography. When it comes to child pornography, the right question isn't "How can we host it through a jurisdiction where it's legal?" but how to avoid sharing it at all.
While sharing all the Bitcoin blocks involves sharing child pornography, nobody went after Bitcoin miners for it. People who develop cryptography and don't intend to share child pornography generally have not been prosecuted.
Torrents are not a good technology for censorship-resistant hosting. Technology like Veilid, where a data set that gets queried by many people automatically gets distributed over more of the network, is better because it prevents the people who host the torrents from being DDoSed.
If you just want to host plaintext, blockchain technology like ArDrive also exists. You need to pay ~$12 per GB, but if you do, you get permanent storage that's nearly impossible to censor.
If a journalist can verify the authenticity of emails because they have access to the metadata, that's useful.
Makes sense! Will think about this.
The Session messenger is probably better than country-specific social media.
I'm explicitly looking for social media for that step, as common knowledge needs to be established for any political action that follows (such as voting in a new leader). Messaging can't replace the function of social media, I think.
The US does have the First Amendment. That currently means that all the relevant information about AI labs is legal to share. It's possible to have a legal regime where sharing AI model weights gets legally restricted, but for the sake of AI safety I don't think we want OpenAI researchers to leak the model weights of powerful models.
My proposed system doesn't assume legality and can be used to leak AI model weights or anything invented by an ASI, such as bioweapon sequences and lab protocols to manufacture bioweapons. It can also be used to spread child porn and calls to violence.
I agree that the US having the First Amendment makes this system easier to implement in the US, but the general idea is that laws can change based on incentives, and this system works regardless of the law. For instance, intelligence agencies may have incentives to classify certain types of information and/or place employees under security clearance. This system will allow leaking even such information, for example video recordings of which authority or employee said what.
because it prevents the people who host the torrents from being DDoSed.
Yes, torrents can be DDoSed, thanks for reminding me! I knew this but had recently forgotten. In general I'm optimistic about proof-of-work captchas as a way to ensure anonymous users can share information without spamming each other. But yes, the details will have to be worked out.
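To make that concrete, here is a minimal hashcash-style proof-of-work sketch in Python. The difficulty value, message format, and function names are my own illustrative assumptions rather than a worked-out design: the submitter pays a CPU cost to find a nonce, and a receiving server can verify it cheaply before accepting an anonymous submission.

```python
# Minimal hashcash-style proof-of-work sketch (illustrative only; the
# difficulty and message format are assumptions, not a worked-out spec).
import hashlib

DIFFICULTY_BITS = 20  # ~1M hashes on average per submission; tune to make spam expensive


def _leading_zero_bits(digest: bytes) -> int:
    """Count leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits


def solve_pow(message: bytes, difficulty: int = DIFFICULTY_BITS) -> bytes:
    """Submitter side: find a nonce so sha256(message || nonce) has enough leading zero bits."""
    counter = 0
    while True:
        nonce = counter.to_bytes(8, "big")
        digest = hashlib.sha256(message + nonce).digest()
        if _leading_zero_bits(digest) >= difficulty:
            return nonce
        counter += 1


def verify_pow(message: bytes, nonce: bytes, difficulty: int = DIFFICULTY_BITS) -> bool:
    """Server side: cheap check before accepting an anonymous submission."""
    digest = hashlib.sha256(message + nonce).digest()
    return _leading_zero_bits(digest) >= difficulty


if __name__ == "__main__":
    msg = b"leak submission: example payload"
    nonce = solve_pow(msg)
    assert verify_pow(msg, nonce)
    print("nonce found:", nonce.hex())
```

Raising DIFFICULTY_BITS makes each submission more expensive to produce while verification stays a single hash, which is the asymmetry that makes spam costly for the sender but cheap to filter for the receiver.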
If you just want to host plaintext, blockchain technology like ArDrive also exists.
I haven't looked into the ArDrive codebase in particular, but in general I'm not very optimistic about any blockchain tech whose software is too complex, as the developers can then be co-opted politically. Therefore I don't see why the censorship resistance of ArDrive is higher than that of a forum like 4chan. ArDrive can also be used, no doubt; I just don't want people to get the false impression that ArDrive is guaranteed to still be around 10 years from now, for example.
The US does not have laws that forbid people without a security clearance from publishing classified material. The UK is a country that has such laws, but in the US the First Amendment prevents that.
I don't think that choosing a jurisdiction in the hope that it will protect you is a good strategy. If you want to host leaks from the US in China, it's possible that China offers to suppress that information as part of a deal.
4chan has a single point of failure. If the NSA were motivated enough to burn some of its 0-days, taking it offline wouldn't be hard.
Taking down a decentralized system with an incentive structure like ArDrive's is significantly harder.
Attacking ArDrive is also likely politically more costly, as it breaks other uses of it. The people with NFTs that store data on ArDrive can pay lobbyists to defend it.
Just convincing the developers is not enough. You also need the patch they created to be accepted by the network, and it's possible for the system to be forked if different network participants want different things.
Torrents are also bad for privacy: everybody can see the IP addresses of all the other people who subscribe to a torrent.
For privacy, onion routing is great. Tor uses it; however, Tor doesn't have a data storage layer.
Veilid and the network on which Session runs use onion routing as well and have a data storage layer.
In the case of Veilid, you get the nice property that the more people want to download a certain piece of content, the more nodes in the network store that information.
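As a toy illustration of that property (this is not Veilid's actual algorithm, just a simulation of the cache-on-retrieval idea described above), content that gets requested often ends up held by many more nodes than content that is rarely requested:

```python
# Toy illustration (not Veilid's real protocol): if every node that serves a
# request also keeps a copy, popular content is automatically replicated more.
import random

NODES = [set() for _ in range(100)]   # each node's local cache of content IDs


def fetch(content_id: str) -> None:
    relay = random.choice(NODES)      # a requester lands on some node in the network
    relay.add(content_id)             # the relaying node keeps a copy


for _ in range(1000):
    fetch("popular-leak")             # requested often
for _ in range(5):
    fetch("obscure-leak")             # requested rarely

print("nodes holding popular-leak:", sum("popular-leak" in n for n in NODES))
print("nodes holding obscure-leak:", sum("obscure-leak" in n for n in NODES))
```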
As far as creating public knowledge goes, I do think that Discord servers and Telegram chats currently serve as social media.
Update: I thought about this more and I think, yes, it should be possible to just skip the torrent step. I have updated the post with this change.
Post on SecureDrop servers, circulate via manual or automated resending of messages. For people with technical skills and enough free time to run servers as a part-time job.
Post on an nginx clearnet server, circulate via automated web crawlers. For people with technical skills but not necessarily a lot of free time.
Post on high-attention social media platforms, circulate via people using DMs and the discovery features of those platforms. For all people.
A key attack point here is the first person who posts this on clearnet. Hence I was hoping for it to be circulated by automated bots before any human reads it on clearnet.
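A minimal sketch of what such an automated mirroring bot might look like, assuming a placeholder source URL and a local directory served by the mirror's own nginx; the URL, paths, and polling interval are all illustrative, not part of any existing tool.

```python
# Hypothetical mirroring bot: polls a clearnet leak page and re-hosts a copy
# under a content-addressed filename, so circulation doesn't depend on the
# original poster staying online. URL, directory, and interval are placeholders.
import hashlib
import pathlib
import time
import urllib.request

SOURCE_URL = "https://example.org/leak/index.html"   # placeholder source page
MIRROR_DIR = pathlib.Path("/var/www/mirror")         # directory served by this mirror's nginx
POLL_SECONDS = 600


def mirror_once() -> None:
    with urllib.request.urlopen(SOURCE_URL, timeout=30) as resp:
        payload = resp.read()
    digest = hashlib.sha256(payload).hexdigest()
    target = MIRROR_DIR / f"{digest}.html"
    if not target.exists():                  # content-addressed: identical copies dedupe themselves
        target.write_bytes(payload)
        print("mirrored new version:", target)


if __name__ == "__main__":
    MIRROR_DIR.mkdir(parents=True, exist_ok=True)
    while True:
        try:
            mirror_once()
        except OSError as err:               # source offline, DNS blocked, etc.
            print("fetch failed:", err)
        time.sleep(POLL_SECONDS)
```

Because the filename is the content hash, independent mirrors converge on identical copies that readers can cross-check against each other.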
The US does not have laws that forbid people without a security clearance from publishing classified material. The UK is a country that has such laws, but in the US the First Amendment prevents that.
Thanks, this is useful info for me. But I also don't think it matters as much? People in the NSA, State Department, etc. will obviously find an excuse to arrest the person instead. There are many historical examples of this.
I don't think that choosing a jurisdiction in the hope that it will protect you is a good strategy. If you want to host leaks from the US in China, it's possible that China offers to suppress that information as part of a deal.
I will likely read more on this. I’m generally less informed on legal matters. Any historical examples you have would be useful.
I agree that in the very specific example of the US and China this might happen. The general idea is to share in a lot of different places, so share it in China and also in lots of other countries.
Attacking ArDrive is also likely politically more costly, as it breaks other uses of it.
I'm currently not very convinced, but I'll have to read more about ArDrive in order to be confident. I currently guess 4chan's owners and developers have more money and public attention, and hence more powerful humans need to be taken down in order to take down 4chan. A zero-day might doxx users, sure; I agree that's possible.
Torrents are also bad for privacy: everybody can see the IP addresses of all the other people who subscribe to a torrent.
Yes I’m aware of this.
One platonic ideal world is to just have 8 billion people operate 8 billion SecureDrop servers, where any information that hits one server and checks out as not spam (the user attached a valid PoW hash) gets copied to every other server. But convincing that many people to run SecureDrop is hard. Torrents are one level less private and secure than this. But yes, I'll think more on whether torrents are good enough or whether a custom solution has to be designed here.
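To illustrate the relay step of that ideal, here is a rough check-then-forward sketch. It reuses the verify_pow check from the earlier proof-of-work sketch (assumed saved as pow_sketch.py); the peer list and the /submit endpoint are invented for illustration, and real spam filtering would need far more than a PoW check.

```python
# Sketch of the flood step: accept a submission only if its proof-of-work
# checks out and we haven't seen it before, then forward it to every known
# peer. Peer URLs and the /submit endpoint are invented for illustration.
import hashlib
import json
import urllib.request

from pow_sketch import verify_pow   # the hashcash check sketched earlier (assumed module name)

PEERS = ["https://peer-a.example/submit", "https://peer-b.example/submit"]  # placeholders
_seen: set[str] = set()


def handle_submission(message: bytes, nonce: bytes) -> bool:
    if not verify_pow(message, nonce):
        return False                 # reject: submitter didn't pay the PoW cost
    digest = hashlib.sha256(message).hexdigest()
    if digest in _seen:
        return False                 # already stored and forwarded; stop the flood here
    _seen.add(digest)
    body = json.dumps({"message": message.hex(), "nonce": nonce.hex()}).encode()
    for peer in PEERS:
        req = urllib.request.Request(peer, data=body,
                                     headers={"Content-Type": "application/json"})
        try:
            urllib.request.urlopen(req, timeout=10)
        except OSError:
            pass                     # peer unreachable; a real system would queue and retry
    return True
```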
Veilid and the network on which Session runs use onion routing as well and have a data storage layer.
In the case of Veilid, you get the nice property that the more people want to download a certain piece of content, the more nodes in the network store that information.
I'll try to read more on Veilid, and also try their app out. Thanks!
As far as creating public knowledge goes, I do think that Discord servers and Telegram chats currently serve as social media.
Yes, this is true as of 2025 for many countries. Which social media platforms are high-attention and also hard to censor varies from country to country.
For instance, in India most people use phone login rather than email login, hence WhatsApp plays much more of a social media role.
Update: This is a living document. Given below is an older version of this document. Click the link for the latest version.
2025-04-13
DISCLAIMER
This document describes how to set up distributed whistleblowing processes that reduce personal risk for everyone involved. Typically, whistleblowing (as with WikiLeaks or the Snowden leaks) incurs significant personal risk. Reducing personal risk may ensure whistleblowing is highly likely to happen whenever an org doesn't have the complete trust of all its members, forcing such orgs to pay a secrecy tax (in Assange's words) relative to orgs that do have the complete trust of their members and/or higher levels of transparency with the broader public.
I am especially interested in enabling whistleblowing on orgs and labs working on intelligence (such as superintelligent AI, BCIs, human genetic engineering, human connectome research, etc.) and on national/international intelligence agencies that may work with them.
Potential problems
Summary of potential solutions
IMPORTANT: Need feedback from people who have actually worked with whistleblowers, to validate all hypotheses listed above.
IMPORTANT: Need to decide whether whistleblowing on important orgs (such as companies/labs researching intelligence, and national/international intelligence agencies) is actually what I want to work on.
Crux: Does this lead to long-term human flourishing or not?