There's a lot of privacy-related writing that looks like:

  1. Introduce a risk due to insufficient privacy.
  2. Describe things you can do to increase your privacy.

For example, I recently came across "It's time to worry about Internet privacy again" (via LW), which talks about the risk of AI-accelerated scamming. The idea is that the more information about you is accessible to scammers, the easier it is for them to craft messages that fool you into doing the wrong thing. It then recommends things like avoiding streaming video and using Linux and Signal.

I have two big issues with this kind of writing. It rarely:

  • Connects the privacy recommendations to the risks, instead treating "privacy" as a single scale where your only options are to move towards "more privacy" or "less privacy". Choices have different impacts against different possible threats: if Netflix knows that I like Deep Space Nine, how does that information even get into a scammer's AI tooling?

  • Engages with the reasons someone wouldn't already be doing the recommended things, or considers whether the privacy gains are worth the downsides. If I switched from a Mac laptop to Linux I expect I'd have a worse experience (battery life, trackpad, suspend, etc.), and the privacy gains from avoiding Apple telemetry seem unlikely to outweigh that. Similarly, being able to look back at old instant message conversations has been very useful many times, and while I'm willing to use Signal with friends who want it I'd much rather use messaging apps that care about maintaining these logs.

One thing that makes this kind of reasoning especially tricky is that the benefits are far from linear. For most threats you might be worried about (a scammer building a profile on you, say, or someone connecting your pseudonym to your real identity) going entirely dark gets you much more than twice the value of going halfway there. So a privacy mitigation might be worth the hassle for one person because it prevents a large portion of their potential exposure, but not worth it for someone else who has many different exposures and realistically isn't going to do much to reduce most of them.
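To see why, here's a toy model (my own illustration, with made-up numbers, not something from the post): suppose a scammer succeeds if any one of your independent "exposure channels" leaks what they need. Closing half of them barely moves the needle; nearly all the value is in closing the last few.

  # Toy model: a scammer succeeds if *any* open channel leaks the
  # information they need. The channel counts and per-channel leak
  # probability are invented for illustration.
  def p_exposed(open_channels: int, p_leak: float = 0.5) -> float:
      """Probability that at least one open channel leaks."""
      return 1 - (1 - p_leak) ** open_channels

  for n in [10, 5, 1, 0]:
      print(f"{n:>2} channels open: {p_exposed(n):.1%} chance of exposure")

  # Output:
  # 10 channels open: 99.9% chance of exposure
  #  5 channels open: 96.9% chance of exposure
  #  1 channels open: 50.0% chance of exposure
  #  0 channels open: 0.0% chance of exposure

Going from ten open channels to five only takes you from a 99.9% chance of exposure to 96.9%: half the work for almost none of the benefit.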

Another issue that rarely gets enough attention is that you can't trust the future to keep things private. The post recommended, in addition to avoiding streaming and using Linux and Signal, posting under a pseudonym. While this may be enough to keep people from connecting your writing back to you right now, even a relatively simple stylometric tool was able to identify many people's HN "alt" accounts, and this kind of highly scalable automated analysis will likely improve a lot over time.
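To give a sense of how simple "relatively simple" can be: the sketch below compares accounts by their character n-gram profiles, which pick up punctuation habits, contractions, and word endings that tend to persist even when someone tries to write differently. This is my own minimal illustration (invented accounts and text, and it assumes scikit-learn), not the actual tool from the HN study.

  # Minimal stylometry sketch: represent each account's comment history
  # as character n-gram TF-IDF vectors and compare them pairwise.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.metrics.pairwise import cosine_similarity

  # Invented comment histories, one concatenated string per account.
  accounts = {
      "main": "I think the tradeoff here is mostly battery life, tbh.",
      "alt": "Honestly the tradeoff is battery life, tbh.",
      "other": "Wrong!!! You are all ignoring the REAL issue here.",
  }

  names = list(accounts)
  vectors = TfidfVectorizer(analyzer="char", ngram_range=(3, 5)).fit_transform(
      [accounts[n] for n in names])

  # High cosine similarity between two accounts suggests a shared author.
  sims = cosine_similarity(vectors)
  for i in range(len(names)):
      for j in range(i + 1, len(names)):
          print(f"{names[i]} vs {names[j]}: {sims[i, j]:.2f}")

A real attacker would use longer histories and a better model, but the point stands: everything you've written under your real name is material for linking a pseudonym back to you.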

Personally, I've gone a pretty different way: I've generally given up on privacy. After considering the ongoing effort necessary to get actually useful reductions in the level of risk from realistic threats, I think most "private option" recommendations are very far from worth it for me. Instead, I default to public: I'll include even somewhat personal details in blogging if they seem like they'd be helpful to others, draft company-internal emails as if they may someday be leaked, and try hard to make choices such that if they became public I'd be comfortable standing behind my actions. This approach isn't for everyone, and I'm glad there are people working on various privacy technologies, but I think it would be a better fit for a lot more people than currently practice it.

Comment via: facebook, mastodon

1 comment:

  > even a relatively simple stylometric tool was able to identify many people's HN "alt" accounts, and this kind of highly scalable automated analysis will likely improve a lot over time.

True, but there may still be some value in deniability. Even if a tool says that accounts "Viliam" and "creepypenguin2020" are the same person with probability 98%, a company might hesitate to fire me over the latter (because of the possible legal risk) if I insist that it is not my account.