If we're dead-serious about infohazards, we can't just be thinking in terms of 'information that might accidentally become known to others through naive LessWrong newbies sharing it on Twitter'.
Rather, we need to be thinking in terms of 'how could we actually prevent the military intelligence analysts of rival superpowers from being able to access this information'?
My personal hunch is that there are very few ways we could set up sites, security protocols, and vetting methods that would be sufficient to prevent access by a determined government. Which would mean, in practice, that we'd be sharing our infohazards only with the most intelligent, capable, and dangerous agents and organizations out there.
Which is not to say we shouldn't try to be very cautious about this issue. Just that we shouldn't be naive about what the American NSA, Russian GRU, or Chinese MSS would be capable of.
Bluntly: if you write it on LessWrong or the Alignment Forum, or send it to a particular known person, governments will get a copy if they care to. Cybersecurity against state actors is really, really, really hard. LessWrong is not capable of state-level cyberdefense.
If you must write it at all: do so on hardware that has been rendered physically unable to connect to the internet, distribute it only on paper, and discuss it only in areas without microphones. Consider authoring only on paper in the first place. Note that physical compromise of your home, workplace, and hardware is also a threat in this scenario.
(I doubt they care much, but this is basically what it takes if they do. Fortunately I think LW posters are very unlikely to be working with such high-grade secrets.)
Yep, we are definitely not capable of state-level or even "determined individual" level of cyberdefense.
When walls don't work, can we use ofbucsation? I have no clue about this, but wouldn't it be much easier to use pbqrjbeqf for the central wurds necessary for sensicle discussion, so that it wouldn't be sreachalbe, and then have your conversations with people on FB or something?
It would be easily found if written on the same devices or accounts used for LW, but that sounds easier to work around than literally only using paper?
No, this is also easy to work around; language models are good at deobfuscation and you could probably even do it with edit-distance techniques. Nor do you have enough volume of discussion to hide from humans literally just reading all of it; nor is Facebook secure against state actors, nor is your computer secure. See also Security Mindset and Ordinary Paranoia.
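To make that concrete, here is a toy sketch (invented term list and thresholds, standard library only, not anyone's actual pipeline) of how cheaply misspelling- or ROT13-style codewords fall to fuzzy matching, assuming the reader already guesses a short list of sensitive terms to look for:

```python
# Illustrative only: undoing "codeword"-style obfuscation with the
# standard library, given a guessed vocabulary of sensitive terms.
import codecs
import difflib

SENSITIVE_TERMS = ["obfuscation", "codewords", "searchable", "capabilities"]

def deobfuscate(token: str) -> str | None:
    """Map a garbled or ROT13'd token back to a known sensitive term."""
    # Try the token as written, then its ROT13 decoding.
    candidates = [token.lower(), codecs.decode(token.lower(), "rot_13")]
    for cand in candidates:
        match = difflib.get_close_matches(cand, SENSITIVE_TERMS, n=1, cutoff=0.7)
        if match:
            return match[0]
    return None

for garbled in ["ofbucsation", "sreachalbe", "pbqrjbeqf"]:
    print(f"{garbled!r} -> {deobfuscate(garbled)!r}")
# 'ofbucsation' -> 'obfuscation'
# 'sreachalbe'  -> 'searchable'
# 'pbqrjbeqf'   -> 'codewords'
```

And that is the lazy version; a state actor has far better tooling than a fuzzy-match loop.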
Yes! The way I'd like it is if LW had a "research group" feature that anyone could start, and you could post privately to your research group.
I like it. This is another example of AI Alignment projects needing more shared infrastructure.
This looks like something that would also be useful for alignment orgs, if they want to organize their research in silos, as Yudkowsky often suggests (assuming they haven't already implemented systems like this one).
I've been thinking along similar lines, but instinctively, without a lot of reflection, I'm concerned about negative social effects of having an explicit community-wide list of "trusted people".
"Exfohazard" is a quicker way to say "information that should not be leaked". AI capabilities has progressed on seemingly-trivial breakthroughs, and now we have shorter timelines.
The more people who know and understand the "exfohazard" concept, the safer we are from AI risk.
I have the same sentiment as you. I wrote about this here: Has private AGI research made independent safety research ineffective already? What should we do about this?
(edit: i mean exfohazard, not infohazard)
to me, turning my thoughts into posts that i then publish on my blog and sometimes lesswrong serves several purposes.
however, i've come to increasingly want to write and publish posts which i've determined (either on my own or with the advice of trusted peers) to be potentially infohazardous, notably with regard to helping AI capability progress.
on one hand, there is no post of mine that i wouldn't trust, say, yudkowsky to read; on the other hand, i can't just DM him and everyone else i trust a link to an unlisted post every time i make one.
it would be nice to have a platform — or maybe a lesswrong feature — which lets me choose which persons or groups can read a post, with maybe a little ⚠ sign next to its title.
note that such a platform/feature would need something more complex than just a binary "trusted" flag: just because i can make a post that the Important People can read doesn't mean i should be trusted to read everything else that they can read; and there might be people whom i trust to read some of my posts but not others.
maybe trusted recipients could be grouped by orgs — such as "i trust MIRI" or "i trust The Standard List Of Trusted Persons". maybe something like the ability to post on the alignment forum is a reasonable proxy for "trustable person"?
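to make the shape of this concrete, here's a rough sketch (all names invented, not a claim about how lesswrong actually works) of per-post, per-group visibility rather than a single site-wide "trusted" bit:

```python
# rough sketch only -- hypothetical data model, not lesswrong's real one.
# the point: visibility is scoped per post and per group, so being
# trusted for one silo doesn't grant access to every restricted post.
from dataclasses import dataclass, field

@dataclass
class Group:
    name: str                      # e.g. "MIRI", "Standard List of Trusted Persons"
    members: set[str] = field(default_factory=set)

@dataclass
class Post:
    author: str
    title: str
    allowed_groups: set[str] = field(default_factory=set)  # empty = public

def can_read(reader: str, post: Post, groups: dict[str, Group]) -> bool:
    """a reader sees a restricted post only via one of its allowed groups."""
    if not post.allowed_groups:        # public post
        return True
    if reader == post.author:
        return True
    return any(reader in groups[g].members
               for g in post.allowed_groups if g in groups)

# usage: a post visible to one org, but not to someone who is trusted
# elsewhere -- trust is scoped, not binary.
groups = {"MIRI": Group("MIRI", {"alice"}),
          "TrustedList": Group("TrustedList", {"bob"})}
post = Post(author="carol", title="⚠ capability-adjacent idea",
            allowed_groups={"MIRI"})
print(can_read("alice", post, groups))   # True
print(can_read("bob", post, groups))     # False -- trusted elsewhere, not here
```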
i am aware that this seems hard to figure out, let alone implement. perhaps there is a much easier alternative i'm not thinking about; for the moment, i'll just stick to making unlisted posts and sending them to the very small intersection of people i trust with infohazards and people for whom it's socially acceptable for me to DM links to new posts of mine.