All of Tarnish's Comments + Replies

Answer by Tarnish

As far as I know, there is unfortunately no system for this. I think what people typically do is contact MIRI leadership, but I don't know of MIRI leadership actually putting silent people in touch with other silent people as a result.

hive
Thank you. The best (but still complicated) idea I have as a general solution (besides contacting MIRI) is to set up a website explicitly as a "Schelling point for infohazard communication" and allow people to publish public keys and encrypted messages there. When you think you have an infohazard, you generate a key using a standardized method, with your idea as the seed. This would allow everyone with the same idea to publish messages that only they can read. E.g. Einstein would derive a key from the string "Energy is mass times the speed of light squared." and variations thereof (in different languages), and leave contact information as a message. I don't know if there is any decentralized, encrypted messenger protocol that would allow for that; with one, the website would only have to host the instructions, avoiding the legal consequences of hosting the messages themselves.
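A minimal sketch of the key-derivation step described above, in Python. Everything here is a hypothetical choice, not an existing protocol: the normalization rule, the fixed public salt, and the function name all stand in for whatever the "standardized method" would actually specify. The point is only that anyone applying the same method to the same idea gets the same key material.

```python
import hashlib
import unicodedata


def idea_to_seed(idea: str) -> bytes:
    """Deterministically derive 32 bytes of key material from an idea.

    Anyone who applies the same method to the same idea gets the same
    seed, and so can locate and decrypt each other's messages.
    """
    # Normalize so trivial variations (case, whitespace, unicode form)
    # of the same phrasing map to the same seed.
    normalized = " ".join(unicodedata.normalize("NFKC", idea).casefold().split())
    # A deliberately slow KDF makes brute-forcing candidate ideas against
    # the published messages more expensive. The salt is a hypothetical
    # fixed, public constant of the standardized method.
    return hashlib.pbkdf2_hmac(
        "sha256",
        normalized.encode("utf-8"),
        b"infohazard-schelling-point-v1",
        1_000_000,
    )


# Two phrasings that normalize identically yield the same seed:
a = idea_to_seed("Energy is mass times the speed of light squared.")
b = idea_to_seed("  energy is MASS times the speed of light squared.")
assert a == b
```

The seed could then deterministically generate a keypair whose public half is published next to the encrypted contact info. Note the inherent limit: since anyone can run the same derivation, the scheme is only as strong as an attacker's inability to guess candidate phrasings of the idea.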
Tarnish

Strong arguments of this kind? I sure hope not, that'd make it easier for more people to find insights into how to build an AI that causes doom.

Tarnish

Note that some of the best arguments are of the shape "AI will cause doom because it's not that hard to build the following..." followed by insights about how to build an AI that causes doom. Those arguments are best rederived privately rather than shared publicly, and by asking publicly you're filtering out the strongest arguments you might otherwise get exposed to.

faul_sname
I note that if software developers used that logic for thinking about software security, I expect that almost all software in the security-by-obscurity world would have many holes that would be considered actual negligence in the world we live in.
ABlue
Is there a better way of discovering strong arguments for a non-expert than asking for them publicly?
Tarnish

> Unfortunately, that does not appear to be a stable solution. Even if the US paused its AI development, China or other countries could gain an advantage by accelerating their own work.

Arguing-for-pausing does not need to be a stable solution to help. If it buys time, that's already helpful. If the US pauses AI development but China doesn't, that's still fewer people working on AI that might kill everyone.

niplav
That argument seems plausibly wrong to me.

> Mu. The most basic rationalist precept is to not forcibly impose your values onto another mind.

It is? Last I heard, the two most basic precepts of rationality were:

  1. Epistemic rationality: systematically improving the accuracy of your beliefs.
  2. Instrumental rationality: systematically achieving your values.

(Typically with a note saying "ultimately, when at odds, the latter trumps the former")