To be fair, I think everyone in a position to actually control the first superintelligent AGIs will likely already be aware of most of the realistic catastrophic scenarios that humans could preemptively conceive of. The most sophisticated governments and tech companies devote significant resources to assessing risks and building highly detailed models of disaster scenarios.

And conversely, even if your scenarios became widely discussed on social media and news platforms, something like 99.9999999% of the potential audience for that information has absolutely no power to make them come true, even if they devoted their lives to it.

If anything, I would think that openly discussing realistic scenarios that could lead to AI-induced human extinction would do far more good than harm: it could raise public awareness and eventually manifest in preventative legislation. Make no mistake: unless you have one of the greatest minds of our time, I'd bet my next paycheck that you're not the only one who has considered the scenarios you're referring to. So keeping them to yourself would only reduce awareness of risks that already exist, and leave those ideas solely in the hands of the people who understand AI (including, and especially, the people who intend to wreak havoc on the world).