
I am not an expert, but I'd like to make a suggestion regarding the strategy. The issue I see with this approach is that policymakers have a very bad track record of listening to technical people (see environmental regulations).

Generally speaking, they will only listen when it is convenient for them (some immediate material benefit is on the table), or when there is very broad popular support, in which case they will act with the least effort they can get away with.

There is, however, one case where technical people can (at times) get their way: military analysts.

Strategic analysts, to be more precise; apparently the very real threat of nuclear war is enough to actually get some things done. Nuclear weapons share some qualities with the AI systems envisioned by MIRI:

  • They can "end the world"
  • They have been successfully contained (only a small number of actors have access to them)
  • Their development has been subject to worldwide, industry-wide controls
  • At one point, there were serious discussions of halting development altogether
  • "Control" has persisted over long time periods
  • No rogue actor has used one (so far)

I think military analysts could be a good target for outreach: they are certainly more likely than policymakers to listen to and understand technical arguments, and they already have experience navigating the political world. In an ideal scenario, AI could be treated as another class of WMD, alongside nuclear, chemical, and biological weapons.