After my previous post on the marginal existential risk from AI in a world where nuclear war is likely, here I review some canonical AI risk literature, describe my assessment of the current level of AI risk, and conclude that an "AI pause" is not advisable.