Mo Putera
Non-loss-of-control AGI-related catastrophes are out of control too
For the Open Philanthropy AI Worldview Contest

Executive Summary

* Conditional on AGI being developed by 2070, we estimate the probability that humanity will suffer an existential catastrophe due to loss of control (LoC) over an AGI system at ~6%, using a quantitative scenario model that attempts to systematically assess...
How should we think about the decision relevance of models estimating p(doom)?
To illustrate what I mean, switching from p(doom) to timelines:

* The recent post AGI Timelines in Governance: Different Strategies for Different Timeframes was useful to me in pushing back against Miles Brundage's argument that "timeline discourse might be overrated", by showing how the choice of actions (in particular in the...
To add to your point, Jacy Reese Anthis, in Some Early History of Effective Altruism, wrote:
...