Honorable AI
This note discusses a (proto-)plan for [de[AGI-[x-risk]]]ing [1] (pdf version). Here's the plan:

1. You somehow make/find/identify an AI with the following properties:
   * the AI is human-level intelligent/capable;
   * however, it would be possible for the AI to quickly gain in intelligence/capabilities in some fairly careful self-guided way, sufficiently...
Yeah, we could and (imo) should just set out to grow more intelligent/capable as humans, instead of handing the world to some aliens (at least for now, though maybe forever; it should remain possible to collectively reevaluate this later). This centrally requires quickly banning AGI development and somehow quickly getting humanity to act and develop more thoughtfully in general.