Petal Pepperfly

Answer by Petal Pepperfly

I see no problems with your list. I would add that creating a corrigible, superhumanly intelligent AGI doesn't necessarily solve the AI Control Problem forever, because its corrigibility may be incompatible with applying it to the Programmer/Human Control Problem: the threat that someone will one day build a dangerous AGI. Perhaps intentionally.