I feel most of this fear is residual, left over from the self-modifying symbolic-program singularity FOOM theories that I hope are mostly behind us by now. But this is just the point -- people who don't understand real AGI don't understand what the real risks are and aren't (and certainly can't mitigate them).
Self-modifying AI is the point behind FOOM. I'm not sure why you're connecting self-modification/FOOM/singularity with symbolic programming (I assume you mean GOFAI), but everyone I'm aware of who thinks FOOM is plausible thinks it will be because of self-modification.
Yes, I understand that. But it matters a lot what premises underlie AGI and how self-modification is going to impact it. The stronger fast-FOOM arguments spring from older conceptions of AGI. Imo, a better understanding of AGI does not support them.
Thanks much for the interesting conversation; I think I'm spent.