this is a fair response, and to be honest, i was skimming your post a bit. i do think my point somewhat holds: there is no "intelligence skill tree" where you must unlock the level 1 skills before you can progress to level 2.
i think a fairer response to your post is:
Surely it would be exceptionally good at those kinds of writing, too, right?
surely an LLM capable of writing A+ freshman college papers would correctly add two 2-digit numbers? surely an AI capable of beating grandmasters at chess would be able to tutor a 1000-elo player to 1500 elo or beyond? surely an AI capable of answering university-level questions across subjects as diverse as math, coding, science, and law would be able to recursively improve itself and cause an intelligence explosion? surely such an AI would at least be ab...
In some sense, the Agent Foundations program at MIRI sees the problem as: human values are currently an informal object. We can only get meaningful guarantees for formal systems. So, we need to work on formalizing concepts like human values. Only then will we be able to get formal safety guarantees.
unless i'm misunderstanding you or MIRI, that isn't their primary concern at all:
...Another way of putting this view is that nearly all of the effort should be going into solving the technical problem, "How would you get an AI system to do some very modest con
this was posted after your comment, but i think it's close enough:
And the idea that intelligent systems will inevitably want to take over, dominate humans, or just destroy humanity through negligence is preposterous.
They would have to be specifically designed to do so.
Whereas we will obviously design them to not do so.