I don't think your use of "autistic" in this post was very clarifying. Do you just mean that the AI doesn't consider the context of the problem we give it in order to infer the actual problem? If so, it's not clear to me that a more capable AI will necessarily be "less autistic".
I meant that it takes instructions a bit too literally because it doesn't fully understand implicit intent.
Epistemic Status: Take with a grain of salt. This post was written relatively quickly and relies heavily on an analogy between AI and human behaviour. It takes a sledgehammer to these concerns and tries to reason through them using that analogy anyway. I'd encourage readers to consider whether I've fallen into the trap of mistakenly anthropomorphising AI.
Update:
Given the stakes of the alignment problem, I decided to emphasise clarity over political correctness.