Fair point; the analogy I made is a bit of a stretch, really.
Still, I think that uncertainty about "boxing" increases rapidly enough with intelligence that these considerations are significant even for intelligence differences that we observe between humans.
I agree that people often do not intuitively find it horribly scary to let an AI join their group (i.e., let the AI out of the box), yet they are intuitively wary of letting intelligent people* join their group, for the reasons you listed above. In general, people should be wary of both, and more so of the AI, since it does not share a common genetic history, brain structure, or set of biological needs, and is therefore a greater unknown. This lack of intuitive fear is much like how many species of animal lacked any fear of humans on first encounter.
The following is a minor curiosity that occurred to me regarding real-world analogies to the AI-box concept.
Fundamentally, the reason that we fear a randomly-chosen super-intelligent AI is twofold: