A word's denotation is our conscious definition of it. You can think of this as the set of things in the world with membership in the category defined by that word; or as a set of rules defining such a set. (Logicians call the former the category's extension, and the latter its intension.)
A word's connotation is the emotional coloring of the word. AI geeks may think of it as a set of pairs: other concepts that the word activates or inhibits, each paired with the change in the odds of recalling that concept.
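A minimal sketch of that picture in Python; the concept names, base rate, and weights here are invented for illustration, not taken from any real model:

```python
# Toy model of connotation as a set of (concept, weight) pairs.
# Positive weights raise the odds of recalling a concept when the
# word is activated; negative weights inhibit it. All names and
# numbers are made up for illustration.
import math

# connotation[word] maps associated concepts to log-odds deltas
connotation = {
    "human": {"kindness": +1.2, "dignity": +0.9, "machine": -0.8},
    "transhuman": {"machine": +1.1, "human": -0.4, "progress": +0.7},
}

def recall_odds(word, concept, base_prob=0.1):
    """Probability of recalling `concept` after hearing `word`:
    start from a base rate, shift by the stored log-odds delta."""
    delta = connotation.get(word, {}).get(concept, 0.0)
    base_logit = math.log(base_prob / (1 - base_prob))
    return 1 / (1 + math.exp(-(base_logit + delta)))

print(recall_odds("human", "kindness"))  # boosted above the base rate
print(recall_odds("human", "machine"))   # inhibited below the base rate
```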
When we think analytically about a word - for instance, when writing legislation - we use its denotation. But when we are in values/judgement mode - for instance, when deciding what to legislate about, or when voting - we use its denotation less and its connotation more.
This denotative-connotative gap can cause people to behave less rationally when they become more rational. People who think and act emotionally are at least consistent. Train them to think analytically, and they will choose goals using connotation but pursue them using denotation. That's like hiring a Russian speaker to manage your affairs because he's smarter than you, but having to give him instructions via Google Translate. Not always a win.
Consider the word "human". It has wonderful connotations, to humans. Human nature, humane treatment, the human condition, what it means to be human. Often the connotations are normative rather than descriptive; behaviors we call "inhumane" are done only by humans. The denotation is bare by comparison: Featherless biped. Homo sapiens, as defined by 3 billion base pairs of DNA.
Some objections to transhumanism are substantive objections to transhumanism itself. But some are caused by the denotative-connotative gap. A person's analytic reasoner says, "What about this transhumanism thing, then?", and their connotative reasoner replies, "Human good! Ergo, not-human bad! QED."
I don't mean that we can get around this by renaming "transhumanism" as "humanism with sprinkles!" This confusion over denotation and connotation happens inside another person's head, and you can't control it with labels. If you propose making a germline genetic modification, this will trigger thoughts about the definition of "human" in someone else's head. Their analytic reasoner classifies the modified person as "not human", a phrase chosen for its denotation. When they then ask themselves how they feel about the modification, they go into values mode, access the full connotation of "not human", attach the label "bad" to it, and pass the result back to the analytic reasoner to decide what to do about it. Fixing a disease gene can get labelled "bad" because the connotative reasoner makes a judgement about a different concept than the analytic reasoner thinks it did.
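To make that handoff concrete, here is a toy simulation of the process just described; the denotation test and the connotation scores are made up for illustration:

```python
# Toy model of the denotative-connotative handoff described above.
# The point is only that the value judgement is made about the
# *label*, not about the proposal itself.

def denotation_label(genome_edited: bool) -> str:
    # Analytic reasoner: classify strictly by definition.
    return "not human" if genome_edited else "human"

CONNOTATION_VALUE = {"human": +1.0, "not human": -1.0}  # values mode

def judge(proposal: str, genome_edited: bool) -> str:
    label = denotation_label(genome_edited)  # chosen by denotation
    value = CONNOTATION_VALUE[label]         # evaluated by connotation
    verdict = "bad" if value < 0 else "good"
    return f"{proposal!r} -> labelled {label!r} -> {verdict}"

# Fixing a disease gene gets labelled "bad" via the label, not the act:
print(judge("fix a disease gene in the germline", genome_edited=True))
print(judge("leave the disease gene alone", genome_edited=False))
```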
I don't think the solution to the d-c gap is to operate only in denotation mode. Denotation is all that 1970s AI programs had. But we can try to be aware of the influence of connotations, and to prefer words that say what we mean over the overused and hence connotation-laden words that first spring to mind. Connotation isn't a bad thing - it's part of what makes us vertebrates, after all.
In conversation, the best way I've found to avoid causing frustration with the Socratic method is flattery. Every once in a while, say "Ah yes", "That's a valuable way to look at some problems", or "I see where you're coming from". Smile and nod at irregular intervals when they say something less wrong than their other statements. Sometimes, act as if the idea is new to you. Other times, knowingly attribute it to someone ostensibly respectable. It takes just a bit of positive feedback to keep a person speaking his or her mind.
A great way to get an arrogant person to reconsider an idea is to emphasize the priors they have right, have them work out the inconsistencies through your simple, crafted questions, and then promote the result as their idea, arrived at with a little help from you.
It's not just questions instead of statements; it's a (partial) pretense of humility, an appeal to their ego to let you in. That's where I suspect people usually go wrong.