Not that relevant, but an observation specifically on politicians: having met several national-level politicians in person and heard them speak, they are uniformly far more charismatic in person than I had expected from having only seen them on TV. That old adage about the camera adding ten pounds: it also significantly reduces charisma. Even politicians you thought were gray and dull from seeing them on TV are actually very charismatic in person. So my impression is that one of the primary requirements for being an effective politician at a national level is being extremely charismatic, far more charismatic than you would guess from TV. Statistically, that offhand suggests they are mostly not also amazingly smart (probably just somewhat smart), since the odds of lightning striking twice are low. (This wouldn't necessarily hold if intelligence and charisma were strongly correlated, but I've met enough STEM professors to know that they aren't.)
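A quick way to see the lightning-striking-twice point: in a toy model where the two traits are jointly normal, the chance of being in the top 1% on both depends sharply on their correlation. The cutoff and the correlation values below are illustrative assumptions, not estimates:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def p_top_both(q=0.99, rho=0.0):
    """P(both traits above the q-th quantile) for a standard
    bivariate normal with correlation rho."""
    a = norm.ppf(q)
    cov = [[1.0, rho], [rho, 1.0]]
    # By symmetry, P(X > a, Y > a) = P(X < -a, Y < -a).
    return multivariate_normal(mean=[0, 0], cov=cov).cdf([-a, -a])

# Independent traits: top-1% in both is roughly a 1-in-10,000 event.
print(p_top_both(0.99, rho=0.0))  # ~1e-4
# A strong correlation would raise those odds considerably.
print(p_top_both(0.99, rho=0.5))
```

So the inference that top politicians are probably not also exceptionally smart rests entirely on the correlation being weak, which is exactly the caveat in the parenthetical above.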
I am distantly related to a powerful political family, and am apparently somewhat charismatic in person, in a way that to me just feels like basic empathy and social skills. If there's a way to turn that into more productivity for software development or alignment research, let me know.
A key assumption in most x-risk arguments for AI is that the ability of an agent to exert control over the world increases rapidly with intelligence. After all, AI safety would be easy if all it required was ensuring that people remain far more numerous and physically capable than the AI or even ensuring that the total computational power available to AI agents is small compared to that available to humanity.
What these arguments require is that a single highly (but not infinitely) intelligent agent will be able to overwhelm the advantages humans might retain in numbers, physical capability, and computational power, either by manipulating people into doing its bidding or by hacking other systems. However, I've yet to see any attempt to quantify the relationship between intelligence and control that these arguments assume.
It occurs to me that we have information that can inform such assumptions. For instance, to estimate the returns to intelligence in hacking, we could look at how the number of exploits discovered by security researchers varies with their intelligence.
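As a sketch of what such an estimate might look like (the numbers below are purely hypothetical placeholders, not real data), one could fit a log-linear model of exploit counts against some measured ability score and read off how fast output grows:

```python
import numpy as np

# Hypothetical illustration only: 'scores' stands in for some measured
# cognitive ability, 'exploits' for exploits found per researcher-year.
scores = np.array([100, 110, 120, 130, 140, 150], dtype=float)
exploits = np.array([1, 2, 3, 6, 10, 18], dtype=float)

# Fit log(exploits) = a * score + b. A large positive slope a would
# mean returns to intelligence are roughly exponential in this range.
a, b = np.polyfit(scores, np.log(exploits), deg=1)
doubling = np.log(2) / a  # score points needed to double output
print(f"slope={a:.3f}, output doubles every {doubling:.1f} points")
```

The interesting quantity is the doubling interval: a small value would support the "control grows rapidly with intelligence" premise, a large one would undercut it.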
To estimate the returns to intelligence in manipulation, we could look at the distribution of intelligence among highly effective politicians and media personalities and compare it to that of other traits like height or looks. Or, if we assume that evolution largely selects for the ability to influence others, we could look at the distribution of these traits in the general population.
I realize that doing this would probably require a number of substantial assumptions, but I'm curious whether anyone has tried. And yes, I realize this entirely ignores the issue of defining intelligence beyond human capability (though if the notion has any validity, we could probably use something like the rate at which previously unknown theorems, weighted by importance, can be proved).