A key assumption in most AI x-risk arguments is that an agent's ability to exert control over the world increases rapidly with its intelligence. After all, AI safety would be easy if all it required was ensuring that people remain far more numerous and physically capable than the AI or even ensuring that the total computational power available to AI agents is small compared to that available to humanity.

What these arguments require is that a single highly (but not infinitely) intelligent agent will be able to overwhelm the advantages humans might retain in terms of numbers, physical capability and computational power, either by manipulating people to do its bidding or by hacking other systems. However, I've yet to see any attempt to quantify the relationship between intelligence and control that these arguments assume.

It occurs to me that we have information about these relationships that can inform such assumptions. For instance, if we wish to estimate the returns to intelligence in hacking, we could look at how the number of exploits discovered by researchers varies with their intelligence.
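For concreteness, here is a minimal sketch of the kind of estimate I have in mind, with entirely fabricated numbers standing in for a real dataset of researcher test scores and exploit counts: fit a log-linear curve and read off the multiplicative return per IQ point.

```python
# Minimal sketch: estimate returns to intelligence in hacking by fitting
# log(exploits per year) = a + b * IQ. All data below is fabricated.
import numpy as np

iq = np.array([100, 110, 120, 130, 140, 150, 160])          # hypothetical scores
exploits = np.array([0.5, 1.0, 2.5, 4.0, 9.0, 15.0, 30.0])  # hypothetical counts/year

# polyfit returns [slope, intercept] for a degree-1 fit
b, a = np.polyfit(iq, np.log(exploits), 1)

print(f"each IQ point multiplies the exploit rate by ~{np.exp(b):.3f}")
print(f"one SD (~15 points) multiplies it by ~{np.exp(15 * b):.2f}")
```

If the fitted curve turned out to be merely linear rather than exponential, that would cut against the "control scales rapidly with intelligence" assumption.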

To estimate the returns to intelligence in manipulation, we could look at the distribution of intelligence among highly effective politicians and media personalities and compare it to the distribution of other traits like height or looks. Or even, if we assume that evolution largely selects for the ability to influence others, look at the distribution of these traits in the general population.
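One crude way to run that comparison, assuming (strongly) that we could measure trait averages among successful politicians: express each trait gap in standard-deviation units and compare the implied selection intensities. All the numbers below are placeholders, not measurements.

```python
# Sketch: compare how strongly politicians are selected on different traits
# by expressing each trait gap in population-SD units. Placeholder numbers.

traits = {
    # trait: (population mean, population SD, assumed politician mean)
    "IQ":          (100.0, 15.0, 115.0),
    "height_cm":   (176.0, 7.0, 182.0),
    "rated_looks": (5.0, 2.0, 6.5),  # hypothetical 1-10 attractiveness rating
}

for name, (mu, sd, pol_mu) in traits.items():
    z = (pol_mu - mu) / sd  # selection intensity in SD units
    print(f"{name}: politicians sit ~{z:+.2f} SD above the population mean")
```

Whichever trait shows the largest standardized shift is, on this crude model, the one the political filter selects for most strongly.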

I realize that doing this would probably require a number of substantial assumptions, but I'm curious whether anyone has tried. And yes, I realize this entirely ignores the issue of defining intelligence beyond human capability (though if the notion has any validity, we could probably use something like the rate at which unknown theorems, weighted by importance, can be proved).
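To make that parenthetical metric slightly more concrete (purely a sketch; the notation and the importance weighting $w$ are mine, not anything standard), one could define an agent $A$'s capability as

$$C(A) \;=\; \lim_{T \to \infty} \frac{1}{T} \sum_{\theta \in \Theta_A(T)} w(\theta),$$

where $\Theta_A(T)$ is the set of previously unknown theorems $A$ proves within time $T$ and $w$ assigns each theorem an importance weight.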

Answers

RogerDearnaley · Dec 31, 2023

Not that relevant, but an observation specifically on politicians: having met several national-level politicians in person and heard them speak, they are uniformly far more charismatic in person than I had expected from having only seen them on TV. That old adage about the camera adding fifteen pounds: it also reduces charisma, significantly. Even politicians you thought were gray and dull from seeing them on TV are actually very charismatic in person. So my impression is that one of the primary requirements to be an effective politician at a national level is being extremely charismatic — way more charismatic than you think they are. Which, statistically, offhand, suggests perhaps they're mostly not also amazingly smart (probably mostly just somewhat smart), since the odds of lightning striking twice are low. (This wouldn't necessarily be true if intelligence and charisma were strongly correlated, but I've met enough STEM professors to know that definitely isn't true.)
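(A quick numerical check of that "lightning striking twice" intuition, under a toy model where charisma and intelligence are bivariate normal with correlation rho; the rho values below are assumptions, not estimates:)

```python
# Toy model: select the top 0.1% of the population on charisma and ask how
# smart that group is on average, as a function of the charisma-IQ
# correlation rho. Both traits are standard normal; rho values are assumed.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

for rho in (0.0, 0.3, 0.7):
    charisma = rng.standard_normal(n)
    # Construct IQ to have correlation rho with charisma.
    iq = rho * charisma + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    cutoff = np.quantile(charisma, 0.999)    # top 0.1% on charisma
    mean_iq = iq[charisma >= cutoff].mean()  # in SD units
    print(f"rho={rho:.1f}: top-charisma group averages {mean_iq:+.2f} SD on IQ")
```

With rho = 0 the most charismatic people are merely average in intelligence; only a strong correlation would make them reliably smart, which is exactly the caveat in the parenthetical above.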

I am distantly related to a powerful political family, and am apparently somewhat charismatic in person, in a way that to me just feels like basic empathy and social skills. If there's a way to turn that into more productivity for software development or alignment research, let me know.

localdeity · 4mo
It could make you better at: managing a team, advocating for a certain project, mediating discussions and conflicts, keeping meetings productive, giving advice to individuals about their social or socially mediated problems, etc.  I don't think it would directly enhance your productivity as a researcher, but it could let you act as a force multiplier for others.
Sheikh Abdur Raheem Ali · 4mo
Thanks, that matches my experience. At the end of the day everyone’s got to make the most of the hand they’ve been dealt, if my gift is meant for the benefit of others, then I’m grateful for that, and I’ll utilize it as best as I can.
RogerDearnaley · 4mo
In general, in software companies, the people most likely to fit that profile are Product/Project Managers, a role that requires empathy with the users, imagination, and the communication and social skills needed to coordinate teams. Not quite as necessary specifically in Alignment work.
1 comment
lc · 4mo

After all, AI safety would be easy if all it required was ensuring that people remain far more numerous and physically capable than the AI or even ensuring that the total computational power available to AI agents is small compared to that available to humanity.

Why?