Context: I wrote a post about how "Graham's Design Paradox" (though I didn't use that phrase) is a major economic bottleneck, especially once you have a lot of money.
I generally buy the point of this post, though I'd frame it differently. Your bullet points correspond to heuristics like:
These sorts of heuristics are themselves a class of knowledge/skill. It's a type of knowledge/skill that generalizes across many domains and gives us some ability to recognize expertise in them.
But I wouldn't call these sorts of heuristics "general abilities, resources, and motives"; they're narrower than that. They all fit a certain pattern. What I'd call them is "rationality skills".
Minor addition to the list of practical problems:
Can't believe I forgot this one. I will edit the post and add this because it's also a very common failure mode.
In the case of validating the CISO's performance, I could imagine a pentesting company whose compensation depends on whether or not it successfully hacked the client. It could advertise this fact to the CEO (a more sophisticated version of "you only pay us if we hack you!"). Might not solve the problem entirely, but could be a step in the right direction in this case?
This is kind of what bug bounties are! See also https://www.synack.com/red-team/. The limitation with crowdsourcing and bug bounties is that you can generally only use them to find publicly accessible technical problems with your products, and the hackers aren't allowed to do things like social engineering. I haven't heard of a consultancy with this same policy for its pentests, but it would generally have to be the contracting company that comes up with the compensation policy, since which assets are important and what counts as a "compromise" varies between organizations.
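To make the incentive math concrete, here's a toy expected-value sketch of that "no hack, no fee" pricing model. All of the numbers are invented for illustration; the only point is that the contingent fee has to be inflated by the firm's own estimate of its odds of success.

```python
# Toy model of success-contingent pentest pricing. Numbers are made up.
flat_fee = 50_000      # what the firm would charge unconditionally
p_success = 0.8        # the firm's own estimate of breaching the target

# For the firm to break even in expectation, the contingent fee must
# satisfy: contingent_fee * p_success >= flat_fee
contingent_fee = flat_fee / p_success
print(f"Break-even contingent fee: ${contingent_fee:,.0f}")  # -> $62,500

# From the client's side this mostly shifts risk rather than cost, but it
# gives a nontechnical CEO a legible signal: a firm willing to take these
# terms is betting its revenue on its own skill.
```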
These can be tough problems, but they're mostly not immune to prudent leadership decisions and genuine attention to the area of concern. I'm good friends with one computer hacker who, despite having extraordinary security chops and being a great person, is (I believe) mildly to moderately autistic, an alumnus of a no-name college in Texas, and generally terrible at perception management or professional networking.
To me, that sounds like a person who's ill-suited to managing security practices at a company. Being good at security is about convincing other people to make design decisions that make their lives harder but result in a more secure architecture.
It's useful to have people like this around to spot problems, but it's not enough. You need more than that to have an organization that actually gets its security decisions right.
I didn't necessarily say a person like that would be a good pick for a CISO. I am just impressed that Tesla was able to find them and hire them for the technical position that they did. It suggests competence.
Then I don't see how the example is relevant to the issue. Samsung's problem isn't that they don't employ anyone who's good at design. It's that inside the organization it's impossible to give the people who are good at design the power to shape how the final design looks, the way Apple does.
Graham's design paradox is a maxim about organizations which says that if key management isn't competent at X, it's often impossible for an organization to hire people who are, regardless of how well-resourced that organization is. It was once described thusly:
The paradox has been invoked either directly or as an analogy to explain why the alignment problem is hard, why information security is hard, why identifying good programming talent is hard, and why American car companies don't design good-looking cars.
In some cases, especially those where failure is irrecoverable (like evaluating alignment solutions), overcoming the design paradox is in fact a pressing issue. However, I'd argue those cases are probably rare. In practice, most appeals to it as an explanation for organizational incompetence are misattributing what is actually a litany of deeper problems.
One field that seems to most people like it'd be a perfect application of this maxim is information security. Major security breaches are sometimes described as black swan events, because they're relatively rare and can be very disruptive when they do happen. Your typical large technology company may not get real-world feedback on the performance of a subpar Chief Information Security Officer for years. It's also extremely difficult to evaluate the security of a company's systems in vitro if you're not good at breaking them yourself, and you generally want to protect them against the best computer hackers in the field, not just the average ones. Perfect example of a case where the design paradox applies, right?
It's true that companies tend to be bad at picking CISOs, and a very common tendency in the industry is for those executives to be fired every six to eighteen months when they're scapegoated for incidents. In my estimation, however, this is not generally because executive leadership can't identify poor security practices. There actually happens to be a perfectly legible way for nontechnical people at most large companies to analyze their security teams' performance: penetration testing and red teaming.
The idea behind red teaming is that you pay a company like SpecterOps to try to breach specific key assets of yours, and then give you a report on all of the security holes they were able to find during the engagement. These firms will generally happily give you security recommendations and elaborate on deeper underlying problems with your tech policy, in ways that are understandable to smart nonexperts. Red teaming works because, for most companies, surviving a comprehensive penetration test is a good (though not infallible) indicator that you'll survive the attention of the parties that are going to look at your company over the next couple of years or so.
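For concreteness, here's roughly what the scoping side of such an engagement can look like, sketched as a Python dict. Every objective, asset, and date below is invented for illustration; real scoping documents vary a lot between organizations and firms.

```python
# Hypothetical red-team engagement scope. All names and values are made up.
engagement_scope = {
    "objective": "obtain read access to the customer billing database",
    "in_scope_assets": [
        "*.corp.example.com",       # internal web applications
        "vpn.example.com",          # remote-access entry point
        "employee phishing (email only)",
    ],
    "out_of_scope": [
        "production payment processors",
        "physical intrusion",
        "denial-of-service attacks",
    ],
    "engagement_window": ("2024-03-01", "2024-04-15"),
    "deliverable": "written report of findings, ranked by severity",
}
```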
Almost every Fortune 500 company knows about red teaming, and gets penetration tests performed regularly on its critical systems. There isn't necessarily a ready-made pentesting service out there for every imaginable security requirement, but it turns out most companies have similar needs, and the standard occasional checkup is basically sufficient. And even when large companies can't identify the good penetration testing firms on their first try, they're still usually able to cycle through a bunch of them and then continue to use the ones that come up with solid results. With enough money you can develop similarly conclusive testing and evaluation strategies for many other concerns, like A/B testing UX designs or running consumer panels for feedback on aesthetics.
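To make the "cycle through firms and keep the ones that get results" loop concrete, here's a minimal sketch of the comparison an executive is implicitly running. The firm names, findings, and severity weights are all invented; the point is just that the signal is legible without any security expertise.

```python
# Crude comparison of pentest firms by severity-weighted findings.
# All firms and findings here are fictional.
SEVERITY_WEIGHT = {"critical": 10, "high": 5, "medium": 2, "low": 1}

engagement_reports = {
    "Firm A": ["critical", "high", "low", "low"],
    "Firm B": ["medium", "low"],
    "Firm C": ["critical", "critical", "high", "medium"],
}

def engagement_score(findings):
    """Rough proxy for how much real attack surface a firm uncovered."""
    return sum(SEVERITY_WEIGHT[f] for f in findings)

# Rank firms by what they actually found; re-hire the ones at the top.
ranked = sorted(engagement_reports.items(),
                key=lambda kv: engagement_score(kv[1]),
                reverse=True)
for firm, findings in ranked:
    print(firm, engagement_score(findings))
```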
So, why do firms still suck at picking information security executives, or struggle with these kinds of left-field hires in general? Well, it's complicated:
These can be tough issues, but they're mostly not immune to prudent leadership decisions and genuine attention to the area of concern. I'm good friends with one computer hacker who, despite having extraordinary security chops and being a great person, is (I believe) mildly to moderately autistic, an alumnus of a no-name college in Texas, and generally terrible at perception management or professional networking. A while back I personally flew to Texas to pitch them in person on being employee #1 at my attempted startup, as I was almost certain I had private information about their skills. They politely let me give my pitch, and then informed me that the salary would have to be something like 2x what I suggested, because they had recently gotten an offer from Tesla, straight out of college, at well above Tesla's usual rate for the job they now have.
Did Tesla manage to snipe my friend because Tesla leadership is full of people with some particular psychological aptitude for security, or because Tesla has boatloads of money and is run by generally smart people? My guess is that it's the latter more than the former. Perhaps there's some level beyond which the former becomes a bottleneck, but in my experience, and in the experience of the people I've talked to who do pentesting at a top level, organizational competence along a specific dimension like security has more in common with leadership's general abilities, resources, and motives than it does with personal skill.
It certainly decomplicates hiring efforts to be great at security engineering yourself, and thus in possession of a "what would I do" oracle. But the degree to which it hurts not to have such an oracle is context- and task-dependent, and might be irrelevant for all practical purposes if your staff can come up with reliable protocols for testing what you want to test anyway.