Ouch, I hope your intestine has recovered since your diagnosis.
To be clear, when I say sensitivity I mean "how reactive is your immune system to gluten" rather than "do you feel gastrointestinal symptoms when you eat gluten". The correlation between the severity of symptoms (both obvious and non-obvious) and immune reactivity is weaker than you would expect, but it seems to me there still is one.
In your comment you describe 3 scenarios:
(1) Risk of cross contamination (Chipotle)
(2) Known cross contamination (fryer and pizza prep)
(3) Accidental medium dose of gluten
You are happy to accept (1), but you say celiacs "should not take" (2). I agree the risk is higher in (2), but the heart of my conclusion is that for some people (like myself), the additional risk is negligible and the benefit is significant. The caveat is that I need to reality-check "the additional risk is negligible" by actually measuring my immune system's response.
If my lifestyle includes (1), (2), and unavoidably (3), and my blood tests still show normal antibody levels (plus possibly another check for intestinal inflammation, to be doubly sure), then I think (2) is a risk that's ok for me to keep taking.
Of course, it can be true at the same time that (2) is not worth it for you.
On the last point, I agree that avoiding long-term inflammation is important. But I don't think it necessarily follows that infrequent (3) causes less inflammation than a lot of (2). Maybe a low dose slips under the radar and doesn't trigger a reaction, while a moderate dose crosses a threshold, makes your immune system hit the button, and keeps the antibodies pumping for a while.
I think your conclusion is reasonable (that the investment of effort in security improvements is not justified by the risk of exploitation), but I want to pull out a tiny part of your post and suggest refining it:
"There is no point in pursuing a security mindset if you are virtually certain that the thing you would be investing resources into would not be your weakest attack point."
Different attackers will target different points depending on their capabilities, and which attackers go after you at all depends on their motivations. Your weakest point may carry lower real risk than other points simply because the type of attackers who would exploit that point don't care about you.
Organisations regularly invest resources not into the weakest attack point per se, but into whatever their assessment says is the most cost-effective way to reduce overall risk. This plays into defence in depth, where multiple overlapping layers of security can provide better risk reduction, especially where the weakest attack points are expensive or impossible to address.
This may seem like an inconsequential point, as it doesn't make any difference to your conclusions, but I do see people focussing on weak attack points without considering whether their money is being well spent.
To me, a better framing would be:
You shouldn't invest resources into measures where there are alternatives that are more effective at reducing risk.
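To make that concrete, here's a toy comparison in Python. The controls, costs, and risk numbers are entirely made up for illustration; real risk quantification is far messier, but the ordering is the point:

```python
# Toy comparison of two hypothetical security controls.
# All figures are invented for illustration.
controls = {
    # Fixing the "weakest" attack point: expensive engineering work.
    "harden_weakest_point": {"cost": 500_000, "annual_loss_reduced": 200_000},
    # A cheaper layer of monitoring elsewhere in the stack.
    "add_monitoring_layer": {"cost": 100_000, "annual_loss_reduced": 150_000},
}

for name, c in controls.items():
    ratio = c["annual_loss_reduced"] / c["cost"]
    print(f"{name}: {ratio:.2f} risk reduced per unit spent")

# The monitoring layer wins on risk reduced per unit spent,
# even though it doesn't touch the weakest attack point.
```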
I agree that these are good areas to deploy AI, but I don't see them being fairly easy to implement, or resulting in a radical reduction in security risk on their own. Mainly because a lot of security is non-technical, and security involves tightening up a lot of little things that take time and effort.
AI could give us a huge leg up in monitoring - because as you point out, it's labour-intensive, and some false positives are OK. But it's a huge investment to get all of the right logs and continuously deepen your visibility. For example, many organisations do not monitor DNS traffic because of the high volume of logs it generates. On-host monitoring tools make lots of tradeoffs about which events to capture without hosing the system or your database capacity - do you record every file access? None of this means monitoring is ineffective, but if you don't have a strong foundation then AI won't help. And operating systems give you so many ways to do things - if you know bash is monitored, you can take the same actions through ipython instead.
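As a toy illustration of that last point (assuming a hypothetical monitor that only watches bash, e.g. via shell history or audit rules keyed on the bash binary), the same action routed through a Python process leaves no bash trail at all:

```python
# A monitor that only hooks bash never sees commands executed
# through another interpreter.
import subprocess

# Running this from ipython (or any python process) has the same
# effect as typing "id" at a bash prompt, but the bash-focused
# monitor records nothing.
result = subprocess.run(["id"], capture_output=True, text=True)
print(result.stdout)
```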
I think 'trust displacement' could be particularly powerful for removing direct access privileges from users. Secure and Reliable Systems talks about the use of a tool proxy to define higher-level actions that users need to take, so they don't need low-level access. In practice this is cumbersome to define up front and relies on engineers with a lot of experience in the system, so you only end up doing it for especially sensitive or dangerous actions. Having an AI build these for you, or do things on your behalf, would reduce the cost of this control.
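To sketch the shape of a tool proxy (a minimal illustration; the action names and commands here are made up by me, not taken from the book):

```python
# Minimal tool-proxy sketch: users request named high-level actions;
# the proxy validates, logs, and executes on their behalf, so nobody
# needs direct shell or root access to the underlying hosts.
import logging
import subprocess

logging.basicConfig(level=logging.INFO)

# Allowlist of high-level actions mapped to the exact commands they run.
# Defining these is the expensive, expert-driven part an AI could help with.
ALLOWED_ACTIONS = {
    "restart_webserver": ["systemctl", "restart", "nginx"],
    "flush_cache": ["redis-cli", "FLUSHALL"],
}

def run_action(user: str, action: str) -> int:
    if action not in ALLOWED_ACTIONS:
        logging.warning("denied: %s requested unknown action %r", user, action)
        raise PermissionError(action)
    logging.info("audit: %s -> %s", user, action)
    return subprocess.run(ALLOWED_ACTIONS[action], check=True).returncode

# e.g. run_action("alice", "restart_webserver") restarts the service
# without alice ever holding shell access to the host.
```

The point is that the user's privilege becomes "may invoke restart_webserver", which is auditable and narrow, rather than "may run arbitrary commands as root".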
But in my experience a key challenge with permission management is that to do it well, you can't just give people the minimal set of privileges needed to do their job as they currently do it - you have to figure out how they could do their job with fewer privileges. This is extremely powerful, but it's far from easy. People don't like to change the way they work, especially if it adds steps. Logical appeals using threat models only go so far when people's system 1 isn't calibrated with security in mind - they just won't feel like it's worth it.
For these reasons, good access management effectively takes cultural change, which is usually slow, and AI alone can't solve that. Especially not at labs going as fast as they can, with employees threatening to join your competitor if you add friction or "security theatre" they don't understand. One way this could go better than I expect is if it's easier, faster, or more reliable to have AI do the action for you, i.e. if users have incentives to change their workflows to be more secure.