Upvoted for being an important idea, but I actually disagree with the advice. The relationship of ideas to action is exceedingly complex, and I strongly doubt (but do not know how to test) that the idea simply hadn't occurred to someone who wanted attention through harm.
I find it much more likely that there's large uncertainty in the effectiveness (in terms of attention to be had) of uncommon attacks, and that when a method isn't already in the public eye, it may be known of but not considered a reasonable mechanism. Much like cryonics is weird and uncertain, even for people who would like to be revived, poisoning medicine (dangerous idea: why only medicine, not other foods?) was weird and uncertain until it had been shown to work.
I suspect the dangerous information is that it has succeeded at least once, and gotten a lot of press attention. This information is much harder (and less desirable) to suppress.
In the software world, ideas are rampant and cheap. Execution of the correct idea is the path to success. I expect it's similar for a terrorist, except there are way fewer people to help you choose, refine, and change your ideas, so you only get one shot (as it were).
I also note a similarity to the disclosure debate about computer vulnerabilities - there's a tension between publishing so that potential victims can protect themselves or watch for attacks, vs. keeping quiet so vendors can fix the underlying bugs before very many attackers know of them. There are a LOT of factors that go into these decisions; it's not as simple as "don't spread harmful information".
Another example, which I don't know if it supports my position or yours: Tom Clancy published Debt of Honor in 1994, which included a near-decapitation of the US government by a pilot-turned-terrorist flying his 747 into the Capitol building. Only 7 years later, real-life terrorists did something very similar. We immediately instituted systems to prevent repeats (and a bunch of systems that added irritation and did not protect anything), and there have been no copycats for 17 years.
Implementing systems that prevent the hijacking of planes seems easier, given how airports and air travel already work, than the changes that would be needed to stop vehicles from being used in attacks. This seems similar to the debate over whether the Slaughterbots video and the campaign to stop autonomous weapons will be successful. The supporters use nuclear weapons policy as the success story, but it may not be the most useful comparison because nuclear weapons are a much easier technology to restrict.
It is worth considering that information is easier to move now, and that there are groups dedicated to finding and implementing new strategies for attacks. I think it is more likely that we are in a ‘loose lips sink ships’ regime now than we were then.
From an infosec point of view, you tend to rely on responsible disclosure. That is, you tell the people who will be most affected or who can solve the problem for others; they create countermeasures, and then you release those countermeasures to everyone else (which gives away the vulnerability as well), who should be in a position to quickly update/patch.
Otherwise you are relying on security via obscurity. People may be vulnerable and not know it.
There doesn't seem to be a similar pipeline for non-computer security threats.
Even for responsible infosec disclosure, the embargo is always time-limited, and there are lots of cases of publishing before a fix if the vendors are not cooperating, or if the exploit gains attention through other channels. And even when it works, it's mostly limited to fairly concrete, proven vulnerabilities - there's no embargo on wild, unproven ideas.
There doesn't seem to be a similar pipeline for non-computer security threats.
Nor is there anyone likely to be able to help during the period of limited-disclosure, nor are most of the ideas concrete and actionable enough to expect it to do any good to publish to a limited audience before full disclosure.
The non-computer analog for bug fixes is product recalls. I point out that recalling defective hardware is hideously expensive; so much so that even after widespread public outcry, it often requires lawsuits or government intervention to motivate action.
As for the reporting channel, my guess is warranty claims? Physical things come with guarantees that they will not fail in unexpected ways. Although I notice that there isn’t much of a parallel for bug searches at the physical level.
If I were Tom Clancy I hope that I would not have published Debt of Honor. I don't know whether terrorists were inspired by it, but at least for me it's pretty clearly in the "not worth the risk" category.
In some respects the 9/11 attacks can be considered similar to the Tylenol incident (though obviously much more devastating) - an incident took place using a method that had been theoretically viable for a long time, prompting immediate corrective action.
One of the reasons those attacks were so successful is that air hijacks were relatively common, but most led "only" to hostage scenarios, demands for the release of political prisoners, etc - in point of fact the standard protocol was to cooperate with hijackers, and as Wikipedia says "often, during the epidemic of skyjackings in the late 1960s and early 1970s, the end result was an inconvenient but otherwise harmless trip to Cuba for the passengers." Post-9/11, hijacks began being taken much more seriously.
(There were actually many terrorist attempts against airplanes in the time shortly after 9/11, though most were not hijack attempts - the infamous "shoe bomber" who attempted to destroy an aircraft in flight a few months later, only to be beaten and captured by other passengers, was maybe the most well known.)
I hope that I would not have published Debt of Honor.
There have been an enormous number of books, movies, etc with various forms of realistic plots. Are you saying this genre shouldn't exist, that authors should make sure their plots are not realistic, or that there's something unusual about this plot in particular that should have kept Clancy from publishing?
If I were Tom Clancy I hope that I would not have published Debt of Honor. I don't know whether terrorists were inspired by it, but at least for me it's pretty clearly in the "not worth the risk" category.
I get the argument but then I'm wondering where it stops? Don't direct A Clockwork Orange because there's a high likelihood that copycat murders will happen? Stop production on all things where someone might copy something harmful?
I think I would have published. A potentially-productive question is "With 7 years warning, why did bad guys try it before good guys prevented it?". Was it a question of misaligned incentives (where the good guys effectively let it happen because the public punishes for inconvenience), or of different estimates of success (the good guys thought it'd never happen, although it was, in retrospect, extremely effective)?
Keeping ideas/information obscure is unlikely to work - the more motivated side is going to get it first, and it's likely to be more effective the first time it's used than if many people anticipated it (or at least understood the vulnerability).
“With 7 years warning, why did bad guys try it before good guys prevented it?”
This came up around 9/11. Good guys have too many things to prevent to focus on any random hypothetical more than any other hypothetical. Gwern has some writing on terrorism and it not being about terror. I leave it up to the reader to find the link.
There seem to have been vehicle attacks before the recent ones, they just didn't get widespread press coverage as "terrorism", such as the one in 2006, or the guy with the Killdozer: http://www.badassoftheweek.com/heemeyer.html
As usual, Wikipedia has a list.
A list that is probably vastly incomplete. It seems very likely that there have been vehicle attacks for as long as vehicles have existed. What would be the odds of no one in the past 100 years, no angry spouse, disgruntled ex-employee or lunatic, having thought of taking revenge on the cruel world by ramming a vehicle into people? Wouldn't a prior on the order of at least one such event per 1 million vehicles per year be more likely to yield correct predictions than 0, for events before, say, the year 2005?
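(As a crude back-of-the-envelope illustration of that prior, here's a minimal sketch. The per-decade vehicle counts are placeholder guesses for illustration only, not researched figures.)

```python
# Rough expected number of deliberate vehicle-ramming incidents implied by a
# prior of ~1 attack per 1,000,000 vehicles per year.
# NOTE: the per-decade vehicle counts are placeholder assumptions, not data.

rate_per_vehicle_year = 1 / 1_000_000

vehicles_in_use = {          # assumed average vehicles in use worldwide
    "1920s": 20_000_000,
    "1950s": 70_000_000,
    "1980s": 400_000_000,
    "2000s": 800_000_000,
}

total = 0.0
for decade, vehicles in vehicles_in_use.items():
    expected = rate_per_vehicle_year * vehicles * 10  # 10 years per decade
    total += expected
    print(f"{decade}: ~{expected:,.0f} expected incidents")

print(f"Total for just these four decades: ~{total:,.0f}")
```

Even with guesses far more conservative than these, the expected count comes out well above zero, which is the point.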
This reminds me of my favorite TV show, Survivor. There are a number of reasons why it's great, but one aspect that is so fun to watch is seeing someone come up with a new strategy or tactic and then seeing how it immediately gets adopted in the later seasons. One of the most intellectual players, Yau-Man Chan, had the idea to create a fake immunity idol. Now it is standard to save pieces from things to try to make one. Russell Hantz started looking for hidden idols before clues to them were even found. That's become standard too, and new strategies and counter-strategies keep being invented; you can see how fast memes take off in the show. From sanitation to eating Tide Pods, memes are powerful.
A South Korean show by the name of "The Genius" is basically a case study in adaptive memes in a competitive environment, which might serve as an even better example. There are copycats, innovators, and bystanders, and they all have varying levels of ingenuity and honor.
I've spent many years with this issue.
Luckily for the forces of law and order, we have a few things going for us. Number one, effective people with good judgement generally find ways of resolving their grievances that do not involve terrorism--these types of people are required to scale any idea. Two, the world's security services are tending towards more, not less, effectiveness and generally prioritize shooting members, and would-be members, of terrorist groups' technical staffs. Three, terrorist groups tend to develop along predictable lines, with predictable organizational structures and predictable personalities in various roles; these predictable features are not conducive to technical achievement.
Over-valuing received wisdom, and inordinate amounts of time spent on reinforcing ideology, distract from effective engineering development. Non-technical management staff, waterfall development, a fear of innovation emerging from outside the (non-technical) gurus who produce the ideological ideas, and the insularity driven by both ideological and security requirements really do hamper effective engineering.
The movie Four Lions is by far the best movie I've seen on terrorism. For whatever reason, people don't like to believe me when I tell them this. 'The Joker' or 'mad evil genius' archetypes, though common in movies, are extremely rare in practice.
Rather than a $ scale of damage, I like to think of technique danger on a log scale, meaning: how many logs of human lives could this attack destroy at the high end?
Morons will always be able to hit 0; anyone at all with a little thought could hit 1. 2 is challenging, but unfortunately achievable by someone acting alone. 3 logs is 9/11.
Truck attacks probably cap out at 1 log, so in an absolute sense, are probably better from the perspective of the security services than attacks involving explosives, which have hit two logs with alarming frequency.
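(For concreteness, here's a minimal sketch of one way to read that scale, as the floor of log10 of the casualty count; apart from the ~3,000 figure for 9/11, the casualty numbers are just assumed for illustration.)

```python
import math

def danger_logs(casualties: int) -> int:
    """'Logs' of human lives: the order of magnitude of the casualty count."""
    return math.floor(math.log10(casualties)) if casualties > 0 else 0

# Illustrative figures only; apart from the commonly cited ~3,000 for 9/11,
# these are assumed numbers chosen to show how the scale behaves.
examples = {
    "impulsive lone attacker": 3,     # 0 logs
    "truck attack": 80,               # 1 log
    "large bombing": 300,             # 2 logs
    "9/11": 3000,                     # 3 logs
}
for name, casualties in examples.items():
    print(f"{name}: {danger_logs(casualties)} log(s)")
```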
When it comes to 'evil psychology', most people, when 'thinking like the baddie', start from a place of 'if I were exactly who I am today, and set the goal of [X], how would I go about doing it? Obviously I'd have to just turn off my emotions and pretend I'm a psychopath to start'.
This creative thinking often leads to anxiety (when interesting 1-2 log ideas are generated), and to confusion that usually goes something like this: 'my goodness, it would be so easy to [do X], we should be terrified that someone could figure it out!'
This fails to take into account that, on the one hand, 'psychopathic' terrorists are rare because psychopaths are notoriously unable to execute plans that require discipline over a long period of time, and on the other, that a person who is pursuing a terrorist attack is often in a mental and emotional state that is very, very different from yours. The terrorist may be prioritizing things that, in your view, would seem counterproductive to the point of being downright stupid.
As an example, I recall a school shooting (don't make me look it up, there have been a whole mess of them) where one of the victim stories described the result of an authority figure shouting something to the effect of 'I have a wife and kids' to the shooter through a door. A normal person, in a normal state of mind, would find that to be a sympathetic, though not necessarily a persuasive, message. The shooter, whose motives were later ascribed to frustration at his own inability to achieve things like a girlfriend or family, responded to this plea with a hail of gunfire. Though we cannot know what he was thinking, I assess that the shooter could be believed to have added 'and YOU never will' to the victim's statement, and responded with rage and frustration to the perceived insult.

A separate school shooter was apparently defused when a female teacher repeated 'I love you' over and over to him when he entered the building with a gun. Note: the suggestion that 'if we have a mass shooting, one of the cute but not intimidatingly hot girls should try mustering up as much sincerity as she can and repeat I love you to the shooter' probably will not go over well at your office for cultural reasons.
Anyway, if you are nervous about any of your ideas that you're thinking about releasing on the internet, feel free to PM me and I'll be happy to help you work through the logic. If your proposed technique can do one log of harm, it's probably fun to talk about in public and unlikely to make the world worse; two and up might require some discretion when it comes to the technical details (everyone loves finding a dead terrorist splattered across his apartment), but I would err on the side of disclosure in general terms, particularly if the technique is novel or simple, as the people who are best positioned to spot an attack in its incipient stages probably are not security professionals. If you can reliably generate ideas that you're sure can hit three or more, for the sake of your health, I suggest avoiding participation in radical politics.
One practical way to deal with it is to write any discussion pertaining to those matters in language academic enough that the kind of people who can't think on their own won't follow your writing.
That method is generally used quite successfully by academics to prevent certain papers from being read by a broader public.
Someone actually did try a 9/11 style attack back in the 1970s. A man named Samuel Byck tried to hijack a passenger jet with the goal of crashing it into the White House and assassinating President Nixon, but instead of hijacking a plane in flight, he boarded one by force and demanded that it take off - which did not happen.
I wonder if we'd be better off if it'd succeeded earlier. Getting competent at preventing and/or responding to specific attack mechanisms seems likely to be better done earlier in history, when there are at least smaller crowds for the new norms to be worked out upon.
A single plane lost in the 1970s, followed by the recognition that hijacking is just no longer tolerable (destroy the plane before letting an attacker take control), would perhaps have cost fewer lives than 9/11/2001 did.
I'm actually somewhat worried that this post is still a bit dangerous. I guess my cruxes are 1) an assumption that, since _I_ hadn't heard of the truck strategy, there's still a lot of room for more people to hear about it, and 2) an assumption that if this post were to become reasonably linked to (which it seems memetically fit enough to probably achieve), it has a nontrivial chance of getting noticed by the wrong person, both via direct linkage and via general google search.
This does seem like the minimum-viable version of this post that gives any examples, though, and I'm not sure whether the previous incarnations of it (which had literally zero examples, as far as I remember) were memetically fit enough to do their job anyhow. Shrug?
Yes, I worried about this myself for some time. Ultimately I decided that terrorist organizations already know about this method and it is being widely discussed in the media, so the number of potentially dangerous people who would hear about it here first is comparatively low. Further, this method is primarily suited towards indiscriminate attacks, which I am somewhat less worried about compared to alternatives.
Only if (a) terrorists tend to read what I consider to be fairly intellectual content or (b) they google around for meta-strategies. I rate (a) very unlikely and (b) as well, since as this post shows, they can't even be bothered to google around for good terrorism methods.
I think I'm mostly persuaded on this, although I think the direct-links issue could still be problematic if a sufficiently funny/clickbaity version of this post got shared around a bunch, i.e. the way Scott Alexander articles sometimes go mainstream. (I think this post is not super likely to have that happen to it, part of the reason I've dialed down my concern)
(rambly thoughts about my interior thought process incoming)
So, I don't endorse the actual algorithm I was running here (i.e. "notice dangerous information --> speak out about dangerous information" rather than "make even a crude attempt to reflect on the overall stakes", which I do think I should do more often).
I think the algorithm Davis followed was basically correct (as I understand it, "start writing a post on dangerous info --> reflect on overall risk of using a particular example / check for less dangerous examples --> publish article with less dangerous examples and/or decide the risk is acceptable")
It's particularly salient to me that Ziz is correct to call me out here, because I had recently noticed an inconsistency in myself: If I saw someone make a dangerous-seeming decision, and they had already double-checked their reasoning, and then triple-checked their reasoning by seeking out someone with different priors... I would probably demand that they quadruple-check their reasoning.
Which is maybe fine, except that if they had only double-checked their math... I'm aware that I'd be satisfied if I demanded that they triple-check it. And if they had quadruple-checked it, I'd probably demand that they check it a 5th time.
I lean towards "it's better to have this algorithm than not to have it, to make sure people are double-checking their dangerous decisions at all, but it's definitely better to actually have a principled take on how much danger is reasonable."
And this post was the first instance of me running into this behavior pattern since reflecting on it.
That all said...
In this particular post, which is literally about being careful with information hazards, which includes a potential information hazard... it seems sort of amiss to not at least address where to draw the line?
I think you are very unusual in not having heard of the truck strategy, and in particular that anybody interested in terrorism is especially likely to have heard of it.
Fair. (I'd want a bit more evidence than one person's say-so, but I've also already walked back my overall position a bit from the original point)
I wonder if, instead of trying to suppress ideas (which is almost impossible), we should instead try to drown effective ideas in less-effective ones. Publish TONS of ideas, the vast majority of which are less likely to "work" than tried-and-true techniques.
"If you REALLY want to make a splash, attacking a police station with a plastic fork is definitely the way! You're almost guaranteed lots of reporting and press just for the attempt."
While the late 19th and early 20th century anarchist attackers often wanted to target the well-off (and indeed carried out many assassination attempts in service of this goal), they weren't averse to making indiscriminate attacks as long as the target was vaguely upper-class - consider the Cafe Terminus attack or the Galleanist Wall Street bombing, which were indiscriminate in nature.
Similarly, the anarchist doctrine of "propaganda of the deed" held that attacks would break down the state's monopoly on violence and show the people that revolution was possible, and as such the attacks were valuable simply as demonstrations, even if they did not kill their intended targets; the 1919 Galleanist bombings, while notionally assassination attempts against various powerful figures, killed only a night watchman and blew a servant's hands off, but were still considered blows struck for anarchy.
My sense is that Galleani and his followers would have been quite happy to crash vehicles into crowds of people, especially in financial or government districts, but they didn't much realize it was an option.
Hmm. I don't know anything about Galleani, but wanting to inspire the masses to action via "propaganda of the deed" seems incompatible with directly terrorizing the masses? (Excuses about "collateral damage" aside.)
It seems like this might have something to do with tribalism: who do the terrorists consider "us" versus "them"?
Interesting post, and I'm sure "not having thought of it" helps explain the recency of vehicular attacks (though see the comment from /r/CronoDAS questioning the premise that they are as recent as they may seem).
Another factor: Other attractive methods, previously easy, are now harder--lowering the opportunity cost of a vehicular attack. For example, increased surveillance has made carefully coordinated attacks harder. And perhaps stricter regulations have made it harder to obtain bomb-making materials or disease agents.
This also helps to explain the apparent geographical distribution of vehicle attacks: more common in Europe and Canada than in the United States, especially per capita. Alternative ways to kill many people, like with a gun, are much easier to come by in the US.
Yet another explanation: perhaps the reason terrorist behavior doesn't appear to maximize damage or terror is that much terrorism is not intended to do so. My favorite piece arguing this is from Gwern:
While I have no definitive proof of this, it would seem to me that there have actually been advancements in automobiles that have increased their lethality - mainly more powerful and efficient engines, and other refinements in design that make them more robust, easier to use, and better-handling, increasing their utility as a weapon. I realize this is a bit vague, I am not an engineer, but it would seem to me that if I had to choose between a truck produced in the first half of the 20th century or a modern model, the former would likely not be my first choice for a vehicular assault. Of course this is not to say that the changes are prohibitive for their use, just that it seems to me that there probably have been some relevant changes.
Secondly, I am not sure the actual problem is simply the idea. Take the case of tampering with medication. You point out that regulations were put in place afterwards that made it harder to tamper with medication. However, this makes it seem that the subsequent lack of tampering cases may be linked to this, when it is more likely that the first case was simply an outlier. In the past, especially in America, there has been a long history of completely unregulated drug markets, and dubious cures sold out of the back of carts. Many products such as soda often started out as questionable cure-alls. For instance, Coca-Cola was made with actual coca (the plant cocaine alkaloids are derived from) and sold as a cure for headaches. Other drug-laced, or even mildly toxic, products were often sold to the public in great quantities. As such, tampering with medications and tricking people into consuming poisons has always been possible for anyone with the will to do it. In fact, it has been quite easy. This holds true not just for medications, but for many consumables that change hands several times on the way to the consumer.
I think the limiting factor here is not the idea, nor the accessibility of means to inflict harm, but instead the lack of will to kill indiscriminately on the part of most humans. Even during wars, I have read, it is common for most soldiers to deliberately miss the enemy (Dave Grossman's On Killing, although I think some of the claims in that book may have been called into question by more recent research). I would guess that most of these attacks are unified more by the fact that they were committed spur of the moment by individuals with rare combinations of mental traits and social environments. It is the rarity of such people, whom the idea enables, which is crucial. Further, I would say that the idea doesn't cause these people to act violently; it instead adds to their repertoire of possible violent actions, from which they would likely choose either way.
Of course, that is not to say that meme transmission has little to do with expressions of violence; it is of course of utmost importance in shaping most of our behavior. I simply do not believe it is the factor of importance in this case.
Finally, I just want to say that it strikes me that the idea of "dangerous information" is similar to the idea of computer security by obfuscation. The problem with "security by obfuscation" is that it often also impedes solutions to various attacks; it only works if your opponent has no way of guessing how to defeat your security. If I recall correctly, that was the justification for releasing the genetic sequence from the research on what mutations could cause the avian flu virus to go airborne. While it is also true that this could enable someone to induce the same change into the flu virus, it was judged more important for the scientific community to understand the genetics of pathology. Even if it hadn't been released, it would not have taken much for any similarly equipped bio-weapons research lab to turn out the same result, creating just as much, if not more, danger than public release. As such, I tend not to agree with trying to withhold information, unless it can be reasonably established that the information is oddly asymmetric in its costs and dangers (if some mad genius figures out how to MacGyver a nuke out of chewing gum, baking soda, and bits of string, that would certainly be worth keeping secret).
Well, anyway, sorry for the long-winded rambling post; just my two pence.
For instance, Coca-Cola was made with actual coca (the plant cocaine alkaloids are derived from) and sold as a cure for headaches.
Coca tea is still in use in parts of South America. I've been told it isn't really comparable to cocaine. Wikipedia is under the impression that there's about 6x as much cocaine in a line as in a cup of coca tea.
I've never had coca tea, but I can buy that doing cocaine is a little like what snorting 600mg of pure caffeine would be like for someone with no prior exposure to caffeine. (I don't recommend either at all.)
How much cocaine was in the original Coca-Cola recipe? Allegedly, the original recipe had 3 drams coca extract to 2.5 gallons of water, whatever that means.
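(To put rough numbers on "whatever that means", here's a minimal conversion sketch; it's ambiguous which kind of dram is intended, so both common readings are shown, and either way it only tells you the amount of extract, not of cocaine itself.)

```python
# Converting "3 drams coca extract to 2.5 gallons of water" into metric units.
# Assumption: it's unclear whether "dram" means a US fluid dram (volume) or an
# apothecaries' dram (mass), so both readings are shown.

US_GALLON_L = 3.785411784      # liters per US gallon
FLUID_DRAM_ML = 3.6967         # mL per US fluid dram
APOTHECARY_DRAM_G = 3.8879     # grams per apothecaries' dram

water_l = 2.5 * US_GALLON_L    # ~9.46 L of water

print(f"Water: {water_l:.2f} L")
print(f"As fluid drams:      {3 * FLUID_DRAM_ML:.1f} mL of extract "
      f"(~{3 * FLUID_DRAM_ML / water_l:.2f} mL per liter)")
print(f"As apothecary drams: {3 * APOTHECARY_DRAM_G:.1f} g of extract "
      f"(~{3 * APOTHECARY_DRAM_G / water_l:.2f} g per liter)")

# Note: this only converts the quantity of *extract*; the cocaine content
# depends on the (unknown) alkaloid concentration of that extract.
```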
There is still coca extract used in Coca-Cola, but the psychoactive ingredient in the coca leaf is removed from the extract. (Cocaine still sees legitimate medical use as a local anesthetic, by the way.)
"unless it can be reasonably established that the information is oddly asymmetric"
How often do you think these ideas come along? Defining the danger of an idea by logs of deaths per incident (the upper bound of a typical event caused by 'a few' people), and the frequency of generation by an annual (decadal? monthly?) rate.
This seems to me analogous to keeping a good startup idea under wraps, but for the case of disutility.
However, violence in general and warfare in particular are more controlled subjects of expertise than anything commercial. It would be much more difficult to quietly gather information to check your suspicions, consult experts, etc. In many cases steps taken to verify whether you have a dangerous idea could be considered a crime.
Fortunately, people in general are powerfully unlikely to do anything to cause a lot of harm on purpose, so we lose very little by keeping this kind of thing to ourselves.
Further, I would generally say that the types of people who make attacks are cunning but unimaginative.
Why? Up until this point I thought the crux of your argument was that people who would commit an attack are generally unintelligent. Why would they have cunning specifically but no imagination?
How would you suggest disclosing a novel x risk, or a novel scheme for an AGI that the originator believes has a reasonable chance of succeeding?
Well, is there anything that can be done to stop the x-risk? If there is, maybe tell the people who are best positioned to stop it. Re: the AGI thing, is it a scheme that could plausibly be made friendly? If yes, maybe tell people who are working on friendliness/work on making it friendly yourself. If no, try to steal the core idea and use it as the basis for an FAI I guess? Or just forget about it.
An extremely low number of deaths is due to terrorist attacks (https://i.redd.it/5sq16d2moso01.gif, https://owenshen24.github.io/charting-death/), so this is not important, and people should care about such things less.
This is not a good argument against caring about terrorism; I wrote a blog post about this but it seems to be frequently misunderstood so it's probably not very good.
This basically means that the attacks terrorists use today have the feature that very few people die as a result. That makes it more important, not less, to avoid spreading information about methods that actually kill a lot of people.
It is evidence, however, that terrorists aren't trying (or aren't JUST trying) to kill a lot of people. They're trying to do something else, and that something else could well mean that ideas aren't the bottleneck, and we shouldn't worry about trying to keep ideas for killing a lot of people suppressed.
If it were just about killing people, I think Mao still holds the record, with "ideology-driven central government planning" as the most effective mechanism. I still strongly believe we're better off publishing the effectiveness of that than suppressing the knowledge.
Well, if you want to go into details, Mao had a lot of help and he didn't explicitly plan to cause the famines. The person who personally killed the most people that he specifically intended to kill is the guy who dropped the atomic bomb on Hiroshima.
Recently, there has been an alarming development in the field of terrorist attacks; more and more terrorists seem to be committing attacks via crashing vehicles, often large trucks, into crowds of people. This method has several advantages for an attacker - it is very easy to obtain a vehicle, it is very difficult for police to protect against this sort of attack, and it does not particularly require special training on the part of the attacker.
While these attacks are an unwelcome development, I would like to propose an even more worrisome question - why didn't this happen sooner?
I see no reason to believe that there has been any particular technological development that has caused this method to become prevalent recently; trucks have been in mass production for over a hundred years. Similarly, terrorism itself is not particularly new - just look to the anarchist attacks of the late 19th and early 20th century. Why, then, weren't truck attacks being made earlier?
The answer, I think, is both simple and frightening. The types of people who make attacks hadn't thought of it yet. The main obstacle to these attacks was psychological and intellectual, not physical, and once attackers realized these methods were effective the number of attacks of this sort began increasing. If the Galleanists had realized this attack method was available, they might well have done it back in '21 -- but they didn't, and indeed nobody motivated to carry out these attacks seemed to until much later.
Another instance - though one with less lasting harm - pertains to Tylenol. In 1982, a criminal with unknown motives tampered with several Tylenol bottles, poisoning the capsules with cyanide and then replacing them on store shelves. Seven people died in the original attack, which caused a mass panic to the point where police cars were sent to drive down the streets broadcasting warnings against Tylenol from their loudspeakers; more people still were killed in later "copycat" crimes.
In this case, there was a better solution than with the truck rammings - in the aftermath of these events, greatly increased packaging security was put into place for over-the-counter medications. Capsules (which are comparatively easy to adulterate) fell out of favor somewhat in favor of tablets; further, pharmaceutical companies began putting tamper-resistant seals on their products and the government made product tampering a federal offense. Such attacks are now much harder to commit.
However, the core question remains - why was it that it took until 1982 for there to be a public attack like this, and then there were many more (TIME claims hundreds!) in short succession? The types of people who make attacks hadn't thought of it yet. Once the first attack and the panic around it exposed this vulnerability, opportunistic attackers carried out their own plans, and swift action suddenly became necessary - swift action to close a security hole that had been open for years and years!
One practical implication of this phenomenon is quite worrisome - one must be very careful to avoid accidentally spreading dangerous information. If the main constraint on an attack vector can really just be that the types of people who make attacks haven't thought of it yet, it's very important to avoid spreading knowledge of potential ways in which we're vulnerable to these attacks - you might wind up giving the wrong person dangerous ideas!
Many otherwise analytical or strategic thinkers that I have encountered seem to fall prey to the typical mind fallacy in these cases, assuming that others will also have put thought into these things and thus that there's no real risk in discussing them - after all, these methods are "obvious" or even "publicly known". Certainly I have made this mistake myself before!
However, what is "publicly known" in some book or white paper somewhere may only be practically known by a few people. Openly discussing such matters, especially online, risks many more people seeing it than otherwise would. Further, I would generally say that the types of people who make attacks are cunning but unimaginative. They are able to execute existing plans fairly effectively, but are comparatively unlikely to come up with novel methods. This means that there's extra reason to be wary that you might have come up with something they haven't.
Thus, when dealing with potentially dangerous information, care should be taken to prevent it from spreading. That doesn't, of course, mean that you can't talk these matters over with trusted colleagues or study to help prepare defenses and solve vulnerabilities - but it does mean that you should be careful when doing so.
As strange as it seems, it is very possible that the only reason things haven't gone wrong in just the way you're thinking of is that dangerous people haven't thought of it yet - and if so, you don't want to be the one giving them ideas!
Author's note: Sincere thanks to those who assisted me with this post; their assistance has made it safer and more compelling.