In general and across all instances I can think of so far, I do not agree with the part of your futurological forecast in which you reason, "After event W happens, everyone will see the truth of proposition X, leading them to endorse Y and agree with me about policy decision Z."
Example 1: "After a 2-year-old mouse is rejuvenated to allow 3 years of additional life, society will realize that human rejuvenation is possible, turn against deathism as the prospect of lifespan / healthspan extension starts to seem real, and demand a huge Manhattan Project to get it done." (EDIT: This has not happened, and the hypothetical is mouse healthspan extension, not anything cryonic. It's being cited because this is Aubrey de Grey's reasoning behind the Methuselah Mouse Prize.)
Alternative projection: Some media brouhaha. Lots of bioethicists acting concerned. Discussion dies off after a week. Nobody thinks about it afterward. The rest of society does not reason the same way Aubrey de Grey does.
Example 2: "As AI gets more sophisticated, everyone will realize that real AI is on the way and then they'll start taking Friendly AI development seriously."
Alternative projection: As AI gets more sophisticated, the rest of society can't see any difference between the latest breakthrough reported in a press release and that business earlier with Watson beating Ken Jennings or Deep Blue beating Kasparov; it seems like the same sort of press release to them. The same people who were talking about robot overlords earlier continue to talk about robot overlords. The same people who were talking about human irreproducibility continue to talk about human specialness. Concern is expressed over technological unemployment the same as today or Keynes in 1930, and this is used to fuel someone's previous ideological commitment to a basic income guarantee, inequality reduction, or whatever. The same tiny segment of unusually consequentialist people are concerned about Friendly AI as before. If anyone in the science community does start thinking that superintelligent AI is on the way, they exhibit the same distribution of performance as modern scientists who think it's on the way, e.g. Hugo de Garis, Ben Goertzel, etc.
Consider the situation in macroeconomics. When the Federal Reserve dropped interest rates to nearly zero and started printing money via quantitative easing, we had some people loudly predicting hyperinflation just because the monetary base had, you know, gone up by a factor of 10 or whatever it was. Which is kind of understandable. But still, a lot of mainstream economists (such as the Fed) thought we would not get hyperinflation; the implied spread on inflation-protected Treasuries (unpacked in the sketch below) and numerous other indicators showed that the free market thought we were due for below-trend inflation; and then in actual reality we got below-trend inflation. It's one thing to disagree with economists, and another thing to disagree with implied market forecasts (why aren't you betting, if you really believe?), though you can still do it sometimes; but when conventional economics, market forecasts, and reality all agree on something, it's time to shut up and ask the economists how they knew. I had some credence in inflationary worries before that experience, but not afterward...

So what about the rest of the world? In the heavily scientific community you live in, or if you read econblogs, you will find that a number of people actually have started to worry less about inflation and more about sub-trend nominal GDP growth. You will also find that right now these econblogs are having worry-fits about the Fed prematurely exiting QE and choking off the recovery, because the elderly senior people with power have updated more slowly than the econblogs. And in larger society, if you look at what happens when Congresscritters question Bernanke, you will find that they are all terribly, terribly concerned about inflation. Still. The same as before. Some econblogs are very harsh on Bernanke because the Fed did not print enough money; but when I look at the kind of pressure Bernanke was getting from Congress, he starts to look to me like something of a hero just for following conventional macroeconomics as much as he did.
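(To unpack "implied spread": a TIPS bond pays a real yield, so the gap between the nominal Treasury yield and the TIPS yield at the same maturity is the market's implied inflation forecast, the "breakeven" rate. A minimal sketch, with illustrative numbers that are assumptions rather than the actual yields of the period:)

\[
\pi_{\text{breakeven}} \;\approx\; y_{\text{nominal}} - y_{\text{TIPS}},
\qquad \text{e.g.}\ 2.0\% - 0.2\% = 1.8\%,\ \text{below a}\ {\sim}2\%\ \text{trend.}
\]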
That issue is a hell of a lot more clear-cut than the medical science for human rejuvenation, which in turn is far more clear-cut ethically and policy-wise than issues in AI.
After event W happens, a few more relatively young scientists will see the truth of proposition X, and the larger society won't be able to tell a damn difference. This won't change the situation very much; there are probably already some scientists who endorse X, since X is probably pretty predictable even today if you're unbiased. The scientists who see the truth of X won't all rush to endorse Y, any more than current scientists who take X seriously all rush to endorse Y. As for people in power lining up behind your preferred policy option Z, forget it; they're old and set in their ways, and Z is relatively novel without a large existing constituency favoring it. Expect W to be used as argument fodder to support conventional policy options that already have political force behind them, and for Z to not even be on the table.
I don't find either example convincing about the general point. Since I'm stupid, I'll fail to spot that the mouse example uses fictional evidence and is best ignored, and respond to it anyway.
We are all pretty sick of seeing a headline like "Cure for Alzheimer's disease!!!" and clicking through to the article, only to find that the disease is cured in mice: knock-out mice with a missing gene, and therefore suffering from a disease only a little like human Alzheimer's. The treatment turns out to be injecting them with the protein that the missing gene codes for. Relevance to human health: zero.
Mice are very short-lived. We expect big boosts in mouse life span to come from invoking mechanisms already present in humans, mechanisms that are already working to give humans much longer life spans than mice have. So we don't expect big boosts in the life span of mice to herald very much for human health. Cats would be different. If pet cats started living 34 years instead of 17, their owners would certainly be saying "I want what Felix is getting."
The sophistication of AI is a tricky thing to measure. I think that we are safe from unfriendly AI for a few years yet, not so much because humans suck at programming computers, but because they suck in a particular way. Some humans can sit at a keyboard typing in hundreds of thousands of lines of code specific to a particular challenge and achieve great things. We can call that sophistication if we like, but it isn't going to go foom. The next big challenge requires a repeat of the heroic effort and generates another big pile of worn-out keyboards. We suck at programming in the sense that we need to spend years typing in the code ourselves; we cannot write code that writes code.
Original visions of AI imagined a positronic brain in an anthropomorphic body. The robot could drive a car, play a violin, cook dinner, and beat you at chess. It was general purpose.
If one saw the distinction between special purpose and general purpose as the key issue, one might wonder: what would failure look like? I think the original vision would fail if one had separate robots: one for driving cars and flying airplanes, a second for playing musical instruments, a third to cook and clean, and a fourth to play games such as chess, bridge, and baduk.
We have separate hand-crafted computer programs for chess and bridge and baduk: not even one game-playing robot, but one program per game. That is worse than failure.
Examples the other way:
After the Wright brothers, people did believe in powered, heavier-than-air flight. Aircraft development really took off after that. One crappy little hop in the most favourable weather, and suddenly everyone's a believer.
Sputnik. Dreamers had been building rockets since the 1930s, and being laughed at. The German V2 was no laughing matter, but it was designed to crash into the ground and destroy things, which put an ugh field around thinking about what it meant. Then comes 1957. Beep, beep, beep! Suddenly everyone's a believer, and twelve years later Buzz Aldrin and the other guy are standing on the moon :-)
The Battle of Cambrai provides two examples of people "getting it". First, people understood before the end of 1914 that the day of the horse-mounted cavalry charge was over. The Hussites had war wagons in 1420, so there was a long history of rejecting that kind of technology. But after event W1 (machine guns and barbed wire defeating horses), it took only three years before the first tank-mounted cavalry charge. I think we tend to misunderstand this by measuring time in lives lost rather than in years. Yes, the adoption of armoured tanks was very slow if you count the delay in lives, but it couldn't have come much faster in months.
The second point is that First World War tanks were crap. The Cambrai salient was abandoned. The tanks were slow and always broke down, because they were too heavy and yet the armour was merely bullet-proof. Their only protection against artillery was that the gun-laying techniques of the time were ill suited to moving targets. The deployment of tanks in the First World War falls short of being the critical event W. One would expect the horrors of trench warfare to fade and military doctrine to go back to horses and charges in brightly coloured uniforms.
In reality, the disappointing performance of the tanks didn't cause military thinkers to miss their significance. Governments did believe, and developed doctrines of Blitzkrieg and cruiser tanks. Even a weak W can turn everyone into believers.