From a review of Greg Egan's new book, Zendegi:
Egan has always had difficulty in portraying characters whose views he disagrees with. They always end up seeming like puppets or strawmen, pure mouthpieces for a viewpoint. And this causes trouble in another strand of Zendegi, which is a mildly satirical look at transhumanism. Now you can satirize by nastiness, or by mockery, but Egan is too nice for the former, and not accurate enough at mimicry for the latter. It ends up being a bit feeble, and the targets are not likely to be much hurt.
Who are the targets of Egan’s satire? Well, here’s one of them, appealing to Nasim to upload him:
“I’m Nate Caplan.” He offered her his hand, and she shook it. In response to her sustained look of puzzlement he added, “My IQ is one hundred and sixty. I’m in perfect physical and mental health. And I can pay you half a million dollars right now, any way you want it. [...] when you’ve got the bugs ironed out, I want to be the first. When you start recording full synaptic details and scanning whole brains in high resolution—” [...] “You can always reach me through my blog,” he panted. “Overpowering Falsehood dot com, the number one site for rational thinking about the future—”
(We’re supposed, I think, to contrast Caplan’s goal of personal survival with Martin’s goal of bringing up his son.)
“Overpowering Falsehood dot com” is transparently overcomingbias.com, a blog set up by Robin Hanson of the Future of Humanity Institute and Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence. Which is ironic, because Yudkowsky is Egan’s biggest fan: “Permutation City [...] is simply the best science-fiction book ever written” and his thoughts on transhumanism were strongly influenced by Egan: “Diaspora [...] affected my entire train of thought about the Singularity.”
Another transhumanist group is the “Benign Superintelligence Bootstrap Project”—the name references Yudkowsky’s idea of “Friendly AI” and the description references Yudkowsky’s argument that recursive self-optimization could rapidly propel an AI to superintelligence. From Zendegi:
“Their aim is to build an artificial intelligence capable of such exquisite powers of self-analysis that it will design and construct its own successor, which will be armed with superior versions of all the skills the original possessed. The successor will produce a still more proficient third version, and so on, leading to a cascade of exponentially increasing abilities. Once this process is set in motion, within weeks—perhaps within hours—a being of truly God-like powers will emerge.”
Egan portrays the Bootstrap Project as a (possibly self-deluding, it’s not clear) confidence trick. The Project persuades a billionaire to donate his fortune to them in the hope that the “being of truly God-like powers” will grant him immortality come the Singularity. He dies disappointed and the Project “turn[s] five billion dollars into nothing but padded salaries and empty verbiage”.
(Original pointer via Kobayashi; Risto Saarelma found the review. I thought this was worthy of a separate thread.)
How about one where people destroyed the Internet, burned all books, and killed all academics to impede the dangerous knowledge cut loose by Roko? The preface would explain the downfall of the modern world by this event. The actual story would be set in the year 4110, when the world has not just recovered but invented advanced AI and many of the other technologies we dream about today.

The plot would follow a team of AI-supported cyborg archaeologists on Mars who discover an old human artifact from the 2020s: some kind of primitive machine that could be controlled from afar to move over the surface of Mars. When they tap its internal storage, they are shocked. It appears that the last upload from Earth was all the information associated with the infamous Roko incident, which led to the self-inflicted destruction of the first technological civilisation over 2000 years ago. The archaeologists know only the name of the incident that led a group of people to destroy civilised society. But there's a video too! A Chinese-looking man can be seen, panic in his eyes and loud explosions in the background. Apparently an SIAI assault team is trying to take out his facility, as a repeated message comes in over a receiver: "We are the SIAI. Resistance is futile..." He explains that he is going to upload all the relevant data to let the future know it was all for nothing... then the video suddenly ends. Long-established containment measures are instantly taken to sandbox the data for further analysis.

The epilogue would tell how people are aghast that the ancients destroyed their civilisation over such blatant nonsense. How could anyone have taken those ideas seriously, when every kid knows that a hard takeoff isn't possible, since there can only be a gradual development of artificial intelligence, and that any technological civilisation merges with its machines rather than being ruled by them?
Even worse, the ancients had absolutely no reason to believe that creating intelligences with incentives broad enough to allow for the urge to evolve is something that could easily happen by accident; now people know that such intelligence has to grow, and requires the cooperation of the world beyond itself. And the moral of the story would be that the real risk is taking mere ideas too seriously!
Maybe I'm generalizing from one example here, but every time I've imagined a fictional scenario where something I felt strongly about escalated implausibly into warfare, I've later realized that it was a symptom of an affective death spiral, and the whole thing was extremely silly.
That's not to say a short story about a war triggered by supposedly-but-not-actually dangerous knowledge couldn't work. But it would work better if the details of the knowledge in question were optimized for the needs of the story, which would mean it'd have to be fictional.