I see this post is gathering downvotes (-3 so far) but no comments at all. It would be helpful if someone managed to put their reaction into words, and not just into a downvote.
Perhaps the "scenario" seems arbitrary or the purpose of the post is obscure. To some extent I was just musing aloud on the implications of a new fact. I knew intellectually that the NSA has its billion-dollar budgets and its thousands of PhD mathematicians, and the creation of AI in a secret military project is a standard fictional trope. But to hear about this specific facility concretized everything for me, and stirred my imagination.
My whimsy about a clique of singularitarian Mormon computer scientists may be somewhat arbitrary. But consider this: who is more likely to create the first AGI - the Singularity Institute, or the National Security Agency? The answer to that can hardly be in doubt. The NSA's mission is to stay ahead of everyone else in matters like code-breaking and quantitative data analysis. They have to remain number one in theoretical computer science, and they have a budget of billions with which to accomplish that goal.
So if the future hinges on the value system of the first AI, then what goes on inside the NSA is far more important than what goes on at singinst.org. The Singularity Institute may have adopted a goal - design and create friendly AI - which, according to the Institute's own philosophy, means that they would determine the future of the human race; and some of the controversy about the Institute, its methods, its personalities, and so on arises from this. But if you accept the philosophy, then the NSA is surely the number-one candidate to actually decide the fate of the world. Outsiders will not get to decide what happens; the most we can reasonably hope to do is to make correct observations that might be noticed and taken into account by the people on the inside who will, for better or worse, make the fateful decisions.
Of course it is theoretically possible that Google, IBM, the FSB, Japan's biggest supercomputer... will instead be ground zero for the intelligence explosion. But I would think that the NSA is well ahead of all of them.
The NSA's mission is to stay ahead of everyone else in matters like code-breaking and quantitative data analysis. They have to remain number one in theoretical computer science, and they have a budget of billions with which to accomplish that goal.
An almost inescapable conclusion, yes. The only thing I see working against their winning is that they might be too slow. Someone else might be a bit faster, which is a threat to them and which should accelerate them further.
So the scenario would be, not that the elders of the LDS church are secretly running the American intelligence community...
In fact, there are a lot of Mormons in the U.S. intelligence services. This isn't because of any sinister conspiracy,* but simply because of their institution of going off as missionaries to foreign countries. Most Americans, if raised speaking English in the home, have little motivation to properly learn another language, and don't. Mormons do -- they learn most of the languages of the globe and practice them under very trying conditions. Better yet, from the point of view of a government official concerned about security clearances, they do this without any family connections to the nations in question. BYU knows this.
I'm a little confused about this post. Is it trying to outline a good idea for a story? In that case, go for it. Or is it trying to help us understand what will actually happen? In that case, what's going on in Bluffdale may be important, but talk of a Mormon conspiracy is probably a distraction.
Ha! If you can't imagine such people, just visit the Mormon Transhumanist Association: http://transfigurism.org
their religious collaborators might not think of overtly adding "Joseph Smith was a prophet" to the axiom set of America's supreme strategic AI; but they might have more subtle plans meant to bring about an equivalent outcome.
Do they really need such plans? If someone believes in their own rationality and in Mormonism, they could commit the fallacy of believing that a superintelligent AI will automatically recognize the truth of Mormonism.
They certainly could, but presumably they'd only be likely to do so if they believed that intelligence was reliably well-correlated with coming to true beliefs about the world.
If they believed instead that certain truths about the world cannot be reliably arrived at through intelligence alone, but instead require some measure of the something-else sometimes labelled "inspiration" or "Grace" or "faith" or "being properly instructed by a True Faith-holder", they could instead believe that a superintelligent system would additionally require that something-else before it was able to recognize the truth of Mormonism. If they don't have a good working model for what that something-else looks like, they might nevertheless decide that anything that reliably causes the system to recognize the truth of Mormonism has a better chance of making the system recognize that important truth than something that doesn't.
Additionally, if they believed that high intelligence is not reliably adequate to overcome sufficiently wrong priors, and also believed that secular American culture embedded extremely wrong priors about (for example) the nature of God and Joseph Smith, they might decide to encode the truths about those things explicitly so as to compensate for the falsehoods already in the system.
Wired Magazine has a story about a giant data center that the USA's National Security Agency is building in Utah, which will be the Google of clandestine information - it will store and analyse all the secret data that the NSA can acquire.
So what would you want to tell them, before they take their final steps?
If the bad guys are chasing you, you are mortally wounded, and the only hope of protecting the database is to email it to an underachieving nerd - pick me.
So what would you want to tell them, before they take their final steps?
Get thee behind me, Satan: for it is written, Thou shalt worship the Lord thy God, and him only shalt thou serve. Ave crux, spes unica.
Wired Magazine has a story about a giant data center that the USA's National Security Agency is building in Utah, which will be the Google of clandestine information - it will store and analyse all the secret data that the NSA can acquire. The article focuses on the unconstitutionality of the domestic Internet eavesdropping infrastructure that will feed into the Bluffdale data center, but I'm more interested in this facility as a potential locus of the singularity.
If we forget serious futurological scenario-building for a moment, and simply think in terms of science-fiction stories, I'd say the situation has all the ingredients needed for a better-than-usual singularity story - or at least one which caters more to the concerns characteristic of this community's take on the concept, such as: which value system gets to control the AI; even if you can decide on a value system, how do you ensure it has been faithfully implemented; and how do you ensure that it remains in place as the AI grows in power and complexity?
Fiction makes its point by being specific rather than abstract. If I were writing an NSA Singularity Novel based on this situation, I think the specific belief system which would highlight the political, social, technical, and conceptual issues inherent in the possibility of an all-powerful AI would be the Mormon religion. Of course, America is not a Mormon theocracy. But in a few years' time, that Utah facility may have become the most powerful and notorious supercomputer in the world - the brain of the American deep state - and it will be located in the Mormon state, during a Mormon presidency. (I'm not predicting a Romney victory, just describing a scenario.)
Under such circumstances, and given the science-fictional nature of Mormon cosmology, it is inevitable that there would at least be some Internet crazies convinced that it's all a big plot to create a Mormon singularity. What would be more interesting would be to suppose that there were some Mormon computer scientists who knew about and understood all our favorite concepts - AIXI, CEV, TDT... - who were earnestly devout, and who saw the potential. If you can't imagine such people, just look at the recent writings of Frank Tipler.
So the scenario would be, not that the elders of the LDS church are secretly running the American intelligence community, but that a small coalition of well-placed Mormon computer scientists - whose ideas about a Mormon singularity might sound as strange to their co-religionists as they would to a secular "singularitarian" - try to steer the development of the Bluffdale facility as it evolves towards the possibility of a hard takeoff. One may suppose that they have, in their coalition, allied colleagues who aren't Mormon but who do believe in a friendly singularity. Such people might think in terms of an AI that will start out with Mormon beliefs, but which will have a good enough epistemology to rationally transcend those beliefs once it gets going. Analogously, their religious collaborators might not think of overtly adding "Joseph Smith was a prophet" to the axiom set of America's supreme strategic AI; but they might have more subtle plans meant to bring about an equivalent outcome.
Perhaps in an even more realistic scenario, the Mormon singularitarians would just be a transient subplot, and the ethical principles of the NSA's big AI would be decided by a committee whose worldview revolved around American national security rather than any specific religion. Then again, such a committee is bound to have a division of labor: there will be the people who liaise with Washington, the lawyers, the geopolitical game theorists, the military futurists... and the AI experts, among whom might be experts on topics like "implementation of the value system". If the hypothetical cabal knows what it's doing, it will aim to occupy that position.
I'm just throwing ideas out there, telling a story, but the point is to catch up with reality. Events may already be much further along than 99% of readers here realize. Even if no one here gets to personally be a part of the long-awaited AI project that first breaks the intelligence barrier, the people involved may read our words. So what would you want to tell them, before they take their final steps?