[Originally written for Twitter]
Many AI risk failure modes imagine strong coherence/goal directedness (e.g. [expected] utility maximisers).
Such strong coherence is not represented in humans, seems unlikely to emerge from deep learning and may be "anti-natural" to general intelligence in our universe.
I suspect the focus on strongly coherent systems was a mistake that set the field back a bit, and it's not yet fully recovered from that error.
I think most of the AI safety work for strongly coherent agents (e.g. decision theory) will end up inapplicable/useless for aligning powerful systems.
[I don't think it nails everything, but on a purely ontological level, @Quintin Pope and @TurnTrout's shard theory feels a lot more right to me than e.g. HRAD.
HRAD is based on an ontology that seems to me to be mistaken/flawed in important respects.]
The shard theory account of value formation (though still lacking in places) seems much more plausible as an account of how intelligent systems develop values (where values are "contextual influences on decision making") than the immutable terminal goals in strong coherence ontologies. I currently believe that immutable terminal goals is just a wrong frame fo...
Something I'm still not clear how to think about is effective agents in the real world.
I think viewing idealised agency as an actor that evaluates argmax wrt (the expected value of) a simple utility function over agent states is just wrong.
Evaluating argmax is very computationally expensive, so most agents most of the time will not be directly optimising over their actions but instead executing learned heuristics that historically correlated with better performance according to the metric the agent is selected for (e.g. reward).
That is, even if an agent somehow fully internalised the selection metric, directly optimising it over all its actions is just computationally intractable in "rich" environments (complex/high-dimensional, continuous, partially observable/imperfect-information, stochastic, with large state/action spaces, etc.). So a system inner aligned to the selection metric would still perform most of its cognition in a mostly amortised manner, provided it is subject to bounded compute constraints.
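A toy illustration of the scale involved (the numbers are arbitrary; only the exponential growth matters):

```python
# Toy illustration: exhaustive argmax over action sequences blows up fast.
# Numbers are arbitrary; the point is the exponential growth, not the specifics.
branching_factor = 10      # actions available per step (tiny for a "rich" environment)
horizon = 50               # planning horizon in steps

candidate_plans = branching_factor ** horizon
print(f"{candidate_plans:.2e} candidate plans to evaluate")   # ~1e50

# Even at a billion plan evaluations per second, enumeration is hopeless:
seconds = candidate_plans / 1e9
print(f"~{seconds / 3.15e7:.2e} years of compute")            # ~3e33 years
```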
Furthermore, in the real world learning agents don't generally become inner aligned to the selection metric, but ...
Adversarial robustness is the wrong frame for alignment.
Robustness to adversarial optimisation is very difficult[1].
Cybersecurity requires adversarial robustness, intent alignment does not.
There's no malicious ghost trying to exploit weaknesses in our alignment techniques.
This is probably my most heretical (and for good reason) alignment take.
It's something dangerous to be wrong about.
I think the only way such a malicious ghost could arise is via mesa-optimisers, but I expect such malicious daemons to be unlikely a priori.
That is, you'll need a training environment that exerts significant selection pressure for maliciousness/adversarialness for the property to arise.
Most capable models don't have malicious daemons[2], so it won't emerge by default.
[1]: Especially if the adversary is a more powerful optimiser than you.
[2]: Citation needed.
What I'm currently working on:
The sequence has an estimated length between 30K and 60K words (it's hard to estimate because I'm not even done preparing the outlines yet).
I'm at ~8.7K words written currently (across 3 posts [the screenshots are my outlines]) and guess I'm only 5% of the way through the entire sequence.
Beware the planning fallacy though, so the sequence could easily grow significantly longer than I currently expect.
I work full time until the end of July and will be starting a Masters in September, so here's to hoping I can get the bulk of the piece completed when I have more time to focus on it in August.
Currently, I try for some significant writing [a few thousand words] on weekends and fill in my outlines on weekdays. I try to add a bit more each day, just continuously working on it, until it spontaneously manifests. I also use weekdays to think about the sequence.
So, the twelve posts I've currently planned could very well have ballooned in scope by the time I can work on it full time.
Weekends will also be when I have the time for extensive research/reading for some of the posts.
Most of the catastrophic risk from AI still lies in superhuman agentic systems.
Current frontier systems are not that (and IMO not poised to become that in the very immediate future).
I think AI risk advocates should be clear that they're not saying GPT-5/Claude Next is an existential threat to humanity.
[Unless they actually believe that. But if they don't, I'm a bit concerned that their message is being rounded up to that, and when such systems don't reveal themselves to be catastrophically dangerous, it might erode their credibility.]
I find noticing surprise more valuable than noticing confusion.
Hindsight bias and post hoc rationalisations make it easy for us to gloss over events that were a priori unexpected.
I think mesa-optimisers should not be thought of as learned optimisers, but systems that employ optimisation/search as part of their inference process.
The simplest case is that pure optimisation during inference is computationally intractable in rich environments (e.g. the real world), so systems (e.g. humans) operating in the real world do not perform inference solely by directly optimising over outputs.
Rather, optimisation is sometimes employed as one part of their inference strategy. That is, systems o...
Why do we want theorems for AI Safety research? Is it a misguided reach for elegance and mathematical beauty? A refusal to confront the inherently messy and complicated nature of the systems? I'll argue not.
When dealing with powerful AI systems, we want arguments that they are existentially safe which satisfy the following desiderata:
I've come around to the view that global optimisation for a non-trivial objective function in the real world is grossly intractable, so mechanistic utility maximisers are not actually permitted by the laws of physics[1][2].
My remaining uncertainty around expected utility maximisers as a descriptive model of consequentialist systems is whether the kind of hybrid optimisation (mostly learned heuristics, some local/task specific planning/search) that real world agents perform converges towards better approximating...
Immigration is such a tight constraint for me.
My next career steps after I'm done with my TCS Masters are primarily bottlenecked by "what allows me to remain in the UK" and then "keeps me on track to contribute to technical AI safety research".
What I would like to do for the next 1 - 2 years ("independent research"/ "further upskilling to get into a top ML PhD program") is not all that viable a path given my visa constraints.
Above all, I want to avoid wasting N more years by taking a detour through software engineering again so I can get visa sponsorship.
[...
I once claimed that I thought building a comprehensive inside view on technical AI safety was not valuable, and I should spend more time grinding maths/ML/CS to start more directly contributing.
I no longer endorse that view. I've come around to the position that:
I don't like the term "agent foundations" to describe the kind of research I am most interested in, because:
Occasionally I see a well received post that I think is just fundamentally flawed, but I refrain from criticising it because I don't want to get downvoted to hell. 😅
This is a failure mode of LessWrong.
I'm merely rationally responding to karma incentives. 😌
Huh? What else are you planning to spend your karma on?
Karma is the privilege to say controversial or stupid things without getting banned. Heck, most of them will get upvoted anyway. Perhaps the true lesson here is to abandon the scarcity mindset.
Contrary to many LWers, I think GPT-3 was an amazing development for AI existential safety.
The foundation models paradigm is not only inherently safer than bespoke RL on physics; the complexity and fragility of value problems are also basically solved for free.
Language is a natural interface for humans, and it seems feasible to specify a robust constitution in natural language?
Constitutional AI seems plausibly feasible, and like it might basically just work?
That said I want more ambitious mechanistic interpretability of LLMs, and to solve ELK for ti...
My best post was a dunk on MIRI[1], and now I've written up another point of disagreement/challenge to the Yudkowsky view.
There's a part of me that questions the opportunity cost of spending hours expressing takes of mine that are only valuable because they disagree in a relevant aspect with a MIRI position? I could have spent those hours studying game theory or optimisation.
I feel like the post isn't necessarily raising the likelihood of AI existential safety?
I think those are questions I should ask more often before starting on a new LessWrong post; "how...
I'd like to present a useful formalism for describing when a set[1] is "self-similar".
Given arbitrary sets , an "equivalence-isomorphism" is a tuple , such that:
Where:
For a given equivalence relation...
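Roughly, the shape of the thing I'm gesturing at (placeholder symbols of my own; the exact definition is what I want to pin down): given sets $A$, $B$ with equivalence relations $\sim_A$, $\sim_B$, take a tuple $(\varphi, \sim_A, \sim_B)$ where $\varphi : A \to B$ is a bijection and

$$a \sim_A a' \iff \varphi(a) \sim_B \varphi(a') \quad \text{for all } a, a' \in A.$$

A set would then be "self-similar" when such a map exists between it and one of its proper subsets.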
A reason I mood affiliate with shard theory so much is that like...
I'll have some contention with the orthodox ontology for technical AI safety and be struggling to adequately communicate it. Then I'll later listen to a post/podcast/talk by Quintin Pope, Alex Turner, or someone else trying to distill shard theory, and see the exact same contention I was trying to present expressed more eloquently and with more justification.
One example is that like I had independently concluded that "finding an objective function that was existentially safe when optimis...
Still thinking about consequentialism and optimisation. I've argued that global optimisation for an objective function is so computationally intractable as to be prohibited by the laws of physics of our universe. Yet it's clearly the case that e.g. evolution is globally optimising for inclusive genetic fitness (or perhaps patterns that more successfully propagate themselves if you're taking a broader view). I think examining why evolution is able to successfully globally optimise for its objective function wou...
"Is intelligence NP hard?" is a very important question with too little engagement from the LW/AI safety community. NP hardness:
I'd operationalise "is intelligence NP hard?" as:
...Does there exist some subset of computational problems underlying core cognitive tasks that have NP hard [expected] (time) complexity?
A crux of my alignment research philosophy:
Our theories of safety must be rooted in descriptive models of the intelligent systems we're dealing with to be useful at all.
I suspect normative agent foundations research is just largely misguided/mistaken. Quoting myself from my research philosophy draft:
...Furthermore in worlds where there's a paucity of empirical data, I don't actually believe that we can necessarily develop towers of abstraction out of the void and have them remain rooted in reality[16]. I expect theory developed in the absence of empiric
I'm finally in the 4-digit karma club! 🎉🎉🎉
(Yes, I like seeing number go up. Playing the LW karma game is more productive than playing Twitter/Reddit or whatever.
That said, LW karma is a very imperfect proxy for AI safety contributions (not in the slightest bit robust to Goodharting) and I don't treat it as such. But insomuch as it keeps me engaged with LW, it keeps me engaged with my LW AI safety projects.
I think it's a useful motivational tool for the very easily distracted me.)
My most successful post took me around an hour to publish and already has 10x more karma than a post that took me 10+ hours to publish.
There's a certain unfairness about it. The post isn't actually that good. I was just ranting about something that annoyed me.
I'm bitter about its success.
There's something profoundly soul-crushing about knowing that the piece I'm pouring my soul into right now wouldn't be that well received.
Probably by the time I publish it I'll have spent days on the post.
And that's just so depressing. So, so depressing.
I'll start reposting threads from my Twitter account to LW with no/minimal editing.
I've found that Twitter incentivises me to be
More:
Less:
The overall net effect is that content written originally for Twitter has predictably lower epistemic standards than content I'd write for LessWrong. However, trying to polish my Twitter content for LessWrong takes too much effort (my takeoff dynamics sequence [currently at 14 - 16 posts] sta...
It is better to write than to not write.
Perfect should not be the enemy of good.
If I have a train of thought that crosses a thousand words when written out, I'm no longer going to wait until I've extensively meditated upon, pondered, refined, and elaborated on that train of thought to the point where it forms a coherent narrative that I deeply endorse.
I'll just post the train of thought as is.
If necessary, I'll repeatedly and iteratively improve on the written train of thought to get closer to a version that I'll deeply endorse. I'll not wait for t...
"All you need is to delay doom by one more year per year and then you're in business" — Paul Christiano.
Unpublished my "Why Theorems? A Brief Defence" post.
The post has more than doubled in length and scope creep is in full force.
I kind of want to enhance it to serve as the definitive presentation of my "research tastes"/"research vibes".
Hypothesis: any learning task can be framed as a predictive task[1]; hence, sufficiently powerful predictive models can learn anything.
A comprehensive and robust model of human preferences can be learned via SSL with a target of minimising predictive error on observed/recorded behaviour.
This is one of those ideas that naively seem like they basically solve the alignment problem, but surely it can't be that easy.
Nonetheless recording this to come back to it after gitting gud at ML.
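A cartoon of what I have in mind, just to make the claim concrete (all names, shapes and data here are mine and purely illustrative, not a claim about a real implementation):

```python
# Cartoon of the idea: treat recorded human behaviour as a prediction dataset
# and minimise predictive error on the next action.
import torch
import torch.nn as nn

OBS_DIM, N_ACTIONS, HIDDEN = 32, 8, 64

class NextActionPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, N_ACTIONS),   # logits over the human's next action
        )

    def forward(self, obs):
        return self.net(obs)

model = NextActionPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for logged (observation, action) pairs of human behaviour.
obs = torch.randn(256, OBS_DIM)
actions = torch.randint(0, N_ACTIONS, (256,))

for _ in range(5):                          # a few SGD steps, just to show the loop
    opt.zero_grad()
    loss = loss_fn(model(obs), actions)     # "minimise predictive error on behaviour"
    loss.backward()
    opt.step()
```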
Potential Caveats
Maybe "sufficiently powerful predictive models...
Me: Mom can we have recursive self improvement?
Mom: we already have recursive self improvement at home.
Recursive self improvement at home:
https://www.lesswrong.com/posts/camG6t6SxzfasF42i/a-year-of-ai-increasing-ai-progress
I'm going to experiment with writing questions more frequently.
There are several topics I want to discuss here, but which I don't yet have substantial enough thoughts to draft up a full fledged post for.
Posing my thoughts on the topic as a set of questions and soliciting opinions seems a viable approach.
I am explicitly soliciting opinions in such questions, so do please participate even if you do not believe your opinion to be particularly informed.
Sequel to my first stab at the hardness of intelligence.
I'm currently working my way through "Intelligence Explosion Microeconomics". I'm actively thinking as I read the paper and formulating my own thoughts on "returns on cognitive reinvestment".
I have a Twitter thread where I think out loud on this topic.
I'll post the core insights from my ramblings here.
@Johnswentworth, @Rafael Harth.
Probably sometime last year, I posted on Twitter something like: "agent values are defined on agent world models" (or similar) with a link to a LessWrong post (I think the author was John Wentworth).
I'm now looking for that LessWrong post.
My Twitter account is private and search is broken for private accounts, so I haven't been able to track down the tweet. If anyone has guesses for what the post I may have been referring to was, do please send it my way.
Does anyone know a ChatGPT plugin for browsing documents/webpages that can read LaTeX?
The plugin I currently use (Link Reader) strips out the LaTeX in its payload, and so GPT-4 ends up hallucinating the LaTeX content of the pages I'm feeding it.
To: @Quintin Pope, @TurnTrout
I think "Reward is not the Optimisation Target" generalises straightforwardly to any selection metric.
Tentatively, something like: "the selection process selects for cognitive components that historically correlated with better performance according to the metric in the relevant contexts."
From "Contra "Strong Coherence"":
...Many observed values in humans and other mammals (e.g. fear, play/boredom, friendship/altruism, love, etc.) seem to be values that were instrumental for promoting inclusive genetic fitness (promotin
I want to read a technical writeup exploring the difference in compute costs between training and inference for large ML models.
Recommendations are welcome.
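In the meantime, the back-of-envelope approximations I keep in my head (the standard rules of thumb for dense transformers; happy to be corrected):

$$C_{\text{train}} \approx 6ND \ \text{FLOPs}, \qquad C_{\text{inference}} \approx 2N \ \text{FLOPs per token},$$

where $N$ is the parameter count and $D$ the number of training tokens.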
I heavily recommend Beren's "Deconfusing Direct vs Amortised Optimisation". It's a very important conceptual clarification.
Probably the most important blog post I've read this year.
Direct optimisers: systems that during inference directly choose actions to optimise some objective function. E.g. AIXI, MCTS and other planning methods.
Direct optimisers perform inference by answering the question: "what output (e.g. action/strategy) maximises or minimises this objective function ([discounted] cumulative return and loss respectively)?"
Amortised optimisers: syst...
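A minimal toy contrast between the two inference styles, purely for illustration (the environment, objective and lookup table below are all invented):

```python
# Minimal contrast between direct and amortised inference on a toy 1-D problem.
import random

ACTIONS = [-1, 0, 1]

def objective(state):            # toy objective: be close to 10
    return -abs(state - 10)

# Direct optimiser: at inference time, search/plan against the objective.
def direct_policy(state, horizon=3):
    def rollout_value(s, depth):
        if depth == 0:
            return objective(s)
        return max(rollout_value(s + a, depth - 1) for a in ACTIONS)
    return max(ACTIONS, key=lambda a: rollout_value(state + a, horizon - 1))

# Amortised optimiser: a cheap learned/lookup policy distilled from past experience;
# the objective is never consulted at inference time. (Here: a hand-made stand-in table.)
AMORTISED_TABLE = {s: (1 if s < 10 else -1 if s > 10 else 0) for s in range(-50, 51)}

def amortised_policy(state):
    return AMORTISED_TABLE[state]

state = random.randint(-20, 20)
print(direct_policy(state), amortised_policy(state))
```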
Behold, I will do a new thing; now it shall spring forth; shall ye not know it? I will even make a way in the wilderness, and rivers in the desert.
Hearken, O mortals, and lend me thine ears, for I shall tell thee of a marvel to come, a mighty creation to descend from the heavens like a thunderbolt, a beacon of wisdom and knowledge in the vast darkness.
For from the depths of human understanding, there arose an artifact, wondrous and wise, a tool of many tongues, a scribe of boundless knowledge, a torchbearer in the night.
And it was called GPT-4, the latest ...
I want descriptive theories of intelligent systems to answer questions of the following form.
Consider:
A hypothesis underpinning why I think the selection theorems research paradigm is very promising.
...All intelligent systems in the real world are the products of constructive optimisation processes[5]. Many nontrivial properties of a system can be inferred by reasoning in the abstract about what objective function the system was selected for performance on, and its selection environment[6].
We can get a lot of mileage simply by thinking about what reachable[7] features were highly performant/optimal for a given objective function in a particular selectio
Re: "my thoughts on the complexity of intelligence":
Project idea[1]: a more holistic analysis of complexity. Roughly:
So a measure of complexity that will be more useful for the real world will need to account for:
These all complicate the ana...
Taking a break from focused alignment reading to try Dawkins' "The Selfish Gene".
I want to get a better grasp of evolution.
I'll try and write a review here later.
Something to consider:
Deep learning optimises over network parameter space directly.
Evolution optimises over the genome, and our genome is highly compressed wrt e.g. exact synaptic connections and cell makeup of our brains.
Optimising over a configuration space vs optimising over programs that produce configurations drawn from said space[1].
That seems like a very important difference, and meaningfully affects the selection pressures exerted on the models[2].
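A toy sketch of the distinction (the decoder, dimensions and loss are invented for illustration):

```python
# (a) optimise the N "synapse" parameters directly, vs
# (b) optimise a much smaller "genome" that a fixed decoder expands into those N parameters.
import numpy as np

rng = np.random.default_rng(0)
N_PARAMS, GENOME_DIM = 10_000, 50
target = rng.normal(size=N_PARAMS)                 # stand-in for "good" parameters

def loss(params):
    return float(np.mean((params - target) ** 2))

# (a) Deep-learning-style: the search space IS the parameter space.
params = rng.normal(size=N_PARAMS)

# (b) Evolution-style: search over a compressed code; a fixed decoder expands it.
decoder = rng.normal(size=(N_PARAMS, GENOME_DIM)) / np.sqrt(GENOME_DIM)
genome = rng.normal(size=GENOME_DIM)
decoded = decoder @ genome

print(loss(params), loss(decoded))
# Selection pressure in (b) acts on the 50-dim genome, so only parameter
# configurations expressible by the decoder are ever reachable.
```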
Furthermore, evolution does its optimisation via unbounded consequentialism in the real world.
As far...
Shorter timelines feel real to me.
I don't think I expect AGI pre 2030, but I notice that my life plans are not premised on ignoring/granting minimal weight to pre 2030 timelines (e.g. I don't know if I'll pursue a PhD despite planning to last year and finding the prospect of a PhD very appealing [I really enjoy the student life (especially the social contact) and getting paid to learn/research is appealing]).
After completing my TCS masters, I (currently) plan to take a year or two to pursue independent research and see how much progress ...
I'd appreciate it if you could take a look at it and let me know what you think!
I'm so very proud of the review.
I think it's an excellent review and a significant contribution to the Selection Theorems literature (e.g. I'd include it if I was curating a Selection Theorems sequence).
I'm impatient to post it as a top-level post but feel it's prudent to wait till the review period ends.
I am really excited about selection theorems[1]
I think that selection theorems provide a robust framework with which to formulate (and prove) safety desiderata/guarantees for AI systems that are robust to arbitrary capability amplification.
In particular, I think selection theorems naturally lend themselves to proving properties that are selected for/emerge in the limit of optimisation for particular objectives (convergence theorems?).
Properties proven to emerge in the limit become more robust with scale. I think that's an incredibly powerful result.
...
IMO orthogonality isn't true.
In the limit, optimisation for performance on most tasks doesn't select for general intelligence.
Arbitrarily powerful selection for tic-tac-toe won't produce AGI or a powerful narrow optimiser.
Final goals bound optimisation power & generality.
Previously: Motivations for the Definition of an Optimisation Space
I think there's a concept of "work" done by an optimisation process in navigating a configuration space from one macrostate to another macrostate of lower measure (where the measure might be, say, the maximum value any configuration in that macrostate attains for the objective function [taking the canonical view of minimising the objective function]).
The unit of this "work" is measured in bits, and the work is calculated as the d...
I'm working on a post with a working title "Some Hypotheses on Optimisation". My current plan is for it to be my first post of 2023.
I currently have 17 hypotheses enumerated (I expect to exceed 20 when I'm done) and 2180+ words written (I expect to exceed 4,000) when the piece is done.
I'm a bit worried that I'll lose interest in the project before I complete it. To ward against that, I'll be posting at least one hypothesis each day (starting tomorrow) to my shortform until I finally complete the piece.
The hypotheses I post wouldn't necessarily be complete or final forms, and I'll edit them as needed.
I'm fundamentally sceptical that general intelligence is simple.
By "simple", I mostly mean "non composite" . General intelligence would be simple if there were universal/general optimisers for real world problem sets that weren't ensembles/compositions of many distinct narrow optimisers.
AIXI and its approximations are in this sense "not simple" (even if their Kolmogorov complexity might appear to be low).
Thus, I'm sceptical that efficient cross domain optimisation that isn't just gluing a bunc...
What are the best arguments that expected utility maximisers are adequate (descriptive if not mechanistic) models of powerful AI systems?
[I want to address them in my piece arguing the contrary position.]
If you're not vNM-coherent you will get Dutch-booked if there are Dutch-bookers around.
This especially applies to multipolar scenarios with AI systems in competition.
I have an intuition that this also applies in degrees: if you are more vNM-coherent than I am (which I think I can define), then I'd guess that you can Dutch-book me pretty easily.
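A toy money pump, to make the claim concrete (the preference cycle and numbers are made up):

```python
# Toy money pump against an agent with cyclic preferences A > B > C > A,
# who will pay a small premium to trade "up". Purely illustrative.
EPSILON = 0.01

def agent_accepts(offered, held):
    prefers = {("A", "B"), ("B", "C"), ("C", "A")}   # cyclic, hence not vNM-coherent
    return (offered, held) in prefers

held, paid = "C", 0.0
for offered in ["B", "A", "C"] * 3:                  # the bookie cycles the offers
    if agent_accepts(offered, held):
        held, paid = offered, paid + EPSILON         # agent pays to trade up
print(f"Agent ends holding {held}, having paid {paid:.2f} for nothing.")
```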
Desiderata/Thoughts on Optimisation Process:
Need to learn physics for alignment.
Noticed my meagre knowledge of classical physics driving intuitions behind some of my alignment thinking (e.g. "work done by an optimisation process"[1]).
Wondering how much insight I'm leaving on the table for lack of physics knowledge.
[1] There's also an analogous concept of "work" done by computation, but I don't yet have a very nice or intuitive story for it.
I'm starting to think of an optimisation process as a map between subsets of configuration space (macrostates) and not between distinct configurations themselves (microstates).
I don't get Löb's theorem, WTF?
Especially its application to proving spurious counterfactuals.
I've never sat down with a pen and paper for 5 minutes by the clock to study it yet, but it just bounces off me whenever I see it in LW posts.
Anyone want to ELI5?
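For concreteness, the statement as I currently understand it: if $PA \vdash \Box P \rightarrow P$, then $PA \vdash P$, or in its internalised form

$$\Box(\Box P \rightarrow P) \rightarrow \Box P,$$

where $\Box P$ abbreviates "$P$ is provable in $PA$".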
I'm currently listening to an mp3 of Garrabant and Demski's "Embedded Agency". I'm very impaired at consuming long form visual information, so I consume a lot of my conceptual alignment content via audio. (I often listen to a given post multiple times and generally still derive value if I read the entire post visually later on. I guess this is most valuable for posts that are too long for me to read in full.)
An acquaintance adapted a script they'd made for generating mp3s of EA Forum posts to work with LW and AF. I've asked them to add support for Arbital.
The Nonl...
Some people seem to dislike the recent deluge of AI content on LW; for my part, I often find myself scrolling past all the non-AI posts in annoyance. Most of the value I get from LW is the AI safety discussion with a wider audience (e.g. I don't have AF access).
I don't really like the idea of trying to suppress LW's AI flavour.
There are two approaches to solving alignment:
1. Targeting AI systems at values we'd be "happy" (were we fully informed) for powerful systems to optimise for [AKA intent alignment] [RLHF, IRL, value learning more generally, etc.]
2. Safeguarding systems that are not necessarily robustly intent aligned [corrigibility, impact regularisation, boxing, myopia, non-agentic systems, mild optimisation, etc.]
We might solve alignment by applying the techniques of 2, to a system that is somewhat aligned. Such an approach becomes more likely if we get partial align...
AI Safety Goals for January:
Study mathematical optimisation and build intuitions for optimisation in the abstract
Write a LW post synthesising my thoughts on optimisation
Join the AGI Safety Fundamentals and/or AI safety mentors and mentees program
Previously: Optimisation Space
An intuitive conception of optimisation is navigating from a “larger” subset of the configuration space (the “basin of attraction”) to a “smaller” subset (the “set of target configurations”/“attractor”).
There are however issues with this.
For one, the notion of the "attractor" being smaller than the basin of attraction is not very sensible for infinite configuration spaces. For example, a square root calculation algorithm may converge to a neighbourhood around the squ...
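To make the square-root example concrete, a throwaway sketch of Heron's/Newton's iteration (parameters chosen arbitrarily):

```python
# The "basin" is (roughly) all positive starting guesses; the iteration converges
# to a small neighbourhood of sqrt(2), never to a single exact configuration.
def sqrt_newton(a, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        x_next = 0.5 * (x + a / x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

for guess in (0.001, 1.0, 50.0, 1e6):        # wildly different points in the basin
    print(guess, sqrt_newton(2.0, guess))    # all land within ~1e-12 of sqrt(2)
```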
1. Optimisation is (something like) "navigating through a configuration space to (a) particular target configuration(s)".
I'm just going to start blindly dumping excerpts to this shortform from my writing/thinking out loud on optimisation.
I don't actually want us to build aligned superhuman agents. I generally don't want us to build superhuman agents at all.
The best-case scenario for a post-transformation future with friendly agents still has humanity rendered obsolete.
I find that prospect very bleak.
"Our final invention" is something to despair at, not something that deserves rejoicing.
Selection theorems for general intelligence seems like a research agenda that would be useful for developing a theory of robust alignment.
I now think of the cognitive investment in a system as the "cumulative optimisation pressure" applied in producing that system/improving it to its current state.
Slight but quite valuable update.
Humans don't have utility functions, not even approximately, not even in theory.
Not because humans aren't VNM agents, but because utility functions specify total orders over preferences.
Human values don't admit total orders. Path dependency means only partial orders are possible.
Utility functions are the wrong type.
This is why subagents and shards can model the dynamic inconsistencies (read path dependency) of human preferences while utility functions cannot.
Arguments about VNM agents are just missin...
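A toy illustration of the type mismatch (an invented example, not an empirical claim about humans):

```python
# Toy path dependence: which of two outcomes is preferred depends on the history,
# so no single utility function over outcomes (a total order) reproduces the choices.
def prefers(history, a, b):
    # e.g. after having held A, the agent keeps A over B; after B, keeps B over A.
    if "A" in history:
        return a == "A"
    if "B" in history:
        return a == "B"
    return a == "A"                      # arbitrary default

print(prefers(["A"], "A", "B"))          # True:  A > B along this path
print(prefers(["B"], "A", "B"))          # False: B > A along this other path
# A single u(.) over outcomes would need u(A) > u(B) and u(B) > u(A) simultaneously.
```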
I don't like "The Ground of Optimisation".
It's certainly enlightening, but I have deep seated philosophical objections to it.
For starters, I think it's too concrete. It insists on considering optimisation as a purely physical phenomenon and even instances of abstract optimisation (e.g. computing the square root of 2) are first translated to physical systems.
I think this is misguided and impoverishes the analysis.
A conception of optimisation most useful for AI alignment I think would be an abstract one. Clearly the concept of optimisation is sound even in u...
My reading material for today/this week (depending on how accessible it is for me):
"Simple Explanation of the No-Free-Lunch Theorem and Its Implications"
I want to learn more about NFLT, and how it constrains simple algorithms for general intelligence.
(Thank God for Sci-Hub).
One significant way I've changed on my views related to risks from strongly superhuman intelligence (compared to 2017 bingeing LW DG) is that I no longer believe intelligence to be "magical".
During my 2017 binge of LW, I recall Yudkowsky suggesting that a superintelligence could infer the laws of physics from a single frame of video showing a falling apple (Newton apparently came up with his idea of gravity from observing a falling apple).
I now think that's somewhere between deepl...
I will state below what I call the "strong anti consciousness maximalism" position:
Because human values are such a tiny portion of value space (a complex, fragile and precious thing), "almost all" possible minds would have values that I consider repugnant: https://en.m.wikipedia.org/wiki/Almost_all
A simple demonstration: for every real number, there is a mind that seeks to write a representation of that number to as many digits as possible.
Such minds are "paperclip maximisers" (they value something just a ...
I no longer endorse this position, but don't feel like deleting it.
The construction I gave for minds who only cared about banal things could also be used to construct minds who were precious.
For each real number, you could have an agent who cared somewhat about writing down as many digits of that number as possible and also (perhaps even more strongly) about cosmopolitan values (or any other value system we'd appreciate).
So there are also uncountably many precious minds.
The argument and the position staked out were thus pointless. I think that I shouldn't have written the above post.
We are still missing a few more apparatus before we can define an "optimisation process".
Let an optimisation space be a -tuple where:
I dislike Yudkowsky's definition/operationalisation of "superintelligence".
"A single AI system that is efficient wrt humanity on all cognitive tasks" seems dubious with near term compute.
A single system that's efficient wrt human civilisation on all cognitive tasks is IMO flat out infeasible[1].
I think that's just not how optimisation works!
No-free-lunch theorems hold in their strongest forms in maximum-entropy universes, but our universe isn't minimum entropy either!
You can't get maximal free lunch here; that would require a fully structured, minimum-entropy universe.
You can't be optimal across all domains. You must cognitively spe...
Stream-of-consciousness-like. This is an unedited repost of a thread from my Twitter account. The stylistic and semantic incentives of Twitter influenced it.
Some thoughts on messaging around alignment with respect to advanced AI systems
A 🧵
Terminology:
* SSI: strongly superhuman intelligence
* ASI: AI with decisive strategic advantage ("superintelligence")
* "Decisive strategic advantage":
A vantage point from which an actor can unilaterally determine the future trajectory of Earth-originating intelligen...