Perhaps the most important concept in strategy is operating within the right paradigm. It is critical to orient toward the right style or doctrine before you begin implementing your plan - the answer could be "no known doctrine, we'll have to improvise," but if it is, you need to know that! If you choose the wrong basic procedure or style, you will end up refining a plan or method that ultimately can't get you where you want to go, and you will likely find it difficult to escape.
This is one of the Big Deep Concepts that seem to crop up all over the place. A few examples:
- In software development, one form of this error is known as "premature optimization," where you focus on optimizing existing processes before you consider whether those processes are really what the final version of your system needs. If those processes end up getting cut, you've wasted a bunch of time; if you end up avoiding "wasting work" by keeping them, the sunk cost fallacy may have blocked you from implementing a superior architecture. (A short code sketch after this list illustrates the trade-off.)
- In the military, a common mistake of this type leads to "fighting the last war" - the tendency of military planners and weapons designers to create strategies and weapon systems that would be optimal for fighting a repeat of the previous big war, only to find that paradigm shifts have rendered these methods obsolete. For instance, many tanks used early in World War II had been designed based on the trench warfare conditions of World War I and proved extremely ineffective in the more mobile style of warfare that actually developed.
- In competitive gaming, this explains what David Sirlin calls "scrubs" - players who play by their own made-up rules rather than the game's actual rules, and thus find themselves unprepared to play against people without the same self-imposed constraints. It isn't that the scrub is a fundamentally bad or incompetent player - it's just that they've chosen the wrong paradigm, one that greatly limits their ability when they come into contact with the real world.
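To make the software example concrete, here is a minimal Python sketch; the function and class names and the scenario are hypothetical illustrations, not drawn from any real project:

```python
# A minimal, hypothetical sketch of premature optimization.

# Straightforward version: easy to read, easy to change or delete later.
def average_rating(ratings: list[float]) -> float:
    return sum(ratings) / len(ratings)

# "Optimized" version written before anyone checked whether this code is a
# bottleneck: a running-total cache makes each query O(1) instead of O(n),
# at the cost of a stateful class that every caller now depends on. If the
# final design turns out to need, say, a time-weighted average instead, the
# extra machinery is wasted work, and the sunk cost makes it tempting to
# keep the wrong architecture anyway.
class RatingAggregator:
    def __init__(self) -> None:
        self._total = 0.0
        self._count = 0

    def add(self, rating: float) -> None:
        self._total += rating
        self._count += 1

    def average(self) -> float:
        return self._total / self._count
```

The point is not that caching is bad - it's that the simple version keeps your options open until you know the paradigm you're optimizing within is the right one.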
This same failure mode is present in almost every field I have seen, and considering it is critical. Before you begin investing heavily in a project, ask yourself whether this is really the right paradigm to accomplish your goals. Overinvesting in the wrong paradigm is doubly pernicious - not only are your immediate efforts less effective than they could be, but it also leaves you especially vulnerable to the sunk cost fallacy. Keep in mind that even those who are aware of the sunk cost fallacy are not immune to it!
Therefore, when making big decisions, don't just jump into the first paradigm that presents itself, or even the one that seems to make the most sense on initial reflection. Instead, really, truly consider whether this approach is the best one to get you what you want. Look at the goal you're aiming for, and consider whether there are other ways to achieve it that might be more effective, less expensive, or both.
Here are some sample situations that can be considered paradigm-selection problems:
- Do you really need to go and get a CS degree in order to become a computer programmer, or will a bootcamp get you started faster and cheaper?
- Does your organization's restructuring plan really hit the core problems, or is it merely addressing the most obvious surface-level issues?
- Will aircraft carrier-centric naval tactics be effective in a future large-scale conventional war, or is the aircraft carrier the modern equivalent of the World War II battleship?
I don't necessarily know the answers to all these questions - note that only one is even framed as a clear choice between two options, and there are obviously other options available even in that case - but I do know that they're questions worth asking! When it comes time to make big decisions, evaluating what paradigms are available and whether the one you've chosen is the right one for the job can be critical.
I think people have already considered this, but the strategies converge. If someone else is going to make it first, you have only two possibilities: seize control by exerting a strategic advantage, or let them keep control but convince them to make it safe.
To do the former is very difficult, and the little thinking that has been done about it has mostly exhausted the possibilities. To do the latter requires something like 1) giving them the tools to make it safe, 2) doing enough research to convince them to use your tools or fear catastrophe, and 3) opening communications with them. So far, MIRI and other organizations are focusing on 1 and 2, whereas you'd expect them to focus primarily on 1 if they expected to get it first. We aren't doing 3 with respect to China, but that step isn't easy at the moment and will probably get easier as time goes on.
I am now writing an article in which I explore this type of solution.
One idea, similar to those you listed, is to sell AI Safety as a service, so that any other team could hire AI Safety engineers to help align their AI (basically, this combines the tool with the way to deliver it).
Another (I don't claim it is the best, just that it is possible) is to create as many AI teams in the world as possible, so that a hard takeoff will always happen in several teams at once and the world will be divided into several domains. A simple calculation shows that we need around 1000 AI teams running simult...