Since these are all large subjects containing multiple domains of expertise, I am inclined to adopt the following rule: anything someone nominates as a bottleneck should be treated as a bottleneck until we have a convincing explanation for why it is not. I expect that once we have a good enough understanding of the relevant fields, convincing explanations should be able to resolve whole groups of prospective bottlenecks.
There are also places where I would expect bottlenecks to appear even if they have not been pointed out yet. These two leap to mind:
1. New intersections between two or more fields.
2. Everything at the systems level of analysis.
I feel like fast progress can be made on both types. While it is common for different fields to have different preferred approaches to a problem, it feels much rarer for there not to be any compatible approaches to a problem in both fields. The challenge would lie in identifying what those approaches are, which mostly just requires a sufficiently broad survey of each field. The systems level of analysis is always a bottleneck in engineering problems; the important thing is to avoid the scenario where it has been neglected.
It feels easy to imagine a scenario where the compatible approach from one of the fields is under-developed, so we would have to go back and develop the missing tools before we can really integrate the fields. It is also common, even in well-understood areas, for a systems-level analysis to identify a critical gap. This doesn't seem any different from the usual process of problem solving; it's just that each new iteration gets added to the bottleneck list.
Thomas Kuhn argues in his book that scientific fields that try to achieve specific goals make worse progress than scientific fields whose researchers attempt to solve the problems within the field that they find interesting.
Physics progressed because physicists wanted to understand the natural laws of the universe, not because they wanted to make useful things.
On the other hand, you have a subject like nutrition science that's focused on producing knowledge with immediate practical applications, and yet the field doesn't make any practical progress.
Asking what the bottleneck is to doing X and asking what needs to happen for X to be done safely are two different questions.
For practical purposes it's important to know both answers, but for understanding, it clouds the issue to mix the two questions together.
The question of whether we can build more effective BCIs is mostly a question about technical capability.
On the other hand, the concern that Nate raises over AGI is a safety concern. Nate doesn't doubt that we can build an AGI but considers it dangerous to do so.
FYI: I've updated the post to focus solely on the "what's the bottleneck to doing X" question and not on safety, as I think the former question is less discussed on LW and is what I wanted answers to focus on.
FWIW, I can't speak for Paul Christiano, but insofar as you've attempted to summarize what I think here, I don't endorse the summary.
Where does the post mention Paul Christiano? I only see a link to a discussion, without any commentary.
Edit: Nvm, I figured it out. I assume you mean that "The general pattern is that one person or group argues that we know enough about a topic's foundation that it's time to start to focus on achieving near-term milestones, often engineering ones" is the specific line that you think doesn't accurately capture your views.
Can you be more specific? If you help me understand how/if I'm misrepresenting you, I'd be happy to change it. My sense is that Paul's view is more like, "through working towards prosaic alignment, we'll get a better understanding of whether there are insurmountable obstacles to alignment of scaled-up (and likely better) models." I can rephrase it to something like that, or to something more nuanced. I'm just wary of adding too much alignment-specific discussion, as I don't want the discussion to become too focused on the object-level alignment debate.
It's also worth noting that there are other researchers who hold similar views, so I'm not just talking about Paul's.
Since these are all large subjects containing multiple domains of expertise, I am inclined to adopt the following rule: anything someone nominates as a bottleneck should be treated as a bottleneck until we have a convincing explanation for why it is not. I expect that once we have a good enough understanding of the relevant fields, convincing explanations should be able to resolve whole groups of prospective bottlenecks.
There are also places where I would expect bottlenecks to appear even if they have not been pointed out yet. These two leap to mind:
1. New intersections between two or more fields.
2. Everything at the systems level of analysis.
I feel like fast progress can be made on both types. While it is common for different fields to have different preferred approaches to a problem, it feels much rarer for there not to be any compatible approaches to a problem in both fields. The challenge would lie in identifying what those approaches are, which mostly just requires a sufficiently broad survey of each field. The systems level of analysis is always a bottleneck in engineering problems; the important thing is to avoid the scenario where it has been neglected.
It feels easy to imagine a scenario where the compatible approach from one of the fields is under-developed, so we would have to go back and develop the missing tools before we can really integrate the fields. It is also common, even in well-understood areas, for a systems-level analysis to identify a critical gap. This doesn't seem any different from the usual process of problem solving; it's just that each new iteration gets added to the bottleneck list.
You should promote this to a full answer rather than a comment! It more than qualifies.
Regarding 1, I suspect a lot of recent progress in neuroscience has come from applying computational and physics-style approaches to existing problems. See, for example, the success Ed Boyden has had in his lab with applying physics thinking to building better neuroscience tools: optogenetics, expansion microscopy, and most recently implosion fabrication.
I think nanotechnology is a prime example of 2. AIUI, a lot of the component technologies for at least trying to build nano-assemblers exist but we lack the technology/institutions/incentives/knowledge to engineer them into coherent products and tools.
Copied to full answer!
I agree regarding neuroscience. I went to a presentation (from whom, I have suddenly forgotten, and I seem to have lost my notes) describing an advanced type of fMRI that allowed more detailed inspection than was previously possible, and the big discovery mostly consisted of "optimize the C++" and "rearrange the UI with practitioners in mind." I found it tremendously impressive; they were using it to help map epilepsy seizures in much more detail.
I am strongly tempted to say that 2 should be considered the highest priority in any kind of advanced engineering project, and I am further tempted to say it would sometimes be worth considering even before having project goals. There has been some new work in systems engineering recently that emphasizes the meta level and focuses on architecture-space before even settling the design constraints; I wonder if the same trick could be pulled with capabilities. Sort of systematizing the constraints at the same time as the design.
In discussions of AI, nanotechnology, brain-computer interfaces, and genetic engineering, I've noticed a common theme of disagreement over the right bottleneck to focus on. The general pattern is that one person or group argues that we know enough about a topic's foundation that it's time to start to focus on achieving near-term milestones, often engineering ones. The other group counters that taking such a milestone/near-term focused approach is futile because we lack the fundamental insights needed to achieve the long-term goals we really care about. I'm interested in heuristics we can use or questions we can ask to try to resolve these disagreements. For example, in the recent MIRI update, Nate Soares talks about how we're not prepared to build an aligned AI because we can't talk about the topic without confusing ourselves. While this post focuses on capability, not safety, I think the "can we talk about the topic without sounding confused" test is a useful heuristic for understanding how ready we are to build, independent of safety questions.
What follows are a few links to/descriptions of concrete examples of this pattern:
EDIT (01/02/2019): I removed references to safety/alignment after ChristianKl noted that conflating the two makes the question more confusing and John_Maxwell_IV argued that I was misrepresenting his (and likely others') views on alignment. The post now focuses solely on the question of identifying bottlenecks to progress.