If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.
Economic value might not be a perfect measure. Nuclear fission didn't generate any economic value either, until 200,000 people in Japan were incinerated. My fear is that a mixture-of-experts approach could lead to extremely fast progress towards AGI. Perhaps even less is needed: maybe all it takes is an agent AI that can code as well as humans to start a cascade of recursive self-improvement.
But indeed, Knightian uncertainty here would already put me somewhat at ease. As long as you can be sure it won't happen "just anytime" before a few more barriers are crossed, you can at least sleep at night and keep the sanity to try to do something.
I don't know; I'm not a technical person, which is why I'm asking questions and hoping to learn more.
"I'm more worried about someone reverse engineering the wiring of cortical columns in the neocortex in the next few years and then replicating it in silicon."
Personally, that's what worries me the least. We can't even crack C. elegans! I don't doubt that in 100-200 years we'd get there, but I see many other, much faster routes.