I for one would love to see you organize a prize contest on Yudkowsky's views on nanotech! I'm happy to pitch in some money for the prize pool.
Glad to hear somebody else is as excited about that as I am!
I'd be curious whether you have any thoughts or suggestions on a good way to set it up?
fwiw this is what GPT-4 said when I asked it:
Limits of Miniaturization:
- What are the fundamental physical limits on the miniaturization of technology?
- What are the challenges in scaling down to the nanoscale and how could they be potentially overcome?
Material Science Challenges:
- How will advances in material science impact the development and performance of nanotechnology?
- What are the challenges in synthesizing and manipulating nanostructured materials?
Manufacturing and Mass Production:
- What are the barriers to mass production of nanotechnological devices?
- Can we feasibly develop manufacturing techniques that operate at the nanoscale?
Energy Consumption and Heat Dissipation:
- How can nanotechnological devices be powered efficiently?
- How can heat dissipation issues at the nanoscale be tackled?
Quantum Effects:
- How do quantum effects impact the functioning of nanotechnological devices?
- What strategies can be employed to harness or mitigate these effects?
Reliability and Lifespan:
- What are the challenges in ensuring reliability and longevity of nanodevices?
- Can nanodevices be designed to self-repair or tolerate faults?
Computational Power and Data Storage:
- What is the potential for nanotechnology to enhance computing power and data storage capabilities?
- What are the barriers to achieving these enhancements and how might they be addressed?
Interfacing with Macro World:
- How can nanotechnological devices effectively interface with the macroscale world?
- What are the challenges and potential solutions for data transfer and communication at the nano-macro interface?
Environmental and Biological Compatibility:
- How feasible is the development of nanodevices that are compatible with environmental and biological systems?
- What are the technical challenges in achieving biocompatibility and environmental sustainability?
Self-Replication and Self-Assembly:
- What are the technical barriers to achieving self-replication or self-assembly in nanodevices?
- What breakthroughs would be needed to make this a reality?
Thanks for facilitating this! I found Steven's post and spxtr's comments particularly insightful.
I still think there's a more fundamental issue with Jacob's analysis, which is also present in some other works that characterize the brain as a substrate for various kinds of computation usually performed in silicon.
Namely, Jacob (and others) are implicitly or explicitly comparing FLOPS with "synaptic ops", but these quantities are fundamentally incomparable. (I've been meaning to turn this into a top-level post for a while, but I'll just dump a rough summary in a comment here quickly for now.)
FLOPS are a macro-level performance characteristic of a system. If I claim that a system is capable of 1 million FLOPS, that means something very precise in computer engineering terms. It means that the system can take 1 million pairs of floating point numbers (at some precision) and return 1 million results of multiplying or adding those pairs, each and every second.
OTOH, if I claim that a system is performing N million "synaptic ops" per second, there's not a clear high-level / outward-facing / measurable performance characteristic that translates to. For large enough N, suitably arranged, you get human-level general cognition, which of course can be used to perform all sorts of behaviors that can be easily quantified into crisply specifiable performance characteristics. But that doesn't mean that the synaptic ops themselves can be treated as a macro-level performance characteristic the way FLOPS can, even if each and every one of them really is maximally efficient on an individual basis and strictly necessary for the task at hand.
When you compare the brain and silicon-based systems in strictly valid but naive ways, you get results that are valid but not particularly useful or informative.
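To make the kind of naive comparison I mean concrete, here's a back-of-the-envelope sketch. All figures are rough, commonly cited order-of-magnitude assumptions on my part, not measurements:

```python
# Naive "ops per second" comparison between a brain and a GPU.
# Every number here is an assumed order-of-magnitude figure, not a measurement.

SYNAPSES = 1e14       # assumed synapse count in a human brain
MEAN_RATE_HZ = 10     # assumed average spike rate seen by each synapse

brain_synaptic_ops = SYNAPSES * MEAN_RATE_HZ   # ~1e15 "synaptic ops"/s

gpu_flops = 1e15      # ~1 PFLOP/s, roughly a modern accelerator at low precision

ratio = gpu_flops / brain_synaptic_ops
print(f"brain: {brain_synaptic_ops:.0e} synaptic ops/s")
print(f"GPU:   {gpu_flops:.0e} FLOP/s")
print(f"naive ratio: {ratio:.1f}  (dimensionally suspect: the two 'ops' are not the same unit)")
```

The arithmetic is perfectly valid, but the final ratio divides FLOP/s by "synaptic ops"/s, two quantities with different operational definitions, which is exactly why the result tells you so little.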
I don't think Jacob's analysis is totally wrong or useless, but one must be very careful to keep track of what it is that is being compared, and why. Joe Carlsmith does this well in How Much Computational Power Does It Take to Match the Human Brain? (which Jacob cites). I think much of Jacob's analysis is an attempt at what Joe calls the mechanistic method:
Estimate the FLOP/s required to model the brain’s mechanisms at a level of detail adequate to replicate task-performance (the “mechanistic method”).
(Emphasis mine.) Joe is very careful to be clear that the analysis is about modeling the brain's mechanisms (i.e. simulation), rather than attempting to directly or indirectly compare the "amount of computation" performed by brains or CPUs, or their relative efficiency at performing this purported / estimated / equated amount of computation.
Other methods in Joe's analysis, which Jacob also uses (e.g. the limit method), can be used to upper-bound the FLOPS required for human-level general cognition, but Joe correctly points out that this analysis can't be used directly to place a lower bound on how efficiently a difficult-to-crisply-specify task (e.g. human-level general cognition) can be performed:
None of these methods are direct guides to the minimum possible FLOP/s budget, as the most efficient ways of performing tasks need not resemble the brain’s ways, or those of current artificial systems. But if sound, these methods would provide evidence that certain budgets are, at least, big enough (if you had the right software, which may be very hard to create – see discussion in section 1.3).
I might expand on this more in the future, but I have procrastinated on it so far mainly because I don't actually think the question these kinds of analyses attempt to answer is that relevant to important questions about AGI capabilities or limits: in my view, the minimal hardware required for creating superhuman AGI very likely already exists, probably many times over.
My own view is that you only need something a little bit smarter than the smartest humans, in some absolute sense, in order to re-arrange most of the matter and energy in the universe (almost) arbitrarily. If you can build epsilon-smarter-than-human-level AGI using (say) an energy budget of 1,000 W at runtime, you or the AGI itself can probably then figure out how to scale to 10,000 W or 100,000 W relatively easily (relative to the task of creating the 1,000 W AGI in the first place). And my guess is that the 100,000 W system is just sufficient for almost anything you or it wants to do, at least in the absence of other, adversarial agents of similar intelligence.
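The scaling arithmetic here is simple enough to spell out. The wattages are the hypothetical round numbers from the paragraph above (plus a common ~20 W estimate for the brain), not measurements:

```python
# Rough scaling arithmetic for the argument above.
# All wattages are hypothetical round numbers, not measurements.

BRAIN_W = 20          # common estimate for human brain power draw
AGI_W = 1_000         # hypothetical epsilon-smarter-than-human AGI
SCALED_W = [10_000, 100_000]

print(f"the 1 kW AGI runs at ~{AGI_W // BRAIN_W}x brain power")
for w in SCALED_W:
    print(f"{w:>7} W is {w // AGI_W}x the initial AGI's budget")
```

So the jump from the brain to the hypothetical 1 kW AGI is a factor of ~50, and the further jumps to 10 kW and 100 kW are only 10x and 100x on top of that.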
Rigorous, gears-level analyses can provide more and more precise lower bounds on the exact hardware and energy requirements for general intelligence, but these bounds are generally non-constructive. To actually advance capabilities (or make progress on alignment), my guess is that you need to do "great original natural philosophy", as Tsvi calls it. If you do enough philosophy carefully and precisely before you (or anyone else) builds a 100,000 W AGI, you get a glorious transhuman future; if not, you probably get squiggles. And I think analyses like Joe's and Jacob's show that the hardware and energy required to build a merely human-level AGI probably already exists, even if it comes with a few OOM (or more) energy efficiency penalty relative to the brain. As many commenters on Jacob's original post pointed out, silicon-based systems are already capable of making productive use of vastly more readily-available energy than a biological brain.
I think it does matter how efficiently AI can use energy to fuel computation, in that it provides an important answer to the question of whether AI will be distributed widely, and whether we can realistically control AI distribution and creation.
If it turns out that individuals can create superhuman AI in their basements without much energy, then in the long term controlling AI becomes essentially impossible, and we will have to confront a world where the government isn't going to reliably control AI by default. Essentially, Eliezer's early ideas about the ability to create very strong technology in your basement may eventually become reality, just with a time delay.
If it turns out that any AI must use a minimum of, say, 10,000 watts or more, then there is hope for controlling AI creation and distribution long term.
And this matters both in scenarios where existential risk mostly comes from individuals, and in scenarios where the question isn't existential risk but simply what will happen in a world where superhuman AI is created.
If it turns out that any AI must use a minimum of, say, 10,000 watts or more, then there is hope for controlling AI creation and distribution long term.
Note, 1 kW (50-100x human brain wattage) is roughly the power consumption of a very beefy desktop PC, and 10 kW is roughly the power consumption of a single rack in a datacenter. Even ~megawatt scale AI (100 racks) could fit pretty easily within many existing datacenters, or within a single entity's mid-size industrial-scale basement, at only moderate cost.
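As a quick sanity check of those round numbers (my own assumed figures, not measurements):

```python
# Sanity check of the power comparisons in the parent comment
# (round-number assumptions, not measurements).

BRAIN_W = 15            # brain draws very roughly 10-20 W; take a midpoint
DESKTOP_W = 1_000       # beefy desktop PC
RACK_W = 10_000         # single datacenter rack
MEGAWATT_W = 1_000_000

print(f"desktop / brain: ~{DESKTOP_W / BRAIN_W:.0f}x")   # lands within the stated 50-100x
print(f"racks per megawatt: {MEGAWATT_W // RACK_W}")     # the '100 racks' figure
```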
Yeah, this isn't enough to stop companies from producing useful AI, but it does mostly mean we can hope to avoid scenarios where single individuals can reliably build AI. That means controlling AI is possible in scenarios where individuals, but not companies, are the source of existential risk. It's also relevant to other questions not focused on existential risk.
My own view is that you only need something a little bit smarter than the smartest humans, in some absolute sense, in order to re-arrange most of the matter and energy in the universe (almost) arbitrarily
Does this imply that a weakly superhuman AGI can solve alignment?
prize contest on Yudkowsky's views on nanotechnology
I wrote a related post. But who would be defending those views? I could debate Yudkowsky if he wants, but I don't think he does.
Previously, Jacob Cannell wrote the post "Brain Efficiency", which makes several radical claims: that the brain is at the Pareto frontier of speed, energy efficiency and memory bandwidth, and that this represents a fundamental physical frontier.
Here's an AI-generated summary
Jake has further argued that this has implications for FOOM and DOOM.
Considering the intense technical mastery of nanoelectronics, thermodynamics and neuroscience required to assess the arguments here, I concluded that a public debate between experts was called for. This was the start of the Brain Efficiency Prize contest, which attracted over 100 in-depth, technically informed comments.
Now for the winners! Please note that the criterion for winning the contest was bringing in novel and substantive technical arguments, as assessed by me. In contrast, general arguments about the likelihood of FOOM or DOOM, while no doubt interesting, did not factor into the judgement.
And the winners of the Jake Cannell Brain Efficiency Prize contest are:
Each has won $150, provided by Jake Cannell, Eli Tyre and myself.
I'd like to heartily congratulate the winners and thank everybody who engaged in the debate. The discussions were sometimes heated but always very informed. I was wowed and amazed by the extraordinary erudition and willingness for honest, compassionate intellectual debate displayed by the winners.
So what are the takeaways?
I will let you be the judge. Again, remember that the winners were chosen on my (layman's) assessment that the participants brought in novel and substantive technical arguments and thereby furthered the debate.
Steven Byrnes
The jury was particularly impressed by Byrnes' patient, open-minded and erudite participation in the debate.
He has kindly written a post detailing his views. Here's his summary
DaemonicSigil
For their extensive post on the time and energy cost to erase a bit. The jury was particularly impressed by the extensive calculations and experiments that were done.
spxtr
Here are four key comments: [1], [2], [3], [4],
the last of which is a detailed worked-out example of an LDF5-50A rigid coax cable that would violate Jacob's postulated bound, see here:
Ege Erdil
For his continued insistence on making explicit the cruxes between Jake Cannell and his critics, homing in on key disagreements. Ege Erdil first suggested the idea of working out the example of the heat loss of a cable that would show a violation of Jake's postulated bounds.
My own views
I personally moved from being very sympathetic to Jacob Cannell's point of view (to the point of giving a talk on his perspective) to being much more skeptical. I was not aware of the degree to which his viewpoints differed from established physics academia. The Landauer tile model seems likely invalid; at the very least, the claims that are made need to be much more explicitly detailed and subjected to peer review.
Still, it seems many of his considerations on the fundamental design trade-offs and limitations for brains and computer architectures are sensible and important. His ideas remain influential on how I think about the future of computer hardware, the brain, and the form of superintelligent machines.
I'd like to once again thank all participants in this debate. It is my sincere belief that detailed, in-depth technical discussion is the way to move the debate forward and empower the public and decision-makers to make the right decisions.
I was heartened to see so many extremely informed people engage seriously and dispassionately. I hope that similar prize contests on topics of interest to the future of humanity may similarly spur debate. In particular, I hope to organize a prize contest on Yudkowsky's views on nanotechnology. Let me know if you are interested in that happening.