Interesting read, in the same vein. What I was imagining was a computational market relying on the ability to do lots of complex trades at high speed, plus AI/ML. But much of that difference is explained by the 20 years.
I also reviewed some of his prototype code for a combinatorial prediction market around 10 years ago. I agree that these are promising ideas and I liked this post a lot.
Robin Hanson proposed much the same over 20 years ago in "Buy Health, Not Health Care".
IIRC Doug Orleans once made an ifMUD bot for a version of Zendo where a rule was a regular expression. This would give the user a way to express their guess of the rule instead of you having to test them on examples (regex equality is decidable).
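A true decision procedure for regex equality converts both patterns to minimal DFAs and compares them; as a rough sketch of the idea (not Doug's bot), here's a bounded check that compares two Python regexes on every short string -- it can refute equality but not prove it, and the alphabet and length bound are my own assumptions:

```python
import itertools
import re

def agree_on_short_strings(pattern_a, pattern_b, alphabet="ab", max_len=6):
    """Approximate regex-equality test: compare full-match behavior on
    every string over `alphabet` up to length `max_len`. A real decision
    procedure would compare minimal DFAs instead; this bounded check
    can only find counterexamples, not certify equivalence."""
    ra, rb = re.compile(pattern_a), re.compile(pattern_b)
    for n in range(max_len + 1):
        for chars in itertools.product(alphabet, repeat=n):
            s = "".join(chars)
            if bool(ra.fullmatch(s)) != bool(rb.fullmatch(s)):
                return False  # found a string the two patterns disagree on
    return True
```

For a Zendo bot this is arguably enough in practice: a disagreement string doubles as a counterexample koan to show the guesser.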
Also I made a version over s-expressions and Lisp predicates -- it was single-player and never released. It would time-out long evaluations and treat them as failure. I wonder if I can dig up the code...
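That version was in Lisp and I don't have the code handy, but the timeout-as-failure idea looks roughly like this sketch (Python, Unix-only `SIGALRM`; the one-second budget is an arbitrary assumption):

```python
import signal

def eval_with_timeout(predicate, arg, seconds=1):
    """Evaluate predicate(arg), treating a timeout or any exception
    as failure (False). Unix-only: relies on SIGALRM."""
    def handler(signum, frame):
        raise TimeoutError("evaluation took too long")
    old = signal.signal(signal.SIGALRM, handler)
    signal.alarm(seconds)          # arrange for SIGALRM after `seconds`
    try:
        return bool(predicate(arg))
    except Exception:              # timeout or predicate error = failure
        return False
    finally:
        signal.alarm(0)            # cancel any pending alarm
        signal.signal(signal.SIGALRM, old)
```

`signal.alarm` only has whole-second resolution; a production version would run the predicate in a subprocess so runaway evaluations can't corrupt the host.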
Here's what's helped for me. I had strong headaches that would persist for weeks, with some auras, which my doctor called migraines. (They don't seem to be as bad as what people usually mean by the word.) A flaxseed oil supplement keeps them away. When I don't take enough, they come back; it needs to be at least 15g/day or so (many times more than the 2-3 gelcaps/day that supplement bottles direct you to take). I've taken fish oil occasionally instead.
I found this by (non-blinded) experimenting with different allegedly anti-inflammatory supplements. I'm not a doctor, etc.
Computing: The Pattern On The Stone by Daniel Hillis. It's shorter and seemingly more focused on principles than the Petzold book Code, which I can't compare further because I stopped reading early (low information density).
I don't have a stance on MNT either. If it were possible, that would be great, but what would be even greater is if we could actually foresee what is truly physically possible. At the very least, that would allow us to plan our futures.
However, I hope you won't mind me making a counter-argument to your claims, just for the sake of discussion.
EoC and Nanosystems aren't comparable. EoC is not even a book about MNT per se; it is more a book about the impact of future technology on society (it has chapters devoted to the internet and other things - it's also notable that he successfully predicted the rise of the internet). Nanosystems, on the other hand, is an engineering book. It starts out with a quantitative scaling analysis of things like magnetism, static electricity, pressure, velocity, etc. at the macroscale and nanoscale and proceeds into detailed engineering computations. It is essentially like a classical text on engineering, except at the nanoscale.
As for the science presented in Nanosystems, I view it as less of a 'blueprint' and more of a theoretical exploration of the most basic nanotechnology that is possible. For example, Drexler presents detailed plans for a nanomechanical computer. He does not make the claim that future computers will be like what he envisions. His nanomechanical computer is simply a theoretical proof-of-concept. It is there to show that computing at the nanoscale is possible. It's unlikely that practical nanocomputers in the future (if they are possible) will look like that at all. They will probably not use mechanical principles to work.
Now about your individual arguments:
Conservation of Energy: In Nanosystems Drexler makes a lot of energy computations. However, in general, it is true that building things on the molecular level is not necessarily more energy-efficient than building them the traditional way i.e. in bulk. In fact, for many things it would probably be far less energy-efficient. It seems to me that even if MNT were possible, most things would still be made using bulk technology. MNT would only be used for the high-tech components such as computers.
Modelling is Hard: You're talking about solving the Schrödinger equation analytically. In practice, a sufficiently precise numerical simulation is more than adequate. In fact, ab-initio quantum simulations (simulations that make only the most modest of assumptions and compute from first principles) have been carried out for relatively large molecules. I think it is safe to assume that future computers will be able to model at least something as complicated as a nanoassembler entirely from first principles.
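As a toy illustration of "numerical is adequate": a finite-difference solve of the 1D Schrödinger equation for a particle in a unit box (atomic units; the grid size is an arbitrary choice of mine) reproduces the analytic ground-state energy n²π²/2 to four digits, no closed-form solution required:

```python
import numpy as np

# Finite-difference Hamiltonian for a particle in a unit-length box
# (hbar = m = 1, V = 0 inside). Analytic levels: E_n = n^2 * pi^2 / 2.
N = 200                                  # interior grid points (assumed)
dx = 1.0 / (N + 1)
main = np.full(N, 1.0 / dx**2)           # diagonal of -1/2 d^2/dx^2
off = np.full(N - 1, -0.5 / dx**2)       # off-diagonal couplings
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
energies = np.linalg.eigvalsh(H)         # sorted eigenvalues
print(energies[0], np.pi**2 / 2)         # numerical vs analytic ground state
```

Real ab-initio chemistry codes are of course vastly more elaborate (many interacting electrons, basis sets, exchange terms), but the principle is the same: discretize and diagonalize to whatever precision the question demands.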
A factory isn't the right analogy: I don't understand this argument.
Chaos: You mention chaos but don't explain why it would ruin MNT. The limiting factor in current quantum mechanical simulations is not chaotic dynamics.
The laws of physics hold: Wholeheartedly agree. However, even within current physics there is a lot of headroom. Cold fusion may be a no-no, but hot fusion is definitely doable, and there is no law of physics (that we know of) that says you can't build a compact fusion reactor.
The simulations of molecular gears and such you find on the internet are of course fanciful. They have been done with molecular dynamics, not ab-initio simulation. You are correct that stability analysis has not been done for those. However, stability analysis of various diamondoid structures has been carried out, and contrary to the 'common knowledge' that diamond decays to graphite at the surface, defect-free passivated diamond turns out to be perfectly stable at room temperature, even in weird geometries [1].
Agree.
De novo enzymes have been created that perform functions unprecedented in the natural world [2] (this was reported in the journal Nature). Introduction of such proteins into bacteria leads to evolution and refinement of the initial structure. The question is not one of 'doing better than biology'. It's about technology and biology working together to achieve nanotech by any means necessary. You are correct that we are still very very far from reaching the level of mastery over organic chemistry that nature seems to have. Whether organic synthesis remains a plausible route to MNT remains to be seen.
If this is about creating single carbon atoms, you are right. However, nothing says that single carbon atoms need to exist in isolation. Carbon dimers can exist freely, and in fact ab-initio simulations have shown that they can quite readily be made to react and bond with diamond surfaces [3]. I think it's more plausible that this is what is actually meant. I don't believe Drexler is so ignorant of basic chemistry as to have made this mistake.
I do not have enough knowledge to give an opinion on this.
I also agree that at the present there is no way to know whether such programmable machines are possible. However, they are not strictly necessary for MNT. A nanofactory would be able to achieve MNT without needing any kind of nanocomputer anywhere. Nanorobots are not necessary, so arguments refuting them do not by any means refute the possibility of MNT.
References:
it's also notable that he successfully predicted the rise of the internet
Quibble: there was plenty of internet in 1986. He predicted a global hypertext publishing network, and its scale of impact, and starting when (mid-90s). (He didn't give any such timeframe for nanotechnology, I guess it's worth mentioning.)
Which one's the latest book?
Radical Abundance, came out this past month.
Added: The most relevant things in the book for this post (which I've only skimmed):
There's been lots of progress in molecular-scale engineering and science that isn't called nanotechnology. This progress has been pretty much along the lines Drexler sketched in his 1981 paper and in the how-can-we-get-there sections of Nanosystems, though. This matches what I saw sitting in on Caltech courses in biomolecular engineering last year. Drexler believes the biggest remaining holdup on the engineering work is how it's organized: when diverse scientists study nature their work adds up because nature is a whole, but when they work on bits and pieces of technology infrastructure in the same way, their work can't be expected to coalesce on its own into useful systems.
He gives his latest refinement of the arguments at a lay level.
The quine requirement seems to me to introduce non-productive complexity. If file reading is disallowed, why not just pass the program its own source code as well as its opponent's?
Yes -- in my version of this you do get passed your own source code as a convenience.
More generally, the set of legal programs doesn't seem clearly defined. If it were me, I would be tempted to only accept externally pure functions, and to precisely define what parts of the standard library are allowed. Then I would enforce this rule by modifying the global environment such that any disallowed behaviour would result in an exception being thrown, resulting in an "other" result.
But it's not me. So, what exactly will be allowed?
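The enforcement idea above can be sketched concretely. This is a hypothetical harness, not the tournament's actual rules: the entry-point name `strategy`, the whitelist contents, and mapping every violation or crash to "other" are all my assumptions:

```python
# Hypothetical sketch: run an untrusted strategy in a stripped-down
# environment so any disallowed call raises and scores as "other".
def run_strategy(source, opponent_source):
    # Whitelist: only these builtins are visible to the submitted code.
    allowed_builtins = {"len": len, "any": any, "all": all, "bool": bool}
    env = {"__builtins__": allowed_builtins}
    try:
        exec(source, env)                     # define the entry point
        return env["strategy"](opponent_source)
    except Exception:
        # Disallowed call (e.g. open, __import__), missing entry point,
        # or any crash: the program forfeits with an "other" result.
        return "other"
```

This catches casual rule-breaking, but `exec`-level sandboxing in Python is famously leaky (and does nothing about nontermination), so a serious tournament would still want process isolation and a timeout on top.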
If you'd rather run with a very small and well-defined Scheme dialect meant just for this problem, see my reply to Eliezer proposing this kind of tournament. I made up a restricted language since Racket's zillion features would get in the way of interesting source-code analyses. Maybe they'll make the game more interesting in other ways?
I've already read through it twice and didn't see the typos - PM me? I can't see them after a while.
This is still kinda rough-draftish. In part it's my queued answer for the next time someone says "but machines will never be creative", as my last post was the cached answer for everything along the lines of "but we have no idea how the brain works."
s/From their/From there