Here's the argument I'd give for this kind of bottleneck. I haven't studied evolutionary genetics; maybe I'm thinking about it all wrong.
In the steady state, an average individual has n children in their life, and just one of those n makes it to the next generation. (Crediting a child 1/2 to each parent.) This gives log2(n) bits of error-correcting signal to prune deleterious mutations. If the genome length times the functional bits per base pair times the mutation rate is greater than that log2(n), then you're losing functionality with every generation.
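Concretely, a back-of-the-envelope version in Scheme -- every number below is invented for illustration, not taken from the genetics literature:

    ;; Selection supplies ~(log2 n) bits/generation; mutation corrupts
    ;; genome-bp * functional-bits-per-bp * per-bp-mutation-rate bits.
    (define (log2 x) (/ (log x) (log 2)))

    (define (losing-ground? n genome-bp fbits-per-bp mu)
      (> (* genome-bp fbits-per-bp mu) (log2 n)))

    ;; E.g. 3e9 bp, 2% functional at ~1 bit/bp, mu = 1e-8 per bp per
    ;; generation, 4 children each (all made-up numbers):
    (losing-ground? 4 3e9 0.02 1e-8)  ; => #f: 0.6 bits lost vs. 2 bits of selection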
On...
An allegedly effective manual spaced-repetition system: flashcards in a shoebox with dividers. You take cards from the divider at one end and redistribute them by how well you recall. I haven't tried this, but maybe I will since notecards have some advantages over a computer at a desk or a phone.
(It turns out I was trying to remember the Leitner system, which is slightly different.)
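For concreteness, a toy Scheme sketch of Leitner-style scheduling as I understand it -- the 2^k-day intervals and the five boxes are my own arbitrary choices, not part of any spec:

    ;; A card in box k comes up every 2^k days; success promotes it one
    ;; box, failure sends it back to box 1.
    (define max-box 5)
    (define (interval box) (expt 2 box))

    (define (next-box box correct?)
      (if correct? (min max-box (+ box 1)) 1))

    ;; A card is (name box due-day); reviewing it returns the updated card.
    (define (review card day correct?)
      (let ((box (next-box (cadr card) correct?)))
        (list (car card) box (+ day (interval box)))))

    (review '(ohms-law 2 10) 10 #t)  ; => (ohms-law 3 18): due again in 8 days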
Radical Abundance is worth reading. It says that current work is going on under other names like biomolecular engineering, that the biggest holdup is a lack of systems engineering focused on achieving strategic capabilities (like better molecular machines for molecular manufacturing), and that we ought to be preparing for those developments. It's in a much less exciting style than his first book.
Small correction: Law's Order is by David Friedman, the middle generation. It's an excellent book.
I had a similar reaction to the sequences. Some books that influenced me the most as a teen in the 80s: the Feynman Lectures and Drexler's Engines of Creation. Feynman modeled scientific rationality, thinking for yourself, clarity about what you don't know or aren't explaining, being willing to tackle problems, ... it resists a summary. Drexler had many of the same virtues, plus thinking carefully and boldly about future technology and what we might need to do...
I'm not a physicist, but if I wanted to fuse metallic hydrogen I'd think about a really direct approach: shooting two deuterium/tritium bullets at each other at 1.5% of c (for a Coulomb barrier of 0.1 MeV, according to Wikipedia). The most questionable part I can see is that a nucleus from one bullet could be expected to miss thousands of nuclei from the other before it hit one, and I'd worry about losing too much energy to bremsstrahlung in those encounters.
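Sanity-checking that speed (my arithmetic; the nonrelativistic formula is fine since v << c):

    ;; Kinetic energy of one deuteron at 1.5% of c, in MeV.
    (define deuteron-mc^2 1875.6)  ; deuteron rest energy, MeV
    (define beta 0.015)            ; v/c

    ;; KE = (1/2) m v^2 = (1/2) (m c^2) beta^2
    (* 0.5 deuteron-mc^2 beta beta)  ; => ~0.21 MeV, above the ~0.1 MeV barrier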
IIRC Doug Orleans once made an ifMUD bot for a version of Zendo where a rule was a regular expression. This would give the user a way to express their guess of the rule directly, instead of you having to test their guesses against examples (equivalence of regular expressions is decidable).
Also I made a version over s-expressions and Lisp predicates -- it was single-player and never released. It would time out long evaluations and treat them as failures. I wonder if I can dig up the code...
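Meanwhile, here's a toy Racket sketch of the decidability fact behind the regex version -- equivalence checking via Brzozowski derivatives. This is my own illustration, not the bot's code:

    ;; Regexes as s-expressions: 'null (no strings), 'eps (empty string),
    ;; a character, (seq r s), (alt r s ...), (star r).
    (define (rx->string r) (format "~s" r))  ; a total order for canonicalizing

    (define (alt* rs)  ; flatten, drop 'null, dedupe, sort: ACI-canonical alt.
      ;; This canonicalization is what keeps the set of derivatives finite.
      (let* ((flat (append-map (lambda (r)
                                 (if (and (pair? r) (eq? (car r) 'alt))
                                     (cdr r)
                                     (list r)))
                               rs))
             (live (filter (lambda (r) (not (eq? r 'null))) flat))
             (uniq (remove-duplicates live))
             (sorted (sort uniq string<? #:key rx->string)))
        (cond ((null? sorted) 'null)
              ((null? (cdr sorted)) (car sorted))
              (else (cons 'alt sorted)))))

    (define (seq2 r s)
      (cond ((or (eq? r 'null) (eq? s 'null)) 'null)
            ((eq? r 'eps) s)
            ((eq? s 'eps) r)
            (else (list 'seq r s))))

    (define (nullable? r)  ; does r match the empty string?
      (cond ((eq? r 'null) #f)
            ((eq? r 'eps) #t)
            ((char? r) #f)
            ((eq? (car r) 'seq) (and (nullable? (cadr r)) (nullable? (caddr r))))
            ((eq? (car r) 'alt) (ormap nullable? (cdr r)))
            ((eq? (car r) 'star) #t)))

    (define (deriv c r)  ; Brzozowski derivative of r with respect to char c
      (cond ((eq? r 'null) 'null)
            ((eq? r 'eps) 'null)
            ((char? r) (if (char=? r c) 'eps 'null))
            ((eq? (car r) 'seq)
             (let ((d (seq2 (deriv c (cadr r)) (caddr r))))
               (if (nullable? (cadr r))
                   (alt* (list d (deriv c (caddr r))))
                   d)))
            ((eq? (car r) 'alt) (alt* (map (lambda (s) (deriv c s)) (cdr r))))
            ((eq? (car r) 'star) (seq2 (deriv c (cadr r)) r))))

    (define (chars-of r)
      (cond ((char? r) (list r))
            ((pair? r) (append-map chars-of (cdr r)))
            (else '())))

    (define (rx-equal? r s)  ; bisimulation over pairs of derivatives
      (let ((alphabet (remove-duplicates (append (chars-of r) (chars-of s)))))
        (let loop ((todo (list (list r s))) (seen '()))
          (cond ((null? todo) #t)
                ((member (car todo) seen) (loop (cdr todo) seen))
                (else
                 (let ((p (car todo)))
                   (and (eq? (nullable? (car p)) (nullable? (cadr p)))
                        (loop (append (map (lambda (c)
                                             (list (deriv c (car p))
                                                   (deriv c (cadr p))))
                                           alphabet)
                                      (cdr todo))
                              (cons p seen)))))))))

    ;; (a|b)* vs (a*b*)* -- both denote all strings over {a,b}:
    (rx-equal? '(star (alt #\a #\b))
               '(star (seq (star #\a) (star #\b))))  ; => #t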
Here's what's helped for me. I had strong headaches that would persist for weeks, with some auras, which my doctor called migraines. (They don't seem to be as bad as what people usually mean by the word.) A flaxseed oil supplement keeps them away. When I don't take enough, they come back; it needs to be at least 15g/day or so (many times more than the 2-3 gelcaps/day that supplement bottles direct you to take). I've taken fish oil occasionally instead.
I found this by (non-blinded) experimenting with different allegedly anti-inflammatory supplements. I'm not a doctor, etc.
Computing: The Pattern On The Stone by Daniel Hillis. It's shorter and seemingly more focused on principles than the Petzold book Code, which I can't compare further because I stopped reading early (low information density).
it's also notable that he successfully predicted the rise of the internet
Quibble: there was plenty of internet in 1986. He predicted a global hypertext publishing network, and its scale of impact, and starting when (mid-90s). (He didn't give any such timeframe for nanotechnology, which I guess is worth mentioning.)
Radical Abundance came out this past month.
Added: The most relevant things in the book for this post (which I've only skimmed):
There's been lots of progress in molecular-scale engineering and science that isn't called nanotechnology. This progress has been pretty much along the lines Drexler sketched in his 1981 paper and in the how-can-we-get-there sections of Nanosystems, though. This matches what I saw sitting in on Caltech courses in biomolecular engineering last year. Drexler believes the biggest remaining holdup on the engineering work is how it's
If you'd rather run with a very small and well-defined Scheme dialect meant just for this problem, see my reply to Eliezer proposing this kind of tournament. I made up a restricted language since Racket's zillion features would get in the way of interesting source-code analyses. Maybe they'll make the game more interesting in other ways?
There's a Javascript library by Andrew Plotkin for this sort of thing that handles 'a/an' and capitalization and leaves your code less repetitive, etc.
In Einstein's first years in the patent office he was working on his PhD thesis, which, when completed in 1905, was still one of his first publications. I've read Pais's biography and it left me with the impression that his career up to that point was unusually independent, with some trouble jumping through the hoops of his day, but not extraordinarily so. They didn't have the NSF back then funding all the science grad students.
I agree that all the people we're discussing were brought into the system (the others less so than Einstein) and that Einstein had t...
Isn't an H atom more like 0.1 nm in diameter? Of course it's fuzzy.
I agree with steven0461's criticisms. Drexler outlines a computer design giving a lower bound of 10^16 instructions/second/watt.
Should there be a ref to http://e-drexler.com/d/07/00/1204TechnologyRoadmap.html ?
Quibbling about words: "atom by atom" seems to have caused some confusion with some people (taking it literally as defining how you build things, when the important criterion is atomic precision). Also "nanobots" was coined in an ST:TNG episode, IIRC, and I'm not sure if people in the field use it.
You could grind seeds in a coffee grinder, as BillyOblivion suggests. (I don't because the extra stuff in seeds disagrees with another body issue of mine.) Sometimes I take around 5 gelcaps a day while traveling, which isn't as effective but makes most of the difference for the headaches.
What I do is put on a swimmer's nose clip, drink the oil by alternately taking in a mouthful of water and floating a swallow of oil down on top of that; follow up with a banana or something because I've found taking it on an empty stomach to disagree with me; have a bit mo...
When you call RUN, one of two things happens: it produces a result or you die from exhaustion. If you die, you can't act. If you get a result, you now know something about how much fuel there was before, at the cost of having used it up. The remaining fuel might be any amount in your prior, minus the amount used.
At the Scheme prompt:
(run 10000 '(equal? 'exhausted (cadr (run 1000 '((lambda (f) (f f)) (lambda (f) (f f))) (global-environment)))) global-environment)
; result: (8985 #t) ; The subrun completed and we find #t for yes, it ran to exhaustion.
(r
... The only way to check your fuel is to run out -- unless I goofed.
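To illustrate what fuel-threading looks like: a toy evaluator, not the actual interpreter behind run -- it handles just enough Scheme (symbols, one-argument lambdas, calls) for the example above:

    ;; eval/fuel returns (remaining-fuel value), with value 'exhausted on timeout.
    (define (eval/fuel expr env fuel)
      (cond ((= fuel 0) (list 0 'exhausted))
            ((symbol? expr) (list (- fuel 1) (cdr (assq expr env))))
            ((eq? (car expr) 'lambda) (list (- fuel 1) (list 'closure expr env)))
            (else  ; application: operator, then operand, then the body
             (let ((f (eval/fuel (car expr) env (- fuel 1))))
               (if (eq? (cadr f) 'exhausted)
                   f
                   (let ((a (eval/fuel (cadr expr) env (car f))))
                     (if (eq? (cadr a) 'exhausted)
                         a
                         (let* ((lam (cadr (cadr f)))  ; the (lambda (x) body) form
                                (env2 (cons (cons (car (cadr lam)) (cadr a))
                                            (caddr (cadr f)))))
                           (eval/fuel (caddr lam) env2 (car a))))))))))

    ;; ((lambda (f) (f f)) (lambda (f) (f f))) loops forever; fuel cuts it off:
    (eval/fuel '((lambda (f) (f f)) (lambda (f) (f f))) '() 1000)
    ;; => (0 exhausted)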
You could call that message passing, though conventionally that names a kind of overt influence of one running agent on another, all kinds of which are supposed to be excluded.
It shouldn't be hard to do variations where you can only run the other player and not look at their source code.
A related example that I, personally, considered science fiction back in the 80s: Jerry Pournelle's prediction that by the year 2000 you'd be able to ask a computer any question, and if there was a humanly-known answer, get it back. Google arrived with a couple of years to spare. To me that had sounded like an AI-complete problem even if all the info were online.
You bring up cryonics and AI. 25 years ago Engines of Creation had a chapter on each, plus another on... a global hypertext publishing network like the Web. The latter seemed less absurd back then than the first two, but it was still pretty far out there:
...One of the things I did was travel around the country trying to evangelize the idea of hypertext. People loved it, but nobody got it. Nobody. We provided lots of explanation. We had pictures. We had scenarios, little stories that told what it would be like. People would ask astonishing questions, like “w
A doctor faces a patient whose problem has resisted decision-tree diagnosis -- decision trees augmented by intangibles of experience and judgement, sure. The patient wants some creative debugging, which might at least fail differently. Will they get their wish? Not likely: what's in it for the doctor? The patient has some power of exit, not much help against a cartel. To this patient, to first order, Phil Goetz is right, and your points partly elaborate why he's right and partly list higher-order corrections.
(I did my best to put it dispassionately, but I'm rather angry about this.)
I've wondered lately, while reading The Laws of Thought, whether BDDs might help human reasoning too -- the kind of reasoning that gets formalized as boolean logic, anyway.
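For anyone who hasn't run into them, a toy Racket sketch of reduced, shared BDDs -- the formula syntax and the fixed variable order are my own choices -- showing why they turn boolean equivalence into a pointer comparison:

    ;; Formulas: variable symbols combined with (and ...), (or ...), (not ...).
    (define nodes (make-hash))  ; hash-consing table: (var lo hi) -> node

    (define (mk var lo hi)
      (if (equal? lo hi)
          lo  ; both branches agree: drop the redundant test
          (let ((key (list var lo hi)))
            (or (hash-ref nodes key #f)
                (begin (hash-set! nodes key key) key)))))

    (define (veval f asst)  ; evaluate a formula under a full assignment
      (cond ((boolean? f) f)
            ((symbol? f) (cdr (assq f asst)))
            ((eq? (car f) 'not) (not (veval (cadr f) asst)))
            ((eq? (car f) 'and) (andmap (lambda (g) (veval g asst)) (cdr f)))
            ((eq? (car f) 'or)  (ormap  (lambda (g) (veval g asst)) (cdr f)))))

    (define (build f vars asst)  ; Shannon-expand variable by variable
      (if (null? vars)
          (veval f asst)
          (mk (car vars)
              (build f (cdr vars) (cons (cons (car vars) #f) asst))
              (build f (cdr vars) (cons (cons (car vars) #t) asst)))))

    ;; Equivalent formulas build to the very same node:
    (eq? (build '(or (and x y) (and x z)) '(x y z) '())
         (build '(and x (or y z))         '(x y z) '()))  ; => #t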
This article reminded me of your post elsewhere about lazy partial evaluation / explanation-based learning and how both humans and machines use it.
The slowest phase in a nonoptimizing compiler is lexical scanning. (An optimizer can usefully absorb arbitrary amounts of effort, but most compiles don't strictly need it.) For most languages, scanning can be done in a few cycles/byte. Scanning with finite automata can also be done in parallel in O(log(n)) time, though I don't know of any compilers that do that. So, a system built for fast turnaround, using methods we know now (like good old Turbo Pascal), ought to be able to compile several lines/second given 1 kcycle/sec. Therefore you still want to reco...
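The arithmetic, with my own illustrative numbers rather than measurements:

    (define cycles-per-byte 5)   ; table-driven scanner plus naive codegen
    (define bytes-per-line 40)   ; a typicalish source line
    (define cycle-budget 1000)   ; cycles/second, per the scenario above

    (exact->inexact (/ cycle-budget (* cycles-per-byte bytes-per-line)))
    ;; => 5.0 lines compiled per second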
An idealized free market is one of selfish rational agents competing (with a few extra conditions I'm skipping). I'm moderately confident this could work pretty well in the absence of "general" (if such a thing exists) or perhaps human "intelligence", but I'm not familiar enough with simulations of markets to be certain.
Eric Baum's papers, among others, show this kind of thing applied to AI. There doesn't seem to have been much followup.
Comparative Ecology: A Computational Perspective compares this idea to the human economy and biologic...
About this article's tags: you want dark_arts, judging by the tags in the sidebar. The 'arts' tag links to posts about fiction, etc.
ObDarkArts101: Here's a course that could actually have been titled that:
...Writing Persuasion (Spring 2011) A course in persuasive techniques that do not rely on overt arguments. It would not be entirely inaccurate to call this a course in the theory, practice, and critique of sophistry. We will explore how putatively neutral narratives may be inflected to advance a (sometimes unstated) position; how writing can exploit reader
There might be more agreement here than meets the eye. Drexler often posts informatively and approvingly about progress in DNA nanotechnology and other bio-related tech at http://metamodern.com ; this is the less surprising when you remember that his very first nanotech paper outlined protein engineering as the development path. Nanosystems is mainly about establishing the feasibility of a range of advanced capabilities that biology doesn't already provide, and that it's not obvious biology could. Biology and its environment being complicated and ...
That writes can affect another session violates my expectations, at least, of where the boundaries would be set.