davidad has a 10-min talk out on a proposal about which he says: “the first time I’ve seen a concrete plan that might work to get human uploads before 2040, maybe even faster, given unlimited funding”.
I think the talk is a good watch, but the dialogue below is pretty readable even if you haven't seen it. I'm also putting some summary notes from the talk in the Appendix of this dialogue.
I think of the promise of the talk as follows. It might seem that to make the future go well, we have to either make general AI progress slower, or make alignment progress differentially faster. However, uploading seems to offer a third way: instead of making alignment researchers more productive, we "simply" run them faster. This seems...
I like Roko's suggestion that we should look at how many doomsayers actually predicted a danger (and how early). We should also look at how many dangers occurred with no prediction at all (the Cameroon lake eruptions come to mind).
Overall, the human error rate is pretty high: http://panko.shidler.hawaii.edu/HumanErr/ Getting the error rate under 0.5% per statement/action seems very unlikely, unless the work is deliberately put through a system that forces several iterations of checking and correction (Panko's data suggests that error checking typically finds abou...
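The compounding effect of repeated checking passes can be sketched with assumed numbers (the base error rate and per-pass detection rate below are illustrative round figures, not Panko's):

```python
# Residual error rate after repeated independent checking passes.
# Assumed for illustration: 5% base error rate per statement,
# each pass catching 60% of the errors that remain.
base_rate = 0.05
catch_rate = 0.60
target = 0.005  # the 0.5% per statement threshold

rate, passes = base_rate, 0
while rate >= target:
    rate *= (1 - catch_rate)
    passes += 1

print(passes, rate)  # 3 passes bring the rate down to 0.32%
```

The point is that no single pass gets you there; only the iterated structure does.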
A new report (Steven B. Giddings and Michelangelo M. Mangano, Astrophysical implications of hypothetical stable TeV-scale black holes, arXiv:0806.3381) does a much better job of dealing with the black hole risk than the old "report" Eliezer rightly slammed. It doesn't rely on Hawking radiation (though it has a pretty nice section showing why it is very likely) but instead calculates how well black holes can be captured by planets, white dwarfs and neutron stars (based on AFAIK well-understood physics, besides the multidimensional gravity one has to a...
If this is not a hoax or she does a Leary, we will have her around for a long time. Maybe one day she will even grow up. But seriously, I think Eli is right. In a way, given that I consider cryonics likely to be worthwhile, she has demonstrated that she might be more mature than I am.
To get back to the topic of this blog, cryonics and cognitive biases is a fine subject. There are a lot of biases to go around here, on all sides.
"If intelligence is an ability to act in the world, if it refer to some external reality, and if this reality is almost infinitely malleable, then intelligence cannot be purely innate or genetic."
This misses the No Free Lunch theorems, which state that there is no learning system that outperforms any other in general. Yes, full human intelligence, AI superintelligence, earthworms and selecting actions at random are just as good. The trick is "in general", since that covers an infinity of patternless possible worlds. Worlds with (to us) ...
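The "in general" clause can actually be checked exhaustively in a toy setting. Everything here is illustrative (a four-point search space, three possible values, best-of-two-evaluations as the score, and two made-up strategies), but the No Free Lunch result shows up exactly:

```python
from itertools import product

def run(alg, f, m):
    """Evaluate f at m distinct points chosen by alg; return the best value seen."""
    seen = {}
    for _ in range(m):
        x = alg(seen)
        seen[x] = f[x]
    return max(seen.values())

# Two non-repeating search strategies over X = {0, 1, 2, 3}.
def sequential(seen):
    return len(seen)  # visit 0, 1, 2, 3 in order

def adaptive(seen):
    if not seen:
        return 0
    # Feedback-dependent: jump to the far end if the first value looked poor.
    order = [3, 1, 2] if seen[0] == 0 else [1, 2, 3]
    for x in order:
        if x not in seen:
            return x

# Average best-of-2 performance over ALL functions f: X -> {0, 1, 2}.
funcs = list(product(range(3), repeat=4))
avg_seq = sum(run(sequential, f, 2) for f in funcs) / len(funcs)
avg_ada = sum(run(adaptive, f, 2) for f in funcs) / len(funcs)
assert avg_seq == avg_ada  # No Free Lunch: identical when averaged over all worlds
```

Averaged over the patternless ensemble of all possible functions, the clever strategy buys nothing; it only wins once the set of worlds is restricted to structured ones.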
People have apparently argued for a 300 to 30,000 years storage limit due to free radicals due to cosmic rays, but the uncertainty is pretty big. Cosmic rays and background radiation are likely not as much a problem as carbon-14 and potassium-40 atoms anyway, not to mention the freezing damage. http://www.cryonics.org/1chapter2.html has a bit of discussion of this. The quick way of estimating the damage is to assume it is time compressed, so that the accumulated yearly dose is given as an acute dose.
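A rough sketch of that time-compression estimate, using an assumed round figure for the background dose rate (and ignoring the internal C-14/K-40 contribution, which as noted may matter more):

```python
# Treat the dose accumulated over the whole storage period as one acute dose.
background = 2.4e-3  # Sv/year, an assumed typical background rate

for years in (300, 3000, 30000):
    acute = background * years
    print(years, "years ->", round(acute, 2), "Sv acute-equivalent")
```

Under these assumptions the proposed 300 to 30,000 year range spans roughly 0.7 to 70 Sv acute-equivalent, which is why the estimated storage limit is so sensitive to where you put the damage threshold.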
I think Kaj has a good point. In a current paper I'm discussing the Fermi paradox and the possibility of self-replicating interstellar killing machines. Should I mention Saberhagen's berserkers? In this case my choice was pretty easy, since beyond the basic concept his novels don't contain that much of actual relevance to my paper, so I just credit him with the concept and move on.
The example of Metamorphosis of Prime Intellect seems deeper, since it would be an example of something that can be described entirely theoretically but becomes more vivid and cle...
Another reason people overvalue science fiction is the availability bias due to the authors who got things right. Jules Verne had a fairly accurate travel time from the Earth to the Moon, Clarke predicted/invented geostationary satellites, John Brunner predicted computer worms. But of course this leaves out all the space pirates using slide rules for astrogation (while their robots serve rum), rays from unknown parts of the electromagnetic spectrum and gravity-shielding cavorite. There is a vast number of quite erroneous predictions.
I have collected a list o...
I think the "death gives meaning to life" meme is a great example of "standard wisdom". It is apparently paradoxical (the right form to be "deep"), and it provides a comfortable consolation for a nasty situation. But I have seldom seen any deep defense of it in the bioethical literature. Even people who strongly support it and ought to work very hard to demonstrate to fellow philosophers that it is a true statement seem to be content to just rattle it off as self-evident (or that people not feeling it in their guts are simply superf...
I have played with the idea of writing a "wisdom generator" program for a long time. A lot of "wise" statements seem to follow a small set of formulaic rules, and it would not be too hard to make a program that randomly generated wise sayings. A typical rule is to create a paradox ("Seek freedom and become captive of your desires. Seek discipline and find your liberty") or just use a nice chiasm or reversal ("The heart of a fool is in his mouth, but the mouth of the wise man is in his heart"). This seems to fit in w...
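A minimal sketch of such a generator, using two of the formulaic rules above as fill-in templates (the vocabulary is illustrative):

```python
import random

# Illustrative vocabulary; the templates follow the two rules mentioned:
# paradox and chiasmus/reversal.
NOUNS = ["freedom", "discipline", "silence", "desire", "wisdom", "power"]
PARADOX = "Seek {a} and become captive of {b}; seek {b} and find your {a}."
CHIASMUS = "The {a} of a fool is in his {b}, but the {b} of the wise man is in his {a}."

def wise_saying(rng=random):
    a, b = rng.sample(NOUNS, 2)
    template = rng.choice([PARADOX, CHIASMUS])
    return template.format(a=a, b=b)

print(wise_saying())
```

Even this crude version produces output that pattern-matches to "deep", which is rather the point.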
There is much to be said for looking at the super-specific. All the interesting complexity is found in the specific cases, while the whole often has less complexity (i.e. the algorithmic complexity of a list of the integers is much smaller than the algorithmic complexity of most large integers). While we might be trying to find good compressed descriptions of the whole, if we do not see how specific cases can be compressed and how they relate to each other we do not have much of a starting point, given that the whole usually overwhelms our limited working memories.
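A crude concrete version of the integer example, using the length of a generating expression as a stand-in for description length:

```python
# The string listing the integers 1..999 is long, but the expression
# generating it is short, a stand-in for its low algorithmic
# (Kolmogorov) complexity. A typical number with that many digits has
# no comparably short description.
program = "''.join(str(i) for i in range(1, 1000))"
output = eval(program)

print(len(program), len(output))  # the description is far shorter than the output
assert len(program) < len(output) / 50
```

The whole list compresses; most of its individual large members do not.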
Staring at walls is underrated. But I tend to get distracted from my main project by all the interesting details in the walls.
It appears that priming can be reduced by placing words into a context: priming for words previously seen in a text (or even a nonsense jumble) is weaker than when seen individually.
I constantly buy textbooks and use them as bedtime reading. A wonderful way to pick up the fundamentals of (or at least a superficial familiarity with) many subjects. However, just reading a textbook is unlikely to give great insight into any field. Doing exercises, and in particular having a teacher or mentor point out what is important, is necessary for actually getting anywhere.
To add at least some thread-relevant material, I'd like to recommend Eliezer's web page "An Intuitive Explanation of Bayesian Reasoning" at http://yudkowsky.n...
In my opinion a full scale thermonuclear war would likely neither have wiped out humanity (I'm reading the original nuclear winter papers as well as their criticisms right now) nor wiped out civilization. It would have been terribly bad for both though. I did a small fictional writeup of such a scenario for a roleplaying game, http://www.nada.kth.se/~asa/Game/Fukuyama/bigd.html based in turn on the information in "The Effects of Nuclear War" (OTA 1979). That scenario may have been too optimistic, but it is hard to tell. It seems that much would d...
I agree with Tom that there isn't that much room to change the field equations once you have decided on the Riemannian tensor framework: gravity cannot be expressed as first-order differential equations and still fit with observation, while the number of objects available to build a set of second-order equations is very limited. The equations are the simplest possibility (with the cosmological constant as a slight uglification, but it is just a constant of integration).
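For concreteness, the second-order equations in question, with the cosmological constant term included:

```latex
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}} T_{\mu\nu},
\qquad
G_{\mu\nu} = R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu}
```

Once you demand a symmetric, divergence-free second-order tensor built from the metric, the left-hand side is essentially forced, up to the two constants.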
But selecting the tensor framework, that is of course where all the bits had to go. It is not an obvi...
Yes, publication bias matters. But it also applies to the p<0.001 experiment - if we have just a single publication, should we believe that the effect is true and just one group has done the experiment, or that the effect is false and publication bias has prevented the publication of the negative results? If we had a few experiments (even with different results) it would be easier to estimate this than in the one published experiment case.
This also shows why independently replicated scientific experiments (more independent boxes) are more important than single experiments with very low p-values (boxes with better likelihood ratios).
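A toy Bayesian version of that question. All the numbers here (prior, power, number of labs) are assumed for illustration, not taken from any real case:

```python
from math import comb

# Suppose n_labs groups ran the experiment, only significant results get
# published, and we observe exactly one publication.
prior = 0.10    # assumed prior probability the effect is real
power = 0.80    # assumed P(significant | effect real)
alpha = 0.001   # P(significant | no effect): the p < 0.001 threshold
n_labs = 5      # assumed number of groups that tried it

def p_exactly_one(p_sig):
    return comb(n_labs, 1) * p_sig * (1 - p_sig) ** (n_labs - 1)

like_true = p_exactly_one(power)   # one hit out of five is *unlikely* if real
like_false = p_exactly_one(alpha)
posterior = prior * like_true / (prior * like_true + (1 - prior) * like_false)
print(round(posterior, 3))  # ~0.125: a lone publication barely moves the prior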
While Eliezer and I may be approaching the topic differently, I think we have very much the same aim. My approach will however never produce anything worthy to go into anybody's quote file.
David Brin has a nice analysis in his book The Transparent Society of what makes open societies work so well (no doubt distilled from others). Essentially it is the freedom to criticize and hold accountable that keeps powerful institutions honest and effective. While most people do not care or dare enough there are enough "antibodies" in a healthy open society to maintain it, even when the "antibodies" themselves may not always be entirely sane (there is a kind of social "peer review" going on here among the criticisms).
Muddle...
I did a calculation here:
http://tinyurl.com/3rgjrl
and concluded that I would start to believe there was something to the universe-destroying scenario after about 30 clear, uncorrelated mishaps (even when taking a certain probability of foul play into account).
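Not the linked calculation itself, but its shape can be sketched: with an assumed prior odds for the scenario and an assumed Bayes factor per clear, uncorrelated mishap, count the updates needed before the scenario becomes more likely than not.

```python
import math

# Illustrative numbers only; not those of the linked calculation.
prior_odds = 1e-30    # assumed prior odds of the universe-destroying scenario
bayes_factor = 10.0   # assumed evidence factor per clear, uncorrelated mishap

# Number of factor-of-bayes_factor updates needed to cancel the prior:
n = round(math.log10(1 / prior_odds) / math.log10(bayes_factor))
print(n)  # 30 mishaps under these assumptions
```

The count is very sensitive to the assumed prior and per-mishap factor, which is why "clear" and "uncorrelated" are doing a lot of work in the original statement.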