Like any educated denizen of the 21st century, you may have heard of World War II. You may remember that Hitler and the Nazis planned to carry forward a romanticized process of evolution, to breed a new master race, supermen, stronger and smarter than anything that had existed before.
Actually this is a common misconception. Hitler believed that the Aryan superman had previously existed—the Nordic stereotype, the blond blue-eyed beast of prey—but had been polluted by mingling with impure races. There had been a racial Fall from Grace.
It says something about the degree to which the concept of progress permeates Western civilization, that one is told about Nazi eugenics and hears "They tried to breed a superhuman." You, dear reader—if you failed hard enough to endorse coercive eugenics, you would try to create a superhuman. Because you locate your ideals in your future, not in your past. Because you are creative. The thought of breeding back to some Nordic archetype from a thousand years earlier would not even occur to you as a possibility—what, just the Vikings? That's all? If you failed hard enough to kill for it, you would damn well try to reach heights never before reached, or what a waste it would all be, eh? Well, that's one reason you're not a Nazi, dear reader.
It says something about how difficult it is for the relatively healthy to envision themselves in the shoes of the relatively sick, that we are told of the Nazis, and distort the tale to make them defective transhumanists.
It's the Communists who were the defective transhumanists. "New Soviet Man" and all that. The Nazis were quite definitely the bioconservatives of the tale.
Relatively new to the forum and just watched the 2.5-hour Yudkowsky video on Google. Excellent talk that really helped frame some of the posts here for me, though the audience questions were generally a distraction.
My biggest disappointment was that the one question that popped into my mind while watching, and was actually posed, went unanswered because the answer would have taken about five minutes. The man who asked was told to pose it again at the end of the talk, but he did not.
This was the question about Friendly AI: "Why are you assuming it knows the outcome of its modifications?"
Any pointer to the answer would be much appreciated.