Wei, you and others here interested in my opinions on this topic would benefit from understanding more about where I'm coming from, which you can mainly do by reading my old essays (especially the three philosophy essays I've just linked to on Unenumerated). It's a very different worldview from the typical "Less Wrong" worldview: based far more on accumulated knowledge and far less on superficial hyper-rationality. You can ask any questions that you have of me there, as I don't typically hang out here. As for your questions on this topic:
(1) Th...
I only have time for a short reply:
(1) I'd rephrase the above to say that computer security is among the two most important things one can study with regard to this alleged threat.
(2) The other important thing is law. Law is the "offensive approach to the problem of security" in the sense I suspect you mean it (unless you mean something more like the military). Law is very highly evolved, the work of millions of people as smart or smarter than Yudkowsky over more than a millennium, and tested empirically against the real world of real agents ...
It's not something you can ever come close to competing with by a philosophy invented from scratch.
I don't understand what you mean by this. Are you saying something like if a society was ever taken over by a Friendly AI, it would fail to compete against one ruled by law, in either a military or economic sense? Or do you mean "compete" in the sense of providing the most social good? Or something else?
I stand by my comment that "AGI" and "friendliness" are hopelessly anthropomorphic, infeasible, and/or vague.
I disagree w...
The other important thing is law. Law is the "offensive approach to the problem of security" in the sense I suspect you mean it (unless you mean something more like the military). Law is very highly evolved, the work of millions of people as smart or smarter than Yudkowsky over more than a millennium, and tested empirically against the real world of real agents with a real diversity of values every day. It's not something you can ever come close to competing with by a philosophy invented from scratch.
As a lawyer, I strongly suspect this stateme...
Law is very highly evolved, the work of millions of people as smart or smarter than Yudkowsky over more than a millennium,
That seems pretty harsh! The Bureau of Labor Statistics reports 728,000 lawyers in the U.S., a notably attorney-heavy society within the developed world. The SMPY study of kids with 1 in 10,000 cognitive test scores found (see page 722) only a small minority studying law. The 90th percentile IQ for "legal occupations" in this chart is a little over 130. Historically populations were much lower, nutrition was worse, legal ed...
The Selfish Gene itself is indeed quite sufficient to convince most thinking young people that evolution provides a far better explanation of how we got to be the way we are. It communicated far better than anybody else the core theories of neo-Darwinism which gave rise to evolutionary psychology, by stating bluntly the Copernican shift from group or individual selection to gene selection. Indeed, I'd still recommend it as the starting point for anybody interested in wading into the field of evolutionary psychology: you should understand the fairly elegant u...
I thought The Greatest Show On Earth (2010) was fantastic, and I'm currently rereading it. (I recommend this book to everyone. If you thought you understood evolution, you'll understand it better.) The first paragraph of the first chapter summarises just why Dawkins is so generally pissed off with religion these days:
...Imagine that you are a teacher of Roman history and the Latin language, anxious to impart your enthusiasm for the ancient world – for the elegiacs of Ovid and the odes of Horace, the sinewy economy of Latin grammar as exhibited in the orator
I am far more confident in it than I am in the AGI-is-important argument. Which of course isn't anywhere close to saying that I am highly confident in it. Just that the evidence for AGI-is-unimportant far outweighs that for AGI-is-important.
All of these kinds of futuristic speculations are stated with false certainty -- especially the AGI-is-very-important argument, which is usually stated with a level of certainty that is incredible for an imaginary construct. As for my evidence, I provide it in the above "see here" link -- extensive economic observations have been done on the benefits of specialization, for example, and we have extensive experience in computer science with applying specialized vs. generalized algorithms to problems and assessing their relative efficiency. That vast amount of real-world evidence far outweighs the mere speculative imagination that undergirds the AGI-is-very-important argument.
When some day some people (or some things) build an AGI, human-like or otherwise, it will at that time be extremely inferior to then-existing algorithms for any particular task (including any kind of learning or choice, including learning or choice of algorithms). Culture, including both technology and morality, will have changed beyond any of our recognitions long before that. Humans will already have been obsoleted for all jobs except, probably, those that for emotional reasons require interaction with another human (there's already a growth trend in su...
Skill at making such choices is itself a specialty, and doesn't mean you'll be good at other things. Indeed, the ability to properly choose algorithms in one problem domain often doesn't make you an expert at choosing them for a different problem domain. And as the software economy becomes more sophisticated these distinctions will grow ever sharper (basic Adam Smith here -- the division of labor grows with the size of the market). Such software choosers will come in dazzling variety: they, like other useful or threatening software, will not be general pu...
Indeed. As to why I find extreme consequences from general AI highly unlikely, see here. Alas, my main reason is partly buried in the comments (I really need to do a new post on this subject). It can be summarized as follows: for basic reasons of economics and computer science, specialized algorithms are generally far superior to general ones. Specialized algorithms are what we should hope for or fear, and their positive and negative consequences occur a little at a time -- and have been occurring for a long time already, so we have many actual real-wor...
It can be summarized as follows: for basic reasons of economics and computer science, specialized algorithms are generally far superior to general ones.
It would be better to present, as your main reason, "the kinds of general algorithms that humans are likely to develop and implement, even absent impediments caused by AI-existential risk activism, will almost certainly be far inferior to specialized ones". That there exist general-purpose algorithms which subsume the competitive abilities of all existing human-engineered special-purpose algori...
I should have said something about marginal utility there. Doesn't change the three tests for a Pascal scam though.
The asteroid threat is a good example of a low-probability disaster that is probably not a Pascal scam. On point (1) it is fairly lottery-like, insofar as asteroid orbits are relatively predictable -- the unknowns are primarily "known unknowns", being deviations from very simple functions -- so it's possible to compute odds from actual data, rather than merely guessing them from a morass of "unknown unknowns". It passes ...
ideal reasoners are not supposed to disagree
My ideal thinkers do disagree, even with themselves. Especially about areas as radically uncertain as this.