"ideal reasoners are not supposed to disagree"

My ideal thinkers do disagree, even with themselves. Especially about areas as radically uncertain as this.

Wei, you and others here interested in my opinions on this topic would benefit from understanding more about where I'm coming from, which you can mainly do by reading my old essays (especially the three philosophy essays I've just linked to on Unenumerated). It's a very different worldview than the typical "Less Wrong" worldview: based far more on accumulated knowledge and far less on superficial hyper-rationality. You can ask me any questions you have there, as I don't typically hang out here. As for your questions on this topic:

(1) There is insufficient evidence to distinguish it from an arbitrarily low probability.

(2) To state a probability would be an exercise in false precision, but at least it's a clearly stated goal that one can start gathering evidence for and against.

(3) It depends on how clearly and formally the goal is stated, including the design of observations and/or experiments that can be done to accurately (not just precisely) measure progress towards and attainment or non-attainment of that goal.

As for what I'm currently working on, my blog Unenumerated is a good indication of my publicly accessible work. Also feel free to post there any follow-up questions or comments stemming from this thread.

I only have time for a short reply:

(1) I'd rephrase the above to say that computer security is among the two most important things one can study with regard to this alleged threat.

(2) The other important thing is law. Law is the "offensive approach to the problem of security" in the sense I suspect you mean it (unless you mean something more like the military). Law is very highly evolved, the work of millions of people as smart or smarter than Yudkowsky over more than a millennium, and tested empirically every day against the real world of real agents with a real diversity of values. It's not something you can ever come close to competing with by a philosophy invented from scratch.

(3) I stand by my comment that "AGI" and "friendliness" are hopelessly anthropomorphic, infeasible, and/or vague.

(4) Computer "goals" are only usefully studied against actual algorithms, or clearly defined mathematical classes of algorithms, not vague and imaginary concepts. Perhaps you can make some progress by, for example, advancing the study of postconditions, which seem to be the closest analog to goals in the software engineering world. One can imagine a world where postconditions are always checked, for example, and other software ignores the output of software that has violated one of its postconditions.
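
To make the postcondition idea concrete, here is a minimal sketch in Python (the "ensures" decorator and all names are my illustration, not an existing library): a function declares a predicate over its own output, and a violation is flagged so that downstream software can refuse to consume the result.

```python
import functools

class PostconditionViolation(Exception):
    """Raised when a function's declared postcondition fails on its output."""

def ensures(postcondition):
    """Attach a postcondition (a predicate over the return value) to a function."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            if not postcondition(result):
                # Downstream software can catch this and ignore the output.
                raise PostconditionViolation(
                    f"{fn.__name__} violated its postcondition")
            return result
        return wrapper
    return decorate

@ensures(lambda xs: all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1)))
def sort_scores(scores):
    return sorted(scores)

print(sort_scores([3, 1, 2]))  # [1, 2, 3]; a violating output would raise instead
```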

Selfish Gene itself is indeed quite sufficient to convince most thinking young people that evolution provides a far better explanation of how we got to be the way we are. It communicated the core theories of neo-Darwinism, which gave rise to evolutionary psychology, far better than anybody else had, by stating bluntly the Copernican shift from group or individual selection to gene selection. Indeed, I'd still recommend it as the starting point for anybody interested in wading into the field of evolutionary psychology: you should understand the fairly elegant underlying theory before doing the deep dive into what is now a far less elegant and organized study (in part because many of its practitioners still don't understand the underlying theory).

Dawkins also had some very interesting theories of his own about evolution and animal behavior in Extended Phenotype, and despite his skill as a communicator of science, it's a great loss that he largely discontinued his actual research.

In Blind Watchmaker he actually expresses quite a bit of understanding and empathy for major creationist arguments, especially the watchmaker argument, in the process of debunking them far better than any evolutionist had ever debunked them before.

Since then, he's gone downhill, becoming by now pedantic, repetitive, and shrill. Of course he went downhill from a great height very few of us can ever hope to reach, but it's sad nevertheless.

I am far more confident in it than I am in the AGI-is-important argument. Which of course isn't anywhere close to saying that I am highly confident in it. Just that the evidence for AGI-is-unimportant far outweighs that for AGI-is-important.

All of these kinds of futuristic speculations are stated with false certainty -- especially the AGI-is-very-important argument, which is usually stated with a level of certainty that is incredible for an imaginary construct. As for my evidence, I provide it in the above "see here" link -- extensive economic observations have been done on the benefits of specialization, for example, and we have extensive experience in computer science with applying specialized vs. generalized algorithms to problems and assessing their relative efficiency. That vast amount of real-world evidence far outweighs the mere speculative imagination that undergirds the AGI-is-very-important argument.
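
As a toy illustration of that computer-science experience (my example, not from the linked post): an algorithm specialized to sorted input (binary search, O(log n)) against the fully general linear scan (O(n)) on the same task.

```python
import bisect
import timeit

data = list(range(1_000_000))  # the specialized search assumes sorted input
target = 999_999

def general_search(xs, x):
    """Works on any list: scans element by element, O(n)."""
    return xs.index(x)

def specialized_search(xs, x):
    """Exploits the sortedness precondition: binary search, O(log n)."""
    i = bisect.bisect_left(xs, x)
    return i if i < len(xs) and xs[i] == x else -1

print("general:    ", timeit.timeit(lambda: general_search(data, target), number=100))
print("specialized:", timeit.timeit(lambda: specialized_search(data, target), number=100))
```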

When some day some people (or some things) build an AGI, human-like or otherwise, it will at that time be extremely inferior to then-existing algorithms for any particular task (including any kind of learning or choice, including learning or choice of algorithms). Culture, including both technology and morality, will have changed beyond recognition long before that. Humans will already have been obsoleted for all jobs except, probably, those that for emotional reasons require interaction with another human (there's already a growth trend in such jobs today).

The robot apocalypse, in other words, will arrive and is arriving one algorithm at a time. It's a process we can observe unfolding, since it has been going on for a long time already, and learn from -- real data rather than imagination. Targeting an imaginary future algorithm does nothing to stop it.

If, for example, you can't make current algorithms "friendly", it's highly unlikely that you're going to make the even more hyperspecialized algorithms of the future friendly either. Instead of postulating imaginary solutions to imaginary problems, it's much more useful to work empirically, e.g. on computer security that mathematically prevents algorithms in general from violating particular desired rights. Recognize real problems and demonstrate real solutions to them.
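
One concrete shape such security work can take is object-capability discipline: untrusted code receives only a narrow capability, so violating the protected right is prevented by construction rather than detected after the fact. A minimal sketch of the pattern follows (my illustration; note that Python cannot strictly enforce capability discipline -- real systems do this at the language or OS level).

```python
class AppendOnlyLog:
    """Holds the full authority: both reading and appending."""

    def __init__(self):
        self._entries = []

    def append(self, entry):
        self._entries.append(entry)

    def reader(self):
        """Hand out a read-only capability: no path to the mutating methods."""
        entries = self._entries

        class ReadCap:
            def read(self, i):
                return entries[i]

            def __len__(self):
                return len(entries)

        return ReadCap()


log = AppendOnlyLog()
log.append("genesis")

def untrusted_algorithm(cap):
    # Can read, but holds no reference through which to modify the log.
    return [cap.read(i) for i in range(len(cap))]

print(untrusted_algorithm(log.reader()))  # ['genesis']
```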

Skill at making such choices is itself a specialty, and doesn't mean you'll be good at other things. Indeed, the ability to properly choose algorithms in one problem domain often doesn't make you an expert at choosing them for a different problem domain. And as the software economy becomes more sophisticated these distinctions will grow ever sharper (basic Adam Smith here -- the division of labor grows with the size of the market). Such software choosers will come in dazzling variety: they, like other useful or threatening software, will not be general purpose. And who will choose the choosers? No sentient entity at all -- they'll be chosen the way they are today, by a wide variety of markets, except that there too the variety will be far greater.
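
A minimal sketch of such a chooser (my illustration, with hypothetical names): a per-domain selector that picks among candidate algorithms by measured cost on that domain's inputs, with no general-purpose intelligence involved anywhere.

```python
import functools
import operator
import time

def choose(candidates, benchmark_inputs):
    """Return the candidate with the lowest measured cost on this domain's inputs."""
    def cost(fn):
        start = time.perf_counter()
        for x in benchmark_inputs:
            fn(x)
        return time.perf_counter() - start
    return min(candidates, key=cost)

# Two specialized candidates for one narrow task: summing a list of numbers.
def builtin_sum(xs):
    return sum(xs)

def fold_sum(xs):
    return functools.reduce(operator.add, xs, 0)

best = choose([builtin_sum, fold_sum], benchmark_inputs=[list(range(1000))] * 100)
print(best.__name__)  # on CPython, the C-implemented builtin typically wins
```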

Such markets and technologies are already far beyond the ability of any single human to comprehend, and that gap between economic and technological reality and our ability to comprehend and predict it grows wider every year. In that sense, the singularity already happened, and long ago.

Indeed. As to why I find extreme consequences from general AI highly unlikely, see here. Alas, my main reason is partly buried in the comments (I really need to do a new post on this subject). It can be summarized as follows: for basic reasons of economics and computer science, specialized algorithms are generally far superior to general ones. Specialized algorithms are what we should hope for or fear, and their positive and negative consequences occur a little at a time -- and have been occurring for a long time already, so we have many actual real-world observations to go by. They can be addressed specifically, each passing tests 1-3, so that we can solve these problems and achieve these hopes one specialized task at a time, as well as induce general theories from these experiences (e.g. of security), without getting sucked into any of the near-infinity of Pascal scams one could dream up about the future of computing and robotics.
