Comment author: nickLW 15 August 2012 08:08:21AM *  5 points [-]

ideal reasoners are not supposed to disagree

My ideal thinkers do disagree, even with themselves. Especially about areas as radically uncertain as this.

Comment author: Wei_Dai 23 July 2012 12:13:20PM 2 points [-]

Nick, do you see a fault in how I've been carrying on our discussions as well? Because you've also left several of our threads dangling, including:

  • How likely is it that an AGI will be created before all of its potential economic niches have been filled by more specialized algorithms?
  • How much hope is there for "security against malware as strong as we can achieve for symmetric key cryptography"?
  • Does "hopelessly anthropomorphic and vague" really apply to "goals"?

(Of course it's understandable if you're just too busy. If that's the case, what kind of projects are you working on these days?)

Comment author: nickLW 27 July 2012 03:36:30AM 2 points [-]

Wei, you and others here interested in my opinions on this topic would benefit from understanding more about where I'm coming from, which you can mainly do by reading my old essays (especially the three philosophy essays I've just linked to on Unenumerated). It's a very different worldview from the typical "Less Wrong" one: based far more on accumulated knowledge and far less on superficial hyper-rationality. You can ask me any questions you have there, as I don't typically hang out here. As for your questions on this topic:

(1) There is insufficient evidence to distinguish it from an arbitrarily low probability.

(2) To state a probability would be an exercise in false precision, but at least it's a clearly stated goal that one can start gathering evidence for and against.

(3) It depends on how clearly and formally the goal is stated, including the design of observations and/or experiments that can be done to accurately (not just precisely) measure progress towards and attainment or non-attainment of that goal.

As for what I'm currently working on, my blog Unenumerated is a good indication of my publicly accessible work. Also feel free to post any follow-up questions or comments stemming from this thread over there.

Comment author: CarlShulman 22 July 2012 02:35:49AM *  10 points [-]

Law is very highly evolved, the work of millions of people as smart or smarter than Yudkowsky over more than a millennium,

That seems pretty harsh! The Bureau of Labor Statistics reports 728,000 lawyers in the U.S., a notably attorney-heavy society within the developed world. The SMPY study of kids with 1-in-10,000 cognitive test scores found (see page 722) only a small minority studying law. The 90th-percentile IQ for "legal occupations" in this chart is a little over 130. Historically, populations were much lower, nutrition was worse, legal education or authority was available only to a small minority, and the Flynn effect had not yet occurred. Not to mention that law is disproportionately made by politicians, who are selected for charisma and other factors in addition to intelligence.

and tested empirically against the real world of real agents with a real diversity of values every day. It's not something you can ever come close to competing with by a philosophy invented from scratch.

It's hard to know what to make of this.

Perhaps that the legal system is good at creating incentives that closely align the interests of those it governs with the social good, and that this will work on new types of being without much dependence on their decisionmaking processes?

Contracts and basic property rights certainly do seem to help produce wealth. On the other hand, financial regulation is regularly adjusted to try to nullify new innovation by financiers that poses systemic risks or exploits government guarantees, but the financial industry still frequently outmaneuvers the legal system. And of course the legal system depends on the loyalty of the security forces for enforcement, and makes use of ideological agreement among the citizenry that various things are right or wrong.

Restraining those who are much weaker is easier than restraining those who are strong. A more powerful analogy would be civilian control over military and security forces. There do seem to have been big advances in civilian control over the military in the developed countries (fewer coups, etc), but they seem to reflect changes in ideology and technology more than law.

If it is easy to enforce laws on new AGI systems, then the situation seems fairly tractable, even for AGI systems with across-the-board superhuman performance which take action based on alien and inhumane cost functions. But it doesn't seem guaranteed that it will be easy to enforce such laws on smart AGIs, or that the trajectory of development will be "all narrow AI, all the time," given the great economic value of human generality.

Comment author: nickLW 22 July 2012 05:27:37PM -2 points [-]

The Bureau of Labor Statistics reports 728,000 lawyers in the U.S

I would have thought it obvious that I was talking about lawyers who have been developing law for at least a millennium, not merely currently living lawyers in one particular country. Oh well.

Since my posts seem to be being read so carelessly, I will no longer be posting on this thread. I highly recommend that folks who want to learn more about where I'm coming from visit my blog, Unenumerated. Also, to learn more about the evolutionary emergence of ethical and legal rules, I highly recommend Hayek -- The Fatal Conceit makes a good starting point.

Comment author: nickLW 21 July 2012 10:09:52PM 9 points [-]

I only have time for a short reply:

(1) I'd rephrase the above to say that computer security is among the two most important things one can study with regard to this alleged threat.

(2) The other important thing is law. Law is the "offensive approach to the problem of security" in the sense I suspect you mean it (unless you mean something more like the military). Law is very highly evolved, the work of millions of people as smart or smarter than Yudkowsky over more than a millennium, and tested empirically against the real world of real agents with a real diversity of values every day. It's not something you can ever come close to competing with by a philosophy invented from scratch.

(3) I stand by my comment that "AGI" and "friendliness" are hopelessly anthropomorphic, infeasible, and/or vague.

(4) Computer "goals" are only usefully studied against actual algorithms, or clearly defined mathemetical classes of algorithms, not vague and imaginary concepts. Perhaps you can make some progress by for example advancing the study of postconditions, which seem to be the closest analog to goals in the software engineering world. One can imagine a world where postconditions are always checked, for example, and other software ignores the output of software that has violated one of its postconditions.

Comment author: OrphanWilde 19 July 2012 08:06:46PM 5 points [-]

His early works, such as The Selfish Gene, were actually really good books for convincing somebody of an alternative to creationism or guided creation, however. (Which isn't the same as convincing somebody of atheism, but does give somebody paralyzed by the question of where complex life came from a much-needed line of retreat.)

Comment author: nickLW 20 July 2012 03:27:34AM 5 points [-]

The Selfish Gene itself is indeed quite sufficient to convince most thinking young people that evolution provides a far better explanation of how we got to be the way we are. It communicated, far better than anybody else had, the core theories of neo-Darwinism which gave rise to evolutionary psychology, by stating bluntly the Copernican shift from group or individual selection to gene selection. Indeed, I'd still recommend it as the starting point for anybody interested in wading into the field of evolutionary psychology: you should understand the fairly elegant underlying theory before doing the deep dive into what is now a far less elegant and organized study (in part because many of its practitioners still don't understand the underlying theory).

Dawkins also had some very interesting theories of his own about evolution and animal behavior in The Extended Phenotype, and for all his skill as a communicator of science, it's a great loss that he largely discontinued his actual scientific research.

In The Blind Watchmaker he actually expresses quite a bit of understanding of, and empathy for, major creationist arguments, especially the watchmaker argument, in the process of debunking them far better than any evolutionist had ever debunked them before.

Since then, he's gone downhill, becoming by now pedantic and repetitive and shrill. Of course, he went downhill from a great height that very few of us can ever hope to reach, but it's sad nevertheless.

Comment author: Wei_Dai 19 July 2012 06:40:45PM *  0 points [-]

When some day some people (or some things) build an AGI [...] Humans will already have been obsoleted for all jobs except, probably, those that for emotional reasons require interaction with another human

To rephrase my question, how confident are you of this, and why? It seems to me quite possible that by the time someone builds an AGI, there will still be plenty of human jobs that have not been taken over by specialized algorithms, because humans have not been smart enough to invent the necessary specialized algorithms yet. Do you have a reason to think this can't be true?

ETA: My reply is a bit redundant given Nesov's sibling comment. I didn't see his when I posted mine.

Comment author: nickLW 19 July 2012 11:08:44PM 5 points [-]

I am far more confident in it than I am in the AGI-is-important argument. Which of course isn't anywhere close to saying that I am highly confident in it. Just that the evidence for AGI-is-unimportant far outweighs that for AGI-is-important.

Comment author: Vladimir_Nesov 19 July 2012 06:39:08PM *  0 points [-]

When some day some people (or some things) build an AGI, human-like or otherwise, it will at that time be extremely inferior to then-existing algorithms for any particular task (including any kind of learning or choice, including learning or choice of algorithms). Culture, including both technology and morality, will have changed beyond any of our recognitions long before that. Humans will already have been obsoleted for all jobs except, probably, those that for emotional reasons require interaction with another human (there's already a growth trend in such jobs today).

The phrasing suggests a level of certainty that is uncalled for in a claim this detailed and offered without supporting evidence. I'm not sure there is enough support for even paying attention to this hypothesis. Where does it come from?

(Obvious counterexample that doesn't seem unlikely: AGI is invented early, so all the cultural changes you've listed aren't present at that time.)

Comment author: nickLW 19 July 2012 11:02:34PM 3 points [-]

All of these kinds of futuristic speculations are stated with false certainty -- especially the AGI-is-very-important argument, which is usually stated with a level of certainty that is incredible for an imaginary construct. As for my evidence, I provide it in the above "see here" link -- extensive economic observations have been done on the benefits of specialization, for example, and we have extensive experience in computer science with applying specialized vs. generalized algorithms to problems and assessing their relative efficiency. That vast amount of real-world evidence far outweighs the mere speculative imagination that undergirds the AGI-is-very-important argument.

Comment author: Wei_Dai 19 July 2012 12:12:23PM 0 points [-]

If uploads are infeasible, what about other possible ways to build AGIs? In any case, I'm responding to Nick's argument that we do not have to worry about extreme consequences from AGIs because "specialized algorithms are generally far superior to general ones", which seems to be a separate argument from whether AGIs are feasible.

Comment author: nickLW 19 July 2012 05:36:43PM 3 points [-]

When some day some people (or some things) build an AGI, human-like or otherwise, it will at that time be extremely inferior to then-existing algorithms for any particular task (including any kind of learning or choice, including learning or choice of algorithms). Culture, including both technology and morality, will have changed beyond any of our recognitions long before that. Humans will already have been obsoleted for all jobs except, probably, those that for emotional reasons require interaction with another human (there's already a growth trend in such jobs today).

The robot apocalypse, in other words, will arrive and is arriving one algorithm at a time. It's a process we can observe unfolding, since it has been going on for a long time already, and learn from -- real data rather than imagination. Targeting an imaginary future algorithm does nothing to stop it.

If, for example, you can't make current algorithms "friendly", it's highly unlikely that you're going to make the even more hyperspecialized algorithms of the future friendly either. Instead of postulating imaginary solutions to imaginary problems, it's much more useful to work empirically, e.g. on computer security that mathematically prevents algorithms in general from violating particular desired rights. Recognize real problems and demonstrate real solutions to them.
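
(One shape such rights-enforcing security could take is capability discipline, sketched below in Python purely as an illustration of my own, not Nick's specification. Python can only gesture at the idea, since real enforcement would need a language or operating system that makes the restriction inescapable.)

    class ReadCap:
        """A capability granting read access to a store, and nothing else."""
        def __init__(self, store):
            self._store = store
        def get(self, key):
            return self._store[key]
        # Deliberately no set/delete: those rights were never granted.

    def run_untrusted(algorithm, cap):
        """The algorithm can act only through the capability it receives."""
        return algorithm(cap)

    secrets = {"balance": 42}
    # Grant read-only access; a write isn't even expressible via the cap.
    print(run_untrusted(lambda cap: cap.get("balance"), ReadCap(secrets)))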

Comment author: Wei_Dai 18 July 2012 11:27:51PM *  2 points [-]

It can be summarized as follows: for basic reasons of economics and computer science, specialized algorithms are generally far superior to general ones.

I don't understand your reasoning here. If you have a general AI, it can always choose to apply or invent a specialized algorithm when the situation calls for that, but if all you have is a collection of specialized algorithms, then you have to try to choose/invent the right algorithm yourself, and will likely do a worse (possibly much worse) job than the general AI if it is smarter than you are. So why do we not have to worry about "extreme consequences from general AI"?

Comment author: nickLW 19 July 2012 06:00:14AM 5 points [-]

Skill at making such choices is itself a specialty, and doesn't mean you'll be good at other things. Indeed, the ability to properly choose algorithms in one problem domain often doesn't make you an expert at choosing them for a different problem domain. And as the software economy becomes more sophisticated, these distinctions will grow ever sharper (basic Adam Smith here -- the division of labor grows with the size of the market). Such software choosers will come in dazzling variety: they, like other useful or threatening software, will not be general purpose. And who will choose the choosers? No sentient entity at all -- they'll be chosen the way they are today, by a wide variety of markets, except that there too the variety will be far greater.

Such markets and technologies are already far beyond the ability of any single human to comprehend, and that gap between economic and technological reality and our ability to comprehend and predict it grows wider every year. In that sense, the singularity already happened, and long ago.

Comment author: [deleted] 17 July 2012 01:12:46PM 4 points [-]

Seems like there is more going on than just "Do transhumanists endorse Pascalian bargains?" Because of the gap between confidence levels inside and outside an argument, the fact that someone (e.g. SI) argues that a particular risk has a non-negligible probability does not mean that someone examining the claim should assign it a non-negligible probability. It's very possible for someone thinking about (e.g.) AI risk to assign low probabilities, and thus find themselves in a Pascalian situation, even if SI argues that the probability of AI risk is high.
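
(A toy illustration of that gap, in Python with entirely made-up numbers: the examiner's probability is the argument's conclusion weighted by the chance the argument is sound.)

    # Made-up numbers, purely illustrative. The argument concludes
    # P(risk) = 0.9; the examiner gives the argument itself only a 2%
    # chance of being sound, and holds a prior of 0.0001 otherwise.
    p_sound = 0.02
    p_risk_if_sound = 0.9
    p_risk_if_unsound = 0.0001

    p_risk = p_sound * p_risk_if_sound + (1 - p_sound) * p_risk_if_unsound
    print(p_risk)  # ~0.0181: low, despite the argument's confident conclusion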

Comment author: nickLW 17 July 2012 06:18:15PM 5 points [-]

Indeed. As to why I find extreme consequences from general AI highly unlikely, see here. Alas, my main reason is partly buried in the comments (I really need to do a new post on this subject). It can be summarized as follows: for basic reasons of economics and computer science, specialized algorithms are generally far superior to general ones. Specialized algorithms are what we should hope for or fear, and their positive and negative consequences occur a little at a time -- and have been occurring for a long time already, so we have many actual real-world observations to go by. They can be addressed specifically, each passing tests 1-3, so that we can solve these problems and achieve these hopes one specialized task at a time, as well as induce general theories from these experiences (e.g. of security), without getting sucked into any of the near-infinity of Pascal scams one could dream up about the future of computing and robotics.
