All of nickLW's Comments + Replies

nickLW30

ideal reasoners are not supposed to disagree

My ideal thinkers do disagree, even with themselves. Especially about areas as radically uncertain as this.

nickLW10

Wei, you and others here interested in my opinions on this topic would benefit from understanding more about where I'm coming from, which you can mainly do by reading my old essays (especially the three philosophy essays I've just linked to on Unenumerated). It's a very different worldview from the typical Less Wrong worldview: based far more on accumulated knowledge and far less on superficial hyper-rationality. You can ask me any questions you have there, as I don't typically hang out here. As for your questions on this topic:

(1) Th... (read more)

6Wei Dai
I've actually already read those essays (which I really enjoyed, BTW), but still often cannot see how you've arrived at your conclusions on the topics we've been talking about recently. For the rest of your comment, you seem to have misunderstood my grandparent comment. I was asking you to respond to my arguments on each of the threads we were discussing, not just to tell me how you would answer each of my questions. (I was using the questions to refer to our discussions, not literally asking them. Sorry if I didn't make that clear.)
4Wei Dai
Nick, do you see a fault in how I've been carrying on our discussions as well? Because you've also left several of our threads dangling, including:

* How likely is it that an AGI will be created before all of its potential economic niches have been filled by more specialized algorithms?
* How much hope is there for "security against malware as strong as we can achieve for symmetric key cryptography"?
* Does "hopelessly anthropomorphic and vague" really apply to "goals"?

(Of course it's understandable if you're just too busy. If that's the case, what kind of projects are you working on these days?)
6CarlShulman
A careful reading of my own comment would have revealed my references to the US as only one heavily lawyered society (useful for an upper bound on lawyer density, and representing a large portion of the developed world and legal population), and to the low populations of past centuries (which make those centuries less important for a population estimate), indicating that I was talking about the total over time and space (above some threshold of intelligence) as well. I was presenting figures as the start of an estimate of the long-term lawyer population, and to indicate that to get "millions" one could not pick a high percentile within the population of lawyers, which is problematic given the intelligence of even 90th percentile attorneys.
nickLW130

I only have time for a short reply:

(1) I'd rephrase the above to say that computer security is among the two most important things one can study with regard to this alleged threat.

(2) The other important thing is law. Law is the "offensive approach to the problem of security" in the sense I suspect you mean it (unless you mean something more like the military). Law is very highly evolved, the work of millions of people as smart or smarter than Yudkowsky over more than a millennium, and tested empirically against the real world of real agents ... (read more)

Wei Dai120

It's not something you can ever come close to competing with by a philosophy invented from scratch.

I don't understand what you mean by this. Are you saying something like: if a society were ever taken over by a Friendly AI, it would fail to compete against one ruled by law, in either a military or economic sense? Or do you mean "compete" in the sense of providing the most social good? Or something else?

I stand by my comment that "AGI" and "friendliness" are hopelessly anthropomorphic, infeasible, and/or vague.

I disagree w... (read more)

TimS410

The other important thing is law. Law is the "offensive approach to the problem of security" in the sense I suspect you mean it (unless you mean something more like the military). Law is very highly evolved, the work of millions of people as smart or smarter than Yudkowsky over more than a millennium, and tested empirically against the real world of real agents with a real diversity of values every day. It's not something you can ever come close to competing with by a philosophy invented from scratch.

As a lawyer, I strongly suspect this stateme... (read more)

Law is very highly evolved, the work of millions of people as smart or smarter than Yudkowsky over more than a millennium,

That seems pretty harsh! The Bureau of Labor Statistics reports 728,000 lawyers in the U.S., a notably attorney-heavy society within the developed world. The SMPY study of kids with 1-in-10,000 cognitive test scores found (see page 722) only a small minority studying law. The 90th percentile IQ for "legal occupations" in this chart is a little over 130. Historically, populations were much lower, nutrition was worse, legal ed... (read more)
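A back-of-the-envelope version of that tail estimate can be made explicit in a few lines. Beyond the two figures cited above (728,000 lawyers, a 90th percentile around 130), everything here is an illustrative assumption: normality of the distribution and the assumed profession mean in particular.

```python
from math import erfc, sqrt

def frac_above(x, mean, sd):
    """P(X > x) for X ~ Normal(mean, sd), via the complementary error function."""
    return 0.5 * erfc((x - mean) / (sd * sqrt(2)))

# 728,000 and ~130 are the figures cited above; the rest is guesswork.
us_lawyers = 728_000
mean_iq = 115.0                       # assumed mean IQ for the profession
sd_iq = (130.0 - mean_iq) / 1.2816    # back out sd from a 90th percentile of ~130

for bar in (130, 140, 150):
    n = us_lawyers * frac_above(bar, mean_iq, sd_iq)
    print(f"current US lawyers with IQ > {bar}: ~{n:,.0f}")
```

Even under these generous assumptions, the high tail of the current bar comes out in the tens of thousands at most, which is the point about not being able to pick a high percentile and still get "millions".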

nickLW50

The Selfish Gene itself is indeed quite sufficient to convince most thinking young people that evolution provides a far better explanation of how we got to be the way we are. It communicated, far better than anybody else, the core theories of neo-Darwinism that gave rise to evolutionary psychology, by stating bluntly the Copernican shift from group or individual selection to gene selection. Indeed, I'd still recommend it as the starting point for anybody interested in wading into the field of evolutionary psychology: you should understand the fairly elegant u... (read more)

I thought The Greatest Show On Earth (2010) was fantastic, and I'm currently rereading it. (I recommend this book to everyone. If you thought you understood evolution, you'll understand it better.) The first paragraph of the first chapter summarises just why Dawkins is so generally pissed off with religion these days:

Imagine that you are a teacher of Roman history and the Latin language, anxious to impart your enthusiasm for the ancient world – for the elegiacs of Ovid and the odes of Horace, the sinewy economy of Latin grammar as exhibited in the orator

... (read more)
nickLW60

I am far more confident in it than I am in the AGI-is-important argument. Which of course isn't anywhere close to saying that I am highly confident in it. Just that the evidence for AGI-is-unimportant far outweighs that for AGI-is-important.

nickLW20

All of these kinds of futuristic speculations are stated with false certainty -- especially the AGI-is-very-important argument, which is usually stated with a level of certainty that is incredible for an imaginary construct. As for my evidence, I provide it in the above "see here" link -- extensive economic observations have been done on the benefits of specialization, for example, and we have extensive experience in computer science with applying specialized vs. generalized algorithms to problems and assessing their relative efficiency. That vast amount of real-world evidence far outweighs the mere speculative imagination that undergirds the AGI-is-very-important argument.
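To make the specialized-vs-generalized contrast concrete, here is a toy sketch; the particular algorithms, sizes, and numbers are illustrative assumptions, not anything from the thread. A search specialized to sorted data wins enormously on its own niche, but is simply inapplicable when its precondition fails:

```python
import bisect
import random
import timeit

data = sorted(random.randrange(10**9) for _ in range(100_000))
queries = [random.choice(data) for _ in range(200)]

def general_search(xs, target):
    """General: assumes nothing about xs; O(n) per query."""
    return any(x == target for x in xs)

def specialized_search(sorted_xs, target):
    """Specialized: requires sorted input; O(log n) per query."""
    i = bisect.bisect_left(sorted_xs, target)
    return i < len(sorted_xs) and sorted_xs[i] == target

t_gen = timeit.timeit(lambda: [general_search(data, q) for q in queries], number=1)
t_spec = timeit.timeit(lambda: [specialized_search(data, q) for q in queries], number=1)
print(f"general: {t_gen:.3f}s   specialized: {t_spec:.5f}s")
```

The specialist buys its speed by exploiting structure (sortedness) that the generalist must ignore -- the same tradeoff, writ small, that the economic argument above appeals to.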

4Vladimir_Nesov
The belief that an error is commonly made doesn't make it OK in any particular case. (When, for example, I say that I believe that AGI is dangerous, this isn't false certainty, in the sense that I do believe that it's very likely the case. If I'm wrong on this point, at least my words accurately reflect my state of belief. Having an incorrect belief and incorrectly communicating a belief are two separate, unrelated potential errors. If you don't believe that something is likely, but state it in language that suggests it is, you are being unnecessarily misleading.)
5Wei Dai
Given the benefits of specialization, how do you explain the existence of general intelligence (i.e. humans)? Why weren't all the evolutionary niches that humans currently occupy already taken by organisms with more specialized intelligence? My explanation is that generalized algorithms may be less efficient than specialized algorithms when specialized algorithms are available, but inventing specialized algorithms is hard (both for us and for evolution), so often specialized algorithms simply aren't available. You don't seem to have responded to this line of argument...
nickLW20

When some day some people (or some things) build an AGI, human-like or otherwise, it will at that time be extremely inferior to then-existing algorithms for any particular task (including any kind of learning or choice, including learning or choice of algorithms). Culture, including both technology and morality, will have changed beyond recognition long before that. Humans will already have been obsoleted for all jobs except, probably, those that for emotional reasons require interaction with another human (there's already a growth trend in su... (read more)

1Wei Dai
To rephrase my question, how confident are you of this, and why? It seems to me quite possible that by the time someone builds an AGI, there are still plenty of human jobs that have not been taken over by specialized algorithms due to humans not being smart enough to have invented the necessary specialized algorithms yet. Do you have a reason to think this can't be true? ETA: My reply is a bit redundant given Nesov's sibling comment. I didn't see his when I posted mine.
3Vladimir_Nesov
The phrasing suggests a level of certainty that's uncalled for, given a claim that's so detailed and offered without supporting evidence. I'm not sure there is enough support for even paying attention to this hypothesis. Where does it come from? (Obvious counterexample that doesn't seem unlikely: AGI is invented early, so all the cultural changes you've listed aren't present at that time.)
nickLW50

Skill at making such choices is itself a specialty, and doesn't mean you'll be good at other things. Indeed, the ability to properly choose algorithms in one problem domain often doesn't make you an expert at choosing them for a different problem domain. And as the software economy becomes more sophisticated, these distinctions will grow ever sharper (basic Adam Smith here -- the division of labor grows with the size of the market). Such software choosers will come in dazzling variety: they, like other useful or threatening software, will not be general pu... (read more)

3Steve_Rayhawk
Can you expand on this? The way you say it suggests that it might be your core objection to the thesis of economically explosive strong AI -- put into words, the way the emotional charge would hook into the argument here would be: "Such a strong AI would have to be at least as smart as the market, and yet it would have been designed by humans, which would mean there had to be a human at least as smart as the market: and belief in this possibility is always hubris, and is characteristically disastrous for its bearer -- something you always want to be on the opposite side of an argument from"? (Where "smart" here is meant to express something metaphorically similar to a proof system's strength: "the system successfully uses unknowably diverse strategies that a lesser system would either never think to invent or never correctly decide how much to trust".) I guess, for this explanation to work, it also has to be your core objection to Friendly AI as a mitigation strategy: "No human-conceived AI architecture can subsume or substitute for all the lines of innovation that the future of the economy should produce, much less control such an economy to preserve any predicate relating to human values. Any preservation we are going to get is going to have to be built incrementally from empirical experience with incremental software economic threats to those values, each of which we will necessarily be able to overcome if there had ever been any hope for humankind to begin with; and it would be hubris, and throwing away any true hope we have, to cling to a chimerical hope of anything less partial, uncertain, or temporary."
0Wei Dai
Would you agree that humans are in general not very good at inventing new algorithms, many useful algorithms remain undiscovered, and as a result many jobs are still being done by humans instead of specialized algorithms? Isn't it possible that this situation (i.e., many jobs still being done by humans, including the jobs of inventing new algorithms) is still largely the case by the time that a general AI smarter than human (for example, an upload of John von Neumann running at 10 times human speed) is created, which at a minimum results in many humans suddenly losing their jobs and at a maximum allows the AI or its creators to take over the world? Do you have an argument why this isn't possible or isn't worth worrying about (or hoping for)?
nickLW60

Indeed. As to why I find extreme consequences from general AI highly unlikely, see here. Alas, my main reason is partly buried in the comments (I really need to do a new post on this subject). It can be summarized as follows: for basic reasons of economics and computer science, specialized algorithms are generally far superior to general ones. Specialized algorithms are what we should hope for or fear, and their positive and negative consequences occur a little at a time -- and have been occurring for a long time already, so we have many actual real-wor... (read more)

It can be summarized as follows: for basic reasons of economics and computer science, specialized algorithms are generally far superior to general ones.

It would be better to present, as your main reason, "the kinds of general algorithms that humans are likely to develop and implement, even absent impediments caused by AI-existential risk activism, will almost certainly be far inferior to specialized ones". That there exist general-purpose algorithms which subsume the competitive abilities of all existing human-engineered special-purpose algori... (read more)

3Wei Dai
I don't understand your reasoning here. If you have a general AI, it can always choose to apply or invent a specialized algorithm when the situation calls for that, but if all you have is a collection of specialized algorithms, then you have to try to choose/invent the right algorithm yourself, and will likely do a worse (possibly much worse) job than the general AI if it is smarter than you are. So why do we not have to worry about "extreme consequences from general AI"?
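The point can be put in a few lines of code. This is a minimal sketch under assumed toy routines, not anyone's actual proposal: a general solver loses nothing by delegating to a specialist whose niche assumption holds, and keeps a (slower) general fallback when none does.

```python
import bisect

def search_sorted(xs, target):
    """Specialist: valid only for sorted xs; O(log n)."""
    i = bisect.bisect_left(xs, target)
    return i < len(xs) and xs[i] == target

def search_any(xs, target):
    """Generalist fallback: valid for any xs; O(n)."""
    return any(x == target for x in xs)

def is_sorted(xs):
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

SPECIALISTS = [(is_sorted, search_sorted)]  # (niche precondition, routine)

def general_search(xs, target):
    """Delegate to a specialist whose niche matches the input, else fall back."""
    for applies, routine in SPECIALISTS:
        if applies(xs):
            return routine(xs, target)
    return search_any(xs, target)

print(general_search([1, 3, 5, 7], 5))   # sorted -> specialist path, True
print(general_search([5, 1, 7, 3], 5))   # unsorted -> general fallback, True
```

(In practice the precondition would be known in advance or amortized rather than checked per call; the O(n) check here just keeps the sketch self-contained.) Choosing among specialists is itself something the generalist can do, which is why the generalist is never strictly worse than its best available specialist.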
nickLW60

I should have said something about marginal utility there. Doesn't change the three tests for a Pascal scam, though.

The asteroid threat is a good example of a low-probability disaster that is probably not a Pascal scam. On point (1) it is fairly lottery-like, insofar as asteroid orbits are relatively predictable -- the unknowns are primarily "known unknowns", being deviations from very simple functions -- so it's possible to compute odds from actual data, rather than merely guessing them from a morass of "unknown unknowns". It passes ... (read more)
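For contrast with a Pascal scam's guessed odds, here is a sketch of what "computing odds from actual data" can look like: treat large impacts as a Poisson process whose rate is estimated from the historical record. The rate below is a placeholder assumption for illustration, not a figure from the comment above.

```python
from math import exp

rate_per_year = 0.5 / 1_000_000   # assumed: one large impact per ~2 million years
horizon_years = 100

# Poisson process: P(at least one event in t years) = 1 - exp(-rate * t)
p_at_least_one = 1 - exp(-rate_per_year * horizon_years)
print(f"P(at least one impact in {horizon_years} years) ~= {p_at_least_one:.1e}")
```

The probability is small but grounded: every input traces back to an observable frequency rather than to a morass of "unknown unknowns".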