nickLW comments on Work on Security Instead of Friendliness? - Less Wrong

Post author: Wei_Dai 21 July 2012 06:28PM

You are viewing a single comment's thread.

Comment author: nickLW 21 July 2012 10:09:52PM 9 points [-]

I only have time for a short reply:

(1) I'd rephrase the above to say that computer security is among the two most important things one can study with regard to this alleged threat.

(2) The other important thing is law. Law is the "offensive approach to the problem of security" in the sense I suspect you mean it (unless you mean something more like the military). Law is very highly evolved, the work of millions of people as smart as or smarter than Yudkowsky over more than a millennium, and tested empirically against the real world of real agents with a real diversity of values every day. It's not something you can ever come close to competing with by a philosophy invented from scratch.

(3) I stand by my comment that "AGI" and "friendliness" are hopelessly anthropomorphic, infeasible, and/or vague.

(4) Computer "goals" are only usefully studied against actual algorithms, or clearly defined mathematical classes of algorithms, not vague and imaginary concepts. Perhaps you can make some progress by, for example, advancing the study of postconditions, which seem to be the closest analog to goals in the software engineering world. One can imagine a world where postconditions are always checked, for example, and other software ignores the output of software that has violated one of its postconditions.
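The postcondition idea above can be sketched in code. This is a hypothetical illustration (the decorator and function names are mine, not an existing library): a function's "goal" is stated as a checkable condition on its output, and output that violates the condition is never handed to downstream software.

```python
import functools

def with_postcondition(check):
    """Attach a runtime-checked postcondition to a function.

    `check` receives the function's result followed by its original
    arguments, and returns True if the "goal" was met.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            if not check(result, *args, **kwargs):
                # Signal violation instead of returning a value that
                # downstream software might mistakenly trust.
                raise AssertionError(f"postcondition violated by {fn.__name__}")
            return result
        return wrapper
    return decorator

# Hypothetical example: the "goal" of a sort routine, stated as a
# postcondition over its output.
@with_postcondition(lambda out, xs: out == sorted(xs))
def checked_sort(xs):
    return sorted(xs)
```

A correct implementation passes silently; a buggy one raises at the call site, which is the mechanical analog of "other software ignores the output of software that has violated one of its postconditions."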

Comment author: TimS 22 July 2012 02:59:37AM 25 points [-]

The other important thing is law. Law is the "offensive approach to the problem of security" in the sense I suspect you mean it (unless you mean something more like the military). Law is very highly evolved, the work of millions of people as smart as or smarter than Yudkowsky over more than a millennium, and tested empirically against the real world of real agents with a real diversity of values every day. It's not something you can ever come close to competing with by a philosophy invented from scratch.

As a lawyer, I strongly suspect this statement is false. As you seem to be using the term, Law is society's organizational rules about how and when to implement coercive violence. In the abstract, this is powerful, but concretely, this power is implemented by individuals. Some of them (e.g. police officers) care relatively little about the abstract issues; in other words, they aren't careful about the issues that are relevant to AI.

Further, law is filled with backdoors - they are called legislators. In the United States, Congress can make almost any judicially announced rule irrelevant by passing a statute. If you call that process "Law," then you aren't talking about the institution that draws on "the work of millions of smart people" over time.

Finally, individual lawyers' day-to-day work has almost no relationship to the parts of Law that you suggest are relevant to AI. Worse for your point, lawyers don't even engage with the policy issues of law with any frequency. For example, a lawyer litigating contracts might never engage, in her entire career, with the question of which promises should be enforced.

In short, your paragraph about law is misdirected and misleading.

Comment author: Wei_Dai 22 July 2012 04:34:10AM 7 points [-]

It's not something you can ever come close to competing with by a philosophy invented from scratch.

I don't understand what you mean by this. Are you saying something like: if a society were ever taken over by a Friendly AI, it would fail to compete against one ruled by law, in either a military or economic sense? Or do you mean "compete" in the sense of providing the most social good? Or something else?

I stand by my comment that "AGI" and "friendliness" are hopelessly anthropomorphic, infeasible, and/or vague.

I disagree with "hopelessly," "anthropomorphic," and "vague," but "infeasible" I may very well agree with, if you mean something like: it's highly unlikely that a human team would succeed in creating a Friendly AGI before it's too late to make a difference, and without creating unacceptable risk. That is why I advocate more indirect methods of achieving it.

Computer "goals" are only usefully studied against actual algorithms, or clearly defined mathematical classes of algorithms, not vague and imaginary concepts.

People are trying to design such algorithms, things like practical approximations to AIXI, or better alternatives to AIXI. Are you saying they should refrain from using the word "goals" until they have actually come up with concrete designs, or what? (Again I don't advocate people trying to directly build AGIs, Friendly or otherwise, but your objection doesn't seem to make sense.)

Comment author: Steve_Rayhawk 22 July 2012 11:54:54PM *  12 points [-]

It's not something you can ever come close to competing with by a philosophy invented from scratch.

I don't understand what you mean by this.

A sufficient cause for Nick to claim this would be that he believed that no human-conceivable AI design would be able to incorporate by any means, including by reasoning from first principles or even by reference, anything functionally equivalent to the results of all the various dynamics of updating that have (for instance) made present legal systems as (relatively) robust (against currently engineerable methods of exploitation) as they are.

This seems somewhat strange to you, because you believe humans can conceive of AI designs that could reason some things from first principles (given observations of the world that the reasoning needed to be relevant to, plus reasonably anticipatable advantages of computing power over single humans) or incorporate results by reference.

One possible reason he might believe this would be that he believed that, whenever a human reasons about history or evolved institutions, there are something like two distinct levels of a computational complexity hierarchy at work, and that the powers of the greater level (history and the evolution of institutions) are completely inaccessible to the powers of the lesser level (the human). (The machines representing the two levels in this case might be "the mental states accessible to a single armchair philosophy community", or, alternatively, "fledgling AI which, per a priori economic intuition, has no advantage over a few philosophers", versus "the physical states accessible in human history".)

This belief of his might be charged with a sort of independent half-intuitive aversion to making the sorts of (frequently catastrophic) mistakes that are routinely made by people who think they can metaphorically breach this complexity barrier. One effect of such an aversion would be that he would intuitively anticipate that he would always be, at least in expected value, wrong to agree with such people, no matter what arguments they could turn out to have. That is, it wouldn't increase his expected rightness to check to see if they were right about some proposed procedure to get around the complexity barrier, because, intuitively, the prior probability that they were wrong, the conditional probability that they would still be wrong despite being persuasive by any conventional threshold, and the wrongness of the cost that had empirically been inflicted on the world by mistakes of that sort, would all be so high. (I took his reference to Hayek's Fatal Conceit, and the general indirect and implicitly argued emotional dynamic of this interaction, to be confirmation of this intuitive aversion.) By describing this effect explicitly, I don't mean to completely psychologize here, or make a status move by objectification. Intuitions like the one I'm attributing can (and very much should!), of course, be raised to the level of verbally presented propositions, and argued for explicitly.

(For what it's worth, the most direct counter to the complexity argument expressed this way is: "with enough effort it is almost certainly possible, even from this side of the barrier, to formalize how to set into motion entities that would be on the other side of the barrier". To cover the pragmatics of the argument, one would also need to add: "and agreeing that this amount of effort is possible can even be safe, so long as everyone who heard of your agreement was sufficiently strongly motivated not to attempt shortcuts".)

Another, possibly overlapping reason would have to do with the meta level that people around here normally imagine approaching AI safety problems from -- that being, "don't even bother trying to invent all the required philosophy yourself; instead do your best to try to formalize how to mechanically refer to the process that generated, and could continue to generate, something equivalent to the necessary philosophy, so as to make that process happen better or at least to maximally stay out of its way" ("even if this formalization turns out to be very hard to do, as the alternatives are even worse"). That meta level might be one that he doesn't really think of as even being possible. One possible reason for this would be that he weren't aware that anyone actually ever meant to refer to a meta level that high, so that he never developed a separate concept for it. Perhaps when he first encountered e.g. Eliezer's account of the AI safety philosophy/engineering problem, the concept he came away with was based on a filled-in assumption about the default mistake that Eliezer must have made and the consequent meta level at which Eliezer meant to propose that the problem should be attacked, and that meta level was far too low for success to be conceivable, and he didn't afterwards ever spontaneously find any reason to suppose you or Eliezer might not have made that mistake. Another possible reason would be that he disbelieved, on the above-mentioned a priori grounds, that the proposed meta level was possible at all. (Or, at least, that it could ever be safe to believe that it were possible, given the horrors perpetrated and threatened by other people who were comparably confident in their reasons for believing similar things.)

Comment author: CarlShulman 22 July 2012 02:35:49AM *  10 points [-]

Law is very highly evolved, the work of millions of people as smart as or smarter than Yudkowsky over more than a millennium,

That seems pretty harsh! The Bureau of Labor Statistics reports 728,000 lawyers in the U.S., a notably attorney-heavy society within the developed world. The SMPY study of kids with 1 in 10,000 cognitive test scores found (see page 722) only a small minority studying law. The 90th percentile IQ for "legal occupations" in this chart is a little over 130. Historically populations were much lower, nutrition was worse, legal education or authority was only available to a small minority, and the Flynn Effect had not occurred. Not to mention that law is disproportionately made by politicians who are selected for charisma and other factors in addition to intelligence.

and tested empirically against the real world of real agents with a real diversity of values every day. It's not something you can ever come close to competing with by a philosophy invented from scratch.

It's hard to know what to make of this.

Perhaps that the legal system is good at creating incentives that closely align the interests of those it governs with the social good, and that this will work on new types of being without much dependence on their decisionmaking processes?

Contracts and basic property rights certainly do seem to help produce wealth. On the other hand, financial regulation is regularly adjusted to try to nullify new innovation by financiers that poses systemic risks or exploits government guarantees, but the financial industry still frequently outmaneuvers the legal system. And of course the legal system depends on the loyalty of the security forces for enforcement, and makes use of ideological agreement among the citizenry that various things are right or wrong.

Restraining those who are much weaker is easier than restraining those who are strong. A more powerful analogy would be civilian control over military and security forces. There do seem to have been big advances in civilian control over the military in the developed countries (fewer coups, etc), but they seem to reflect changes in ideology and technology more than law.

If it is easy to enforce laws on new AGI systems, then the situation seems fairly tractable, even for AGI systems with across-the-board superhuman performance which take action based on alien and inhumane cost functions. But it doesn't seem guaranteed that it will be easy to enforce such laws on smart AGIs, or that the trajectory of development will be "all narrow AI, all the time," given the great economic value of human generality.

Comment author: private_messaging 22 July 2012 04:06:59PM *  5 points [-]

That seems pretty harsh!

There's a 0.0001 prior for a 1-in-10,000 intelligence level. It's a low prior: you need a genius detector with an incredibly low false positive rate before most of your 'geniuses' are actually smart. A very well-defined problem with a very clear 'solved' condition (such as multiple novel mathematical proofs, or a novel algorithmic solution to a hard problem that others are trying to solve) would maybe suffice, but 'he seems smart' certainly would not. This also goes for IQ tests themselves: while a genius would have a high IQ score, a high-IQ-scoring person would most likely be someone somewhat smart slipping through the crack between what an IQ test measures and what intelligence is (case examples: Chris Langan, or Keith Raniere, or other high-IQ 'geniuses' we would never suspect of being particularly smart if not for IQ tests).

Weak and/or subjective evidence of intelligence, especially given a lack of statistical independence between pieces of evidence, should not push your estimate of anyone's intelligence very high.
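The base-rate point above can be made concrete with Bayes' rule. This is a sketch under assumed numbers (the 1% false positive rate is illustrative, not from the comment):

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(genius | detector fires), by Bayes' rule."""
    p_fire = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_fire

# A 1-in-10,000 prior, and a detector that flags every true genius
# but misfires on 1% of everyone else (illustrative numbers).
p = posterior(prior=1e-4, sensitivity=1.0, false_positive_rate=0.01)
# p is roughly 0.0099: about 99 of every 100 people flagged are
# false positives, because the low prior dominates.
```

Even a detector that never misses a genius yields mostly false positives here, which is why the comment insists on an "incredibly low" false positive rate before the label means much.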

Comment author: Wei_Dai 22 July 2012 09:33:10PM 2 points [-]

This is rather tangential, but I'm curious, out of those who score 1 in 10000 on a standard IQ test, what percentage is actually at least, say, 1 in 5000 in actual intelligence? Do you have a citation or personal estimate?

Comment author: private_messaging 23 July 2012 06:36:31AM *  0 points [-]

It would depend on how you evaluate actual intelligence. An IQ test, at the high range, measures reliability in solving simple problems (combined, perhaps, with environmental exposure similar to the test maker's when it comes to progressive matrices and other 'continue the sequence' items; the predictions of Solomonoff induction depend on the machine and prior exposure, too). As an extreme example, consider an intelligence test made of very many very simple and straightforward logical questions. It will correlate with IQ, but at the high range it will clearly measure something different from intelligence. All the intelligent individuals will score highly on that test, but so will a lot of people who are simply very good at simple questions.

A thought experiment: picture a classroom of mind uploads, with half of their procedural skills set to read-only, and teach them an algebra class. Same IQ, utterly different outcome.

I would expect that if actual intelligence correlates with IQ with a coefficient of 0.9 (a VERY generous assumption), IQ could easily become non-predictive at as low as the 99th percentile without creating any contradiction with the observed general correlation. edit: that would make about one out of 50 people with an IQ of one in 10,000 (or one in 1,000, or one in 10,000,000 for that matter) intelligent at the level of 1 in 5,000. That seems kind of low, but then, we mostly don't hear of high-IQ people for their IQ alone. edit: and high-IQ organizations like Mensa and the like are hopelessly unremarkable, rather than some ultra-powerful groups of super-intelligences.
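For comparison, the 0.9 correlation figure can be plugged into a toy bivariate-normal model. This is my assumption, not the commenter's: it deliberately treats test error as independent noise, ignoring the correlated failure modes the comment worries about, which is precisely the disputed issue.

```python
from statistics import NormalDist

std = NormalDist()
r = 0.9                                # the comment's assumed correlation
z_score = std.inv_cdf(1 - 1 / 10_000)  # test-score threshold: 1 in 10,000
z_true = std.inv_cdf(1 - 1 / 5_000)    # true-ability threshold: 1 in 5,000

# Under a bivariate-normal model, true ability conditional on a test
# score exactly at z_score is Normal(r * z_score, sqrt(1 - r^2)).
cond = NormalDist(mu=r * z_score, sigma=(1 - r ** 2) ** 0.5)
frac = 1 - cond.cdf(z_true)   # fraction reaching 1-in-5,000 ability
# frac comes out near one third under these idealized assumptions
```

Under that idealized model roughly a third of people scoring exactly at the 1-in-10,000 threshold reach 1-in-5,000 ability, so an estimate as low as 1 in 50 rests on the test errors being strongly correlated rather than independent.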

In any case the point is that the higher the percentile, the more confident you must be that there is no common failure mode between parts of your test.

edit: and for the record, my IQ is 148 as measured on a (crappy) test in English, which is not my native tongue. I also got very high percentile ratings in a programming contest, and I used to be good at chess. I have no need to rationalize anything here. I feel that a lot of this sheepish, innumerate assumption that you can infer one-in-10,000-level performance from a test, while being far less than 99.99% certain of the absence of failure modes, comes simply from signalling: arguing against the applicability of IQ tests at implausibly high percentiles lets idiots claim that you must be stupid. When you want to select a one-in-10,000 level of performance in running 100 meters, you can't do it by measuring performance at a standing jump.

Comment author: CarlShulman 23 July 2012 07:20:43AM *  2 points [-]

There are longitudinal studies showing that people with 99.99th percentile performance on cognitive tests have substantially better performance (on patents, income, tenure at top universities) than those at the 99.9th or 99th percentiles. More here.

and the high IQ organizations like Mensa and the like are hopelessly unremarkable, rather than some ultra powerful groups of super-intelligences.

Mensa is less selective than elite colleges or workplaces for intelligence, and much less selective for other things like conscientiousness, height, social ability, family wealth, etc. Far more very high IQ people are in top academic departments, Wall Street, and Silicon Valley than in high-IQ societies more selective than Mensa. So high-IQ societies are a very unrepresentative sample, selected to be less awesome in non-IQ dimensions.

Comment author: private_messaging 23 July 2012 08:50:45AM *  -1 points [-]

There are longitudinal studies showing that people with 99.99th percentile performance on cognitive tests

Uses tests other than an IQ test, right? I do not dispute that a cognitive test can be made with the reliability required for detecting the 99.99th percentile. IQ tests, however, are full of 'continue a short sequence' items that are quite dubious even in principle. It is fundamentally difficult to measure up into the 99.99th percentile; you need a highly reliable measurement apparatus, carefully constructed in precisely the way IQ tests are not. Extreme rarities like one in 10,000 should not be thrown around lightly.

Mensa is less selective than elite colleges or workplaces for intelligence

There are other societies. They all are not very selective for intelligence either, though, because they all rely on dubious tests.

and much less selective for other things like conscientiousness, height, social ability, family wealth, etc.

I would say that this makes those other places an unrepresentative sample of "high IQ" individuals. Even if the individuals who pass highly selective requirements on something else rarely enter Mensa, they are rare (a tautology, given "highly selective"), and their relative underrepresentation in Mensa doesn't sway Mensa's averages.

edit: for example consider the Nobel Prize winners. They all have high IQs but there is considerable spread and the IQ doesn't seem to correlate well with the estimate of "how many others worked on this and did not succeed".

Note: I am using "IQ" in the narrow sense of "what IQ tests measure," not as shorthand for intelligence. Intelligence has a capacity-to-learn component which IQ tests do not measure, but which tests of mathematical aptitude (with hard problems) or verbal aptitude do.

note 2: I do not believe that the correlation entirely disappears even for IQ tests past the 99th percentile. My argument is that for typical IQ tests it well could. It's just that the further up you go, the smaller the fraction of the excellence that is actually being measured.

Comment author: CarlShulman 23 July 2012 09:13:33AM *  1 point [-]

Uses other tests than IQ test, right?

Administering SATs to younger children, to raise the ceiling.

I would say that this makes those other places be an unrepresentative sample of the "high IQ" individuals.

Well Mensa is ~0 selectivity beyond the IQ threshold, and is a substitute good for other social networks, leaving it with the dregs. "Much more" is poor phrasing here, they're not rejecting 90%. If you look at the linked papers you'll see that a good majority of those at the 1 in 10,000 level on those childhood tests wind up with elite university/alumni or professional networks with better than Mensa IQ distributions.

Comment author: private_messaging 23 July 2012 09:38:31AM *  0 points [-]

Administering SATs to younger children, to raise the ceiling.

Ghmmm. I'm sure this measures plenty of highly useful personal qualities that correlate with income, e.g. rate of learning, or inclination to pursue intellectual work.

Well Mensa is ~0 selectivity beyond the IQ threshold, and is a substitute good for other social networks, leaving it with the dregs. "Much more" is poor phrasing here, they're not rejecting 90%. If you look at the linked papers you'll see that a good majority of those at the 1 in 10,000 level on those childhood tests wind up with elite university/alumni or professional networks with better than Mensa IQ distributions

Well, yes. I think we agree on all substantial points here but disagree on the interpretation of my post. I referred specifically to "IQ tests," not to the SAT, as lacking the rigour required for establishing 1-in-10,000 performance with any confidence. This was to support my point that, e.g., 'that guy seems smart' shouldn't possibly result in an estimate of 1 in 10,000, and neither could anything that relies on a rather subjective estimate of the difficulty of accomplishments, in settings where you can't, e.g., reliably estimate from the number of other people who try and don't succeed.

Comment author: CarlShulman 23 July 2012 10:09:05AM *  0 points [-]

I referred specifically to "IQ tests," not to the SAT, as lacking the rigour required for establishing 1-in-10,000 performance with any confidence, to support my point that, e.g., 'that guy seems smart' shouldn't possibly result in an estimate of 1 in 10,000

Note that these studies use the same tests (childhood SAT) that Eliezer excelled on (quite a lot higher than the 1 in 10,000 level), and that I was taking into account in my estimation.

Comment author: David_Gerard 22 July 2012 11:59:03PM 0 points [-]

Depends what you call "actual intelligence" as distinct from what IQ tests measure. private_messaging talks a lot in terms of observable real-world achievements, so presumably is thinking of something along those lines.

Comment author: evand 23 July 2012 12:42:27AM 0 points [-]

The easiest interpretation to measure would be a regression-toward-the-mean effect. Putting a lower bound on the IQ scores in your sample means that a relevant fraction of your sample tested higher than their average test score. I suspect that at the high end, few enough questions are scored incorrectly that noise can let some test-takers below the 1-in-5,000 level into your 1-in-10,000 cutoff.

Comment author: David_Gerard 23 July 2012 07:30:28AM *  1 point [-]

I also didn't note the other problem: 1 in 10,000 is around IQ=155; the ceiling of most standardized (validated and normed) intelligence tests is around 1 in 1000 (IQ~=149). Tests above this tend to be constructed by people who consider themselves in this range, to see who can join their high IQ society and not substantially for any other purpose.

Comment author: nickLW 22 July 2012 05:27:37PM -2 points [-]

The Bureau of Labor Statistics reports 728,000 lawyers in the U.S

I would have thought it obvious that I was talking about lawyers who have been developing law for at least a millennium, not merely currently living lawyers in one particular country. Oh well.

Since my posts seem to be being read so carelessly, I will no longer be posting on this thread. I highly recommend that folks who want to learn more about where I'm coming from visit my blog, Unenumerated. Also, to learn more about the evolutionary emergence of ethical and legal rules, I highly recommend Hayek; The Fatal Conceit makes a good starting point.

Comment author: CarlShulman 22 July 2012 06:03:29PM *  4 points [-]

I would have thought it obvious that I was talking about lawyers who have been developing law for at least a millennium, not merely currently living lawyers in one particular country. Oh well.

Since my posts seem to be being read so carelessly, I will no longer be posting on this thread.

A careful reading of my own comment would have revealed my references to the US as only one heavily lawyered society (useful for an upper bound on lawyer density, and representing a large portion of the developed world and legal population), and to the low population of past centuries (which make them of lesser importance for a population estimate), indicating that I was talking about the total over time and space (above some threshold of intelligence) as well.

I was presenting figures as the start of an estimate of long term lawyer population, and to indicate that to get "millions" one could not pick a high percentile within the population of lawyers, problematic given the intelligence of even 90th percentile attorneys.

Comment author: private_messaging 22 July 2012 07:29:14PM *  1 point [-]

And why should one pick a high percentile, exactly, if the priors for high percentiles are proportionally low and strong evidence is absent? What's wrong with assuming 'somewhat above median', i.e. close to the 50th percentile? Why is that even really harsh?

Comment author: CarlShulman 22 July 2012 08:00:40PM 7 points [-]

Extreme standardized testing (after adjusting for regression to the mean), successful writer (by hits, readers, reviews; even vocabulary, which is fairly strongly associated with intelligence in large statistical samples), impressing top philosophers with his decision theory work, impressing very smart and influential people (e.g. Peter Thiel) in real-time conversation.

Why is that even really harsh?

It would be harsh to a graduate student from a top hard science program or law school. The median attorney figure in the US today, let alone over the world and history, is just not that high.

Comment author: private_messaging 25 July 2012 02:59:14PM *  2 points [-]

impressing top philosophers with his decision theory work,

The TDT paper from 2012 reads like a popularization of something, not like a normal science paper on a formalized theory. I don't think impressing 'top philosophers' is impressive.

It would be harsh to a graduate student from a top hard science program or law school.

Or to a writer who gets royalties larger than a typical lawyer's income. Or to a smart and influential person, e.g. Peter Thiel.

But a blogger who successfully talked a small-ish percentage of the people he could reach into giving him money for work on AI? That's hardly the evidence to sway a 0.0001 prior. I do concede, though, that a median lawyer might be unable to do that (but I dunno; only a small percentage would be self-deluded or bad enough to try). The world is full of pseudoscientists, cranks, and hustlers who manage this, and more, and who do not seem to be particularly bright.

Comment author: TimS 23 July 2012 12:53:25AM 1 point [-]

Is it really so hard to believe that there have been more than a million highly intelligent judges and influential lawyers since the Magna Carta was issued? (In my mind, the reference is to English Common Law - Civil Law works differently enough that counting participants is much harder).

As I said, I don't think this proves what nickLW asserts follows from it, but I think the statement "More than a million fairly intelligent individuals have put in substantial amounts of work to make the legal system capable of solving social problems decently well" is true, if mostly irrelevant to AI.

Comment author: CarlShulman 23 July 2012 01:42:47AM *  3 points [-]

since the Magna Carta was issued? (In my mind, the reference is to English Common Law

Limiting to the common law tradition makes it even more dubious. Today, the population of England and Wales is around 60 million. Wikipedia says:

March 2006 there were 1,825 judges in post in England and Wales, most of whom were Circuit Judges (626) or District Judges (572)

On the number of solicitors (barristers are much less numerous):

The number of solicitors qualified to work in England and Wales has rocketed over the past 30 years, according to new figures from the Law Society. The number holding certificates - which excludes retired lawyers and those no longer following a legal career - are at nearly 118,000, up 36% on ten years ago.

Or this:

There were 2,500 barristers and 32,000 solicitors in England and Wales in the early 1970s. Now there are 15,000 barristers and 115,000 solicitors.

And further in the past the overall population was much smaller, as well as poorer and with fewer lawyers (who were less educated, and more impaired by lead, micronutrient deficiencies, etc):

1315 – Between 4 and 6 million.[3]
1350 – 3 million or less.[4]
1541 – 2,774,000 [note 1][5]
1601 – 4,110,000 [5]
1651 – 5,228,000 [5]
1701 – 5,058,000 [5]
1751 – 5,772,000 [5]
1801 – 8,308,000 at the time of the first census. Census officials estimated at the time that there had been an increase of 77% in the preceding 100 years. In each county women were in the majority.[6] Wrigley and Schofield estimate 8,664,000 based on birth and death records.[5]
1811 – 9,496,000

"More than a million fairly intelligent individuals have put in substantial amounts of work

If we count litigating for particular clients on humdrum matters (the great majority of cases) in all legal systems everywhere, I would agree with this.

"have put in substantial amounts of work to make the legal system capable of solving social problems decently well"

It seems almost all the work is not directed at that task, or duplicative, or specialized to particular situations in ways that obsolesce. I didn't apply much of this filter in the initial comment, but it seems pretty intense too.

Comment author: TimS 23 July 2012 02:26:50AM *  2 points [-]

Ok, you've convinced me that millions is an overestimate.

Summing the top 60% of judges, the top 10% of practicing lawyers, and the top 10% of legal thinkers who were not practicing lawyers, since 1215, that's more than 100,000 people. What other intellectual enterprise has had that commitment for that period of time? The military has more people in total, but far fewer deep thinkers. Religious institutions, maybe? I'd need to think harder about how to appropriately play reference-class tennis: the whole Catholic Church is not a fair comparison because it covers more people than the common law.

Stepping back for a moment, I still think your particular criticism of nickLW's point is misplaced. Assuming that he's referencing the intellectual heft and success of the common law tradition, he's right that there's a fair amount of heft there, regardless of his overestimate of the raw numbers.

The existence of that heft doesn't prove what he suggests, but your argument seems to be assaulting the strongest part of his argument by asserting that there has not been a relatively enormous intellectual investment in developing the common law tradition. There has been a very large investment, and the investment has created a powerful institution.

Comment author: CarlShulman 23 July 2012 02:40:13AM *  3 points [-]

I agree that the common law is a pretty effective legal system, reflecting the work of smart people adjudicating particular cases, and feedback over time (from competition between courts, reversals, reactions to and enforcement difficulties with judgments, and so forth). I would recommend it over civil law for a charter city importing a legal system.

But that's no reason to exaggerate the underlying mechanisms and virtues. I also think that there is an active tendency in some circles to overhype those virtues, as they are tied to ideological disputes. [Edited to remove political label.]

but your argument seems to be assaulting the strongest part of his argument

Perhaps a strong individual claim, but I didn't see it clearly connected to a conclusion.

Comment author: TimS 23 July 2012 12:57:22PM 0 points [-]

Perhaps a strong individual claim, but I didn't see it clearly connected to a conclusion.

I agree with you that it isn't connected at all with his conclusions. Therefore, challenging it doesn't challenge his conclusion. Nitpicking something that you think is irrelevant to the opposing side's conclusion in a debate is logically rude.

Comment author: Wei_Dai 23 July 2012 12:13:20PM 2 points [-]

Nick, do you see a fault in how I've been carrying on our discussions as well? Because you've also left several of our threads dangling, including:

  • How likely is it that an AGI will be created before all of its potential economic niches have been filled by more specialized algorithms?
  • How much hope is there for "security against malware as strong as we can achieve for symmetric key cryptography"?
  • Does "hopelessly anthropomorphic and vague" really apply to "goals"?

(Of course it's understandable if you're just too busy. If that's the case, what kind of projects are you working on these days?)

Comment author: nickLW 27 July 2012 03:36:30AM 2 points [-]

Wei, you and others here interested in my opinions on this topic would benefit from understanding more about where I'm coming from, which you can mainly do by reading my old essays (especially the three philosophy essays I've just linked to on Unenumerated). It's a very different worldview from the typical "Less Wrong" worldview: based far more on accumulated knowledge and far less on superficial hyper-rationality. You can ask any questions you have of me there, as I don't typically hang out here. As for your questions on this topic:

(1) There is insufficient evidence to distinguish it from an arbitrarily low probability.

(2) To state a probability would be an exercise in false precision, but at least it's a clearly stated goal that one can start gathering evidence for and against.

(3) It depends on how clearly and formally the goal is stated, including the design of observations and/or experiments that can accurately (not just precisely) measure progress towards, and attainment or non-attainment of, that goal.

As for what I'm currently working on, my blog Unenumerated is a good indication of my publicly accessible work. Also feel free to ask any follow-up questions or comments you have stemming from this thread there.

Comment author: Wei_Dai 27 July 2012 05:11:18AM *  3 points [-]

I've actually already read those essays (which I really enjoyed, BTW), but still often cannot see how you've arrived at your conclusions on the topics we've been talking about recently.

For the rest of your comment, you seem to have misunderstood my grandparent comment. I was asking you to respond to my arguments on each of the threads we were discussing, not just to tell me how you would answer each of my questions. (I was using the questions to refer to our discussions, not literally asking them. Sorry if I didn't make that clear.)