Stabilizer comments on Rationality Quotes September 2013 - Less Wrong

5 Post author: Vaniver 04 September 2013 05:02AM


Comment author: Stabilizer 02 September 2013 08:57:21PM *  34 points [-]

Don't ask what they think. Ask what they do.

My rule has to do with paradigm shifts—yes, I do believe in them. I've been through a few myself. It is useful if you want to be the first on your block to know that the shift has taken place. I formulated the rule in 1974. I was visiting the Stanford Linear Accelerator Center (SLAC) for a few weeks to give a couple of seminars on particle physics. The subject was QCD. It doesn't matter what this stands for. The point is that it was a new theory of sub-nuclear particles and it was absolutely clear that it was the right theory. There was no critical experiment but the place was littered with smoking guns. Anyway, at the end of my first lecture I took a poll of the audience. "What probability would you assign to the proposition 'QCD is the right theory of hadrons'?" My socks were knocked off by the answers. They ranged from .01 percent to 5 percent. As I said, by this time it was a clear no-brainer. The answer should have been close to 100 percent. The next day I gave my second seminar and took another poll. "What are you working on?" was the question. Answers: QCD, QCD, QCD, QCD, QCD,........ Everyone was working on QCD. That's when I learned to ask "What are you doing?" instead of "What do you think?"

I saw exactly the same phenomenon more recently when I was working on black holes. This time it was after a string theory seminar, I think in Santa Barbara. I asked the audience to vote whether they agreed with me and Gerard 't Hooft or if they thought Hawking’s ideas were correct. This time I got a 50-50 response. By this time I knew what was going on so I wasn't so surprised. Anyway I later asked if anyone was working on Hawking's theory of information loss. Not a single hand went up. Don't ask what they think. Ask what they do.

-Leonard Susskind, Susskind's Rule of Thumb

Comment author: AndHisHorse 03 September 2013 11:17:19AM 16 points [-]

Not necessarily a great metric; working on the second-most-probable theory can be the best rational decision if the expected value of working on the most probable theory is lower due to greater cost or lower reward.

Comment author: Protagoras 03 September 2013 12:59:39AM 7 points [-]

This is why many scientists are terrible philosophers of science. Not all of them, of course; Einstein was one remarkable exception. But it seems like many scientists have views of science (e.g. astonishingly naive versions of Popperianism) which completely fail to fit their own practice.

Comment author: lukeprog 05 September 2013 09:04:18PM *  8 points [-]

Yes. When chatting with scientists I have to intentionally remind myself that my prior should be on them being Popperian rather than Bayesian. When I forget to do this, I am momentarily surprised when I first hear them say something straightforwardly anti-Bayesian.

Comment author: shminux 05 September 2013 09:15:14PM 12 points [-]

Examples?

Comment author: lukeprog 08 September 2013 09:13:08PM 8 points [-]

Statements like "I reject the intelligence explosion hypothesis because it's not falsifiable."

Comment author: shminux 08 September 2013 10:39:59PM 4 points [-]

I see. I doubt that it is as simple as naive Popperianism, however. Scientists routinely construct and screen hypotheses based on multiple factors, and they are quite good at it, compared to the general population. However, as you pointed out, many do not use or even have the language to express their rejection in a Bayesian way, as "I have estimated the probability of this hypothesis being true, and it is too low to care." I suspect that they instinctively map intelligence explosion into the Pascal's mugging reference class, together with perpetual motion, cold fusion and religion, but verbalize it in the standard Popperian language instead. After all, that is how they would explain why they don't pay attention to (someone else's) religion: there is no way to falsify it. I suspect that any further discussion tends to reveal a more sensible approach.

Comment author: lukeprog 08 September 2013 11:13:38PM 2 points [-]

Yeah. The problem is that most scientists seem to still be taught from textbooks that use a Popperian paradigm, or at least Popperian language; they aren't necessarily taught probability theory very thoroughly; they're used to publishing papers that use p-value science even though they kinda know it's wrong; etc.

So maybe if we had an extended discussion about philosophy of science, they'd retract their Popperian statements and reformulate them to say something kinda related but less wrong. Maybe they're just sloppy with their philosophy of science when talking about subjects they don't put much credence in.

This does make it difficult to measure the degree to which, as Eliezer puts it, "the world is mad." Maybe the world looks mad when you take scientists' dinner party statements at face value, but looks less mad when you watch them try to solve problems they care about. On the other hand, even when looking at work they seem to care about, it often doesn't look like scientists know the basics of philosophy of science. Then again, maybe it's just an incentives problem. E.g. maybe the scientist's field basically requires you to publish with p-values, even if the scientists themselves are secretly Bayesians.

Comment author: EHeller 08 September 2013 11:31:57PM *  9 points [-]

The problem is that most scientists seem to still be taught from textbooks that use a Popperian paradigm, or at least Popperian language

I'm willing to bet most scientists aren't taught these things formally at all. I never was. You pick it up out of the cultural zeitgeist, and you develop a cultural jargon. And then sometimes people who HAVE formally studied philosophy of science try to map that jargon back to formal concepts, and I'm not sure the mapping is that accurate.

they're used to publishing papers that use p-value science even though they kinda know it's wrong, etc.

I think 'wrong' is too strong here. It's good for some things, bad for others. Look at particle-accelerator experiments: frequentist statistics are the obvious choice because the collider essentially runs the same experiment 600 million times every second, and p-values work well to separate signal from a null hypothesis of 'just background'.
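
The collider example can be made concrete with a toy counting experiment (a sketch; the expected-background rate and observed count below are invented, not from any real analysis). The p-value is the probability that background alone fluctuates up to at least the observed number of events:

```python
import math

def poisson_sf(n_obs, mu):
    """P(X >= n_obs) for X ~ Poisson(mu): the p-value of seeing
    at least n_obs events if only background is present."""
    # Accumulate P(X <= n_obs - 1) term by term, then take the complement.
    term = math.exp(-mu)  # P(X = 0)
    cdf = 0.0
    for k in range(n_obs):
        cdf += term
        term *= mu / (k + 1)
    return 1.0 - cdf

# Hypothetical counting experiment: expect 100 background events
# in the signal region, observe 150.
p = poisson_sf(150, 100.0)
print(f"p-value under the background-only hypothesis: {p:.2e}")
```

Note that this framing never assigns a prior to "there is a signal"; it only asks how surprising the count is under the null, which is exactly the workflow EHeller is defending.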

Comment author: jsteinhardt 15 September 2013 01:34:42AM 4 points [-]

For what it's worth, I understand well the arguments in favor of Bayes, yet I don't think that scientific results should be published in a Bayesian manner. This is not to say that I don't think that frequentist statistics is frequently and grossly misused by many scientists, but I don't think Bayes is the solution to this. In fact, many of the problems with how statistics is used, such as implicitly performing many multiple comparisons without controlling for this, would be just as large a problem with Bayesian statistics.

Either the evidence is strong enough to overwhelm any reasonable prior, in which case frequentist statistics will detect the result just fine; or else the evidence is not so strong, in which case you are reduced to arguing about priors, which seems bad if the goal is to create a societal construct that reliably uncovers useful new truths.

Comment author: lukeprog 15 September 2013 01:42:51AM *  7 points [-]

But why not share likelihood ratios instead of posteriors, and then choose whether or not you also want to argue very much (in your scientific paper) about the priors?
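
A toy sketch of that proposal (all numbers invented): the paper reports only the likelihood ratio P(data | H1) / P(data | H0), and each reader multiplies it into their own prior odds:

```python
# A paper reports only the strength of the evidence as a likelihood ratio;
# readers with different priors each compute their own posterior.
likelihood_ratio = 20.0  # made-up strength of the reported evidence

def posterior(prior, lr):
    """Posterior P(H1 | data) from a prior P(H1) and a likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * lr
    return post_odds / (1 + post_odds)

for reader_prior in (0.01, 0.1, 0.5):
    print(f"prior {reader_prior:4.2f} -> "
          f"posterior {posterior(reader_prior, likelihood_ratio):.3f}")
```

The appeal is that the contentious part (the prior) is factored out of the published result: the likelihood ratio is the same for everyone, and the argument about priors can happen separately.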

Comment author: private_messaging 15 September 2013 02:01:57AM -1 points [-]

What do you think "p<0.05" means?

Comment author: Mayo 29 September 2013 06:44:56AM 4 points [-]

No, the multiple comparisons problem, like optional stopping and other selection effects that alter error probabilities, is a much greater problem in Bayesian statistics, because Bayesians regard error probabilities, and the sampling distributions on which they are based, as irrelevant to inference once the data are in hand. That is a consequence of the likelihood principle (which follows from inference by Bayes theorem). I find it interesting that this blog takes a great interest in human biases, but guess what methodology is relied upon to provide evidence of those biases? Frequentist methods.
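
The optional-stopping effect Mayo is pointing at can be checked by simulation from the frequentist side (a rough sketch; the sample size, peeking schedule, and trial count are arbitrary choices). Testing repeatedly as data arrive and stopping at the first p < 0.05 rejects a true null far more often than the nominal 5%:

```python
import math
import random

def z_test_p(xs):
    """Two-sided p-value for H0: mean = 0 with known sd = 1 (z-test)."""
    z = abs(sum(xs)) / math.sqrt(len(xs))
    return math.erfc(z / math.sqrt(2))

random.seed(0)
ALPHA, N_MAX, PEEK_EVERY, TRIALS = 0.05, 100, 10, 2000
false_positives = 0
for _ in range(TRIALS):
    xs = []
    for i in range(N_MAX):
        xs.append(random.gauss(0, 1))  # the null hypothesis is true
        # "Optional stopping": test every 10 observations, stop at p < alpha.
        if (i + 1) % PEEK_EVERY == 0 and z_test_p(xs) < ALPHA:
            false_positives += 1
            break
print(f"nominal alpha: {ALPHA}, "
      f"actual rate with peeking: {false_positives / TRIALS:.3f}")
```

This is exactly the kind of error-probability inflation that a pure likelihood-based analysis, which conditions only on the data actually seen, does not register.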

Comment author: lukeprog 29 September 2013 07:48:44AM 1 point [-]

Deborah, what do you think of jsteinhardt's Beyond Bayesians and Frequentists?

Comment author: Mayo 29 September 2013 06:52:12AM 4 points [-]

If there were a genuine philosophy-of-science illumination, it would be clear that, despite the shortcomings of the logical empiricist setting in which Popper found himself, there is much more of value in a sophisticated Popperian methodological falsificationism than in Bayesianism. If scientists were interested in the most probable hypotheses, they would stay as close to the data as possible. But in fact they want interesting, informative, risky theories and genuine explanations. This goes against the Bayesian probabilist ideal. Moreover, you cannot falsify with Bayes theorem, so you'd have to start out with an exhaustive set of hypotheses that could account for the data (already silly), and then you'd never get rid of them---they could only be probabilistically disconfirmed.

Comment author: Cyan 30 September 2013 12:40:44AM *  2 points [-]

Strictly speaking, one can't falsify with any method outside of deductive logic -- even your own Severity Principle only claims to warrant hypotheses, not falsify their negations. Bayesian statistical analysis is just the same in this regard.

A Bayesian analysis doesn't need to start with an exhaustive set of hypotheses to justify discarding some of them. Suppose we have a set of mutually exclusive but not exhaustive hypotheses. The posterior probability of a hypothesis under the assumption that the set is exhaustive is an upper bound for its posterior probability in an analysis with an expanded set of hypotheses. A more complete set can only make a hypothesis less likely, so if its posterior probability is already so low that it would have a negligible effect on subsequent calculations, it can safely be discarded.
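
The upper-bound claim is easy to check numerically (the priors and likelihoods below are invented): adding a fourth hypothesis enlarges the normalizing constant, so every original posterior can only shrink.

```python
# Numerical check: expanding the hypothesis set can only lower each
# original hypothesis's posterior probability.

def posteriors(priors, likelihoods):
    """Posterior over hypotheses from prior weights and P(data | H_i)."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# Treat three hypotheses as exhaustive ...
p3 = posteriors([0.5, 0.3, 0.2], [0.01, 0.2, 0.4])
# ... then add a fourth hypothesis with its own weight and likelihood.
p4 = posteriors([0.5, 0.3, 0.2, 0.1], [0.01, 0.2, 0.4, 0.3])

for old, new in zip(p3, p4):
    assert new <= old  # each original posterior can only go down
print("3-hypothesis posteriors:", [round(x, 3) for x in p3])
print("4-hypothesis posteriors:", [round(x, 3) for x in p4])
```

So a hypothesis whose posterior is negligible under the "exhaustive" assumption stays negligible no matter what further hypotheses are added, which is all the discarding argument needs.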

But in fact they want interesting, informative, risky theories and genuine explanations. This goes against the Bayesian probabilist ideal.

I'm a Bayesian probabilist, and it doesn't go against my ideal. I think you're attacking philosophical subjective Bayesianism, but I don't think that's the kind of Bayesianism to which lukeprog is referring.

Comment author: lukeprog 05 September 2013 09:01:49PM 7 points [-]

Great quote.

Unfortunately, we find ourselves in a world where the world's policy-makers don't just profess that AGI safety isn't a pressing issue, they also aren't taking any action on AGI safety. Even generally sharp people like Bryan Caplan give disappointingly lame reasons for not caring. :(

Comment author: private_messaging 14 September 2013 08:41:28AM *  3 points [-]

Why won't you update towards the possibility that they're right and you're wrong?

This model should rise up much sooner than some very low prior complex model where you're a better truth finder about this topic but not any topic where truth-finding can be tested reliably*, and they're better truth finders about topics where truth finding can be tested (which is what happens when they do their work), but not this particular topic.

(*because if you expect that, then you should end up actually trying to do at least something that can be checked because it's the only indicator that you might possibly be right about the matters that can't be checked in any way)

Why are the updates always in one direction only? When they disagree, the reasons are "lame" according to yourself, which makes you more sure everyone's wrong. When they agree, they agree and that makes you more sure you are right.

Comment author: lukeprog 14 September 2013 03:19:49PM 7 points [-]

This model should rise up much sooner than some very low prior complex model where you're a better truth finder about this topic...

It's not so much that I'm a better truth finder, it's that I've had the privilege of thinking through the issues as a core component of my full time job for the past two years, and people like Caplan only raise points that have been accounted for in my model for a long time. Also, I think the most productive way to resolve these debates is not to argue the meta-level issues about social epistemology, but to have the object-level debates about the facts at issue. So if Caplan replies to Carl's comment and my own, then we can continue the object-level debate, otherwise... the ball's in his court.

Why are the updates always in one direction only? When they disagree, the reasons are "lame" according to yourself, which makes you more sure everyone's wrong. When they agree, they agree and that makes you more sure you are right.

This doesn't appear to be accurate. E.g. Carl & Paul changed my mind about the probability of hard takeoff. And when have I said that some public figure agreeing with me made me more sure I'm right? See also my comments here.

If I mention a public figure agreeing with me, it's generally not because this plays a significant role in my own estimates, it's because other people think there's a stronger correlation between social status and correctness than I do.

Comment author: private_messaging 14 September 2013 04:33:31PM *  2 points [-]

It's not so much that I'm a better truth finder, it's that I've had the privilege of thinking through the issues as a core component of my full time job for the past two years, and people like Caplan only raise points that have been accounted for in my model for a long time.

Yes, but why did Caplan not see fit to think about the issue for a significant time, while you did?

There's also the AI researchers who have had the privilege of thinking about relevant subjects for a very long time, education, and accomplishments which verify that their thinking adds up over time - and who are largely the actual source for the opinions held by the policy makers.

By the way, note that the usual method of rejection of wrong ideas is not refutation but never coming up with the wrong ideas in the first place, and general non-engagement with wrong ideas. This is because the space of wrong ideas is much larger than the space of correct ideas.

What I expect to see in the counter-factual world where the AI risk is a big problem, is that the proponents of the AI risk in that hypothetical world have far more impressive and far more relevant accomplishments and credentials.

but to have the object-level debates about the facts at issue.

The first problem with highly speculative topics is that a great many arguments exist in favour of either opinion on a speculative topic. The second problem is that each such argument relies on a huge number of implicit or explicit assumptions that are likely to be violated due to their origin as random guesses. The third problem is that there is no expectation that the available arguments would be a representative sample of the arguments in general.

This doesn't appear to be accurate. E.g. Carl & Paul changed my mind about the probability of hard takeoff.

Hmm, I was under the impression that you weren't a big supporter of the hard takeoff to begin with.

If I mention a public figure agreeing with me, it's generally not because this plays a significant role in my own estimates, it's because other people think there's a stronger correlation between social status and correctness than I do.

Well, your confidence should be increased by the agreement; there's nothing wrong with that. The problem is when it is not balanced by the expected decrease by disagreement.

Comment author: lukeprog 14 September 2013 05:01:19PM *  1 point [-]

What I expect to see in the counter-factual world where the AI risk is a big problem, is that the proponents of the AI risk in that hypothetical world have far more impressive and far more relevant accomplishments and credentials.

There are a great many differences in our world model, and I can't talk through them all with you.

Maybe we could just make some predictions? E.g. do you expect Stephen Hawking to hook up with FHI/CSER, or not? I think... oops, we can't use that one: he just did. (Note that this has negligible impact on my own estimates, despite him being perhaps the most famous and prestigious scientist in the world.)

Okay, well... If somebody takes a decent survey of mainstream AI people (not AGI people) about AGI timelines, do you expect the median estimate to be earlier or later than 2100? (Just kidding; I have inside information about some forthcoming surveys of this type... the median is significantly sooner than 2100.)

Okay, so... do you expect more or fewer prestigious scientists to take AI risk seriously 10 years from now? Do you expect Scott Aaronson and Peter Norvig, within 25 years, to change their minds about AI timelines, and concede that AI is fairly likely within 100 years (from now) rather than thinking that it's probably centuries or millennia away? Or maybe you can think of other predictions to make. Though coming up with crisp predictions is time-consuming.

Comment author: private_messaging 14 September 2013 05:25:14PM *  0 points [-]

Well, I too expect some form of something that we would call "AI", before 2100. I can even buy into some form of accelerating progress, albeit the progress would be accelerating before the "AI" due to the tools using relevant technologies, and would not have that sharp of a break. I even do agree that there is a certain level of risk involved in all the future progress including progress of the software.

I have a sense you misunderstood me. I picture this parallel world where legitimate, rational inferences about the AI risk exist, and where this risk is worth working on in 2013 and stands out among the other risks, as well as any other pre-requisites for making MIRI worthwhile hold. And in this imaginary world, I expect massively larger support than "Stephen Hawking hooked up with FHI" or whatever you are outlining here.

You do frequently lament that the AI risk is underfunded, under-supported, and that there's under-awareness about it. In the hypothetical world, this is not the case, and you can only lament that the rational spending should be 2 billion rather than 1 billion.

edit: and of course, my true rejection is that I do not actually see rational inferences leading there. The imaginary world stuff is just a side-note to explain how non-experts generally look at it.

edit2: and I have nothing against FHI's existence and their work. I don't think they are very useful, or that they address any actual safety issues which may arise, but I am fairly certain they aren't doing any harm either (or at least, the possible harm would be very small). Promoting the idea that AI is possible within 100 years, however, is something that increases funding for AI all across the board.

Comment author: lukeprog 14 September 2013 05:58:49PM *  8 points [-]

I have a sense you misunderstood me. I picture this parallel world where legitimate, rational inferences about the AI risk exist, and where this risk is worth working on in 2013 and stands out among the other risks, as well as any other pre-requisites for making MIRI worthwhile hold. And in this imaginary world, I expect massively larger support than "Stephen Hawking hooked up with FHI" or whatever you are outlining here.

Right, this just goes back to the same disagreement in our models I was trying to address earlier by making predictions. Let me try something else, then. Here are some relevant parts of my model:

  1. I expect most highly credentialed people to not be EAs in the first place.
  2. I expect most highly credentialed people to not be familiar with the arguments for caring about the far future.
  3. I expect most highly credentialed people to be mostly just aware of risks they happen to have heard about (e.g. climate change, asteroids, nuclear war), rather than attempting a systematic review of risks (e.g. by reading the GCR volume).
  4. I expect most highly credentialed people to respond fairly well when actuarial risk is easily calculated (e.g. asteroid risk), and not-so-well when it's more difficult to calculate (e.g. many insurance companies went bankrupt after 9/11).
  5. I expect most highly credentialed people to have spent little time on explicit calibration training.
  6. I expect most highly credentialed people to not systematically practice debiasing like some people practice piano.
  7. I expect most highly credentialed people to know very little about AI, and very little about AI risk.
  8. I expect that in general, even those highly credentialed people who intuitively think AI risk is a big deal will not even contact the people who think about AI risk for a living in order to ask about their views and their reasons for them, due to basic VoI failure.
  9. I expect most highly credentialed people to have fairly reasonable views within their own field, but to often have crazy views "outside the laboratory."
  10. I expect most highly credentialed people to not have a good understanding of Bayesian epistemology.
  11. I expect most highly credentialed people to continue working on, and caring about, whatever their career has been up to that point, rather than suddenly switching career paths on the basis of new information and an EV calculation.
  12. I expect most highly credentialed people to not understand lots of pieces of "black swan epistemology" like this one and this one.
  13. etc.

Comment author: ciphergoth 15 September 2013 08:43:02AM 9 points [-]

Luke, why are you arguing with Dmytry?

Comment author: private_messaging 14 September 2013 06:47:41PM 1 point [-]

The question should not be about "highly credentialed" people alone, but about how they fare compared to people who are rather very low "credentialed".

In particular, on your list, I expect people with fairly low credentials to fare much worse, especially at identification of the important issues as well as on rational thinking. Those combine multiplicatively, making it exceedingly unlikely - despite the greater numbers of the credential-less masses - that people who lead the work on an important issue would have low credentials.

I expect most highly credentialed people to not be EAs in the first place.

What's EA? Effective altruism? If it's an existential risk, it kills everyone; selfishness suffices just fine.

e.g. many insurance companies went bankrupt after 9/11

Ohh, come on. That is in no way a demonstration that insurance companies in general follow faulty strategies, and especially is not a demonstration that you could do better.

I expect most highly credentialed people to not systematically practice debiasing like some people practice piano.

Indeed.

Comment author: [deleted] 14 September 2013 10:26:15PM *  3 points [-]

If it's an existential risk, it kills everyone, selfishness suffices just fine.

A selfish person protecting against existential risk builds a bunker and stocks it with sixty years of foodstuffs. That doesn't exactly help much.

Comment author: lukeprog 14 September 2013 06:54:18PM *  1 point [-]

In particular, on your list, I expect people with fairly low credentials to fare much worse

No doubt! I wasn't comparing highly credentialed people to low-credentialed people in general. I was comparing highly credentialed people to Bostrom, Yudkowsky, Shulman, etc.

Comment author: Stabilizer 09 September 2013 07:05:10AM 0 points [-]

After reading Robin's exposition of Bryan's thesis, I would disagree that his reasons are disappointingly lame.

Comment author: lukeprog 09 September 2013 04:28:57PM 1 point [-]

Bryan is expressing a "standard economic intuition" but... did you see Carl's comment reply on Caplan's post, and also mine?

Comment author: private_messaging 13 September 2013 02:27:30PM *  -1 points [-]

I did see Eelco Hoogendoorn's, and it is absolutely spot on.

I'm hardly a fan of Caplan, but he has some Bayesianism right:

  1. Based on how things like this asymptote or fail altogether, he has a low prior for foom.

  2. He has low expectation of being able to identify in advance (without the work equivalent to the creation of the AI) exact mechanisms by which it is going to asymptote or fail, irrespective of whenever it does or does not asymptote or fail, so not knowing such mechanisms does not bother him a whole lot.

  3. Even assuming he is correct, he expects plenty of possible arguments against this position (which rely on speculation), as well as some arguers, because the space of speculative arguments is huge. So such arguments are not going to move him anywhere.

People don't do that explicitly any more than someone who's playing football is doing Newtonian mechanics explicitly. Bayes theorem is no less fundamental than the laws of motion of the football.

Likewise for things like non-testability: nobody's doing anything explicitly; it is just the case that, due to something you guys call "conservation of expected evidence", when there is no possibility of evidence against a proposition, a possibility of evidence in favour of the proposition would violate the Bayes theorem.
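
The identity being invoked is just algebra on Bayes theorem, and a few lines of arithmetic make it concrete (the probabilities below are made up): averaged over the possible observations, the posterior equals the prior, so a test that can never lower P(H) can never raise it either.

```python
# Conservation of expected evidence: E[posterior] over the possible
# observations equals the prior, so evidence can't be one-directional.
prior = 0.3             # made-up prior P(H)
p_obs_given_h = 0.9     # P(see E | H)
p_obs_given_not_h = 0.5 # P(see E | not H)

p_obs = prior * p_obs_given_h + (1 - prior) * p_obs_given_not_h
post_if_obs = prior * p_obs_given_h / p_obs
post_if_not = prior * (1 - p_obs_given_h) / (1 - p_obs)

expected_posterior = p_obs * post_if_obs + (1 - p_obs) * post_if_not
assert abs(expected_posterior - prior) < 1e-12
print(f"P(H|E) = {post_if_obs:.3f}, P(H|~E) = {post_if_not:.3f}, "
      f"E[posterior] = {expected_posterior:.3f} = prior")
```

Any upward update on seeing E must be exactly balanced, in expectation, by a downward update on not seeing it; that is the sense in which an unfalsifiable proposition also cannot be confirmed.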

Comment author: Estarlio 13 September 2013 02:40:53PM 0 points [-]

when there is no possibility of evidence against a proposition, then a possibility of evidence in favour of the proposition would violate the Bayes theorem.

I'm not sure how you could have such a situation, given that absence of expected evidence is evidence of the absence. Do you have an example?

Comment author: private_messaging 13 September 2013 03:06:49PM *  0 points [-]

Well, the probabilities wouldn't be literally zero. What I mean is that the lack of a possibility of strong evidence against something, with only a possibility of very weak evidence against it (via absence of evidence), implies that strong evidence in favour of it must be highly unlikely. Worse, such evidence just gets lost among the more probable 'evidence that looks strong but is not'.

Comment author: Estarlio 13 September 2013 04:39:30PM 3 points [-]

Ah, I think I follow you.

Absence of evidence isn't necessarily a weak kind of evidence.

If I tell you there's a dragon sitting on my head, and you don't see a dragon sitting on my head, then you can be fairly sure there's not a dragon on my head.

On the other hand, if I tell you I've buried a coin somewhere in my magical 1cm deep garden - and you dig a random hole and don't find it - not finding the coin isn't strong evidence that I've not buried one. However, there's so much potential weak evidence against. If you've dug up all but a 1cm square of my garden - the coin's either in that 1cm or I'm telling porkies, and what are the odds that - digging randomly - you wouldn't have come across it by then? You can be fairly sure, even before digging up that square, that I'm fibbing.

Was what you meant analogous to one of those scenarios?

Comment author: private_messaging 13 September 2013 04:42:44PM *  1 point [-]

Yes, like the latter scenario. Note that the expected utility of digging is low when the evidence against from one dig is low.

edit: Also. In the former case, not seeing a dragon sitting on your head is very strong evidence against there being a dragon. Unless you invoke un-testable invisible dragons which may be transparent to x-rays, let dust pass through it unaffected, and so on. In which case, I should have a very low likelihood of being convinced that there is a dragon on your head, if I know that the evidence against would be very weak.

edit2: Russell's teapot in the Kuiper belt is a better example still. When there can be only very weak evidence against it, the probability of encountering or discovering strong evidence in favour of it must be low also, making it not worthwhile to try to come up with evidence that there is a teapot in the Kuiper belt (due to low probability of success), even when the prior probability for the teapot is not very low.

Comment author: Estarlio 13 September 2013 06:19:07PM *  -1 points [-]

Then, to extend the analogy: Imagine that digging has potentially negative utility as well as positive. I claim to have buried both a large number of nukes and a magical wand in the garden.

In order to motivate you to dig, you probably want some evidence of magical wands. In this context that would probably be recursively improving systems where, occasionally, local variations rapidly acquire super-dominance over their contemporaries when they reach some critical value. Evolution probably qualifies there - other bipedal frames with fingers aren't particularly dominant over other creatures in the same way that we are, but at some point we got smart enough to make weapons (note that I'm not saying that was what intelligence was for though) and from then on, by comparison to all other macroscopic land-dwelling forms of life, we may as well have been god.

And since then that initial edge in dominance has only ever allowed us to become more dominant. Creatures afraid of wild animals are not able to create societies with guns and nuclear weapons - you'd never have the stability for long enough.

In order to motivate you not to dig, you probably want some evidence of nukes. In this context, recursive - I'm not sure improving is the right word here - systems with a feedback state, that create large amounts of negative value. Well, to a certain extent that's a matter of perspective - from the perspective of extinct species the ascendancy of humanity would probably not be anything to cheer about, if they were in a position to appreciate it. But I suspect it can at least stand on its own that failure cascades are easier to make than success cascades. One little thing goes wrong on your rocket and then the error multiplies; a small error in alignment rapidly becomes a bigger one; or the timer on your Patriot battery loses a fraction of a second and over time your perception of where the missiles are is off significantly. It's only with significant effort that we create systems where errors don't multiply.
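
The Patriot-timer aside refers to a real, well-documented failure mode: 0.1s has no finite binary expansion, so a truncated fixed-point clock tick silently loses a little time on every update. This is a sketch of the commonly cited reconstruction (the 23 fractional bits are an assumption, taken from the usual accounts, not from the original code):

```python
# Patriot-style clock drift: 0.1 is not exactly representable in binary,
# so a truncated fixed-point tick loses time on every update.
FRAC_BITS = 23  # fractional bits in the register (assumed reconstruction)

tick_exact = 0.1
tick_stored = int(tick_exact * 2**FRAC_BITS) / 2**FRAC_BITS  # truncate
per_tick_error = tick_exact - tick_stored

hours = 100
ticks = hours * 3600 * 10        # one tick every tenth of a second
drift = ticks * per_tick_error   # seconds lost over 100 hours of uptime

print(f"error per tick : {per_tick_error:.3e} s")
print(f"drift in {hours} h : {drift:.3f} s")
```

A per-tick error on the order of 1e-7 seconds is invisible in any single update, yet over days of uptime it accumulates into tenths of a second, which at missile speeds is a large position error: a concrete instance of the "errors multiply unless you work to stop them" point above.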

(This is analogous to altering your expected value of information - like if earlier you'd said you didn't want to dig and I'd said, 'well there's a million bucks there' instead - you'd probably want some evidence that I had a million bucks, but given such evidence the information you'd gain from digging would be worth more.)

This seems to be fairly closely analogous to Eliezer's claims about AI, at least if I've understood them correctly: that we have to hit an extremely small target, and it's more likely that we're going to blow ourselves to itty-bitty pieces/cover the universe in paperclips if we're just fooling around hoping to hit on it by chance.

If you believe that such is the case, then the only people you're going to want looking for that magic wand - if you let anyone do it at all - are specialists with particle detectors - indeed if your garden is in the middle of a city you'll probably make it illegal for kids to play around anywhere near the potential bomb site.

Now, we may argue over quite how strongly we have to believe in the possible existence of magitech nukes to justify the cost of fencing off the garden - personally I think the statement:

if you take a thorough look at actually existing creatures, it's not clear that smarter creatures have any tendency to increase their intelligence.

Is to constrain what you'll accept for potential evidence pretty dramatically - we're talking about systems in general, not just individual people, and recursively improving systems with high asymptotes relative to their contemporaries have happened before.

It's not clear to me that the second claim he makes is even particularly meaningful:

In the real-world, self-reinforcing processes eventually asymptote. So even if smarter creatures were able to repeatedly increase their own intelligence, we should expect the incremental increases to get smaller and smaller over time, not skyrocket to infinity.

Sure, I think that they probably won't go to infinity - but I don't see any reason to suspect that they won't converge on a much higher value than our own native ability. Pretty much all of our systems do, from calculators to cars.

We can even argue over how you separate the claims that something's going to foom from the false claims of such (I'd suggest, initially, just seeing how many claims that something was going to foom have actually been made within the domain of technological artefacts; it may be that the baseline credibility is higher than we think). But that's a body of research that Caplan, as far as I'm aware, hasn't forwarded. It's not clear to me that it's a body of research with the same order of difficulty as creating an actual AI either. And, in its absence, it's not clear to me that to answer, in effect, "I'll believe it when I see the mushroom cloud," is a particularly rational response.

Comment author: wedrifid 14 September 2013 09:41:53AM *  0 points [-]

After reading Robin's exposition of Bryan's thesis, I would disagree that his reasons are disappointingly lame.

Which could either indicate that the reasons are good or that your standards are lower than Luke's and so trigger no disappointment.

Comment author: Eliezer_Yudkowsky 03 September 2013 07:58:40PM 6 points [-]

Hm. A generalized phenomenon of overwhelming physicist underconfidence could account for a reasonable amount of the QM affair.