An over-simplification, but an evocative one:

  • The social sciences are contentious, their predictions questionable.
  • And yet social sciences use the scientific method; AI predictions generally don't.
  • Hence predictions involving human-level AI should be treated as less certain than any prediction in the social sciences.

 


To summarise the argument further.

"A lot of people talk rubbish about AI. Therefore most existing predictions are not very certain."

That doesn't in itself mean that it's hard to predict AI - merely that there are many existing predictions which aren't that good. Whether we could do better if we (to take the given example) used the scientific method isn't something the argument covers.

Whether we could do better if we (to take the given example) used the scientific method

I don't really see how we could do that. Yes, most predictions are rubbish - but a lot are rubbish because predicting AI is not something we have good ways of doing.

I don't see how the third proposition follows from the first two.

Clarified the second line.

That at least makes sense.

What did it originally say?

It didn't have the "; AI predictions generally don't."

I've been working with these predictions for such a long time, I forgot not everyone had this at the forefront of their minds.


The social sciences are contentious, their predictions questionable.

And yet social sciences use the scientific method; mathematics doesn't.

Hence statements involving math should be treated as less certain than any prediction in the social sciences.

:-)

You're right - there is one area whose methods are even better than science. If only more problems could be solved like math problems!

I've started giving AI timelines as between 10 years and 100 years.

That seems reasonable. I give the 5-100 year range myself.

Figures.

That implicitly assumes that there aren't reasons why the social sciences are contentious which don't also apply to AI predictions, but I don't think that's terribly unreasonable (EDIT: where by "I don't think that's terribly unreasonable" I mean that the reasons I can think of off the top of my head for why the social sciences are contentious despite using the scientific method would also kind of apply to AI predictions).

And yet social sciences use the scientific method; AI predictions generally don't.

Can you please clarify this point?

The social sciences are sciences; AI predictions are mainly speculative thinking by people who just put on their thinking caps and think really really hard about the future (see some of the examples in http://lesswrong.com/lw/e79/ai_timeline_prediction_data/).

Are you saying that these predictions are unscientific because they are based on untestable models? Or because the models are testable for "small" predictions, but the AI predictions based on them are wild extrapolations beyond the models' validity?

Most predictions don't use models; most models aren't tested; and AI predictions based on tested models are generally wild extrapolations.

It does sound pretty bad if that's the case. My suspicion is that the models are there, just implicit and poor-quality. Maybe trying to explicate, compare and critique them would be worthwhile.

Yes, people say all sorts of unjustified stuff about AI as if their musings were true, out of excitement and carelessness. But the line of thought in the post is ultimately destructive because it sets low expectations for no good reason.

To use the scientific method just means to make falsifiable predictions. So any arbitrary hypothesis counts, no matter how outlandish, so long as it's predictive. On the other hand, you don't need to use science in order to reason, and since "human-level AI" is not available for experimental study, we can only reason about it. But it's a pretty sure thing that such an AI will think that 1+1 equals 2...

There are no details here, e.g. about the methodologies used to produce futurological predictions of the "time until X", or about the premises employed in reasoning about AI dispositions and capabilities; and that means there's no argument about the degree of reliability or usefulness that can be obtained when reasoning about AI - just the bare assertion, "not even as good as the worst of social science". Also, there's no consideration of the power of intention. A lot of the important statements in LW's AI futurology are about designing an AI to have desired properties.

I'm constructing a detailed analysis of all these points for my "How to Predict AI" paper.

And there are few details about methodologies, yes - because the vast majority of predictions have no methodologies. The quality of predictions is really, really low, and there are reasons to suspect that even when the methodologies are better, the predictions are still barely better than guesswork.

My stub was an unjustified snark, but the general sentiment behind it - that AI predictions (especially timeline predictions) are less reliable than social science results - is, as far as I can tell, true.

A working AI probably needs to duplicate thousands of individual systems found in the human mind. Whether we get there by scanning a brain for 4 years with 1 million electron beams working in parallel, or by having thousands of programming teams develop each subsystem, this is not going to be cheap.
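A quick back-of-envelope check on that scanning figure (a sketch only; the beam count and duration are from the sentence above, while the voxel size and brain volume are round-number assumptions of mine):

```python
# Back-of-envelope: per-beam pixel rate implied by scanning a whole brain
# in 4 years with 1 million electron beams working in parallel.
# Voxel size and brain volume are assumed round numbers, not sourced figures.

brain_volume_m3 = 1.4e-3    # assumed brain volume, ~1.4 litres
voxel_size_nm = 5.0         # assumed isotropic voxel edge length
beams = 1_000_000           # from the comment above
years = 4                   # from the comment above

voxel_volume_m3 = (voxel_size_nm * 1e-9) ** 3
total_voxels = brain_volume_m3 / voxel_volume_m3    # ~1.1e22 voxels
seconds = years * 365 * 24 * 3600                   # ~1.26e8 s
rate_per_beam = total_voxels / (beams * seconds)    # ~9e7 voxels/s

print(f"total voxels:        {total_voxels:.2e}")
print(f"per-beam pixel rate: {rate_per_beam:.2e} voxels/s")
```

On those assumptions each beam has to image on the order of 10^8 voxels per second, which gives a sense of why the hardware alone is a serious undertaking.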

You don't get there by accident - evolution did it, but it took millions of years, with each subsystem being developed to build upon previous ones.

Have you heard anything about some massive corporation or government getting ready to drop a few tril on an all out effort?

No, and the current discussions are about how there are not enough public resources to pay for current needs. There isn't enough money to fund large militaries, pay all of the expenses for the elderly, fix the roads, and do everything else as it is. Money has to be borrowed from more successful economies, which just makes the fiscal crisis worse in the future.

Also, no corporation can justify spending more money than any company on the planet actually has in order to develop something that no one has ever done before - something that therefore seems likely to fail.

Having read the brain emulation roadmap, and articles on how modern neural networks can model individual subsystems in the human mind successfully, this does not seem like a problem that we have to wait another 100 years to solve. The human race might be able to do it in 20 years if they started today and put the needed resources into the problem.

But it isn't going to happen, and predictions of success can't really be made until the effort has actually started. It could be 10 years from now, it could be 200, before that effort is initiated. On the plus side, as time goes on, the cost of doing this does go down to an extent. The total "bill of materials" for the hardware drops every year with Moore's law, and better software techniques make it more likely that such a huge project could be developed without being so buggy it wouldn't run at all. But even 30 years from now, it will still be a difficult and expensive endeavor needing a lot of resources.

[This comment is no longer endorsed by its author]