TraditionalRationali comments on Open Thread: July 2010 - Less Wrong

Post author: komponisto 01 July 2010 09:20PM


Comment author: TraditionalRationali 02 July 2010 05:47:00AM *  2 points [-]

That it should be possible to Algorithmize Science seems clear from the fact that the human brain can do science, and the human brain should be describable algorithmically. If not at a higher level, then at least -- in principle -- by quantum electrodynamics, which is the (known, and in principle computable) dynamics of the electrons and nuclei that are the building blocks of the brain. (To be feasible in practice it would have to be done at a higher level, but as a proof of principle that argument should be enough.)

I guess, however, that what is actually meant is whether the scientific method itself could be formalized (algorithmized), so that science could be "mechanized" in a more direct way than building human-level AIs and then letting them learn and do science by the somewhat informal process used today by human scientists. That seems plausible, but it has yet to be done and seems rather difficult. Philosophers of science are working on understanding the scientific process better and better, but they seem to have a long way to go before an actually working algorithmic description is achieved. See also the discussion below on the recent article by Gelman and Shalizi criticizing Bayesianism.

EDIT "done at a lower level" changed to "done at a higher level"

Comment author: WrongBot 02 July 2010 03:45:49PM 2 points [-]

The scientific method is already a vague sort of algorithm, and I can see how it might be possible to mechanize many of the steps. The part that seems AGI-hard to me is the process of generating good hypotheses. Humans are incredibly good at plucking reasonable hypotheses out of the infinite search space that is available; that we are still so often wrong says more about the difficulty of the problem than about our own abilities.

Comment author: NancyLebovitz 02 July 2010 04:27:03PM 1 point [-]

I'm pretty sure that judging whether one has adequately tested a hypothesis is also going to be very hard to mechanize.

Comment author: SilasBarta 02 July 2010 04:39:49PM 2 points [-]

The problem that I hear most often in regard to mechanizing this process has the basic form, "Obviously, you need a human in the loop because of all the cases where you need to be able to recognize that a correlation is spurious, and thus to ignore it, and that comes from having good background knowledge."

But you have to wonder: the human didn't learn how to recognize spurious correlations through magic. So whatever process gave them that capability should itself be identifiable.
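One identifiable piece of that process can even be written down: when you suspect a common cause, condition on it and see whether the correlation survives. A minimal numerical sketch (the variables X, Y, Z and their coefficients are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical setup: a confounder Z drives both X and Y,
# so X and Y correlate even though neither causes the other.
z = rng.normal(size=n)
x = z + 0.5 * rng.normal(size=n)
y = z + 0.5 * rng.normal(size=n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def partial_corr(a, b, c):
    # Correlate the residuals of a and b after regressing out c.
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return corr(ra, rb)

print(corr(x, y))            # strong "spurious" correlation, about 0.8
print(partial_corr(x, y, z)) # near zero once Z is conditioned on
```

Nothing magical there: the rule "check whether the association vanishes given a candidate common cause" is a mechanical test, and the hard part -- knowing which Z to try -- is exactly the background-knowledge step the quoted objection points at.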

Comment author: cupholder 03 July 2010 04:41:43AM 2 points [-]

The problem that I hear most often in regard to mechanizing this process has the basic form, "Obviously, you need a human in the loop because of all the cases where you need to be able to recognize that a correlation is spurious, and thus to ignore it, and that comes from having good background knowledge."

Those people should be glad they've never heard of TETRAD - their heads might have exploded!

Comment author: NancyLebovitz 03 July 2010 10:01:32AM 1 point [-]

That's intriguing. Has it turned out to be useful?

Comment author: cupholder 04 July 2010 05:31:24AM *  3 points [-]

It's apparently been put to use with some success. Clark Glymour - a philosophy professor who helped develop TETRAD - wrote a long review of The Bell Curve that lists applications of an earlier version of TETRAD (see section 6 of the review):

Several other applications have been made of the techniques, for example:

  1. Spirtes et al. (1993) used published data on a small observational sample of Spartina grass from the Cape Fear estuary to correctly predict - contrary both to regression results and expert opinion - the outcome of an unpublished greenhouse experiment on the influence of salinity, pH and aeration on growth.

  2. Druzdzel and Glymour (1994) used data from the US News and World Report survey of American colleges and universities to predict the effect on dropout rates of manipulating average SAT scores of freshman classes. The prediction was confirmed at Carnegie Mellon University.

  3. Waldemark used the techniques to recalibrate a mass spectrometer aboard a Swedish satellite, reducing errors by half.

  4. Shipley (1995, 1997, in review) used the techniques to model a variety of biological problems, and developed adaptations of them for small sample problems.

  5. Akleman et al. (1997) have found that the graphical model search techniques do as well or better than standard time series regression techniques based on statistical loss functions at out of sample predictions for data on exchange rates and corn prices.

Personally I find it a little odd that such a useful tool is still so obscure, but I guess a lot of scientists are loath to change tools and techniques.
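The constraint-based idea behind TETRAD-style search can be sketched in a few lines: test each pair of variables for independence given the others, and drop an edge whenever the pair turns out to be conditionally independent. This toy version (a hypothetical three-variable chain, linear-Gaussian partial correlation as the independence test, and an arbitrary 0.05 cutoff -- TETRAD itself is far more sophisticated) recovers the chain's one conditional independence:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n = 5_000

# Made-up chain Z -> X -> Y; a sketch of the constraint-based
# search idea, not of TETRAD's actual algorithms.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)
data = {"Z": z, "X": x, "Y": y}

def partial_corr(a, b, c):
    # Correlation of residuals after regressing out c.
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

# Skeleton search: keep an edge only if the pair stays dependent
# given the remaining variable (|partial corr| above the cutoff).
edges = set()
for (na, a), (nb, b) in combinations(data.items(), 2):
    rest = next(v for k, v in data.items() if k not in (na, nb))
    if abs(partial_corr(a, b, rest)) > 0.05:
        edges.add(frozenset((na, nb)))

print(sorted("-".join(sorted(e)) for e in edges))
# The Z-Y edge drops out: Z and Y are independent given X,
# exactly as the chain structure implies.
```

The payoff, as in the Spartina and dropout-rate examples above, is that such independence patterns constrain which causal structures are even possible, before anyone runs an experiment.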

Comment author: NancyLebovitz 02 July 2010 05:12:36PM 0 points [-]

Maybe it's just a matter of people kidding themselves about how hard it is to explain something.

On the other hand, some things (like vision and natural language) are genuinely hard to figure out.

I'm not saying the problem is insoluble. I'm saying it looks very difficult.

Comment author: cupholder 03 July 2010 05:08:23AM *  0 points [-]

One possible way to get started is to do what the 'Distilling Free-Form Natural Laws from Experimental Data' project did: feed measurements of time and other variables of interest into a computer program which uses a genetic algorithm to build functions that best represent one variable as a function of itself and the other variables. The Science article is paywalled but available elsewhere. (See also this bunch of presentation slides.)

They also have software for you to do this at home.
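The core loop of that approach can be sketched without the genetic-algorithm machinery: generate candidate symbolic forms, fit their free constants to the measurements, and keep the best-scoring expression. A toy version with an enumerated hypothesis space (the real project evolves expression trees; the hidden law and the candidate forms here are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "measurements": a hidden law y = 3*x**2 + 1, plus noise.
x = rng.uniform(-2, 2, size=200)
y = 3 * x**2 + 1 + 0.01 * rng.normal(size=200)

# Tiny hypothesis space of symbolic forms, each with constants a, b.
candidates = {
    "a*x + b":      lambda x, a, b: a * x + b,
    "a*x**2 + b":   lambda x, a, b: a * x**2 + b,
    "a*sin(x) + b": lambda x, a, b: a * np.sin(x) + b,
    "a*exp(x) + b": lambda x, a, b: a * np.exp(x) + b,
}

def fit_and_score(f):
    # Each form is linear in (a, b), so least squares fits the
    # constants; the score is the mean squared prediction error.
    basis = np.column_stack([f(x, 1, 0), np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
    pred = basis @ coef
    return np.mean((y - pred) ** 2), coef

scores = {name: fit_and_score(f) for name, f in candidates.items()}
best = min(scores, key=lambda k: scores[k][0])
print(best, scores[best][1])  # the quadratic form wins, coefficients near (3, 1)
```

Enumeration only works for a handful of forms; the genetic algorithm's job in the actual project is to search the vastly larger space of expression trees that no enumeration could cover -- which is just WrongBot's hypothesis-generation problem in miniature.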