nhamann comments on Messy Science - Less Wrong Discussion

12 [deleted] 30 September 2010 06:08AM

Comment author: nhamann 30 September 2010 04:48:49PM *  1 point

"So trying to decide between approaches is at least partly tied to whether you think something is, say, really a biology problem, a computer science problem, or a mathematics problem."

This is an especially interesting problem, because it seems very difficult to rationally assess which approach to a wide-open problem is most likely to work. If we look at AI as a research problem, for example, what kind of evidence could we gather that would lead us to believe one approach is likely to be more fruitful than another?

Gathering this evidence would seem to require knowing which features of intelligence are most important, so that we can decide which details can be abstracted away in our approach and which need to be modeled. But we really don't have access to that kind of information, and it's not clear what would give us access to it; that insight alone would constitute a large amount of progress on understanding intelligence.

This suggests important questions about the role of rationality in science. Namely, for all the talk of the "weapons-grade rationality" that Less Wrong offers, are such rationality techniques very useful for really hard scientific problems where we're ignorant of large chunks of the hypothesis space, and where accurately assessing the weight of the hypotheses we do know is highly nontrivial?

Edit: see comment below for why I think the last paragraph is wrong.

Comment author: nhamann 30 September 2010 05:43:31PM *  0 points

"are such rationality techniques very useful for really hard scientific problems...?"

I now think that this was hyperbole. It seems obvious to me that the first and second fundamental questions of rationality are centrally important to science. Namely:

  • What do I think I know, and why do I think I know it?
  • What am I doing, and why am I doing it?

The first question is essential for keeping track of the evidence you (think you) have, and the second question is essential for time management and for fighting akrasia, which is useful to those scientists who are mere mortals in their ability to be productive.

Rationality won't magically solve science, but it clearly makes it (at least slightly) easier.

Comment author: [deleted] 30 September 2010 04:55:31PM 0 points

Exactly my point.

And in AI in particular, it's hard to judge by the standards of "instrumental rationality." You could say "The best guys are the ones who make the best prototypes." But there's always going to be someone who could say "That's not a prototype, that has nothing to do with general AI," and then there's someone else who'll say "General AI is an incoherent notion and a pipe dream; we're the only ones who can actually build something."

Comment author: nhamann 30 September 2010 05:20:59PM 1 point

This is essentially tangential, but I would promptly walk away from anyone who said "General AI is an incoherent notion," given that the human brain exists.

Comment author: Jonathan_Graehl 30 September 2010 08:43:52PM 1 point

No ... that's NATURAL intelligence. Also organic, non-GMO, and pesticide free :)