Daniel_Burfoot comments on Shane Legg's Thesis: Machine Superintelligence, Opinions? - Less Wrong Discussion

9 Post author: Zetetic 08 May 2011 08:04PM


Comment author: Daniel_Burfoot 09 May 2011 04:16:07AM 1 point [-]

Sure, many people are aware of the NFL theorem, but they don't take it seriously. If you don't believe me, read almost any computer vision paper. Vision researchers study algorithms, not images.

Comment author: timtyler 09 May 2011 05:57:14AM *  1 point [-]

Sure, many people are aware of the NFL theorem, but they don't take it seriously.

Legg's thesis says:

Some, such as Edmonds (2006), argue that universal definitions of intelligence are impossible due to Wolpert’s so called “No Free Lunch” theorem (Wolpert and Macready, 1997). However this theorem, or any of the standard variants on it, cannot be applied to universal intelligence for the simple reason that we have not taken a uniform distribution over the space of environments. Instead we have used a highly non-uniform distribution based on Occam’s razor.

The No Free Lunch theorems seem obviously-irrelevant to me. I have never understood why they get cited so much.
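The role of the distribution can be made concrete with a toy experiment (the learners and the complexity proxy below are my own invention, not from Legg's thesis): average two predictors over every binary environment of length 10, first uniformly, then under a crude Occam-style weighting that favors low-complexity sequences. Under the uniform average both learners score exactly 0.5, as NFL requires; under the Occam-style weighting the simplicity-exploiting learner pulls ahead.

```python
from itertools import product

n = 10  # an "environment" is a binary sequence of length n
seqs = list(product([0, 1], repeat=n))

def acc_copy(s):   # learner that predicts "same as the previous bit"
    return sum(s[t] == s[t - 1] for t in range(1, n)) / (n - 1)

def acc_zero(s):   # learner that always predicts 0
    return sum(s[t] == 0 for t in range(1, n)) / (n - 1)

def flips(s):      # crude complexity proxy: number of bit changes
    return sum(s[t] != s[t - 1] for t in range(1, n))

def avg(acc, weight):
    w = [weight(s) for s in seqs]
    return sum(wi * acc(s) for wi, s in zip(w, seqs)) / sum(w)

uniform = lambda s: 1.0
occam   = lambda s: 2.0 ** -flips(s)   # simple sequences weighted more

print(avg(acc_copy, uniform), avg(acc_zero, uniform))  # both 0.5: NFL holds
print(avg(acc_copy, occam),   avg(acc_zero, occam))    # copy learner wins
```

The uniform case is the NFL setting; the weighted case is the universal-intelligence setting, where environments are sampled by Occam's razor rather than uniformly.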

Comment author: Cyan 09 May 2011 04:45:00AM 0 points [-]

Don't any vision researchers use Bayes? If so, they'd have to be researching the formulation of priors for the true state of the scene, since the likelihood is almost trivial.

Comment author: paulfchristiano 09 May 2011 05:07:00AM 4 points [-]

I'm not really in the field, but I am vaguely familiar with the literature and this isn't how it works (though you might get that impression from reading LW).

A vision algorithm might face the following problem: reality picks an underlying physical scene and an image from some joint distribution. The algorithm looks at the image and must infer something about the scene. In this case, you need to integrate over a huge space to calculate likelihoods, which is generally completely intractable and so requires some algorithmic insight. For example, if you want to estimate the probability that there is an apple on the table, you need to integrate over the astronomically many possible scenes in which there is an apple on the table.
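To see why the integration is the hard part, here is a toy Bayesian "vision" model at a scale where exact marginalization is still feasible (the scene space, noise model, and `has_apple` query are all invented for illustration): the evidence sum visits every one of the 2^PIXELS scenes, which is exactly what blows up for real images.

```python
import itertools, random

random.seed(0)
PIXELS = 9            # a toy 3x3 "scene"; real scene spaces are astronomically larger
NOISE = 0.1           # probability each pixel is flipped in the observed image

scenes = list(itertools.product([0, 1], repeat=PIXELS))

def prior(scene):
    return 1.0 / len(scenes)          # flat prior, for the toy only

def likelihood(image, scene):
    p = 1.0
    for i_px, s_px in zip(image, scene):
        p *= (1 - NOISE) if i_px == s_px else NOISE
    return p

# "Is there an apple on the table?" stands in for any scene property;
# here it is just: is pixel 0 lit?
def has_apple(scene):
    return scene[0] == 1

true_scene = random.choice(scenes)
image = tuple(px ^ (random.random() < NOISE) for px in true_scene)

# The exact posterior requires summing over ALL scenes -- 2^PIXELS terms.
evidence = sum(likelihood(image, s) * prior(s) for s in scenes)
posterior = sum(likelihood(image, s) * prior(s)
                for s in scenes if has_apple(s)) / evidence
print(posterior)
```

At 9 pixels the loop runs 512 times; at the resolution of a real image it would be utterly intractable, which is why the algorithmic insight goes into approximating or restructuring this sum rather than into the prior.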

Comment author: SilasBarta 11 May 2011 11:07:36PM *  1 point [-]

I don't know if this contradicts you, but this is a problem that biological brain/eye systems have to solve ("inverse optics"), and Steven Pinker has an excellent discussion of it from a Bayesian perspective in his book How the Mind Works. He mentions that the brain relies heavily on priors that match our environment, which significantly narrows down the possible scenes that could "explain" a given retinal image pair. (You get optical illusions when a scene violates these assumptions.)
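The inverse-optics point can be sketched numerically (all numbers and the prior below are hypothetical, not from Pinker): the eye measures only luminance = reflectance × illumination, so infinitely many scene interpretations fit the same measurement, and a prior over illumination is what selects one of them.

```python
# The measurement: luminance = reflectance * illumination.
# A mid-grey patch in bright light and a white patch in shadow can
# both produce this value, so the data alone underdetermine the scene.
luminance = 0.4

# Candidate reflectances (surface lightness, 0..1) on a grid; each one
# "explains" the measurement with illumination = luminance / reflectance.
refls = [0.01 + i * (0.99 / 199) for i in range(200)]

def log_prior(refl):
    illum = luminance / refl
    # Hypothetical prior encoding "scenes are usually well lit":
    # illumination near 1.0, no preference over reflectance.
    return -0.5 * ((illum - 1.0) / 0.3) ** 2

best = max(refls, key=log_prior)
print(best)  # MAP reflectance ~0.4, i.e. illumination ~1.0
```

An illusion in this sketch is just a scene where the true illumination is far from 1.0: the MAP answer is then confidently wrong, which matches the parenthetical above.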

Comment author: paulfchristiano 12 May 2011 05:22:53AM 0 points [-]

There are two parts to the problem: one is designing a model that describes the world well, and the other is using that model to infer things about the world from data. I agree that Bayesian is the correct adjective to apply to this process, but not necessarily that modeling the world is the most interesting part.

Comment author: Daniel_Burfoot 09 May 2011 04:35:50PM 1 point [-]

I think this paper, entitled "Region competition: Unifying snakes, region growing, and Bayes/MDL for multiband image segmentation" is indicative of the overall mindset. Even though the title explicitly mentions Bayes and MDL, the paper doesn't report any compression results - only segmentation results. Bayes/MDL are viewed as tricks to be used to achieve some other purpose, not as the fundamental principle justifying the research.
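The compression-as-objective view can be made concrete with a minimal two-part MDL sketch (the signal, the per-parameter cost, and the Gaussian residual code are my own toy assumptions, not from the paper): score a piecewise-constant segmentation of a 1-D "image" by total description length, model bits plus residual bits, so the segmentation that best compresses the data wins.

```python
import math

signal = [2, 2, 2, 3, 8, 8, 7, 8]   # a toy 1-D "image"

def description_length(signal, boundaries):
    """Two-part MDL score: bits for the model plus bits for the residuals."""
    BITS_PER_PARAM = 8.0             # crude fixed cost per boundary and per mean
    model_bits = 2 * BITS_PER_PARAM * len(boundaries)
    data_bits = 0.0
    edges = list(boundaries) + [len(signal)]
    for a, b in zip(edges, edges[1:]):
        seg = signal[a:b]
        mean = sum(seg) / len(seg)
        var = sum((x - mean) ** 2 for x in seg) / len(seg)
        var = max(var, 1.0 / 12.0)   # quantization floor keeps code lengths positive
        # Gaussian code length for the residuals, in bits
        data_bits += 0.5 * len(seg) * math.log2(2 * math.pi * math.e * var)
    return model_bits + data_bits

print(description_length(signal, [0]))           # one segment: cheap model, costly data
print(description_length(signal, [0, 4]))        # two segments: the best trade-off here
print(description_length(signal, [0, 2, 4, 6]))  # four segments: model cost dominates
```

On this view, a segmentation paper would report total code length as its headline number; the complaint above is that in practice the code-length objective is used as a tie-breaking trick while only the segmentations themselves get evaluated.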