jsteinhardt comments on Criticisms of intelligence explosion - Less Wrong

Post author: lukeprog 22 November 2011 05:42PM


Comment author: jsteinhardt 24 November 2011 02:34:15AM

p(x) = 1/x isn't an integrable function (its integral diverges at both 0 and infinity), so it can't be normalized into a proper prior.
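Spelled out, both tails diverge:

$$\int_{\epsilon}^{1} \frac{dx}{x} = \ln\frac{1}{\epsilon} \to \infty \quad (\epsilon \to 0^{+}), \qquad \int_{1}^{M} \frac{dx}{x} = \ln M \to \infty \quad (M \to \infty),$$

so there is no normalizing constant that turns 1/x into a proper density on (0, ∞).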

(My real objection is more that it's pretty unlikely we really have so little information that we have to quibble about which prior to use. It's also good to be aware of the mathematical difficulties inherent in trying to be an "objective Bayesian", but the real problem is that the approach isn't very helpful for making more accurate empirical predictions.)

Comment author: DanielLC 24 November 2011 03:22:47AM

> p(x) = 1/x isn't an integrable function

Which is why I said a log-normal prior would be more reasonable.

> My real objection is more that it's pretty unlikely that we really have so little information that we have to quibble about which prior to use.

How much information do we have? We know that we haven't managed to build an AI in 40 years, and that's about it.

We probably have enough information if we can process it right, but because we don't know how, we're best off sticking close to the prior.

Comment author: jsteinhardt 24 November 2011 03:39:33AM

> Which is why I said a log-normal prior would be more reasonable.

Why a log-normal prior with mu = 0? Why not some other value for the location parameter? A log-normal prior makes pretty strong assumptions, which aren't justified if, for all practical purposes, we have no information about the feedback constant.
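As a quick sketch of how strong those assumptions are (all parameter values below are invented for illustration, and the "feedback constant" k is just a hypothetical quantity): under a log-normal prior, the prior probability that k exceeds 1 is determined almost entirely by the location parameter mu.

```python
# Minimal sketch: how the prior probability P(k > 1) under a log-normal
# prior on a hypothetical feedback constant k depends on the location
# parameter mu. All numbers here are arbitrary illustrative choices.
import numpy as np
from scipy.stats import lognorm

sigma = 1.0  # assumed spread of ln(k); also an arbitrary choice
for mu in (-2.0, -1.0, 0.0, 1.0, 2.0):
    prior = lognorm(s=sigma, scale=np.exp(mu))  # ln(k) ~ Normal(mu, sigma^2)
    print(f"mu = {mu:+.1f}: P(k > 1) = {prior.sf(1.0):.3f}")
```

With sigma = 1 this runs from roughly 0.02 at mu = -2 up to roughly 0.98 at mu = +2 (it's just Phi(mu/sigma)), so the choice of location parameter is doing essentially all the work.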

> How much information do we have? We know that we haven't managed to build an AI in 40 years, and that's about it.

We may have little specific information about AIs, but we have tons of information about feedback laws, and some information about self-improving systems in general*. I agree that it can be tricky to convert this information to a probability, but that just seems to be an argument against using probabilities in general. Whatever makes it hard to arrive at a good posterior should also make it hard to arrive at a good prior.

(I'm being slightly vague here for the purpose of exposition. I can make these statements more precise if you prefer.)

(* See for instance the Yudkowsky-Hanson AI Foom Debate.)