jsteinhardt comments on Criticisms of intelligence explosion - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Why a log-normal prior with mu = 0? Why not some other value for the location parameter? Log-normal makes pretty strong assumptions, which aren't justified if, for all practical purposes, we have no information about the feedback constant.
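(A minimal sketch of why the location parameter matters, assuming the feedback constant k gets a log-normal prior, i.e. log(k) ~ Normal(mu, sigma); the threshold k > 1 as the "explosive" regime is an illustrative assumption, not something from the original comment:)

```python
import math

def lognormal_tail(threshold, mu, sigma):
    """P(k > threshold) when log(k) ~ Normal(mu, sigma)."""
    z = (math.log(threshold) - mu) / sigma
    # Standard normal survival function via the complementary error function.
    return 0.5 * math.erfc(z / math.sqrt(2))

# With mu = 0 the prior median of k is exp(0) = 1, so P(k > 1) = 0.5
# no matter what sigma is. Shifting mu moves this probability sharply.
for mu in (-1.0, 0.0, 1.0):
    print(mu, round(lognormal_tail(1.0, mu, sigma=1.0), 3))
# -1.0 0.159
# 0.0 0.5
# 1.0 0.841
```

So the choice mu = 0 silently encodes even odds that the constant exceeds 1, which is exactly the kind of strong assumption the question is pointing at.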
We may have little specific information about AIs, but we have tons of information about feedback laws, and some information about self-improving systems in general*. I agree that it can be tricky to convert this information to a probability, but that just seems to be an argument against using probabilities in general. Whatever makes it hard to arrive at a good posterior should also make it hard to arrive at a good prior.
(I'm being slightly vague here for the purpose of exposition. I can make these statements more precise if you prefer.)
(* See for instance the Yudkowsky-Hanson AI Foom Debate.)