Wei_Dai comments on Frequentist Magic vs. Bayesian Magic - Less Wrong
It seems that we do tend to get into trouble when we make ontological claims, but why "bound to"? Your proposed FAI, after it has extracted human values, will still have to solve the ontological problem, right? If it can, then why can't we?
You advocate "being lazy" as FAI programmers and handing off as many problems as we can to the FAI, but I'm still skeptical that any FAI approach will succeed in the near future, and in the meantime, I'd like to try to better understand what my own values are and how I should make decisions.
I don't believe even superintelligence can solve the ontology problem completely.
A fine goal, but I doubt it can contribute to FAI design (which, even if it takes more than a century to finish, still has to be tackled to make that possible). Am I right in thinking that you agree with that?
Why?
I'm not sure what you're referring to by "that" here. Do you mean "preserving our preferences"? Assuming you do...
No, I think we have at least two disagreements here:
My guess is that the root cause of these disagreements is my distrust of human math and software engineering abilities, stemming from my experiences in the crypto field. I think there is a good chance that we (unenhanced biological humans) will never find the correct FAI theory, and that if we do think we've found it, we'll turn out to be mistaken. And even if we manage to get FAI theory right, it's almost certain that the actual AI code will be riddled with bugs. You seem to be less concerned with these risks.