Eliezer_Yudkowsky comments on Call for new SIAI Visiting Fellows, on a rolling basis - Less Wrong
I'm probably not the best person to explain why decision theory is interesting from an FAI perspective; for that you'd want to ask Eliezer or other SIAI folks. But I think the short answer is that without a well-defined decision theory for an AI, we can't hope to prove that it has any Friendliness properties.
My own interest in decision theory is mainly philosophical. Originally, I wanted to understand how probabilities should work when there are multiple copies of oneself, whether due to mind-copying technology or because all possible universes exist. That led me to ask, "what are probabilities, anyway?" The philosophy of probability is a subfield in its own right, but I came to the conclusion that probabilities only have meaning within a decision theory, so the real question I should be asking is what kind of decision theory one should use when there are multiple copies of oneself.
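As a toy illustration of that last point (the setup, the payoffs, and the function names below are my own made-up sketch, not anything standard): suppose a fair coin is flipped, one copy of you wakes on heads and two copies wake on tails, and each awakening is offered a bet paying +1.0 if the coin came up tails and -1.5 if it came up heads. Whether to accept depends on which probability/decision-rule pairing you use:

```python
# Toy Sleeping-Beauty-style bet. Heads -> 1 awakening, Tails -> 2 awakenings.
# Each awakening may accept a bet paying +1.0 on Tails, -1.5 on Heads.
# All copies of you necessarily decide the same way. Payoffs are arbitrary.

def ev_per_awakening(p_tails):
    """EV of accepting, computed from a credence held at each awakening."""
    return p_tails * 1.0 + (1.0 - p_tails) * (-1.5)

def ev_ex_ante(accept):
    """Total winnings summed over copies, evaluated before the coin flip."""
    if not accept:
        return 0.0
    return 0.5 * (2 * 1.0) + 0.5 * (1 * -1.5)

print(ev_per_awakening(2 / 3))  # "thirder" credence:  +0.167 -> accept
print(ev_per_awakening(1 / 2))  # "halfer" credence:   -0.250 -> decline
print(ev_ex_ante(True))         # ex-ante total view:  +0.250 -> accept
```

The thirder per-awakening rule and the "sum winnings over all my copies" rule both say accept, while the halfer per-awakening rule says decline. The same physical setup yields different recommendations depending on how credence and decision rule are paired, which is the sense in which the probabilities alone underdetermine the answer.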
Your own answer is also pretty relevant to FAI, because anything that confuses you can turn out to contain the black-box surprise from hell.
Until you know, you don't know if you need to know, you don't know how much you need to know, and you don't know the penalty for not knowing.