Eliezer_Yudkowsky comments on Call for new SIAI Visiting Fellows, on a rolling basis - Less Wrong

29 Post author: AnnaSalamon 01 December 2009 01:42AM


Comment author: Wei_Dai 04 December 2009 02:52:41AM 8 points [-]

I'm probably not the best person to explain why decision theory is interesting from an FAI perspective. For that you'd want to ask Eliezer or other SIAI folks. But I think the short answer is that without a well-defined decision theory for an AI, we can't hope to prove that it has any Friendliness properties.

My own interest in decision theory is mainly philosophical. Originally, I wanted to understand how probabilities should work when there are multiple copies of oneself, either due to mind copying technology, or because all possible universes exist. That led me to ask, "what are probabilities, anyway?" The philosophy of probability is its own subfield in philosophy, but I came to the conclusion that probabilities only have meaning within a decision theory, so the real question I should be asking is what kind of decision theory one should use when there are multiple copies of oneself.

Comment author: Eliezer_Yudkowsky 04 December 2009 03:02:45PM 9 points [-]

Your own answer is also pretty relevant to FAI, because anything that confuses you can turn out to contain the black box surprise from hell.

Until you know, you don't know if you need to know, you don't know how much you need to know, and you don't know the penalty for not knowing.