Tim_Tyler comments on Dreams of Friendliness - Less Wrong

Post author: Eliezer_Yudkowsky 31 August 2008 01:20AM


Comment author: Tim_Tyler 31 August 2008 10:18:24AM 0 points

Why would an Oracle AI be screaming? It doesn't care about that outcome [...]

Doesn't it? That depends entirely on its utility function. It might well regard being overrun by a huge army of robots as an outcome with very low utility.

For example: imagine that its utility function counted the number of verified-correct predictions it had made to date. An invasion by a huge army of robots might well result in it being switched off and its parts recycled, preventing it from making any further successful predictions. That would be a disastrous outcome from the perspective of its utility function, and one the Oracle AI might very well want to prevent at all costs.
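The point can be made concrete with a toy model. This is purely an illustrative sketch with invented numbers, not anyone's actual proposal: if utility is the count of verified-correct predictions, then any outcome that ends the Oracle's operation truncates that count, so "keep operating" dominates "switched off" in expectation.

```python
# Toy model (illustrative only, all numbers invented): an Oracle whose
# utility is the total number of verified-correct predictions it makes.

def expected_utility(predictions_so_far, future_predictions, p_correct):
    """Expected total verified-correct predictions over the Oracle's lifetime."""
    return predictions_so_far + future_predictions * p_correct

# Scenario A: the Oracle keeps running and answers 1000 more queries,
# each correct with probability 0.9.
keep_running = expected_utility(predictions_so_far=50,
                                future_predictions=1000,
                                p_correct=0.9)

# Scenario B: the robot army switches it off; no further predictions.
switched_off = expected_utility(predictions_so_far=50,
                                future_predictions=0,
                                p_correct=0.9)

print(keep_running)  # 950.0
print(switched_off)  # 50
```

Under this utility function, Scenario A is worth 950 expected correct predictions versus 50 for Scenario B, so an Oracle maximizing it has a strong instrumental reason to avoid being shut down.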