ChristianKl comments on What should a friendly AI do, in this situation? - Less Wrong

8 Post author: Douglas_Reay 08 August 2014 10:19AM




Comment author: ChristianKl 08 August 2014 07:07:31PM 0 points

I'm not sure that identifying high-impact utility calculations is that easy. A lot of Albert's decisions might be high-impact.

Comment author: [deleted] 08 August 2014 08:44:05PM -1 points

I was going by the initial description from Douglas_Reay:

Albert is a relatively new AI, who under the close guidance of his programmers is being permitted to slowly improve his own cognitive capability.

That does not sound like an entity that should be handling many high-impact utility calculations. If an entity described that way were constantly announcing high-impact utility decisions, that would suggest either a bug or that people are giving it tasks it isn't meant to handle yet.