brazil84 comments on Thoughts on the Singularity Institute (SI) - Less Wrong

256 Post author: HoldenKarnofsky 11 May 2012 04:31AM




Comment author: brazil84 17 July 2014 01:19:28PM -1 points

For example, the sort of robot cars we will probably have in a few years are clearly agents-- you tell them to "come here and take me there" and they do it without further intervention on your part (when everything is working as planned). This is useful in a way that any amount and quality of question answering is not.

Yes, I agree. Evidently, the environment cars operate in is too fast-paced and rapidly changing for "tool AI" to come close in usefulness to "agent AI." To drive safely and effectively, the system needs to be making and implementing decisions on a split-second time frame.

At the same time, the lesson to be learned is that useful AI can have a utility function which is pretty mundane -- e.g. "find a fast route from point A to point B while minimizing the chances of running off the road or running into any people or objects."
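As a minimal sketch of what such a mundane utility function might look like in practice, consider scoring candidate routes by travel time plus a heavily weighted collision-risk penalty. All names, numbers, and weights below are illustrative assumptions, not how any real self-driving system works:

```python
# Hypothetical "mundane" utility for route selection: lower cost is better.
# A large risk_weight ensures that any appreciable collision risk
# dominates a modest savings in travel time.

def route_cost(travel_time_s, collision_risk, risk_weight=1e6):
    """Combine travel time (seconds) with an expected-collision penalty."""
    return travel_time_s + risk_weight * collision_risk

def best_route(candidates):
    """candidates: list of (name, travel_time_s, collision_risk) tuples."""
    return min(candidates, key=lambda c: route_cost(c[1], c[2]))

routes = [
    ("highway", 600, 1e-7),   # slower, but very low collision risk
    ("shortcut", 480, 1e-3),  # faster, but crosses a busy crosswalk
]
print(best_route(routes)[0])  # "highway": the risk penalty outweighs 2 minutes saved
```

The point of the sketch is that nothing in this objective mentions "human welfare" at all; the designer has already reduced the task to a narrow, checkable optimization.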

Similarly, instead of telling an AI to "improve human welfare," we can tell it to do things like "find ways to kill cancerous cells while keeping collateral damage to a minimum." The higher-level decisions about improving human welfare can be left to the traditional institutions -- legislatures, courts, and individual autonomy.

Comment author: [deleted] 17 July 2014 01:31:55PM 2 points

At the same time, the lesson to be learned is that useful AI can have a utility function which is pretty mundane -- e.g. "find a fast route from point A to point B while minimizing the chances of running off the road or running into any people or objects."

Self-driving cars aren't piloted by AGIs in the first place, let alone dangerous "world-optimization" AGIs.

Similarly, instead of telling an AI to "improve human welfare," we can tell it to do things like "find ways to kill cancerous cells while keeping collateral damage to a minimum." The higher-level decisions about improving human welfare can be left to the traditional institutions -- legislatures, courts, and individual autonomy.

The whole point of Friendly AI is that we want something more effective at improving human welfare than our existing institutions. Our existing institutions are, by FAI standards, Unfriendly and destructive. Not existentially destructive, it's true (except on rare occasions like World War II), but neither are they trustworthy when handed, for instance, power over the life and death of Earth's ecosystem (which they are currently failing to save, despite our having no other planet to go to).