Tim_Tyler comments on My Naturalistic Awakening - Less Wrong

Post author: Eliezer_Yudkowsky 25 September 2008 06:58AM

Comment author: Tim_Tyler 26 September 2008 10:12:29PM 0 points

The claim that superintelligences will more closely approximate rational utilitarian agents than current organisms rests on the expectation that they will be more rational, suffer from fewer resource constraints, and be less prone to problems that cause them to pointlessly burn through their own resources. They will improve in these respects as time passes. Of course they will still use heuristics - nobody claimed otherwise.

I was referring to the single minded, focussed utility maximizer that Eliezer often uses in his discussions about AI.

This still sounds needlessly derogatory. Paper-clip maximisers have a dumb utility function; that's all. An expected utility maximiser is not necessarily "single minded": it may, for example, pursue many goals at once.
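As an illustration of that last point (a hypothetical sketch, not anything from this thread; all goal names, weights, and actions below are invented), an expected-utility maximiser can weigh several goals at once simply by summing them in its utility function:

```python
# Hypothetical sketch: an expected-utility maximiser whose utility function
# is a weighted sum over several distinct goals, so the agent attends to
# many things at once rather than being "single minded".
WEIGHTS = {"paperclips": 0.2, "knowledge": 0.5, "safety": 0.3}

def utility(outcome):
    """Utility of one outcome: a weighted sum over multiple goals."""
    return sum(WEIGHTS[goal] * outcome[goal] for goal in WEIGHTS)

def expected_utility(lottery):
    """Expected utility of an action, given as a list of (probability, outcome) pairs."""
    return sum(p * utility(outcome) for p, outcome in lottery)

def choose(actions):
    """Pick the action whose expected utility is highest."""
    return max(actions, key=lambda name: expected_utility(actions[name]))

# Each action is a lottery over outcomes scored on every goal at once.
actions = {
    "build_factory": [
        (0.9, {"paperclips": 10, "knowledge": 0, "safety": -2}),
        (0.1, {"paperclips": 0, "knowledge": 0, "safety": -5}),
    ],
    "do_research": [
        (1.0, {"paperclips": 0, "knowledge": 6, "safety": 1}),
    ],
}

print(choose(actions))
```

A "paper-clip maximiser" in this framing is just the degenerate case where one weight is 1 and the rest are 0; the maximisation machinery itself is indifferent to how many terms the utility function has.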

Optimisation is key to understanding intelligence. To criticise optimisers is to criticise all intelligent agents, and I don't see much point in doing that.