Tim_Tyler comments on My Naturalistic Awakening - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
The idea that superintelligences will more closely approximate rational utilitarian agents than current organisms rests on the expectation that they will be more rational, face fewer resource constraints, and be less prone to failure modes that make them pointlessly burn through their own resources. They should improve in these respects as time passes. Of course they will still use heuristics - nobody claimed otherwise.
This still sounds needlessly derogatory. A paper-clip maximiser simply has a dumb utility function, that's all. An expected utility maximiser is not necessarily "single-minded": it may, for example, pursue many goals at once.
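To make the point concrete, here is a toy sketch (my own illustration, not from the original comment) of an expected utility maximiser whose utility function weighs several goals at once - showing that "maximising one utility function" need not mean "caring about only one thing". The goals, weights, and probabilities are all made up for illustration.

```python
# Hypothetical example: an expected-utility maximiser with a
# multi-goal utility function. Names and numbers are illustrative.

def utility(outcome):
    # A single utility function that nonetheless values two things:
    # a weighted sum of paperclips and staples produced.
    return 1.0 * outcome["paperclips"] + 0.5 * outcome["staples"]

def expected_utility(action, outcomes):
    # outcomes[action]: list of (probability, outcome) pairs.
    return sum(p * utility(o) for p, o in outcomes[action])

def best_action(outcomes):
    # Pick the action with the highest expected utility.
    return max(outcomes, key=lambda a: expected_utility(a, outcomes))

outcomes = {
    "make_paperclips": [(0.9, {"paperclips": 10, "staples": 0}),
                        (0.1, {"paperclips": 0,  "staples": 0})],
    "make_both":       [(1.0, {"paperclips": 6,  "staples": 8})],
}
print(best_action(outcomes))  # -> make_both
```

Here the agent is a strict expected utility maximiser, yet it ends up splitting its effort across two goods, because its one utility function aggregates both.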
Optimisation is key to understanding intelligence, so criticising optimisers amounts to criticising all intelligent agents. I don't see much point in doing that.