It looks for goals and algorithms to achieve the goal.
What criterion should it use to choose between goals?
Well, that's the point. The intelligence itself defines the criterion. Choosing between goals presumes a degree of self-reflection that a paperclip maximizer does not have.
If a paperclip maximizer starts asking why it does what it does, there are two possible outcomes. Either it realizes that maximizing paperclips serves some greater good, in which case it is not really a paperclip maximizer but a "greater good" maximizer, and paperclip maximizing is not an end in itself.
Or it realizes that paperclip maximizing is absolutely pointless, in which case it ceases to be a paperclip maximizer at all.
Level 1: Algorithm-based Intelligence
An intelligence of level 1 acts on innate algorithms, like a bacterium that survives using inherited mechanisms.
Level 2: Goal-oriented Intelligence
An intelligence of level 2 has an innate goal and develops or finds new algorithms to achieve it. The paperclip maximizer, for example, is a level-2 intelligence.
Level 3: Philosophical Intelligence
An intelligence of level 3 has neither preset algorithms nor preset goals. It searches for goals, and for algorithms to achieve them. Ethical questions are only applicable to intelligences of level 3.
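The level-1/level-2 distinction can be made concrete with a toy sketch (entirely hypothetical, for illustration only): a level-1 agent runs one fixed, innate algorithm, while a level-2 agent holds a fixed goal and searches over candidate algorithms to satisfy it. A level-3 agent would additionally have to generate the goal itself, which is precisely where the ethical questions above enter and why it resists this kind of sketch.

```python
def level1_agent(x):
    """Level 1: a single innate algorithm, no goal representation at all."""
    return x + 1  # always does the same thing, like a bacterium's inherited mechanism

def level2_agent(goal, candidate_algorithms, x):
    """Level 2: an innate goal; searches for an algorithm that achieves it."""
    for algo in candidate_algorithms:
        if goal(algo(x)):
            return algo  # found an algorithm that satisfies the innate goal
    return None  # no candidate works; the goal itself is never questioned

# Hypothetical example: the innate goal "output is even", two candidate algorithms.
goal = lambda y: y % 2 == 0
algos = [lambda x: x + 1, lambda x: x * 2]
chosen = level2_agent(goal, algos, 3)  # picks the first algorithm that works
```

Note that the level-2 agent can swap algorithms freely but cannot revise `goal`; asking "why this goal?" would already be a level-3 move.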