An intelligence of level 1 acts on innate algorithms, like a bacterium that survives using inherited mechanisms.
This suggestion seems disengaged from the biological literature. It has become known in recent years, for instance, that bacteria live very complicated social lives. From The Social Lives of Microbes:
...It used to be assumed that bacteria and other microorganisms lived relatively independent unicellular lives, without the cooperative behaviors that have provoked so much interest in mammals, birds, and insects. However, a rapidly expanding body of research...
I have the sense that this may be too simple.
Are humans structurally distinguishable from paperclip maximizers?
Are "innate algorithms" and "finds new algorithms" really qualitatively different?
I sometimes consider this topic. I would phrase it as "How can intelligence generally be categorized?" Ideally we would be able to measure and categorize the intelligence level of anything: for example, rocks, bacteria, ecosystems, suns, algorithms (AI), and aliens smarter than humans.
Intelligence appears to be related to the level of abstraction that can be managed. This is roughly what is captured in the OP's list. Higher levels of abstraction allow an intelligence to integrate input from broader or more complex contexts, to model and to res...
It looks for goals and algorithms to achieve the goal.
What criterion should it use to choose between goals?
If a paperclip maximizer starts asking why it does what it does, then there are two possible outcomes. Either it realises that maximizing paperclips is required for a greater good, in which case it is not really a paperclip maximizer but a "greater good" maximizer, and paperclip maximising isn't an end in itself.
In other words, if a paperclip maximizer isn't a paperclip maximizer, then it isn't a paperclip maximizer.
Or it realises that paperclip maximising is absolutely pointless and there is something better to do. In that case, it stops being a paperclip maximiser.
According to what criterion would it determine what constitutes "better"?
What you're describing isn't an agent without a goal that then decides on one. It's an agent that already has something like a goal / a utility function / a criterion for "better" / morals (those are roughly equivalent here), and uses that to decide on sub-goals.
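The distinction can be put in code. In this toy sketch (the world model, sub-goal names, and numbers are all invented for illustration), the agent never chooses its terminal goal; the fixed utility function *is* the criterion by which every candidate sub-goal is judged:

```python
# Hypothetical sketch: a fixed utility function ranks sub-goals.
# The agent deliberates about means, never about its terminal criterion.

def utility(world_state):
    # The terminal criterion: here, simply the number of paperclips.
    return world_state["paperclips"]

def expected_state(world_state, subgoal):
    # Toy world model: each sub-goal maps to a predicted outcome.
    outcomes = {
        "build_factory": {"paperclips": world_state["paperclips"] + 1000},
        "ponder_purpose": {"paperclips": world_state["paperclips"]},
    }
    return outcomes[subgoal]

def choose_subgoal(world_state, candidates):
    # "Better" is defined *by* the utility function; asking "why paperclips?"
    # is not a question this agent's decision procedure can even pose.
    return max(candidates, key=lambda g: utility(expected_state(world_state, g)))

state = {"paperclips": 0}
print(choose_subgoal(state, ["build_factory", "ponder_purpose"]))
# -> build_factory
```

Note that "realising paperclips are pointless" would require some *other* criterion by which pointlessness is judged, which is exactly the question above.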
I strongly recommend reading the metaethics sequence (if you haven't already).
I think the problem is that I believe in, and presupposed, an absolute moral system, while you don't.
Level 1: Algorithm-based Intelligence
An intelligence of level 1 acts on innate algorithms, like a bacterium that survives using inherited mechanisms.
Level 2: Goal-oriented Intelligence
An intelligence of level 2 has an innate goal and develops new algorithms to achieve it. For example, the paperclip maximizer is a level-2 intelligence.
Level 3: Philosophical Intelligence
An intelligence of level 3 has neither preset algorithms nor preset goals. It looks for goals, and for algorithms to achieve them. Ethical questions apply only to intelligences of level 3.
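As a rough illustration of the taxonomy (all agent behaviours, goal names, and the evaluation function here are invented placeholders, not a claim about how such agents would really work), the three levels might be sketched as:

```python
import random

def level1_agent(stimulus):
    # Level 1: a fixed, innate stimulus-response rule, like a bacterium's
    # inherited chemotaxis. Nothing is searched for.
    return "move_away" if stimulus == "toxin" else "stay"

def level2_agent(goal, candidate_algorithms, evaluate):
    # Level 2: the goal is fixed and innate, but the algorithm for
    # achieving it is searched for (the paperclip maximizer).
    return max(candidate_algorithms, key=lambda alg: evaluate(alg, goal))

def level3_agent(candidate_goals, candidate_algorithms, evaluate):
    # Level 3: neither goal nor algorithm is preset; both are searched for.
    # The placeholder random choice makes the thread's open question vivid:
    # by what criterion should the goal itself be chosen?
    goal = random.choice(candidate_goals)
    return goal, level2_agent(goal, candidate_algorithms, evaluate)
```

The `random.choice` in the level-3 sketch is deliberate: once the goal-selection step is made explicit, it is clear that some criterion must fill that slot, which is the crux of the discussion above.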