An intelligence of level 1 acts on innate algorithms, like a bacterium that survives using inherited mechanisms.
This suggestion seems disengaged from the biological literature. It has become known in recent years, for instance, that bacteria live very complicated social lives. From The Social Lives of Microbes:
...It used to be assumed that bacteria and other microorganisms lived relatively independent unicellular lives, without the cooperative behaviors that have provoked so much interest in mammals, birds, and insects. However, a rapidly expanding body...
I have the sense that this may be too simple.
Are humans structurally distinguishable from paperclip maximizers?
Are "innate algorithms" and "finds new algorithms" really qualitatively different?
I sometimes consider this topic. I would phrase it as "How can intelligence be categorized in general?" Ideally we would be able to measure and categorize the intelligence level of anything: for example, rocks, bacteria, ecosystems, suns, algorithms (AI), or aliens that are smarter than humans.
Intelligence appears to be related to the level of abstraction that can be managed. This is roughly what is captured in the OP's list. Higher levels of abstraction allow an intelligence to integrate input from broader or more complex contexts, to model and to res...
It looks for goals and algorithms to achieve the goald.
What criterion should it use to choose between goals?
(also, there's a typo)
Relevance is a good point.
Changing or ceasing to have desires damages my belief in an absolute morality as much as changing or ceasing to have sensory perception damages my belief in an absolute reality.
My belief in an absolute morality is as strong or as weak as my belief in an absolute reality. It doesn't matter whether morality or reality really exists; what matters is that we treat them similarly. It is slightly dissonant to conduct science as if reality exists, but to become a relativist when arguing about morality.
In the end, it is not about what we should believe, but about how our thinking works. When thinking about anything normative, we automatically presume an absolute morality. At the very least, we believe that arguments have to be logically consistent, and even if that were the only absolute thing we believed in, it would be an absolute morality. Otherwise, we are nihilists, which is certainly a tenable position.
Concerning relevance: by the same line of argument, there is also "absolute cuteness", "absolute beauty", and other "absolute things" (if we have a perception of them and there is some intersubjective consensus). They are probably somehow related to absolute morality; they may be subsets of a bigger system, since they are all mental phenomena. They are relevant to varying degrees, while morality and reality are two absolute things that matter a great deal to us, unless we are nihilists.
Ah, I see where you're coming from.
My thesis (and, I think, the general consensus position on this site) is this: One's morality is a feature of one's individual brain, rather than of physics. In particular, one should not expect that other people -- and, especially, nonhuman other minds -- will deduce the same absolute morality that you believe in, no matter how intelligent they are. (A sufficiently intelligent mind might deduce "Draq believes that the absolute morality is X", but not "the absolute morality is X".)
Have you read No Universally Compelling Arguments?
Level 1: Algorithm-based Intelligence
An intelligence of level 1 acts on innate algorithms, like a bacterium that survives using inherited mechanisms.
Level 2: Goal-oriented Intelligence
An intelligence of level 2 has an innate goal. It develops or discovers new algorithms to solve problems. For example, the paperclip maximizer is a level-2 intelligence.
Level 3: Philosophical Intelligence
An intelligence of level 3 has neither preset algorithms nor preset goals. It looks for goals, and for algorithms to achieve those goals. Ethical questions are only applicable to intelligences of level 3.