An intelligence of level 1 acts on innate algorithms, like a bacterium that survives using inherited mechanisms.
This suggestion seems disengaged from the biological literature. It has become known in recent years, for instance, that bacteria live very complicated social lives. From The Social Lives of Microbes:
...It used to be assumed that bacteria and other microorganisms lived relatively independent unicellular lives, without the cooperative behaviors that have provoked so much interest in mammals, birds, and insects. However, a rapidly expanding body...
I have the sense that this may be too simple.
Are humans structurally distinguishable from paperclip maximizers?
Are "innate algorithms" and "finds new algorithms" really qualitatively different?
I sometimes consider this topic. I would phrase it as "How can intelligence in general be categorized?" Ideally we would be able to measure and categorize the intelligence level of anything: rocks, bacteria, ecosystems, suns, algorithms (AI), even aliens that are smarter than humans.
Intelligence appears to be related to the level of abstraction that can be managed. This is roughly what is captured in the OP's list. Higher levels of abstraction allow an intelligence to integrate input from broader or more complex contexts, to model and to res...
It looks for goals and algorithms to achieve the goald.
What criterion should it use to choose between goals?
(also, there's a typo)
Relevance is the right question. When dealing with purely abstract concepts like mathematics, it's useless to ask whether they exist. It's extraordinarily unlikely that any empirical evidence could persuade me that 1+1 does not equal 2, but I can realistically doubt whether the addition of natural numbers is a good model for counting clouds.
Similarly, the question should not be whether the absolute moral system you believe in is true or valid or genuinely universal, but rather whether it accurately and precisely models how you judge and desire.
Since you could stop having desires and making judgments without damaging your belief in your absolute moral system, it seems reasonable that you could alter them as well, or even that you have already done so. How sure are you that what you believe to be fundamentally morally right matches what you actually want?
Relevance is a good point.
Changing or ceasing to have desires damages my belief in an absolute morality as much as changing or ceasing to have sensory perceptions damages my belief in an absolute reality.
My belief in an absolute morality is as strong or as weak as my belief in an absolute reality. What matters is not whether morality or reality really exists, but that we treat them similarly. It is slightly dissonant to conduct science as if reality exists, but to become a relativist when arguing about morality.
In the end, it is not what we should believe, but how our thi...
Level 1: Algorithm-based Intelligence
An intelligence of level 1 acts on innate algorithms, like a bacterium that survives using inherited mechanisms.
Level 2: Goal-oriented Intelligence
An intelligence of level 2 has an innate goal. It develops and discovers new algorithms to achieve that goal. The paperclip maximizer, for example, is a level-2 intelligence.
Level 3: Philosophical Intelligence
An intelligence of level 3 has neither preset algorithms nor preset goals. It searches both for goals and for algorithms to achieve them. Ethical questions are only applicable to intelligences of level 3.
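As a loose illustration of the difference between levels 1 and 2, here is a minimal sketch (all names and the toy "paperclip" setup are hypothetical, not anything from the original post): a level-1 agent applies one fixed, inherited rule, while a level-2 agent holds a goal and searches over candidate algorithms for whichever best serves it.

```python
# Level 1: a fixed, innate algorithm -- the behavior never changes,
# analogous to a bacterium acting on inherited mechanisms.
def level1_agent(stimulus):
    # hard-wired rule: respond to the stimulus with one of two actions
    return "move" if stimulus > 0.5 else "stay"

# Level 2: an innate goal plus a search over candidate algorithms.
def level2_agent(goal, candidates, environment):
    # evaluate each candidate algorithm against the goal and keep the best
    return max(candidates, key=lambda alg: goal(alg(environment)))

# Toy example: the goal is to maximize paperclips produced.
environment = 10  # units of raw material
candidates = [
    lambda env: env * 1,  # naive algorithm: 1 clip per unit
    lambda env: env * 3,  # better algorithm: 3 clips per unit
]
goal = lambda clips: clips  # more paperclips is better

best_algorithm = level2_agent(goal, candidates, environment)
print(best_algorithm(environment))  # 30
```

A level-3 agent, on this sketch, would additionally search over `goal` itself rather than taking it as given, which is where the ethical questions mentioned above would enter.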