An intelligence of level 1 acts on innate algorithms, like a bacterium that survives using inherited mechanisms.
This suggestion seems disengaged from the biological literature. It has become known in recent years, for instance, that bacteria live very complicated social lives. From The Social Lives of Microbes:
...It used to be assumed that bacteria and other microorganisms lived relatively independent unicellular lives, without the cooperative behaviors that have provoked so much interest in mammals, birds, and insects. However, a rapidly expanding body...
I have the sense that this may be too simple.
Are humans structurally distinguishable from paperclip maximizers?
Are "innate algorithms" and "finds new algorithms" really qualitatively different?
I sometimes consider this topic. I would phrase it as "How can intelligence in general be categorized?" Ideally we would be able to measure and categorize the intelligence level of anything: rocks, bacteria, ecosystems, suns, algorithms (AI), even aliens that are smarter than humans.
Intelligence appears to be related to the level of abstraction that can be managed. This is roughly what is captured in the OP's list. Higher levels of abstraction allow an intelligence to integrate input from broader or more complex contexts, to model and to res...
It looks for goals and algorithms to achieve the goald.
What criterion should it use to choose between goals?
(also, there's a typo)
But I believe that there is one single right answer. Otherwise, it becomes quite confusing.
There is no one single right answer, and yes it is quite confusing.
The simple reason for this is that everything operates within a context. Context creates meaning; in the absence of context, there is no meaning. This is the context principle.
Let's agree on a definition of morality/ethics: it is what we should do to reach a desirable state or value, given that we both understand what "value" or "should" mean.
The meanings for "should" and "desirable state/value" will have to be established within a context. Outside of that context those terms may have different meanings, or may be meaningless.
By saying "Let's agree on a definition of morality/ethics" and "given that we both understand" you are attempting to establish a common context with the other commenters on LW. A common context provides shared meaning and opens a path for communication between disparate domains.
You say:
I believe in and presumed an absolute moral system
To me this implies that you believe in a moral system that can be applied to all contexts.
Given your rough definition of morality:
it is what we should do to reach a desirable state or value
I can think of contexts where morality is meaningless. For example, electrons don't have desires and don't respond to the idea of "should."
So morality can't be applied to all contexts, and so in that sense it can't be absolute.
In a previous post you seem to realize this to some extent:
You say:
I think only a level-3 intelligence can be a moral agent.
By this, level-1 and level-2 intelligences operate in morality-free contexts. They can be neither moral nor not-moral.
If you observe a paperclip maximizer engaged in not-moral behavior, you are labeling the behavior as not-moral from within your context. The paperclip maximizer's behavior does not have an inherent quality of moral or not-moral.
So what is your context for the "one single right answer"?
Is there anything absolute according to your definition?
Are numbers absolute? I can think of a context where numbers are meaningless, e.g. if I am talking about Picasso.
Is physical reality absolute? I can think of a context where physical reality isn't absolute: for example, if I am thinking of numbers.
Level 1: Algorithm-based Intelligence
An intelligence of level 1 acts on innate algorithms, like a bacterium that survives using inherited mechanisms.
Level 2: Goal-oriented Intelligence
An intelligence of level 2 has an innate goal. It develops and finds new algorithms to solve a problem. For example, the paperclip maximizer is a level-2 intelligence.
Level 3: Philosophical Intelligence
An intelligence of level 3 has neither preset algorithms nor preset goals. It looks for goals and for algorithms to achieve them. Ethical questions are only applicable to intelligences of level 3.
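The distinction between the three levels can be sketched as toy code. This is purely illustrative: the classes, the "algorithms," and the goal predicates below are all hypothetical stand-ins, not anything proposed in the thread. Note that the level-3 sketch makes the earlier question concrete: its loop over candidate goals has to pick them in some order, i.e. by some criterion.

```python
# Toy sketch of the three levels. All names here are invented for illustration.

def double(x):
    return x * 2

def square(x):
    return x * x

class Level1:
    """Acts on a single innate algorithm; no goal, no search."""
    def __init__(self, innate_algorithm):
        self.algorithm = innate_algorithm

    def act(self, x):
        return self.algorithm(x)

class Level2:
    """Has an innate goal; searches a pool of algorithms to satisfy it."""
    def __init__(self, goal, candidate_algorithms):
        self.goal = goal            # a fixed predicate the outcome must satisfy
        self.candidates = candidate_algorithms

    def act(self, x):
        for algorithm in self.candidates:
            result = algorithm(x)
            if self.goal(result):
                return result
        return None                 # no candidate achieved the innate goal

class Level3(Level2):
    """Searches over goals as well as algorithms."""
    def __init__(self, candidate_goals, candidate_algorithms):
        self.candidate_goals = candidate_goals
        self.candidates = candidate_algorithms

    def act(self, x):
        for goal in self.candidate_goals:   # by what criterion is a goal chosen?
            self.goal = goal
            result = Level2.act(self, x)
            if result is not None:
                return result
        return None

bacterium = Level1(double)
maximizer = Level2(goal=lambda r: r > 10, candidate_algorithms=[double, square])
print(bacterium.act(3))   # 6: always the innate algorithm, nothing else
print(maximizer.act(4))   # 16: double(4)=8 fails the goal, square(4)=16 satisfies it
```

In this toy framing, morality-as-goal-choice only becomes a live question for `Level3`, since `Level1` has no goal at all and `Level2` cannot revise the one it was given.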