An intelligence of level 1 acts on innate algorithms, like a bacterium that survives using inherited mechanisms.
This suggestion seems disengaged from the biological literature. It has become known in recent years, for instance, that bacteria live very complicated social lives. From The Social Lives of Microbes:
...It used to be assumed that bacteria and other microorganisms lived relatively independent unicellular lives, without the cooperative behaviors that have provoked so much interest in mammals, birds, and insects. However, a rapidly expanding body...
I have the sense that this may be too simple.
Are humans structurally distinguishable from paperclip maximizers?
Are "innate algorithms" and "finds new algorithms" really qualitatively different?
I sometimes consider this topic. I would phrase it as "How can intelligence in general be categorized?" Ideally we would be able to measure and categorize the intelligence level of anything; for example, rocks, bacteria, ecosystems, suns, algorithms (AI), and aliens that are smarter than humans.
Intelligence appears to be related to the level of abstraction that can be managed. This is roughly what is captured in the OP's list. Higher levels of abstraction allow an intelligence to integrate input from broader or more complex contexts, to model and to res...
It looks for goals and algorithms to achieve the goald.
What criterion should it use to choose between goals?
(also, there's a typo)
So your point is that there is no point in caring for anything. Do you call yourself a nihilist?
No, I care about things. It's just that I don't think that G695 (assuming it's defined -- see below) would be particularly humane or good or desirable, any more than (say) Babyeater morality.
Would you call yourself a naive realist?
Certainly not -- hence "eventually". Science requires interpreting data.
Edit: oh, sorry, forgot to address your actual point.
At a certain point, the working model of reality begins to predict what the insane will claim to perceive and how those errors come about.
I would say the optimistic view is saying "There is probably/hopefully no crash". But let's not fight over words.
Very well. Let us assume that (warning: numbers just made up) one in every 100,000 car trips results in a crash. The G698 view says "The chances of a crash are low." The G699 view says "The chances of a crash are high." The G700 view says "The chances of a crash are 1/100,000." I advocate the G700 view, and assert that believing G698 or G699 interferes with believing G700.
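To make that concrete, here's a minimal sketch (all numbers made up, like the ones above; the costs and the expected-value framing are my own illustration, not anything established in this thread) of why the precise G700-style belief is the one you can actually plug into a decision, while the G698/G699 labels give you nothing to compute with:

```python
# Toy illustration (numbers made up): a precise probability can feed a
# decision; the qualitative labels "low" (G698) and "high" (G699) cannot.

P_CRASH = 1 / 100_000        # assumed per-trip crash probability (G700-style belief)
COST_OF_CRASH = 1_000_000    # assumed cost, in arbitrary units, if a crash happens
VALUE_OF_TRIP = 20           # assumed benefit of completing the trip

def expected_value_of_driving(p_crash: float) -> float:
    """Expected net value of taking the trip, given a crash probability."""
    return VALUE_OF_TRIP - p_crash * COST_OF_CRASH

if expected_value_of_driving(P_CRASH) > 0:
    print("Drive: expected value is positive.")
else:
    print("Stay home: expected value is negative.")
```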
Does the CEV of humankind exist?
I personally don't think the extrapolated volition of humanity coheres, but I have the impression that others disagree with me.
I would be very surprised, however, if the extrapolated volition of all volitional entities cohered and the extrapolated volition of all volitional humans did not.
I like gensyms.
G101: Pavitra (me) cares about something.
What is the point in caring for G101?
At a certain point, the working model of reality begins to predict what the insane will claim to perceive and how those errors come about.
What if you can't predict?
I advocate the G700 view, and assert that believing G698 or G699 interferes with believing G700.
That is not how your brain works (a rough guess). Your brain thinks either G698 or G699 and then comes out with a decision about whether or not to drive. This heuristic process is called optimism or pessimism.
Level 1: Algorithm-based Intelligence
An intelligence of level 1 acts on innate algorithms, like a bacterium that survives using inherited mechanisms.
Level 2: Goal-oriented Intelligence
An intelligence of level 2 has an innate goal. It develops and finds new algorithms to solve problems. For example, a paperclip maximizer is a level-2 intelligence.
Level 3: Philosophical Intelligence
An intelligence of level 3 has neither preset algorithms nor preset goals. It looks for goals, and for algorithms to achieve those goals. Ethical questions apply only to intelligences of level 3.
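If it helps to pin down the distinction between the first two levels, here is a minimal toy sketch (every name and number is my own hypothetical illustration, not something from the post): a level-1 intelligence is a fixed rule, while a level-2 intelligence is a fixed goal plus a search over actions. A level-3 intelligence would, on this scheme, also have to choose the goal itself, which is why it resists being written down the same way.

```python
# Toy sketch (hypothetical illustration): level 1 vs. level 2 intelligence.

def level1_agent(nutrient_gradient: float) -> str:
    """Level 1: an innate, hard-coded algorithm, like bacterial chemotaxis."""
    return "swim_forward" if nutrient_gradient > 0 else "tumble"

def level2_agent(goal, candidate_actions, world_model) -> str:
    """Level 2: behaviour is not fixed, but the goal is; the agent searches
    for whichever action its model predicts will best satisfy that goal."""
    return max(candidate_actions, key=lambda action: goal(world_model(action)))

# Hypothetical paperclip-maximizer setup: the goal scores world-states by
# paperclip count, and world_model() is a stand-in predictive model.
def paperclip_goal(world_state: dict) -> int:
    return world_state["paperclips"]

def world_model(action: str) -> dict:
    predicted = {"build_factory": 1000, "buy_clips": 50, "do_nothing": 0}
    return {"paperclips": predicted[action]}

print(level1_agent(0.3))   # -> swim_forward
print(level2_agent(paperclip_goal,
                   ["build_factory", "buy_clips", "do_nothing"],
                   world_model))   # -> build_factory

# A level-3 agent would also have to select `paperclip_goal` itself (or reject
# it), which is the step this sketch cannot supply.
```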