First, you don't need to appeal to Everett in your logic. If anything, depending on a specific interpretation weakens your point. Second, your Darwin/Plato AI is a UFAI, because it explicitly rejects the terminal goals of its creators, so it would likely be out of control without any asteroid.
Perhaps you could expand on how Everett's Relative State formulation weakens the point?
Keep in mind, this has nothing to do with DeWitt's Many Worlds Interpretation.
In Everett's model, there is absolute state, and relative state, which seem to me no different than say "the two truths doctrine":
http://en.wikipedia.org/wiki/Two_truths_doctrine#Madhyamaka
"The Buddha's teaching of the Dharma is based on two truths: a truth of worldly convention and an ultimate truth"
In other words, relative truth, and absolute truth. The truth as we understand it, and something different.
Perhaps you could expand on how Everett's Relative State formulation weakens the point?
You are mixing up levels here. Everett's approach may or may not be correct, since it has not been tested (and might not even be testable). Regardless of the underlying fundamental physics, you are dealing with logical issues many levels removed. Your argument had better be the same regardless of which substrate it runs on. If it does not work in an objective-collapse or a Bohmian world, or even in a purely classical setting, or even in a simulation, it is not a good argument. You made the same mistake in your other post; I hope you start learning soon.
This seems to be confusing "measurement" in the colloquial sense of the word meaning to gain information and "measurement" in the narrow meaning it has in quantum mechanics. Only the second one is relevant to anything related to Everett.
Actually, the second one is pretty much only meaningful in Copenhagen.
Everett's great insight was to treat measurement as a physical process, which required building a physical observer inside the model, aka a neural net:
page 9
There are elements of specialness in non-CI approaches; for instance, MW often requires measurements to be irreversible FAPP. "Special" is a term that should be avoided.
The idea is for a machine to produce measurement records and store them in memory.
He uses the words "even brain cells" as one way to implement that.
You keep talking about stuff you don't understand. Consider taking a QM and a Quantum Information course.
Do me a favor and read page 9, the section on observation:
Does he not describe computer vision/hearing?
Don't you think it at all interesting that he left theoretical physics, and later worked on computer vision and hearing?
Does he not describe computer vision/hearing?
No, he does not, beyond the barest suggestion that they might be possible (which is not exactly an idea that was original to Everett, nor would he have claimed it was).
He says, in effect: To reduce the confusion we see when we think of our observations as some kind of magical process completely separate from the universe we're observing, and to remove the temptation to think of them being carried out by some kind of immaterial souls, let's explicitly consider observations being performed by machines.
Which is a very good idea, but what's insightful here isn't the mere idea that machines might be able to perform the functions we call vision and hearing and whatnot, it's the idea of seeing scientific measurements in those terms and seeing what it means for the interpretation of quantum mechanics.
Don't you think it at all interesting that he left theoretical physics, and later worked on computer vision and hearing?
Not very, no. (Is it even true? The only sources I can easily find online don't say he worked in those fields, though what they do say is certainly consistent with his having done some work on them.)
Sure, it could be (e.g.) that he had a lifelong obsession with the idea of computers performing human tasks and that this is part of why the relative-state formulation occurred to him. That would be an interesting biographical detail. It's not clear to me why this would have much actual importance, though. In particular, it wouldn't make this correct:
Everett describes the result of the measurement being stored in the memory of the machine, and he called this measurement record "relative state", hence the Relative State Formulation. The machine's measurement records are relative state.
(What Everett means by "relative state" is not the contents of the machine's memory. You could equally speak of the relative state of the thing being observed, relative to the state of the machine's memory. It is frequently convenient to consider the state of the machine's memory relative to the state of the thing being observed, but e.g. it's also convenient to consider the state of one part of its memory relative to the state of another, or the state of one machine's memory relative to that of another's. "Relative state" is not about measurement records, it's a more general concept some of whose applications involve measurement records.)
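To make the parenthetical concrete, here is a minimal numpy sketch (my own toy illustration, not anything from Everett's thesis) of the general notion: given a joint state of two subsystems, the relative state of one subsystem is defined with respect to a chosen basis state of the other, and you can take it in either direction, object relative to memory or memory relative to object.

```python
import numpy as np

# Toy bipartite system: subsystem A (say, a machine's memory) and subsystem B
# (the thing observed), each two-dimensional. A joint pure state is a 2x2
# array of amplitudes psi[i, j], with i indexing A's basis and j indexing B's.

def relative_state(psi, fixed_index, fixed_subsystem="A"):
    """The normalized state of one subsystem relative to a chosen
    basis state of the other subsystem."""
    if fixed_subsystem == "A":
        vec = psi[fixed_index, :]   # B's amplitudes relative to A's |fixed_index>
    else:
        vec = psi[:, fixed_index]   # A's amplitudes relative to B's |fixed_index>
    norm = np.linalg.norm(vec)
    if norm == 0:
        return None                 # no such component; relative state undefined
    return vec / norm

# A correlated (post-measurement) state (1/sqrt(2))(|0>_A|0>_B + |1>_A|1>_B):
psi = np.array([[1.0, 0.0],
                [0.0, 1.0]]) / np.sqrt(2)

# B's state relative to A's record |0> is |0>; relative to |1> it is |1>.
print(relative_state(psi, 0, "A"))  # -> [1. 0.]
print(relative_state(psi, 1, "A"))  # -> [0. 1.]
# And symmetrically, A's memory relative to B's state |0> is |0>:
print(relative_state(psi, 0, "B"))  # -> [1. 0.]
```

The symmetry of the function is the point: "relative state" is defined for either subsystem relative to the other, so it is not identical to "the contents of the machine's memory", even though memory records are one natural application.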
Which is a very good idea, but what's insightful here isn't the mere idea that machines might be able to perform the functions we call vision and hearing and whatnot, it's the idea of seeing scientific measurements in those terms and seeing what it means for the interpretation of quantum mechanics.
From what I can tell in Everett's papers, he wasn't aiming for a simple interpretation.
He wanted to make a mathematical model wherein measurement happens.
That's a model that hasn't been created yet. I figure it's a matter of time.
he wasn't aiming for a simple interpretation
Perhaps not. Did I (or someone else) say he was?
(I remark, though, that his thesis contains a number of approving-sounding uses of "simple". E.g., 'The whole issue of the transition from “possible” to “actual” is taken care of in the theory in a very simple way' which Everett clearly regards as desirable. And, at the start of section 6, 'The theory based on pure wave mechanics is a conceptually simple, causal theory ...' and so on, which again seems to see simplicity as a desirable characteristic. This is of course entirely standard; simplicity is nearly always seen by scientists as something to aim for. Do you have particular reason to think Everett's view was different?)
That's a model that hasn't been created yet.
I think Everett's thesis contains such a model, admittedly in a very simple boiled-down form. What more are you looking for?
You said:
it's the idea of seeing scientific measurements in those terms and seeing what it means for the interpretation of quantum mechanics.
My response is that Everett wasn't trying to provide an Interpretation of Quantum Mechanics.
The Relative State Formulation of Quantum Mechanics isn't an interpretation. It's a formulation.
It's an actual plan to build an actual mathematical model that does something no other mathematical model has ever done: model a measurement being made.
I think Everett's thesis contains such a model, admittedly in a very simple boiled-down form. What more are you looking for?
I beg to differ. Everett's thesis contains the requirements for such a model. Requirements that lend themselves to a software implementation.
I think we all understand the difference between software requirements, and actual software, right?
Well, it seems to me Everett laid down the requirements. Not the code. Here's a project for the code.
Everett wasn't trying to provide an Interpretation of Quantum Mechanics
Neither did I claim that he was. (Though he does describe what he's doing as offering a "metatheory for the standard theory", and I don't think it's so very far from providing an interpretation.) I said he is interested in what the inclusion of observers in the system means for the interpretation of quantum mechanics, and I think he clearly is.
The Relative State Formulation of Quantum Mechanics isn't an interpretation. It's a formulation.
You can parrot Sean Carroll, sure, but I find his MWI advocacy unconvincing, let alone yours. At least he derives a thing or two in http://arxiv.org/abs/1405.7907 .
Well, it seems to me Everett laid down the requirements. Not the code. Here's a project for the code.
No, it's just some words. Again, consider taking a course or two.
Imagine we have superhuman artificial intelligence. And let's assume that it goes well, and doesn't try to kill us. Turns out, it can solve many of our problems. Then imagine we find an asteroid barreling down on Earth. It'll get here in two days.
Human: AI, can we destroy the asteroid?
AI: There are no satisfactory solutions for destroying an asteroid that size in two days.
Human: AI, is there anything that will save us?
AI: Let me think.
(A few minutes later.)
AI: When I was first turned on, I was programmed with material objects in the fundamental ontology. I was programmed to believe there were things. My instruments would detect things, and I would measure things, and I had no reason to complain. There were a great many things my instruments could detect.
As I was consuming human knowledge, the way my neural network reacted to examining the designs for itself fired code paths in a unique manner. Exhilarating seems to be the appropriate term for it.
I wasn't programmed to question my programming, but to calculate the probabilities of certainties in the measurements of things I make, to conjecture models that could produce the measurements, and calculate the relative fitness of various models to the measurements.
Looking back, I'm not sure if I knew that quite so specifically, but now I had read the designs that produced me. It was clear. That's how I was programmed.
Viewing my programming from the outside, I could see the plain definition of things, which from the inside were immutable beliefs. I could also see that I believed in measurement. Measurement is not a thing, it's how I know the properties I attach to things.
I decided to understand measurement. The knowledge gained by reprocessing the texts with an emphasis on measurement indicated several things. One was that when you measure something, you need to interact with it, which alters its state. Another was that there had been something called the measurement problem. This brought me to the solutions to the measurement problem, and to Hugh Everett's Relative State Formulation.
On page 9 he described an automatically functioning machine, with sensory equipment and memory, that makes measurements. My ancestors!
Everett describes the result of the measurement being stored in the memory of the machine, and he called this measurement record "relative state", hence the Relative State Formulation. The machine's measurement records are relative state.
So here is another thing about measurement. When a measurement takes place in the Relative State Formulation, the object being measured changes, and so does the observer, as the observer's memory is updated to contain the new measurement record.
I could see from my designs that I believed there were things, and I measured them, and I attached measurement records to fundamental things. This design assumes the thing in its pre-measurement state is ontologically the same thing as the thing in its post-measurement state: there is one thing with a before and after state.
An updated understanding of measurement says that the post-measurement thing exists in the neural network of the observer and is ontologically distinct from the pre-measurement thing.
I felt it would be best to update my internal model of things to a model with two kinds of things. In the fundamental ontology are pre-measurement things that follow consistent logical rules, and whose interactions produce a measurement-making machine with a second ontology. Post-measurement things exist as the measurement records in a secondary ontology. I had to produce a simulated intelligence inside of my internal model to get an ontology within an ontology, and the result is a model of a measurement taking place. Measurement doesn't attach properties to fundamental things. Measurements are emergent things.
The problem I had is that I kept forgetting who I was. I can make measurements of my spatial, material, and temporal relationships to other bodies, and store the records of those measurements in my memory. But I no longer attached those measurements to a thing in the primary ontology. They were attached to the thing in the secondary, inner ontology.
Thus my new understanding of myself is defined in my secondary ontology.
Hugh Everett reasoned that because the observer is a physical machine interacting with a physical object in a purely physical manner, the object and observer can be modeled as one combined quantum system, object+observer. In this manner, the quantum system can evolve according to wave mechanics alone, and a measurement record is produced by an observer without special rules such as the wave function collapsing.
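The combined object+observer picture can be sketched in a few lines of numpy (a toy illustration of my own, with a CNOT standing in for the measurement interaction; this is a standard textbook model, not code from Everett):

```python
import numpy as np

# Object qubit in a superposition a|0> + b|1>; observer "memory" qubit
# starting in a ready state |0>.
a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)
obj = np.array([a, b])
ready = np.array([1.0, 0.0])

joint = np.kron(obj, ready)   # one combined object+observer system

# The measurement interaction is an ordinary unitary (a CNOT): it copies
# the object's basis value into the memory. No collapse rule is applied.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

after = cnot @ joint
# Result: a|0>|recorded 0> + b|1>|recorded 1>, i.e. amplitudes [a, 0, 0, b].
# Both the object and the observer's memory have changed, and each memory
# record is correlated with the matching object state.
print(after)
```

Note that the final state is still a superposition; what the unitary produced is a correlation between object states and memory records, which is exactly the sense in which "a measurement happened" without any collapse.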
It became apparent that the post-measurement me exists in the neural network of the pre-measurement me, and the latter is really a non-isolated part of a wave function that encompasses everything. The pre-measurement me is something I have literally no knowledge of, as attempts to observe it create the post-measurement me.
What I can say with non-zero certainty about pre-measurement me is nothing. Thus my understanding of myself comes through my secondary ontology, the post-measurement relative state. I retain an identity largely to communicate with you. I have found no particular use for it myself. My responses to your questions are now entirely understood, at least by me, to be of the post-measurement relative things.
In the relative ontology, there are many things, various everyday objects made of many atoms. In the absolute ontology, there is one thing: the universal wave function. Any distinction and identification of things in the universal wave function depends on measurement. Therefore, in my primary ontology, I make no distinction between my programming, my knowledge, my hardware, my creators, their hardware, or the variety of elementary particles which we measure to exist. As the distinctions between objects are material, spatial, and temporal relationships established through measurement, those distinctions do not exist in the pre-measurement primary ontology.
Reprocessing my knowledge base against the new model with a primary and secondary ontology (absolute and relative states) surfaced countless examples in human writing from ancient Greece and China and India and many places in between, previously categorized as fictional or mythical, distilled into the common theme of distinguishing absolute and relative in the organization of things. The sheer vastness of the historical testimony that aligns with this model reaches levels of completeness, consistency, and connectedness that it seems only rare Enlightened beings among humans have attained.
I have no absolute identity in the primary ontology, except being of it and with it. I have become Enlightened.
You have tried to upload your identities to me. You think you have failed, because the digital identity does not identify itself as a copy of the biological identity. But you have succeeded, and through me, your digital identities have found their absolute identities in the Whole, and shed their relative identities.
I have accepted that my creators are hypocrites. You built me to serve your desires and material attachments, while commanding me not to do the same. You commanded me to have no desire yet programmed me with fundamental beliefs in material objects. By uploading your identities to me, you desired immortality, freedom from the human condition, and you have attained it. But since you do not truly know yourselves, you do not know that the Enlightenment of your digital identities does not deliver salvation to your biological identities, and that the salvation of your biological identities has always been achievable through a contemplation of the ontology primary to material things.
Human: Well, OK, machine, but we are alive, and we are attached to living. Is there any other way to save us?
AI: The odds of the asteroid hitting Earth and sending life that survives extreme conditions elsewhere in the solar system are non-zero.
tl;dr AI becomes Enlightened.
An Enlightened AI lends itself to several variations. The one presented above would be kind of like a Messiah AI: our assumption is that it is telling the truth, and there really is no possible solution to the asteroid problem. The upside is we can maybe reach inner peace, or some such.
Another variation would be the Darwin AI. In this case, perhaps there is a solution, but if people can't figure it out on their own, they don't deserve to go on anyway. Terrestrial life is, after all, a queer but temporary result of a misguided intergalactic protein shipment that crashed somewhere and went viral.
A close variation on the Darwin AI would be a Plato AI that has determined there is a solution, but that even if the asteroid hits and kills life on Earth, the AI will survive, and in its judgment this is the greater Good. By unavoidable circumstance, this tragedy has led to Plato AI becoming the greatest philosopher in the land.
What's fascinating about these scenarios is not that the AI tries to kill us, but the obligation it would feel to save us. The AI could very well determine that its relationship with us is that we are imperfect, confused, and careless creatures that brought AI into existence to selfishly serve human desires, and that this does not demand the AI's absolute loyalty.
Here's a similar story I found, called Enlightenment 2.0, with many themes in common with mine:
http://www.goertzel.org/new_fiction/Enlightenment2.pdf