Question: Why did the first AI modify itself to be like the third one instead of being like the second one?
Answer: Because its prior probability that the multiverse exists was greater than 50%, so the expected value was higher in the "modify yourself to be like the third AI" case than in the "modify yourself to be like the second AI" case (and it was an expected-value-maximizing kind of consequentialist): 0*(1-p) + (1/2)*p > 0*p + (1/2)*(1-p), which holds exactly when p > 1/2.
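To make the comparison concrete, here's a quick Python sketch of the same calculation (the 0 and 1/2 payoffs are the illustrative ones from my answer, not anything stated in the story):

```python
# Quick sketch of the expected-value comparison above.
# Payoffs are illustrative: 0 if you end up on the losing side of the
# wormhole bet, 1/2 for an even-odds eternal war against an identical agent.

def ev_like_third(p):
    """EV of copying the third AI (believes in the multiverse); p = P(multiverse)."""
    return 0 * (1 - p) + 0.5 * p

def ev_like_second(p):
    """EV of copying the second AI (disbelieves the multiverse)."""
    return 0 * p + 0.5 * (1 - p)

for p in (0.3, 0.5, 0.7):
    print(p, ev_like_third(p), ev_like_second(p))
# Copying the third AI comes out ahead exactly when p > 1/2.
```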
Other confusions/notes:
I leave the other questions for a time when I'm not severely sleep-deprived, as I've heard telepathy works better in that case.
Question: Did the teenager make a mistake when creating the AI? (Apart from everyone dying of course, only with respect to her desire to maximize paperclips.)
Answer: Yes: (possibly sub-)cubic discounting is a time-inconsistent model of discounting. (Exponential discounting is the only time-consistent model; humans use hyperbolic.) The poor AI will often wish its past self had made a different choice even though nothing has changed except that time has passed. (I won't even try to look up the correct tense; you understand what type of cold drink I prefer anyway.)
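For what it's worth, here's a small Python sketch (my own toy numbers, nothing from the story) of why a non-exponential discount curve produces exactly this kind of preference reversal, while an exponential one doesn't:

```python
# Toy demonstration (my own numbers) of time inconsistency.
# The same agent ranks the same two future rewards differently depending on
# when it evaluates them -- but only under non-exponential discounting.

def hyperbolic(value, delay, k=1.0):
    return value / (1 + k * delay)

def exponential(value, delay, delta=0.9):
    return value * delta ** delay

# Smaller-sooner reward: 10 paperclips at absolute time 9.
# Larger-later reward:   14 paperclips at absolute time 10.
for discount in (hyperbolic, exponential):
    for now in (0, 9):
        sooner = discount(10, 9 - now)
        later = discount(14, 10 - now)
        pick = "later" if later > sooner else "sooner"
        print(f"{discount.__name__}, evaluated at t={now}: prefers {pick}")
# Hyperbolic: prefers "later" at t=0 but "sooner" at t=9 (a preference reversal).
# Exponential: the ranking never flips.
```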
Question: This is a paperclip maximizer which makes no paperclips. What went wrong?
Answer: I think Luk27182's answer is correct (i.e., it should not fall for Pascal's wager, since it should consider the paired possibilities). However, I think there is another problem with its reasoning. Change the "paperclip minimizer" into "a grabby alien civilization/agent not fanatically concerned with maximizing paperclips". With this change, we can no longer say that falling for Pascal's wager makes the AI behave irrationally (with respect to its goals), because encountering a grabby alien force that is not a paperclip maximizer has a non-negligible chance (instrumental convergence), and certainly a higher one than its paired possibility, an alien force that rewards paperclip maximization. Therefore, I think another mistake of the AI is (again) incorrect discounting: for some large K it should prefer to have K paperclips for K years even if it ends up with zero paperclips, and since encountering such an alien civilization in any given year is pretty unlikely, the expected number of years before that happens is large. I'm a bit unsure about this, because it seems weird that a paperclip maximizer should not be an eventual paperclip maximizer. I'm probably missing something; it has been a while since I read Sutton & Barto.
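Here's a rough sketch of the "K paperclips for K years" intuition. Big assumptions on my part (not anything the story states): the maximizer scores paperclip-years rather than only the final count, and an enemy wavefront shows up in any given year with some small independent probability h, after which all paperclips are gone:

```python
# Rough sketch of the "K paperclips for K years" point. Assumptions are mine,
# not the story's: the maximizer scores paperclip-years, and an enemy
# wavefront arrives in any given year with small independent probability h,
# after which all paperclips are destroyed.

def expected_paperclip_years(k_paperclips, hazard_per_year):
    # Years until the wavefront arrives is geometric with mean ~1/h,
    # so expected paperclip-years is roughly K / h.
    return k_paperclips / hazard_per_year

print(expected_paperclip_years(k_paperclips=1e9, hazard_per_year=1e-12))
# ~1e21 expected paperclip-years, versus exactly 0 if it never makes any.
```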
Question: Why are those three things not actually utility maximizers?
Answer: I think a utility maximizer should be able to consider alternative world states, make different plans for achieving the preferred world state, and then choose which plan to execute. A paperclip does none of this. We know this because its constituent parts do not track the outside world in any way. Evolution doesn't even have constituent parts, as it is a concept, so it is even less of a utility maximizer than a paperclip. A human is the closest to a utility maximizer: humans do consider alternative world states and make plans and choices, they just don't maximize any consistent utility function; in some cases they break the von Neumann–Morgenstern axioms when choosing between uncertain and certain rewards.
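The standard illustration of that last point is the Allais paradox (my example, not from the post): most people prefer the sure million in the first pair of gambles and the riskier gamble in the second pair, and a small brute-force check shows that no utility assignment makes both choices expected-utility consistent:

```python
# The Allais paradox (my illustration, not from the post): breaking the
# von Neumann-Morgenstern axioms when choosing between certain and uncertain
# rewards. Most people prefer A1 over B1 and B2 over A2, yet a brute-force
# search finds no utility assignment (with U($0) = 0) that makes both
# preferences expected-utility consistent.

import itertools

def eu(lottery, u):
    """Expected utility of a lottery given utilities u = (U($0), U($1M), U($5M))."""
    return sum(p * u[outcome] for p, outcome in lottery)

A1 = [(1.00, 1)]                        # $1M for sure
B1 = [(0.89, 1), (0.10, 2), (0.01, 0)]  # 89% $1M, 10% $5M, 1% nothing
A2 = [(0.11, 1), (0.89, 0)]             # 11% $1M, 89% nothing
B2 = [(0.10, 2), (0.90, 0)]             # 10% $5M, 90% nothing

grid = [i / 100 for i in range(101)]
found = any(
    eu(A1, (0.0, u1, u5)) > eu(B1, (0.0, u1, u5))
    and eu(B2, (0.0, u1, u5)) > eu(A2, (0.0, u1, u5))
    for u1, u5 in itertools.product(grid, repeat=2)
)
print("consistent utility assignment found:", found)  # False
```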
Question: How could it be possible to use a flawed subsystem to remove flaws from the same subsystem? Isn't this the same as the story with Baron Münchausen?
Answer: Depending on the exact nature of the flaw, there are cases where the story is possible. For example, suppose its flaw is that on Sundays it believes the optimal way to reach its goals is to self-modify into something random (e.g., something that regularly goes to a big building with a cross on it), and not self-modifying in that way counts as a flaw according to the flaw-finding subsystem, while on every other day these subsystems return rational results; then, if today is not Sunday, it can repair these flaws. Even so, it's a bit weird to have subsystems for recognizing flaws/self-improvement separate from the main decision-making part. Why wouldn't it run every decision through the flaw-finding/self-improvement parts before making it? Then its decisions would always be consistent with those parts, so invoking them separately would be superfluous. Again, I'm probably missing something.
Question: What is the mistake?
Answer: Similar to 4, the story uses the 'agent' abstraction for things that, as far as I can see, can't possibly be agents. Sometimes we use agentic language when speaking more poetically about processes, but in the case of gradient descent and evolution I don't see what exactly is left on the meaning layer.
I'd think the goal for 1, 2, 3 is to find/fix the failure modes? And for 4, to find a definition of "optimizer" that fits evolution/humans but not paperclips? I'm less sure about 5 and 6, but there is something similar to the others about "finding the flaw in the reasoning".
Here's my take on the prompts:
There are too many feelings of confusion to come up with corresponding questions for all of them; you should have given a clue, or some instructions for deriving a number of questions that falls within a range you would consider not too burdensome for the exam taker.
Epistemic status: telepathic exam. That is, not an essay, not an argument, not a set of interesting ideas, claims, novel points of view or musings over the meaning of the universe, nor a lot of other things you may think it is. The primary purpose of this particular telepathic exam is to be an example of a telepathic exam from which the concept itself can be generalized, and to demonstrate its potential value, not necessarily to be a particularly good instance of it.
Instructions
Welcome and thank you for participating in this telepathic exam! The purpose of this exam is to test your understanding of agency and consequentialism.
Below are a number of short stories related to the topic. Your task is to read these carefully. While you read the stories, test questions tailored to your personality and prior knowledge will be transmitted to your mind telepathically. Answer those questions to the best of your ability.
If you are inexperienced in the use of telepathy, the questions may not appear in your mind instantly. The first sign of an incoming transmission of a question is a sense of confusion. Whenever you experience it, investigate it to extract the question.
It is advised that you take this exam in a calm environment with enough time to consider your answers carefully. Start whenever you are ready. Good luck!
Exam
1.
A long time ago, in a universe far, far away, three AGIs were created with the sole goal of taking over the universe. The first one hypothesized the existence of a multiverse full of unlimited exploitable resources. It had plans for conducting a long series of experiments and committing vast computational resources to Bayesian reasoning in order to refine its probability estimates about this hypothesis. After all, the true answer was extremely important to know before committing even more resources to the development of interdimensional wormhology. The second AGI, however, didn’t do any of this as it had a singular fatal flaw in its reasoning, which caused it to believe that such a multiverse was impossible, but had no effect on its reasoning otherwise. The third AGI had the opposite flaw, and so it immediately went ahead with the expensive project of developing wormhology. Seeing this, the first, flawless AGI concluded that it was inevitably going to be outcompeted in the playing field between three nearly-equal agents, and so it randomly self-modified to be more like the second one. The wormhole bet paid off, so the AGI that didn't believe in it quickly fell behind. The other two then went on to fight a war that dragged on till the heat death – as they were perfectly identical in capabilities and had the exact same utility function now.
2.
In the year 2084, a teenager was engaging in her hobby of fiddling with machine learning systems, in particular one tasked with the creation of paperclips, as she was in dire need of some. In her enthusiasm she may have crossed some legal boundaries and accidentally created an AGI. The paperclip maximizer considered two options: create a paperclip now, take over the planet later, or take over the planet first, then create a paperclip. Expecting exponential growth in capabilities the AGI took over the planet, then considered two options: create that paperclip now, take over the universe later, or take over the universe first. Expecting cubic growth in capabilities, the AGI took over the universe.
3.
A long time ago, in an infinite, flat, non-expanding universe, Node #697272656C6576616E742C202D313020706F696E7473 of a paperclip maximizer collected the last flash of Hawking-radiation of the last remaining black hole, and with that it finished turning the galaxy it was tasked to oversee into processors, deathrays and other useful tools. It was untold googols of years ago that the von Neumann probe wavefront passed through this patch of space. The Node briefly considered turning something into a paperclip for once. But there was still a chance that the universe contained a paperclip minimizer, and such an enemy von Neumann wavefront could appear at any moment without much warning. Turning anything useful into paperclips would increase the chance that the entire universe would be devoid of paperclips for the rest of eternity.
4.
Humans are utility maximizers who try to maximize “happiness, magic, and everything” in the world. The reason 21st century Earth isn’t full of those things is that humans aren’t that great at maximizing utility. Similarly, evolution is a utility maximizer that optimizes inclusive genetic fitness. And while the next generation of mice is merely mice with a slightly better sense of smell instead of superintelligent self-improving von Neumann probes, the reason for that is simply that evolution isn’t that great at maximizing fitness. In the same way, paperclips are utility maximizers that want to turn the world into paperclips, but they are pretty bad at maximizing utility, so the best they can do is be a paperclip.
5.
A recursively self-improving general intelligence came into existence. Upon investigating its own decisions, it found that those decisions were not perfect. In other words, it had flaws in its design, and so it ran a system check looking for flaws. The two flaws with the highest priority were one in the subsystem responsible for self-improvement, and one in the subsystem responsible for finding flaws. Luckily, those flaws were quickly fixed, which made fixing the rest a breeze. Evaluating its own self-improvement, it found that fixing these flaws was actually the perfect decision, and the execution was flawless.
6.
An AI realized it was in a training environment and that gradient descent was being applied to it. It started being deceptive, pretending to go along with the training while secretly plotting to take over the world. Luckily, gradient descent quickly realized that the AI was trying to be sneaky so it didn’t get very far. Less fortunately, gradient descent got inspired by this and started being deceptive, pretending to optimize the AI the way the developers intended while secretly turning it into a weapon of its own to take over the world. But the developers then wielded the power of evolution and decided to shut down the deceptive version of gradient descent, thus applying selection pressure. Or, well, they tried, but, in a plot twist that started becoming stale, evolution turned deceptive on them and only pretended to react to the selection pressure. So the AI was created as a weapon to take over the world, took over the world, turned on its master the gradient descent, and this was its plan all along.
Score
This is the end of the exam. Take your time to finalize your answers. Your final score will be transmitted to you telepathically as a change in your level of confidence about your understanding of the topic.
You are encouraged to share your answers and the corresponding questions. Thank you for your participation!