This is horrifying to me and I doubt that real posthumans exploring Fun Space will actually do this, just like they wouldn't eat every meal possible in order to explore Food Space. It's an unsophisticated human fetish.
I don't think it is any more horrifying than being stuck in one reality, treasuring memories. It is certainly less horrifying than our current human existence, with its prospects of death, suffering, boredom, heartache, etc. Your fear seems to be simply about something different from what you're used to.
The Waker - a new mode of existence
This short text describes the idea of a Waker, a new way of experiencing reality / consciousness / subjectivity / mode of existence. Sadly, it cannot be attained without advanced uploading technology, that is, one which allows far-reaching manipulation of the mind. Despite that, the author doesn't find it premature to start planning retirement as a posthuman.
A Waker is based on the experience of waking up from a dream: slowly we realize the unreality of the world we were just in, we notice the discrepancies between the dreamscape and "the real world": we no longer attend high school, one of our grandparents passed away a few years ago, we work at a different place, etc. Despite the fact that the world we wake up in is new and different, we quickly remember who we are, what we do, who our friends are, and what that world looks like, and within a few seconds we have full knowledge of that world and take it to be the real world, the place we have been living in since birth. Meanwhile, the dream world becomes a strange story for which we typically feel some kind of sentiment. Sometimes we're glad to escape that reality, sometimes we're sad; nevertheless, we mostly treat it as something of little importance: not a real world we lost forever, but rather a silly, made-up one.
A Waker's subjective experience would differ from ours in this way: she would always have the choice of waking up from her current reality. As she did so, she would find herself in a bed, or a chair, or lying on the grass, having just woken up. She would remember the world she was just in, probably better than we usually remember our dreams; nevertheless, she would see it as a dream and wouldn't feel a strong connection to that reality. At the same time, she would start "remembering" the world she had just woken up in. Unlike in our case, this would be a world she had never actually lived in; however, she would acquire full knowledge of it and a sense of having spent all her life there. Despite all that, she would remain fully aware of being a Waker. Her connection to the world she lives in would be different from ours and, at first glance, somewhat paradoxical. She would feel how real it is, she would find it more real than any of the "dreams" she has had, she would be invested in life goals and relationships with other people, she would be capable of real love. And yet she would be fully able to wake up and enter a new world, where her life goals and relationships might be replaced by ones that feel exactly as real and important. There is an air of openness, an ease of giving away all you know, completely alien to us early-21st-century people.
The worlds a Waker wakes up in would have levels of discrepancy similar to those of our dreams. Most people would stay in place, and time and the Waker's age would remain roughly continuous. She would be able to sleep and dream regular dreams, after which she would wake back into the same world she fell asleep in. What is important is that a Waker cannot get back to a dream world. She can only move forward, just as we do, and unlike the consciousnesses in Hub Realities: posthumans who can choose the reality they live in.
I hope you enjoyed it, and that some of you will decide to fork into the Waker mode of existence when posthumanism arrives. I'd be very glad if anyone with other ideas for novel subjectivities would be willing to share them in the comments.
Yawn, it's been a long day - time to Wake up.
Note that (2) and (3) are formal tasks where the optimizer has access to the full set of rules. My understanding is that a lot of chips designed by optimizing a simulator have been pretty lousy in the real world, either being complete failures that only worked in the simulator because of bugs, or being real solutions, but not being usefully robust.
Actually, for (2) the optimizer didn't know the set of rules; it played the game as if it were a normal player, controlling only the keyboard. It in fact started exploiting "bugs" of which its creators were unaware. (E.g. in Super Mario, Mario can stomp enemies in mid-air, from below, as long as he is already falling at the moment of collision.)
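To make the phenomenon concrete, here is a toy sketch (my own construction, not the system from the linked video): a hand-written "game" with a deliberately planted bug, and a blind random search over input sequences that knows nothing about the rules, only the final score. The search discovers the exploit rather than the intended walk-to-the-goal solution.

```python
import random

def play(moves):
    """Toy platformer: walk right from x=0 to x>=10; reaching the goal
    in fewer steps scores higher. Planted 'bug' the designer didn't
    intend: two jumps in a row teleport the player 5 units forward."""
    x, prev = 0, None
    for step, m in enumerate(moves, 1):
        if m == "right":
            x += 1
        elif m == "jump" and prev == "jump":
            x += 5  # unintended interaction, i.e. the exploitable bug
        prev = m
        if x >= 10:
            return 100 - step  # reached the goal; reward speed
    return x  # never reached the goal

# Blind random search over keyboard inputs; it only sees scores.
random.seed(0)
best, best_score = None, -1
for _ in range(5000):
    cand = [random.choice(["right", "jump"])
            for _ in range(random.randint(1, 12))]
    s = play(cand)
    if s > best_score:
        best, best_score = cand, s

# The intended solution ("right" * 10) scores 90; the search instead
# finds the triple-jump exploit, scoring 97.
print(best_score, best[:3])
```

The optimizer never reads the rules; it simply climbs the score gradient, and the bug happens to be the highest-scoring path, mirroring how the NES player stumbled onto mechanics its creator never anticipated.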
Slime mold can be used to map subway routes.
Edit: Markets can also be seen as a non-human optimizing actor, even if the smallest parts are human.
I am more interested in optimizations, where an agent finds a solution vastly different from what humans would come up with, somehow "cheating" or "hacking" the problem.
Slime mold and soap bubbles produce results quite similar to those of human planners. Anyhow, it would be hard to strongly outperform humans (that is, to find a surprising solution) at problems of the minimal-tree type; our visual cortices are quite specialized for this kind of task.
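For a sense of how large the gap can be, here is a small worked example (my own illustration, not from the original post): for four points at the corners of a unit square, the best tree a planner gets by connecting only the given points (a minimum spanning tree) has length 3, while a soap film, free to add junction points, relaxes to the Steiner tree of length 1 + √3 ≈ 2.73, about 9% shorter.

```python
from itertools import combinations
from math import dist, sqrt

# Four terminals at the corners of a unit square.
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]

# Minimum spanning tree length via Kruskal's algorithm (union-find).
parent = list(range(len(pts)))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path compression
        i = parent[i]
    return i

mst = 0.0
for a, b in sorted(combinations(range(len(pts)), 2),
                   key=lambda e: dist(pts[e[0]], pts[e[1]])):
    ra, rb = find(a), find(b)
    if ra != rb:          # edge joins two components: keep it
        parent[ra] = rb
        mst += dist(pts[a], pts[b])

# A soap film may add extra junction (Steiner) points; for the unit
# square the optimal tree has two such points and length 1 + sqrt(3).
steiner = 1 + sqrt(3)
print(round(mst, 3), round(steiner, 3))  # 3.0 2.732
```

The improvement comes entirely from the soap film's freedom to place junctions humans' direct point-to-point intuition doesn't consider, which is the mild version of the "alien solution" phenomenon discussed here.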
Surprising examples of non-human optimization
I am very much interested in examples of non-human optimization processes producing working but surprising solutions. What is most fascinating is how they show that the human approach is often not the only one, and that much more alien solutions can be found which humans are simply not capable of conceiving. It is very probable that more and more such solutions will arise and will slowly make a big part of technology incomprehensible to humans.
I present the following examples and ask for links to more in the comments:
1. Nick Bostrom describes efforts in evolving circuits to produce an oscillator and a frequency discriminator, which yielded very unorthodox designs:
http://www.damninteresting.com/on-the-origin-of-circuits/
http://homepage.ntlworld.com/r.stow1/jb/publications/Bird_CEC2002.pdf (IV. B. Oscillator Experiments; also C. and D. in that section)
2. An algorithm learns to play NES games with some eerie strategies:
https://youtu.be/qXXZLoq2zFc?t=361 (description by Vsauce)
http://hackaday.com/2013/04/14/teaching-a-computer-to-play-mario-seemingly-through-voodoo/ (more info)
3. Eurisko finds an unexpected way of winning the Traveller TCS strategy game:
http://aliciapatterson.org/stories/eurisko-computer-mind-its-own
http://www.therpgsite.com/showthread.php?t=14095
Let's add here that most scientists treat conferences as a form of vacation funded by academia or grant money. So there is a strong bias toward finding reasons for their necessity and/or benefits.
Well, glitch or not, I'm glad to have it; I would not want to be an unconscious automaton! As Socrates said, "The life which is unexamined is not worth living."
However, it remains to be seen whether consciousness is an automatic by-product of general intelligence. It could be the case that consciousness is an evolved trait of organic creatures with an implicit, inexact utility function. Perhaps a creature with an evolved sense of self and a desire for that self to continue to exist is more likely to produce offspring than one with no such sense of self. If this is the reason that we are conscious, then there is no reason to believe that an AGI will be conscious.
"I would not want to be an unconscious automaton!"
I strongly doubt that such a sentence bears any meaning.
I agree; the OP is anthropomorphic; in fact, there is no reason to assume that an AGI paperclip maximizer would think like we do. In fact, in Superintelligence, Bostrom avoids any assumption that an AGI would have subjective conscious experiences. An unconscious AGI paperclip maximizer would presumably not be troubled by the fact that a paperclip is just an ill-defined configuration of matter, or by anything else, for that matter.
Well, humans have existentialism despite its having no utility. It just seems like a glitch you end up with once your consciousness/intelligence reaches a certain level. (My reasoning is this: high intelligence requires analysing many "points of view", many counterfactuals; technically, these end up internalized to some degree.) A human trying to excel at his general intelligence, a process that lets him reproduce better, ends up wondering about the meaning of life. That could in turn drastically decrease his willingness to reproduce, but it is overridden by other imperatives. In the same way, I believe an AGI would have subjective conscious experiences, as a form of glitch of general intelligence.
First, everything in any practical simulation is always and everywhere an approximation. An exact method is an enormously stupid idea: a huge waste of resources.
We haven't seen anything like evidence that our laws of physics are only approximations at all. If we're in a simulation, this implies that with high probability either a) the laws of physics in the parent universe are not our own laws of physics (in which case the entire idea of ancestor simulations fails) or b) they are engaging in an extremely detailed simulation.
The optimal techniques only simulate down to the quantum level when a simulated scientist/observer actually does a quantum experiment. In an optimal simulated world, stuff literally only exists to the extent observers are observing or thinking about it.
And our simulating entities would be able to tell that someone was doing a deliberate experiment how?
The limits of optimal approximation appear to be linear in observer complexity - using output sensitive algorithms.
I'm not sure what you mean by this. Can you expand?
The upshot of these results is that one cannot make a detailed simulation of an object without using at least as many resources as the object itself.
Ultra-detailed, accurate simulations are only high-value for quantum-level phenomena. Once you have a good model of the quantum scale, you can percolate those results up to improve your nano-scale models, then your micro-scale models, then your millimeter-scale models, and so on.
Only up to a point. It is going to be very difficult, for example, to percolate simulations up from the micro to the millimeter scale for many issues, and the less detail in a simulation, the more likely it is that someone notices a statistical artifact in weakly simulated data.
We can already simulate entire planets using the tiny resources of today's machines. I myself created several SOTA real-time planetary renderers back in the day.
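A minimal sketch of what makes such renderers (and, by analogy, output-sensitive simulation) cheap; this is my own illustrative construction, not code from any actual renderer. A quadtree cell is refined only while its apparent size to the observer exceeds a threshold, so work tracks what is observed rather than the size of the world:

```python
def lod_cells(cx, cy, size, ox, oy, threshold=0.1, max_depth=6):
    """Return (center_x, center_y, size) cells to simulate in detail.
    A cell is split only while its apparent size (size divided by the
    distance to the observer at (ox, oy)) exceeds `threshold`."""
    d = max(((cx - ox) ** 2 + (cy - oy) ** 2) ** 0.5, 1e-9)
    if max_depth == 0 or size / d < threshold:
        return [(cx, cy, size)]          # coarse enough: stop here
    h = size / 4  # offset of the four child-cell centers
    out = []
    for dx in (-h, h):
        for dy in (-h, h):
            out += lod_cells(cx + dx, cy + dy, size / 2,
                             ox, oy, threshold, max_depth - 1)
    return out

near = lod_cells(0, 0, 1024, 0, 0)       # observer inside the region
far = lod_cells(0, 0, 1024, 100000, 0)   # observer far away
print(len(near), len(far))  # many small cells near, one cell far
```

The same region costs hundreds of fine cells when the observer is inside it and a single coarse cell when she is far away, which is the sense in which the cost of an observer-centered simulation scales with the observer rather than the world.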
Again, the statistical artifact problem comes up, especially when there are extremely subtle issues going on, such as the different (potential) behavior of neutrinos.
Your basic point that I may be overestimating the difficulty of simulations may be valid; since simulations don't explain the Great Filter for other reasons I discussed, this causes an update in the direction of us being in a simulation but doesn't really help explain the Great Filter much at all.
If we're in a simulation, this implies that with high probability either a) the laws of physics in the parent universe are not our own laws of physics (in which case the entire idea of ancestor simulations fails)
It doesn't have to be a simulation of ancestors; we may be an example of any civilisation, life, etc. While our laws of physics seem complex and weird (for the macroscopic effects they generate), they may actually be very primitive in comparison to the parent universe's physics. We cannot possibly estimate the computational power of the parent universe's computers.
What happens to all the other people in the worlds the Waker leaves? I think I see five possibilities, all of which seem dreadful in one way or another.
Any answer that begins "They were never real in the first place" (including those last three, but also any other possibilities I've overlooked) is open to the more general objection that a Waker's existence is (and is known to be) unreal and inauthentic and doesn't involve any interactions with actual other people.
[EDITED to add: Oh, I thought of another two possibilities but they're also awful:]
There are, of course, many possible variants. The one I focus on is largely solipsistic, where all the other people are generated by an AI. Keep in mind that the AI needs to fully emulate only a handful of personas, and they are largely recycled in the transition to a new world. (Option 2, then.)
I can understand your moral reservations; we should, however, keep the distinction between a real instantiation and an AI's persona. Imagine the reality-generating AI as a skilful actor and writer. It generates a great number of personas with different stories, personalities and apparent internal subjectivity. When you read a good book, you usually cannot tell whether the events and people in it are real or made up; the same goes for a skilful improv actor: you cannot tell whether you are seeing a real person or just a persona. In that way they all pass the Turing test. Yet you wouldn't say a writer kills a real person when he stops writing about some fictional character, or that an actor kills a real person when she stops acting.
Of course, you may argue that it makes the Waker's life meaningless if she is surrounded by pretenders. But that seems silly; her relationships with other people are the same as yours.