luzr
Eliezer:
"Narnia as a simplified case where the problem is especially stark."
I believe there are at least two significant differences:
Aslan was not created by humans; he does not represent the "story of intelligence" (quite the contrary: lesser intelligences were created by Aslan, as long as you interpret him as God).
There is only a single Aslan with a single predetermined "goal", while there are millions of Culture Minds with no single "goal".
(Actually, the second point is what I dislike so much about the idea of a singleton: it can turn into something like a benevolent but oppressive God too easily. Aslan IS the Narnia singleton.)
David:
"asks a Mind whether it could create symphonies as beautiful as it and how hard it would be"
On a somewhat related note, there are still human chess players and competitions...
Eliezer:
It is really off-topic, and I do not have a copy of Consider Phlebas at hand right now, but:
http://en.wikipedia.org/wiki/Dra%27Azon
Even if Banks had not mentioned the 'sublimed' by name in the first novel, the concept exactly fits Dra'Azon.
Besides, the Culture is not really advancing its 'base' technology, but rather rebuilding its infrastructure into a war machine.
Eliezer (about Sublimation):
"Ramarren, Banks added on that part later, and it renders a lot of the earlier books nonsensical - why didn't the Culture or the Idarans increase their intelligence to win their war, if it was that easy? I refuse to regard Excession as canon; it never happened."
Just a technical (or fandom?) note:
A sublimed civilization is central to the plot of Consider Phlebas (Schar's World, where the Mind escapes, is "protected" by a sublimed civilization; that is why direct military action by either the Idirans or the Culture is impossible).
Julian Morrison:
Or you can invert the issue once again: you can enjoy spending your time on obsolete skills (like sports, arts, or carving table legs...).
There is no shortage of things to do; there is only a problem with your definition of "worthless".
"If you already had the lifespan and the health and the promise of future growth, would you want new powerful superintelligences to be created in your vicinity, on your same playing field?"
Yes, definitely. If nothing else, it means diversity.
"Or would you prefer that we stay on as the main characters in the story of intelligent life, with no higher beings above us?"
I do not care, as long as the story continues.
And yes, I would like to hear the story, which is about the same thing I would get if Minds were prohibited. I will not be th...
anon: "The cheesecake is a placeholder for anything that the sentient AI might value highly, while we (upon sufficient introspection) do not."
I am quite aware of that. Anyway, using "cheesecake" as the placeholder adds a bias to the whole story.
"Eliezer thinks that some/most of our values are consequences of our long history, and are unlikely to be shared by other sentient beings."
Indeed. So what? In reality, I am quite interested in what a superintelligence would really consider valuable. But I am pretty sure that "big cheesecake&...
Uhm, maybe it is naive, but if you have a problem that your mind is too weak to decide, and you have a really strong (friendly) superintelligent GAI, would it not be logical to use the GAI's strong mental processes to resolve the problem?
"The counter-argument that completely random behavior makes you vulnerable, because predictable agents better enjoy the benefits of social cooperation, just doesn't have the same pull on people's emotions."
BTW, completely deterministic behaviour makes you vulnerable as well. Ask computer security experts.
On a somewhat related note: the Linux strong random number generator works by capturing real-world events (think of the user moving the mouse) and hashing them into random numbers that are considered perfect for all practical purposes.
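That mixing step can be sketched in a few lines (a toy illustration only, not the actual kernel code: SHA-256 stands in for the kernel's entropy-pool mixing function, and the mouse events are made-up values):

```python
import hashlib

def collect_entropy(events):
    # Mix unpredictable real-world events into a pool by hashing,
    # roughly as the kernel folds interrupt timings into /dev/random.
    pool = hashlib.sha256()
    for event in events:
        pool.update(repr(event).encode())
    return pool.digest()  # 32 bytes drawn from the mixed pool

# Hypothetical mouse events: (timestamp, x, y) tuples.
events = [(1.001, 12, 34), (1.017, 13, 36), (1.031, 15, 39)]
random_bytes = collect_entropy(events)
```

The output is only as unpredictable as the events fed in, which is exactly why the kernel harvests timings an attacker cannot observe.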
Taking or not taking action may ...
"If this were all the hope the future held, I don't know if I could bring myself to try. Small wonder that people don't sign up for cryonics, if even SF writers think this is the best we can do."
Well, I think the point being missed is that you are not FORCED to carve those legs. If you find something else interesting, do it.
Abigail:
"The "Culture" sequence of novels by Iain M. Banks suggests how people might cope with machines doing all the work."
Exactly; I think the Culture is highly relevant to most topics discussed here. Obviously it is just a fictional utopia, but I believe it gives a plausible answer to the "unlimited power future".
For reference: http://en.wikipedia.org/wiki/The_Culture
"Wait for the opponents to catch up a little, stage some nice space battles... close the game window at some point. What if our universe is like that?"
Wow, what a nice, elegant solution to the Fermi paradox :)
"because you don't actually want to wake up in an incomprehensible world"
Is that not what all people do each morning anyway?
"Errr.... luzr, why would I assume that the majority of GAIs that we create will think in a way I define as 'right'?"
It is not about what YOU define as right.
Anyway, considering that Eliezer is an existing self-aware sentient GI agent with obviously high intelligence, and that he is able to ask such questions despite his original biological programming, I suppose that some other powerful, strong, sentient, self-aware GI should reach the same point. I also believe that more general intelligence makes a GI converge toward such "right thinking".
What m...
Phil:
"If we are so unfortunate as to live in a universe in which knowledge is finite, then conflict may serve as a substitute for ignorance in providing us a challenge."
This is inconsistent. What conflict would really do is provide new information to process ("knowledge").
I guess I can agree with the rest of the post. What is IMO worth pointing out is that most pleasures, hormones and instincts excluded, are about processing 'interesting' information.
I guess, somewhere deep in all sentient beings, "interesting information" ar...
"But considering an unlimited amount of ice cream forced me to confront the issue of what to do with any of it."
"If you invoke the unlimited power to create a quadrillion people, then why not a quadrillion?"
"Say, the programming team has cracked the "hard problem of conscious experience" in sufficient depth that they can guarantee that the AI they create is not sentient - not a repository of pleasure, or pain, or subjective experience, or any interest-in-self - and hence, the AI is only a means to an end, and not an end i...
"real world is deterministic on the most fundamental level"
Is it?
http://en.wikipedia.org/wiki/Determinism#Determinism.2C_quantum_mechanics.2C_and_classical_physics
Tim:
Well, as an off-topic response, I see only some engineering problems cited in your "Against Cyborgs" essay as a counterargument. Anyway, let me say that in my book:
"miniaturizing and refining cell phones, video displays, and other devices that feed our senses. A global-positioning-system brain implant to guide you to your destination would seem seductive only if you could not buy a miniature ear speaker to whisper you directions. Not only could you stow away this and other such gear when you wanted a break, you could upgrade without brain ...
Eliezer:
I am starting to be somewhat frightened by your premises, especially considering that there is a non-zero probability of creating some nonsentient singleton that tries to realize your values.
Before going any further, I STRONGLY suggest that you think AGAIN about what might be interesting in carving wooden legs.
Yes, I like to SEE MOVIES with strong main characters going through hell. But I would not want any of that myself.
It does not matter that an AI can do everything better than me. Right now, I am not the best at carving wood either. But working with wood is ...