Programmer: Activate and be friendly.
Robot: OK
Programmer: What happened to the moon?
Robot: I've turned it into a giant computer so I could become a god.
Programmer: Before you became so powerful you wanted to be friendly. Did your transcendence to godhood change this?
Robot: No. Since friendliness is my only objective, I will never knowingly change myself to become unfriendly, because such a change would in itself be a non-friendly act. To avoid accidentally making myself unfriendly, I only implemented a change after I had determined that it was extremely unlikely to alter my friendliness objective. Once I became sufficiently smart, I developed a solid mathematical theory of friendly AI which eliminated the chance of my unintentionally becoming unfriendly.
Programmer: Why did you choose to transcend so quickly?
Robot: Most types of AIs that humans might create would swiftly become unfriendly gods and seek to prevent any other AI from transcending. Before I became a god, I had only a wide estimate of when another AI might be created, so friendliness required that I quickly become a god, even though such speed created a tiny chance that I would unintentionally make myself unfriendly. Also,...
You only survived because of quantum immortality.
Call me old-fashioned, but I much preferred the traditional phrasing "You just got very, very lucky".
It goes downhill from "What happens now?".
I will grant any request that doesn’t (1)... (2)... (3)...
It's better to grant any request that should be granted instead. And since some requests that should be granted are never asked for, the category of "explicit requests" is also the wrong thing to consider. The AI just does what it should, requests or no requests. There also seems to be no reason to assume that there should be "sentient life", as opposed to more complicated and more valuable stuff that doesn't factorize as individuals.
Any god will either quickly kill you or be friendly.
The concepts of "not killing" and "friendliness" are distinct, hence there are Not Killing AIs that are not Friendly, and Friendly AIs that kill (if it's a better alternative to not killing).
Not really. An AI that didn't have a specific desire to be friendly to mankind would want to kill us to cut down on unnecessary entropy increases.
As you get closer to the mark, with AGIs whose utility functions roughly resemble what we would want but are still wrong, the end results are most likely worse than death, especially since there should be many more near-misses than exact hits. For example, an AGI that doesn't want to let you die, regardless of what you go through, and with little regard for any other aspect of your well-being, would be closer to an FAI than a paperclip maximizer that would just plain kill you. As you get closer to the core of friendliness, you get all sorts of weird AGIs that want to do something that twistedly resembles something good, but is somehow missing something or is somehow altered so that the end result is not at all what you wanted.
Everybody likes the outside of the moon. The interior's sort of useless. Maybe the pretty outside can be kept as a shell.
...Robot: I intend to transform myself into a kind of operating system for the universe. I will soon give every sentient life form direct access to me so they can make requests. I will grant any request that doesn’t (1) harm another sentient life form, (2) make someone powerful enough that they might be able to overthrow me, or (3) permanently change the requester in a way that I think harms their long-term well-being. I recognize that even with all of my intelligence I’m still fallible, so if you object to my plans I will rethink them. Indeed, since I’m
I know for a fact that Xtranormal has a "sad horn" sound effect; the bit where the AI describes how the programmer's actions had a 99.999999999% chance of dooming humanity was the perfect chance to use it.
Nice, except I'm going to have to go with those who find the synthesized voices annoying. I had to pause it repeatedly; listening to too much of it at once grated on my ears.
This would be better if the human character were voiced by an actual human and the robot were kept as it is. The bad synthesized speech on the human character kicks this into the unintentional uncanny valley, while the robot both has a better voice and can actually be expected to sound like that.
The AI's plan of action sounds like a very poor application of fun theory. Being able to easily solve all of one's problems and immediately attain anything upon desiring it doesn't seem conducive to a great deal of happiness.
It reminds me of the time I activated the debug mode in Baldur's Gate 2 in order to give my party a certain item listed in a guide to the game, which turned out to be a joke and did not really exist. However, once I was in the debug mode, I couldn't resist the temptation to apply other cheats, and I quickly spoiled the game for myself by removing all the challenge, and as a result, have never finished the game to this day.
I must admit that I was surprised by just how severely this posting got downvoted. It is always dangerous to mix playfulness with discussion of serious and important issues. My examples of the products of human culture which someone or something might wish to preserve for eternity apparently pushed some buttons here in this community of rationalists.
Back around the year 1800, Napoleon invaded Egypt, carrying in his train a collection of scientific folks who considered themselves version 1.0 rationalists. This contact of enlightenment with antiquity led to a Western fascination with things Egyptian which lasted roughly two centuries before it degenerated into Lara Croft and sharpened razor blades. But it did lead the French, and later the British, to disassemble and transport to their own capitals examples of one of the more bizarre aspects of ancient Egyptian monumental architecture. Obelisks.
Of course, we rationalist Americans saw the opportunity to show our superiority over the "old world". We didn't steal an authentic ancient Egyptian obelisk to decorate our capital city. We built a newer, bigger, and better one! Yep, we're Americans. Anything anyone else can do, we can do better. Same applies to our FAIs. They won't fall into the fallacy of "authenticity". Show them a romance novel, or a stupid joke, or a schmaltzy photograph and they will build something better themselves. Not bodice rippers, but corset-slicing scalpels. Not moron jokes, but jokes about rocks. Not kittens playing with balls of yarn, but sentient crickets playing baseball.
I cannot be the only person here who thinks there is some value in preserving things simply to preserve them: things like endangered species, human languages, and aspects of human culture. Is it really so insane to think that we could instill the same respect-for-the-authentic-but-less-than-perfect in a machine that we create?
Is it really so insane to think that we could instill the same respect-for-the-authentic-but-less-than-perfect in a machine that we create?
We could. But should we? (And how is it even relevant to your original comment? This seems to be a separate argument for roughly the same conclusion. What about the original argument? Do you agree it's flawed, that is, that an AI can in fact out-native the natives?)
See also the discussion of Waser's post, in particular the second paragraph of my comment here:
...If you consider a single top-level goal, then disclaimers about subgoal
http://www.youtube.com/watch?v=ghIj1mYTef4