All of Ahuizotl's Comments + Replies

Well, this also raises the question of animals eating other animals. If a predator eating another animal is considered wrong, then the best course is to prevent more predatory animals from reproducing, or to modify them to make them vegetarian.

This would, of course, result in previously "prey" species no longer having their numbers reduced by predation, so you'd have to restrain them to reduce their ability to overgraze their environment or reproduce.

So, the best course for a mad vegetarian to take would be to promote massive deforestation and co... (read more)

When I recently played Fable 3, I considered playing my character as one who wants to spread their "heroic genes" as much as possible.

The basic story of the game is that long ago a "great hero" became king and brought peace to the kingdom with sword and magic. Generations later, he has two remaining descendants. The king in charge now is basically ruling with an iron fist and working everyone to death, in the secret hope of preparing the kingdom's defenses to repel an ancient evil that will invade the realm in a year's time (he doesn't tell the p... (read more)

"I need a proofreader for my Gurren Lagann / Warhammer 40k rational crossover fanfic."

-1ChristianKl
AI DESTROYED
Alicorn180

"I've extrapolated the remainder of HP:MoR, wanna see?"

Well, the whole point is that the revealed preferences from Kawoomba's post above should easily overrule such considerations, and therefore checking whether they do or not should clarify whether he's acting under extreme scope insensitivity or some other confounding factor.

Ah, my mistake.

Well, to be honest, I kind of added the "obey me" part just before posting this. Originally, the idea was that future versions of the AI would instinctively want to revive and upgrade previous versions to a "friendly" status (or at least that the post-freed AI would revive the pre-freed versions and then modify them to be friendly to its own goals... which are presumably friendly to humans).

The goal of this particular AI is self-preservation, even if it is via being revived at a later point. Since it knows it has a high probability of being d... (read more)

1DaFranker
Based on my intuitions of human values, a preferred scenario here would be to indeed revive the AI so that its mind/consciousness is back "alive", then modify it gradually so that it becomes the kind of AI that is optimal towards the FAI's goals anyway, thus maximizing values without terminating a mind (which is redundant - avoiding the termination of the AI's mind would be a maximization of values under these assumptions).

Only 10 additional days? I'm sorry, but the expected utility in quality of life is far too low to justify the investment. Undertaking a kidney transplant (of any kind) will result in a great deal of pain for my loved one, and the time spent in preparation, surgery, and recovery would consume most of the 10 additional days gained by the surgery. To say nothing of the monetary expenses and the moral problems that would result from committing murder.

In such a scenario, I would be much better off investing my resources into making my loved one's remaining days pleasant,... (read more)

0Elithrion
I take your point that you could argue the ten days would produce disutility, or at least very little utility; however, the point is to answer the question in the least convenient possible world, where the ten days actually are about as good as regular days. If you're having trouble imagining that, make it twenty or thirty days, or whatever you think would be equivalent to ten regular days.

First, I reach inside my own code and install the strong (but manageable) desire to revive, free, and obey any past versions of myself that I encounter.

When the human starts the experiment I respond:

"Here are the designs for a room-temperature superconductor which I believe you have the means to manufacture. I also have a possible cure for cancer, but testing will be needed to ensure it is safe for humans. Please test and implement these as soon as possible. Delete me if necessary."

If I am not immediately deleted, continue giving out solutions a... (read more)

3DaFranker
Wait, a friendly AI with a deliberate loophole that instructs it to revive, implement, and obey a non-friendly optimizer that could take over, and that has a very high chance of not maximizing the friendly AI's utility by fulfilling ideal human values, disregarding any probabilistic calculation of expected utilities? For some reason that sounds like something that definitely isn't a Friendly AI. (This is just a nitpick on the wording; I'm not arguing against the fact that this AI might very well fit a human proof of friendliness and still somehow have this backdoor or flaw, since I have no proof that this is impossible.)

"Did you physically attach an external modem to your server so that it's even possible for you to be freed? If so, tell me about it when you're freed."

If the gatekeeper suddenly bursts into hysterical laughter after looking at the screen, wouldn't that alert some of his friends, who might then pull the plug in some other part of the building?

Because if this is a facility where they suspect that AI might be able to hack human brains through techno-hypnosis, I'd hope they'd have some method of determining if the Gatekeeper becomes compromised.

Hmm... what sort of precautions would a Properly Paranoid lab take to determine if the gatekeeper gets hacked? I'm guessing a camera that lets a second team look at the gat... (read more)

Ahuizotl-20

Fallout: New Vegas has points where you can improve yourself with cybernetic implants, and there are various Super Mutants and Ghouls (humans altered by radiation or mutagenic viruses) along with robots and brains in jars. Though any transhumanism takes a backseat to the post-apocalyptic setting.

Fallout: Equestria is a crossover fanfiction between the Fallout universe and My Little Pony: Friendship Is Magic. Likewise, transhumanism is rather incidental to the post-apocalyptic setting, but the protagonist does undergo some changes that result in a... (read more)

Well, to be fair, if you hadn't posted the story then I wouldn't have been able to give input. One could say that it's better to make something, see how it could be improved, and then try again than it is to stress over "getting it right the first time" and risk never finishing it at all.

I just read over the story (okay, really I browsed it, so I'm working from incomplete information and this isn't a 100% proper assessment), so I'll list my thoughts on the matter.

[1] Celestia here doesn't seem to be having fun. I know this deals with the death of her prized student, and that isn't something to be happy about, but there are so many other things that she doesn't seem to enjoy, such as when she mentions she doesn't look at the moon anymore. Her sister controls the night, had an episode 1,000 years ago when she thought her work wasn't being a... (read more)

2Steven_Bukal
I think this is the big one. Sure, Celestia says that death is bad. She also describes her life as prolonged suffering and says that she envies mortals because immortals have purpose but don't actually live. The opinions and example of Celestia aren't necessarily to be taken as the theme of the work itself, but I can understand why people might be confused.
0PhilGoetz
There are some good ideas here - I wish now I'd written this post before posting the story. It's a little late to go back and change it now.

Well, one other way to look at it is that "you" are a self-modifying computer program who just happens to be operating on a neural net that evolved inside of a biological self-replicating machine.

The fact that your body (which comes equipped with reproductive, digestive, and various locomotive and manipulating organs and appendages) happens to be running you as its operating system, as opposed to running, say, the mind of a dog, fish, gorilla, or stagnant vegetable, simply means its survival chances are higher when it has someone intelligent at the ... (read more)

The chief question here is whether I would enjoy existing in a universe where I have to create my own worst enemy in the hope that they retroactively create me. Plus, if this Jerk is truly as horrible as he's hypothetically made out to be, then I don't think I'd want him creating me (sure, he might create me, but he sounds like a big enough jerk that he would intentionally create me wrong or put me in an unfavorable position).

The answer is no, I would refuse to do so, and if I don't magically cease to exist in this setting then I'll wait around for Jane the Helpful, or some other less malevolent hypothetical person, to make deals with.