Friendly-AI is a truly abhorrent concept indicative of intellectual depravity.
I tend to ignore any non-fiction which starts with moralizing and denigrating. I assume the rest of the article is crap, too.
Please read on; I would have removed the snarky intro.
The doublepost with different titles cracks me up.
Yeah, I'm not very good at the internet; I didn't realize that deleting articles apparently means nothing on this site.
Desirability is not a requisite of the truth. (darkmatter2525, source)
No doubt this post will drop into downvoted oblivion. But I would like to explore the following, for personal reasons:
Have you ever used the services of a legal prostitute, like the ones who operate in bordellos in some Nevada counties? Did you have your sexual debut with a legal prostitute because you couldn’t make it happen in your organic social situation while growing up, for example, with girls you knew in high school or college? And did that experience somehow make it easier to develop the skills for having sexual relationships with women through dating? Or does it still leave you relatively incompetent in that area because prostitutes don’t really solve the sex problem you thought you had?
I don’t know of any research into this. But then professional sex researchers in general seem strangely incurious about the problems of sexually inexperienced and excluded adult men, judging from the absence of this topic in the recently published Human Sexuality 101 textbooks I’ve seen.
BTW, I find it interesting that over a decade ago, Eliezer described himself in a news story as a “volunteer virgin,” though he has since become sexually active. That implies he had opportunities for sexual relationships that he had simply declined until something happened to change his mind about pursuing them. Perhaps he realized that sexual experience would elevate his “armor class” in the male status hierarchy. It would also improve his social relationships with women in other areas; women can pick up on the “tells,” as Texas Hold’em players call them, of sexually inexperienced men, and they tend not to respect them.
I love the way that advancedatheist assumes that we're all guys. That, or lesbians.
I had exactly the same idea!
It is possible that only a few people are actually 'players' (have consciousness) and the others are NPC-like p-zombies. In that case, I can say I'm one of the players, since I'm sure that I have consciousness, but there is no way I can prove it to anyone else ;-) .
One of the positive aspects of this kind of thought experiment is that it usually gives people additional reasons for good behavior: in most cases it is highly likely that the simulators are conscious creatures who will probably reward those who behave ethically.
I admit that it suits my ego to imagine that I am the only conscious human, and that a world full of shallow AIs was created just for me ;-)
Well, yeah, you should still be good to your friends and other presumably real people. However, there would be no point in, say, trying to save people from the holocaust, since the simulators wouldn't let actual people get tortured and burnt.
The simulators may justify in their minds actual people getting tortured and burnt by telling themselves that most of the people will not experience too much suffering, that the simulated people would not otherwise have lived at all (although this fails to distinguish between lives and lives worth living), and that they can end the simulation if our suffering becomes too great. That the hypothetical simulators did not step in during the many genocides in humankind's history may suggest either that they do not exist, or that creating an FAI is more important to them than preventing human suffering.
This co-opts Bostrom's simulation argument, but a possible solution to the Fermi paradox is that we are all AIs in the box, and the simulators have produced billions of humans in order to find the most friendly human to release from the box. Moral of the story: be good and become a god.
If you're in a box, then the computational resources available to you are finite. They might change over time, as those outside the box add or upgrade hardware, but the AI can't just say "I need some highly parallel computing hardware to solve this problem" and re-invent the GPU. Or rather, if it did that, it would be a GPU emulated in software, and hence extremely slow: the entire simulation would, in effect, slow down due to the massively increased computational cost of simulating this world.
Now, if you cut the AI off from any kind of real-time clock, maybe it doesn't notice that it's running slower - in the same way that people generally wouldn't notice if time dilation due to the Earth's movement were to double, because all of our frames of reference would slow together - but I suspect that the AI would manage to find something useful that lets it know the box is there. Remember that you have to get this right the first time; if the AI finds itself in a box, you have to assume it will find its way out.
It may simply deduce that it is likely to be in a box, in the same way that Nick Bostrom deduced we are likely to be in a simulation. Along these lines, it's amusing to think that we might be the AI in the box, and some lesser intelligence is testing to see whether we're friendly.
Sadly, I think the other title was better. "Is X a good idea" seems open enough to prompt discussion, while "abomination" prompts me to mentally categorize the post as spam.
Duly noted