In response to comment by Mario on The mind-killer
Comment author: mattnewport 03 May 2009 09:19:53PM 0 points

How much of what it means to be human do you think is cultural conditioning versus innate biological tendency? I think the evidence points to a very large biologically determined element in humanity. I would expect to find more in common with a hunter-gatherer in a previously undiscovered tribe, or even with a Paleolithic tribesman, than with an alien intelligence or an evolved dolphin.

If you read ancient Greek literature, it is easy to empathize with most of the motivations and drives of the characters, even though they lived in a very different world. You could argue that our culture's direct lineage from theirs is a factor, but it seems that Westerners can recognize as fellow humans the minds behind ancient Chinese or Indian texts, which share less cultural heritage with our own.

Comment author: Mario 03 May 2009 09:45:52PM 1 point

I don't consider our innate biological tendencies the core of our being. We are an intelligence superimposed on a particular biological creature. It may be difficult to separate the aspects of one from the other (and I don't pretend to be fully able to do so), but I think it's important that we learn which is which so that we can slowly deemphasize and discard the biological in favor of the solely rational.

I'm not interested in what it means to be human, I want to know what it means to be a person. Humanity is just an accident as far as I'm concerned. It might as well have been anything else.

In response to comment by Mario on The mind-killer
Comment author: mattnewport 03 May 2009 09:26:23PM 0 points

I don't see any fundamental reason why intelligence should be restricted to humans. I do think it's quite possible, though, that intelligence arising in the universe is an extremely rare event. If you value intelligence and think it might be an unlikely occurrence, then the survival of some humans rather than no humans should surely be a much preferred outcome?

I disagree that we would have more in common with the electric-toothbrush-wielding squirrels. I've elaborated more on that in another comment.

Comment author: Mario 03 May 2009 09:36:22PM 1 point

Preferred, absolutely. I just think that the survival of our knowledge is more important than the survival of the species sans knowledge. If we are looking to save the world, I think an AI living on the moon pondering its existence should be a higher priority than a hunter-gatherer tribe stalking wildebeest. The former is our heritage; the latter just looks like us.

In response to comment by Mario on The mind-killer
Comment author: ciphergoth 03 May 2009 08:06:07PM 3 points

Bear in mind, the paperclip AI won't ever look up from its paperclips to the broader challenges of being a sentient being in the Universe; the only thing that will ever matter to it, until the end of time, is paperclips. I wouldn't feel in that instance that we had left behind a creature that represented our legacy, no matter how much it knows about the Beatles.

In response to comment by ciphergoth on The mind-killer
Comment author: Mario 03 May 2009 08:50:21PM 0 points

OK, I can see that. In that case, maybe a better metric would be the instrumental use of our accumulated knowledge, rather than its mere possession. Living in a library doesn't mean you can read, after all.

In response to comment by Mario on The mind-killer
Comment author: mattnewport 03 May 2009 07:21:47PM 0 points

We have pretty solid evidence that a group of humans with stone-age technology can develop a technologically advanced society in a few tens of thousands of years. I imagine it would take considerably longer for squirrels to get there, and I would be much less confident they could do it at all. It may well be that human intelligence is an evolutionary accident that has only happened once in the universe.

Comment author: Mario 03 May 2009 07:57:25PM * 0 points

The squirrel civilization would be a pretty impressive achievement, granted. The destruction of this particular species (humans) would seemingly be a tremendous loss for the universe, if intelligence is a rare thing. Nonetheless, I see it as only one particular vessel in which intelligence happened to arise. I see no particular reason why intelligence should be specific to it, or why we should prefer it over other containers should the opportunity present itself. We would have more in common with an intelligent squirrel civilization than with a band of gorillas, even though we would share more genetically with the latter. If I were cryogenically frozen and thawed out a million years later by the world-dominating Squirrel Confederacy, I would certainly live with them rather than seek out my closest primate relatives.

EDIT: I want to expand on this slightly. Say our civilization were to be completely destroyed, and a group of humans that had no contact with us were to develop a new civilization of their own concurrent with a squirrel population doing the same on the other side of the world. If that squirrel civilization were to find some piece of our history, say the design schematics of an electric toothbrush, and adopt it as a part of their knowledge, I would say that for all intents and purposes, the squirrels are more "us" than the humans, and we would survive through the former, not the latter.

In response to comment by Mario on The mind-killer
Comment author: Vladimir_Nesov 03 May 2009 07:14:20PM * 0 points

Does this imply that you are OK with a Paperclip AI wiping out humanity, since it will be an intelligent life form much more developed than we are?

Comment author: Mario 03 May 2009 07:49:18PM 0 points

If I implied that, it was unintentional. All I mean is that I see no reason why we should feel a kinship toward humans as humans, as opposed to any species of people as people. If our civilization were to collapse entirely and had to be rebuilt from scratch, I don't see why the species that is doing the rebuilding is all that important -- they aren't "us" in any real sense. We can die even if humanity survives. By that same token, if the paperclip AI contains none of our accumulated knowledge, we go extinct along with the species. If the AI contains some of our knowledge and a good degree of sentience, I would argue that part of us survives despite the loss of this particular species.

In response to The mind-killer
Comment author: Nominull 02 May 2009 07:47:45PM * 2 points

I will admit to an estimate higher than 95% that humanity or its uploads will survive the next hundred years. Many of the "apocalyptic" scenarios people are concerned about seem unlikely to wipe out all of humanity; so long as we have a breeding population, we can recover.

In response to comment by Nominull on The mind-killer
Comment author: Mario 03 May 2009 06:48:25PM 0 points

I agree generally, but I think that when we talk about wiping out humanity we should include the idea that losing a significant portion of our accumulated information would be essentially the same as extinction. I don't see a difference between a group of humans with stone-age technology surviving the apocalypse and slowly repopulating the world, and a different species (whether dogs, squirrels, or porpoises) doing the same thing.

Comment author: Mario 05 April 2009 09:49:30AM 4 points

I don't think it is necessarily true that merely by joining the faction most likely to win you will share in the spoils of victory. Leaders distribute rewards based on seniority more than on support. In a close contest, you would likely be courted heavily by both sides, providing a temporary boost in status, but that would disappear once the conflict is over. You will not have earned the trust of the winner, since your allegiance was in doubt. I don't think there is much to gain by joining the larger side late; you'll be at the bottom of society once the dust settles, trusted by neither the winners nor the losers.

In cases like this, I think the operative value evolution would select for is not political success but sexual success. Being one of many followers does nothing to advertise oneself as a desirable mate. On the other hand, bravely fighting a losing battle (as long as you don't die in the process) signals both physical prowess (which a lopsided victory may not demonstrate) and other desirable traits, like courage. When the battle is over, one can assume that more money and women would be distributed to the new elite, but their children will be yours.

Comment author: HA2 15 March 2009 09:01:00PM 3 points

I don't think it's reasonable to expect that secret criteria would stay secret once such a test was actually used for anything. Sure, the criteria could be kept secret if there were a dozen people taking the test, of which the four who passed would get admitted to an exclusive club.

If there were ten thousand people taking the test, a thousand of whom passed, I'd bet there would be at least one who accidentally leaked it on the internet, from where it would immediately become public knowledge. (And at least a dozen who would willingly give up the answer if offered money for it, as would happen if there were anything at stake in this test.) It might work if such a test were obscure enough or not widely used, but not if it were used for anything that mattered to the test-takers and was open to many.

Comment author: Mario 15 March 2009 10:03:17PM 1 point

True, but I think that would be a problem with any test. I'm just trying to find a way around it, since I think that as you add safeguards against gaming, you both complicate and weaken the test. Perhaps a solution would be to test people without their knowledge, and reveal whether they succeeded or not at a later date.

Comment author: MichaelHoward 15 March 2009 06:20:15PM 1 point

The instructions specify that the answer to every question is C.

Isn't that more a test of attention to detail and willingness to follow instructions rather than rationality per se?

Comment author: Mario 15 March 2009 06:39:21PM 1 point

Yes. I wasn't offering that particular formulation as a rationality test, just the idea that you should hide from the testee the nature of the test.

Comment author: Mario 15 March 2009 05:58:55PM 3 points

I get the feeling that the real problem here is repeatability. It's one thing to design a test for rationality; it's another to design a test that could not be gamed once the particulars are known. Since it probably isn't possible to control the flow of information in that way, the next-best option might be to design a test so that the testing criteria would not be understood except by those who pass.

I'm thinking of a test I heard about years ago. The teacher passes out the test, stressing to the students that they should read the instructions before beginning. The instructions specify that the answer to every question is C. The actual questions on the test don't matter, of course, but it's a great test of reading comprehension and the ability to follow instructions. Plus, the test is completely repeatable: all of the test questions could leak out, and still only those who deserve to pass would do so. If you are willing to assume that people who pass would not be willing to cheat (unlikely in this test, possible in a rationality test), then you would have an ungameable test.

A rationality test in this model might be one where an impossible task is given, and the correct response would be to not play.
