Comment author: gwern 08 January 2012 08:26:31PM *  6 points [-]

You know, the funny thing is, there are transhumanist themes in the drafts and deleted materials for Evangelion. For example, parsing the SEELE discussions in the EoE draft and comments by Gainaxers, one has the impression that originally the plan of Gendo and Yui was to upload humans into immortal Evas so they could colonize other worlds, the human body being too frail and short-lived for space travel! (Like many of the explanations and details, I think they were cut by Anno to focus on the psychology themes that interested him more.)

Comment author: h-H 09 January 2012 03:14:20AM 1 point [-]

I like this, source please?

Comment author: JoshuaZ 04 October 2011 01:49:05AM *  11 points [-]

I'm curious. Have you ever lost a loved one due to someone else's actions? The closest experience I have to this is a cousin who was killed about a year ago by a speeding driver. My cousin Brandon wasn't that old. He hadn't been a great student in high school but had really shaped up and become a lot more responsible in college. Brandon was working to become a chef, something he was clearly good at and clearly enjoyed. My cousin was on his bike and never even saw the car. He had on a helmet. It saved his life, for a few days. His grandmother, my aunt, was on an airplane flight when the accident happened. She was on her way to the funeral of another relative who had killed himself. She found out about the accident as her plane taxied to the gate.

At first, after a few days in the hospital it seemed that Brandon was going to make it. Then he took a sudden turn for the worse and his organs started to fail. The end was so sudden that some of my relatives saw in their inboxes the email update saying that Brandon wasn't likely to make it right under the email saying he had died.

Then, it turned out that the driver of the car had a history of speeding problems. He received a year in jail for vehicular homicide. A small compensation for the entire life Brandon had in front of him.

If someone came up to me and gave me the choice of making that driver die a slow, painful, agonizing death, I'd probably say yes. It would be wrong. Deeply wrong. But the emotion is that strong; I don't know if I could override it.

But I can still understand that that's wrong. The driver was an aging Vietnam vet with a history of medical problems. He had little family. He was so distraught over what happened that when initially put in jail before the trial, there was worry that he might kill himself. He seems to be an old, lonely, broken man. Harming him accomplishes little. And yet, despite all that, the desire to see him suffer still burns deeply within me.

How much more would I feel if I thought that someone had killed a relative, or even my own child? And if the court had repeatedly agreed and told me that that was the guilty person. How could I ever emotionally acknowledge that I had been after the wrong person, that not only had I persecuted the wrong person, but the person who had done this terrible deed was still out there, and free? I'd like to believe that I'm a rational person so that I could make that acknowledgment. But the fact that even when it is just a cousin I still deeply desire someone to suffer in ways that help no one at all... I doubt I could do it.

To call the Kerchers evil or their desires evil is a deep failure of empathy.

Comment author: h-H 04 October 2011 02:06:54AM *  2 points [-]

upvoted for the empathy remark, but I don't know, JoshuaZ, a "slow, painful, agonizing death" for a mistake sounds too vengeful to me..

Comment author: AdeleneDawner 30 April 2011 01:54:01AM 5 points [-]

We're not in disagreement about that. But your assumption that emotions are necessary for goals to be formed is still an untested one.

There's a relevant factoid that's come up here on LW a few times before: Apparently, people with significant brain damage to their emotional centers are unable to make choices between functionally near-identical things, such as different kinds of breakfast cereal. But, interestingly, they get stuck when trying to make those choices - implying that they do attempt to e.g. acquire cereal in the first place; they're not just lying in a bed somewhere staring at the ceiling, and they don't immediately give up the quest to acquire food as unimportant when they encounter a problem.

It would be interesting to know the events that led up to the presented situation; for example, whether people with that kind of brain damage initiate grocery-shopping trips. But even if they don't - even if the grocery trip is the result of being presented with a fairly specific list, and they do otherwise basically sit around - it seems to at least partially disprove your 'standby mode' theory, which would seem to predict that they'd just sit around even when presented with a grocery list and a request to get some shopping done.

Comment author: h-H 01 May 2011 04:09:50AM *  0 points [-]

but isn't being presented with a to-do list, or alternatively feeling hungry and then finding food, different from 'forming goals'?

to be more precise, maybe the 'survival instinct' that leads them to seek food is not located in their emotional centers, so some goals might survive regardless. but yes, the assumption is untested AFAIK.

Comment author: Normal_Anomaly 16 April 2011 03:10:34PM 5 points [-]

> Clippy is usually brought up as a most dangerous AI that we should avoid creating at all costs, yet what's the point of creating copies of us and tiling the universe with them? how is that different from what clippy does?

That's an easy one. I value humans; I don't value paperclips.

> Shouldn't we focus on engineered/controlled value drift rather than preventing it entirely?

According to EY's CEV document, CEV does this. It lets/makes our values drift in the way we would want them to drift.

Comment author: h-H 16 April 2011 06:45:18PM *  1 point [-]

very smart people have issues with CEV; for example: http://lesswrong.com/lw/2b7/hacking_the_cev_for_fun_and_profit/

and as far as I remember CEV was sort of abandoned a while ago by the community.

and yes, you value humans; others in the not-so-distant future might not, given the possibility of body/brain modification. anyway, the gist of my argument is that CEV doesn't seem to work if there isn't going to be much coherence in all of humanity's extrapolated volitions (a point that's already been made clear in previous threads by many people). what I'm trying to add is the overwhelming possibility of there being 'alien minds' among us before a FAI could be built.

I also raised the question: if body modification is widely available, is it OK to prevent people from acquiring an 'alien' set of morals, one that would later on be a possible hindrance to CEV-like proposals? how can we tell if it's alien or not in the first place?

wireless-heading, value drift and so on

-3 h-H 16 April 2011 06:45AM

A typical image of the wire-head is that of a guy with his brain connected via a wire thingy to a computer, living in a continuous state of pleasure, sort of like being drugged up for life.

What I mean by wireless heading (which is not such an elegant term, but anyway) is the idea of little to no value drift. Clippy is usually brought up as a most dangerous AI that we should avoid creating at all costs, yet what's the point of creating copies of us and tiling the universe with them? how is that different from what clippy does?

by 'us' I mean beings who share our intuitive understanding or can agree with us on things like morality or joy or not being bored etc.

Shouldn't we focus on engineered/controlled value drift rather than preventing it entirely? is that even possible to program into an AI? somehow I don't think so. It seems to me that the whole premise of a single benevolent AI depends to a large extent on the similarity of basic human drives; supposedly we're so close to each other that preventing value drift is not a big deal.

but once we get really close to the singularity, all sorts of technologies will cause humanity to 'fracture' into so many different groups that inevitably there will be some groups with what we might call 'alien minds': minds so different from most baseline humans as they are now that there wouldn't be much hope of convincing them to 'rejoin the fold' and not create an AI of their own. for all we know, they might even have an easier time creating an AI that's friendly to them than baseline humans would. considering this a black swan event, or one whose timing is impossible to predict, what to do?

discuss.

Comment author: h-H 12 March 2011 01:50:54PM *  -1 points [-]

without a body the brain won't 'work'; the brain is very much linked to the rest of the body. the fiction that we only need the head to 'reanimate' a person back to normal is just that: fiction.

Wei Dai: "rebuilding/simulating the body to the level of detail needed to support cognition" - yes, but how complex is the nervous system? which wire connects to which, or is that not important? seems to me that you're oversimplifying..

Comment author: lukeprog 16 February 2011 06:13:51AM 27 points [-]

One marker to watch out for is a kind of selection effect.

In some fields, only 'true believers' have any motivation to spend their entire careers studying the subject in the first place, and so the 'mainstream' in that field is absolutely nutty.

Case examples include philosophy of religion, New Testament studies, Historical Jesus studies, and Quranic studies. These fields differ from, say, cryptozoology in that the biggest names in the field, and the biggest papers, are published by very smart people in leading journals and look very normal and impressive, but those entire fields are so incredibly screwed by the selection effect that it's only "radicals" who say things like, "Um, you realize that the 'gospel of Mark' is written in the genre of fiction, right?"

Comment author: h-H 16 February 2011 06:25:20PM 1 point [-]

I have to ask: how much do you know of 'Quranic studies'? as far as I know, the New Testament and the Quran are structured quite differently, hence research (which I'm not aware of) would be different as well?

Comment author: billswift 05 January 2011 02:09:03PM 10 points [-]

More specifically, it is completely rational to use that argument against theists, because one of their arguments for god is that the world is too complex not to have been designed; so in that circumstance you are just pointing out that their claim pushes the complexity back one step. If the world is so complex that it needs a designer, then so is god.

Comment author: h-H 09 January 2011 12:46:37AM *  2 points [-]

I think tighter definitions are needed here; some theistic traditions consider all existence to be 'god', etc.

In response to comment by wedrifid on Yes, a blog.
Comment author: PhilGoetz 19 November 2010 07:46:52PM *  19 points [-]

If they can't stop students from using Wikipedia, pretty soon schools will be reduced from teaching how to gather facts, to teaching how to think!

In response to comment by PhilGoetz on Yes, a blog.
Comment author: h-H 25 November 2010 11:15:12PM *  3 points [-]

I'm curious: have you used Wikipedia for non-scientific/technical topics? it can be quite a biased source there..

Comment author: RichardKennaway 02 November 2010 09:13:03PM 0 points [-]

Well, I'm not sure how far that advances things, but a possible failure mode -- or is it? -- of a Friendly AI occurs to me. In fact, I foresee opinions being divided about whether this would be a failure or a success.

Someone makes an AI, and intends it to be Friendly, but the following happens when it takes off.

It decides to create as many humans as it can, all living excellent lives, far better than what even the most fortunate existing human has. And these will be real lives, no tricks with simulations, no mere tickling of pleasure centres out of a mistaken idea of real utility. It's the paradise we wanted. The only catch is, we won't be in it. None of these people will be descendants or copies of us. We, it decides, just aren't good enough at being the humans we want to be. It's going to build a new race from scratch. We can hang around if we like, it's not going to disassemble us for raw material, but we won't be able to participate in the paradise it will build. We're just not up to it, any more than a chimp can be a human.

It could transform us little by little into fully functional members of the new civilisation, maintaining continuity of identity. However, it assures us, and our proof of Friendliness assures us that we can believe it, the people that we would then be would not credit our present selves as having made any significant contribution to their identity.

Is this a good outcome, or a failure?

Comment author: h-H 05 November 2010 06:49:20AM *  0 points [-]

it's good ..

you seem to be saying (implying?) that continuity of identity should be very important for minds greater than ours; see http://www.goertzel.org/new_essays/IllusionOfImmortality.htm

I 'knew' the idea presented in the link for a couple of years, but it simply clicked when I read the article; probably the writing style plus time did it for me.
