Comment author: TheOtherDave 24 June 2012 10:40:59PM 0 points [-]

Upvoted for not backing away from a concrete prediction.
I would be very surprised by that result.

Comment author: h-H 24 June 2012 10:50:02PM *  0 points [-]

Upvoted for good reasons for upvoting :)

For data, we could run a LW poll as a start and see. And out of curiosity, why would you be surprised?

Comment author: TheOtherDave 24 June 2012 10:22:04PM 0 points [-]

It seems to follow from this model that if we measure the tendency towards procrastination in two groups, one of which is selected for their demonstrable capability for math, or more generally for deep, insightful thought, and the other of which is not, we should find that the former group procrastinates more than the latter group.

Yes?

Comment author: h-H 24 June 2012 10:35:37PM *  1 point [-]

Yes & I'd modify that slightly to "the former group needs to more actively combat procrastination".

Comment author: Viliam_Bur 19 April 2012 09:09:28AM 1 point [-]

Going meta is not only what we love best, it's what we're best at, and that's always been so.

Do we love going meta? Yes, we do.

Are we good at it? Sometimes yes, sometimes no; it also depends on the individual. But going meta is good for signalling intelligence, so we do it even when it's just a waste of time.

Has it always been so? Yes; the impracticality and procrastination of many intelligent people are widely known.

Comment author: h-H 24 June 2012 09:21:14PM *  0 points [-]

The akrasia you refer to is actually a feature, not a bug. Just picture the opposite: intelligent people rushing to conclusions and caring more about getting stuff done than about forsaking the urge to go with first answers and actually thinking.

My point is, we decry procrastination so much, but the fact is it's good that we procrastinate; if we didn't have this tendency we would be doers, not thinkers. Not that I'm disparaging either, but you can't rush math, or more generally deep, insightful thought; that way lies politics and insanity.

In a nutshell, perhaps we care so much about thinking things over (or alternatively get a rush from the intellectual crack) that we don't really want to act, or at least don't want to act on incomplete knowledge; hence the widespread procrastination, which, given the alternative, is a very good thing.

Poly marriage?

-9 h-H 06 June 2012 07:57PM

A thought occurred to me today as I skimmed an article on a rationality forum where the subject of gay marriage cropped up. Seeing as the issue has been hotly contested in various public fora, and especially the courts, what about poly? After all, many if not all of the arguments for gay marriage apply to poly marriage as well.

Questions for LWers who are currently in such a relationship, or have an opinion to share:

Do polies want to marry each other, or do such relationships not lend themselves to permanence above a threshold of partners? Should polies campaign for the right to a civil union anyway? What are the upsides and downsides of this? Etc.

Comment author: JoshuaZ 04 October 2011 01:49:05AM *  11 points [-]

I'm curious. Have you ever lost a loved one due to someone else's actions? The closest experience I have to this is a cousin who was killed about a year ago by a speeding driver. My cousin Brandon wasn't that old. He hadn't been a great student in high school but had really shaped up and become a lot more responsible in college. Brandon was working to become a chef, something he was clearly good at and clearly enjoyed. My cousin was on his bike and never even saw the car. He had on a helmet. It saved his life, for a few days. His grandmother, my aunt, was on an airplane flight when the accident happened. She was on her way to the funeral of another relative who had killed himself. She found out about the accident as her plane taxied to the gate.

At first, after a few days in the hospital, it seemed that Brandon was going to make it. Then he took a sudden turn for the worse and his organs started to fail. The end was so sudden that some of my relatives saw in their inboxes the email update saying that Brandon wasn't likely to make it right under the email saying he had died.

Then it turned out that the driver of the car had a history of speeding problems. He received a year in jail for vehicular homicide. A small compensation for the entire life Brandon had in front of him.

If someone came up to me and gave me the choice of making that driver die a slow, painful, agonizing death, I'd probably say yes. It would be wrong. Deeply wrong. But the emotion is that strong; I don't know if I could override it.

But I can still understand that that's wrong. The driver was an aging Vietnam vet with a history of medical problems. He had little family. He was so distraught over what happened that when initially put in jail before the trial, there was worry that he might kill himself. He seems to be an old, lonely, broken man. Harming him accomplishes little. And yet, despite all that, the desire to see him suffer still burns deeply within me.

How much more would I feel if I thought that someone had killed a relative, or even my own child? And if the court had repeatedly agreed and told me that that was the guilty person. How could I ever emotionally acknowledge that I had been after the wrong person, that not only had I persecuted the wrong person, but the person who had done this terrible deed was still out there, and free? I'd like to believe that I'm a rational person so that I could make that acknowledgment. But the fact that even when it is just a cousin I still deeply desire someone to suffer in ways that help no one at all... I doubt I could do it.

To call the Kerchers evil or their desires evil is a deep failure of empathy.

Comment author: h-H 04 October 2011 02:06:54AM *  2 points [-]

Upvoted for the empathy remark, but I don't know, JoshuaZ; a "slow, painful, agonizing death" for a mistake sounds too vengeful to me.

Comment author: AdeleneDawner 30 April 2011 01:54:01AM 5 points [-]

We're not in disagreement about that. But your assumption that emotions are necessary for goals to be formed is still an untested one.

There's a relevant factoid that's come up here on LW a few times before: Apparently, people with significant brain damage to their emotional centers are unable to make choices between functionally near-identical things, such as different kinds of breakfast cereal. But, interestingly, they get stuck when trying to make those choices - implying that they do attempt to e.g. acquire cereal in the first place; they're not just lying in a bed somewhere staring at the ceiling, and they don't immediately give up the quest to acquire food as unimportant when they encounter a problem.

It would be interesting to know the events that lead up to the presented situation; it would be interesting to know whether people with that kind of brain damage initiate grocery-shopping trips, for example. But even if they don't - even if the grocery trip is the result of being presented with a fairly specific list, and they do otherwise basically sit around - it seems to at least partially disprove your 'standby mode' theory, which would seem to predict that they'd just sit around even when presented with a grocery list and a request to get some shopping done.

Comment author: h-H 01 May 2011 04:09:50AM *  0 points [-]

But isn't being presented with a to-do list, or alternatively feeling hungry and then finding food, different from 'forming goals'?

To be more precise, maybe the 'survival instinct' that leads them to seek food is not located in their emotional centers, so some goals might survive regardless. But yes, the assumption is untested AFAIK.

Comment author: Normal_Anomaly 16 April 2011 03:10:34PM 5 points [-]

Clippy is usually brought up as a most dangerous AI that we should avoid creating at all costs, yet what's the point of creating copies of us and tiling the universe with them? How is that different from what Clippy does?

That's an easy one. I value humans, I don't value paperclips.

Shouldn't we focus on engineered/controlled value drift rather than preventing it entirely?

According to EY's CEV document, CEV does this. It lets/makes our values drift in the way we would want them to drift.

Comment author: h-H 16 April 2011 06:45:18PM *  1 point [-]

Very smart people have issues with CEV; for example: http://lesswrong.com/lw/2b7/hacking_the_cev_for_fun_and_profit/

And as far as I remember, CEV was sort of abandoned a while ago by the community.

And yes, you value humans; others in the not-so-distant future might not, given the possibility of body/brain modification. Anyway, the gist of my argument is that CEV doesn't seem to work if there is not going to be much coherence among all of humanity's extrapolated volitions (a point that's already been made clear in previous threads by many people). What I'm trying to add is the overwhelming possibility of there being 'alien minds' among us before an FAI could be built.

I also raised the question: if body modification is widely available, is it OK to prevent people from acquiring an 'alien' set of morals, one that would later on be a possible hindrance to CEV-like proposals? How can we tell if it's alien or not in the first place?

Wireless-heading, value drift, and so on

-3 h-H 16 April 2011 06:45AM

A typical image of the wire-head is that of a guy with his brain connected via a wire thingy to a computer, living in a continuous state of pleasure, sort of like being drugged up for life.

What I mean by wireless-heading (not such an elegant term, but anyway) is the idea of little to no value drift. Clippy is usually brought up as a most dangerous AI that we should avoid creating at all costs, yet what's the point of creating copies of us and tiling the universe with them? How is that different from what Clippy does?

By 'us' I mean beings who share our intuitive understanding, or can agree with us on things like morality, joy, not being bored, etc.

Shouldn't we focus on engineered/controlled value drift rather than preventing it entirely? Is it possible to program that into an AI? Somehow I don't think so. It seems to me that the whole premise of a single benevolent AI depends to a large extent on the similarity of basic human drives; supposedly we're so close to each other that it's not a big deal to prevent value drift.

But once we get really close to the singularity, all sorts of technologies will cause humanity to 'fracture' into so many different groups that inevitably there will be some with what we might call 'alien minds': minds so different from most baseline humans as they are now that there wouldn't be much hope of convincing them to 'rejoin the fold' and not create an AI of their own. For all we know, they might even have an easier time creating an AI that's friendly to them than baseline humans would have creating one friendly to us. Considering this a black swan event, one whose timing is impossible to predict, what should we do?

Discuss.

Comment author: h-H 12 March 2011 01:50:54PM *  -1 points [-]

Without a body the brain won't 'work'; the brain is very much linked to the rest of the body. The fiction that we only need the head to 'reanimate' a person back to normal is just that: fiction.

Wei Dai: "rebuilding/simulating the body to the level of detail needed to support cognition" Yes, but how complex is the nervous system? Which wire connects to which, or is that not important? It seems to me that you're oversimplifying.

Comment author: lukeprog 16 February 2011 06:13:51AM 27 points [-]

One marker to watch out for is a kind of selection effect.

In some fields, only 'true believers' have any motivation to spend their entire careers studying the subject in the first place, and so the 'mainstream' in that field is absolutely nutty.

Case examples include philosophy of religion, New Testament studies, Historical Jesus studies, and Quranic studies. These fields differ from, say, cryptozoology in that the biggest names in the field, and the biggest papers, are published by very smart people in leading journals and look very normal and impressive, but those entire fields are so incredibly screwed by the selection effect that it's only "radicals" who say things like, "Um, you realize that the 'gospel of Mark' is written in the genre of fiction, right?"

Comment author: h-H 16 February 2011 06:25:20PM 1 point [-]

I have to ask: how much do you know of 'Quranic studies'? As far as I know, the New Testament and Quran are structured quite differently; hence the research (which I'm not aware of) would be different as well?
