Another example of a real-world Moral quandary that the real world would love H+ discussion lists to take on is the question of how much medical care to invest in end-of-life patients. Medical advances will continue to make more expensive treatment options available. In Winnipeg there was a recent case where the family of a patient in a terminal coma insisted he not be taken off life support. In Canada over the last decade or so, that decision was based on a doctor's prescription; now it also encompasses the family and the patient's previous wishes. Three doctors quit over the case. My first instinct was to suggest training some doctors exclusively as coma experts, but it seems medical boards might already have accomplished this. I admire a fighting spirit, and one isolated case doesn't tax the healthcare system much. But if this becomes a regular occurrence... this is another of many real-world examples that require intelligent thought.

Subhan's position has already been proven wrong many, many times. There are cognitive biases, but they aren't nearly as strong or all-encompassing as is being suggested here. For example, I'd guess every reader on this list is aware that other people are capable of suffering and happiness corresponding with their own experiences. This isn't mirror neurons or some other "bias"; it is simple grade-school deduction that refutes Subhan's position. You don't have to be highly Moral to admit it's out there in some people. For instance, most children get what Subhan doesn't.

(ZMDavis wrote, quoting my earlier comment:) "But AGI is [...] not being a judge or whoever writes laws."

(ZMDavis:) "If Eliezer turns out to be right about the power of recursive self-improvement, then I wouldn't be so sure."

Argh. I didn't mean that as a critique of EY's prowess as an AGI theorist or programmer. I doubt Jesus would've wanted people to deify him, just to be nice to each other. I doubt EY meant for his learning of philosophy to be interpreted as some sort of Moral code; he was just arrogant enough not to state that he was sometimes using his list as a tool to develop his own philosophy. I'm assuming any AGI project would be a team effort, and I doubt he'd dispute that his best comparative advantage is not ethics. Maybe he plans on writing the part of the code that tells an AGI how to stop using resources for a given job.

Yes, EY's past positions about Morality are closer to Subhan's than Obert's. But AGI is software programming and hardware engineering, not being a judge or whoever writes laws. I wouldn't suggest deifying EY if your goal is to learn ethics.

"Why the obsession with making other people happy?"

Not obsessed. Just pointing out the definition of morality. High morality is making yourself and other people happy.

(Phillip Huggan:) "Or are you claiming such an act is always out of self-interest?" (D.Bider:) "Such acts are. Stuff just is. Real reasons are often unknowable; and if known, would be trivial, technical, mundane."

That's deep.

"Stuff is. Fitting stuff that happens into a moral framework? A hopeless endeavor for misguided individuals seeking to fulfil the romantic notion that things should make sense."

To me, there is nothing unintelligible about the notion that my acts can have consequences. Generally I'm not preachy about it, as democracy and ethical investing are appropriate forums to channel my resources towards in Canada. But the flawed line of reasoning that knowledge can never correlate with reality only finds salvation in solipsism, not a very likely scenario IMO. These kinds of arguments are used by tyrants, for the record (it is god's will, it is for the national good, etc.).

"If we're going to intervene because a child in Africa is dying of malaria or hunger - both thoroughly natural causes of death - then should we not also intervene when a lion kills an antelope, or a tribe of chimpanzees is slaughtered by their neighbors?"

Natural doesn't make it good. I'd value the child more highly because his physiology is better known (language and written records help) in terms of how to keep him happy, and more importantly because he could grow up to invent a cure for malaria. Yes, eventually we should intervene by providing the chimps with mechanical dummies to murder, if murder makes them happy. We are probably centuries away from that. It's nice that you draw the line around at least a group of others, but you seem to be using your own inability to understand Morality as evidence that others who have passed you on the Moral ladder should come back down. You shouldn't be so self-conscious about this and certainly shouldn't be spreading the meme. I don't understand chemistry well, or computer programming at all, but I don't go loudly proclaiming fake programming syntax or claiming that atoms don't exist, as EY is inciting here and as you are following. I'm not calling you evil. I'm saying you probably have the capacity to do more good, assuming you are middle class and blowing money on superfluous status-symbol consumer goods. Lobbying for a luxury tax is how I would voice my opinion, a pretty standard avenue I learned from a Maclean's magazine back issue. Here, my purpose is to deprogram as many people as possible stuck in a community devoted to increasing longevity but using means (such as lobbying for the regression of law) whose meme-spread promotes the opposite.

(Subhan wrote:) "And if you claim that there is any emotion, any instinctive preference, any complex brain circuitry in humanity which was created by some external morality thingy and not natural selection, then you are infringing upon science and you will surely be torn to shreds - science has never needed to postulate anything but evolution to explain any feature of human psychology -"

(Subhan:) "Suppose there's an alien species somewhere in the vastness of the multiverse, who evolved from carnivores. In fact, through most of their evolutionary history, they were cannibals. They've evolved different emotions from us, and they have no concept that murder is wrong -"

The external morality thingy is other people's brain states. Prove the science comment, Subhan. It is obviously a false statement (once again, the argument reduces to solipsism, which can be a topic but needs to be clearly stated as such). Evolution doesn't explain how I learned long division in grade 1. Our human brains are, evolutionarily, horrible calculators, not usually able to chunk more than about eight memorized digits or do division without learning math. Learning and self-reflection dominate reptilian brains in healthy individuals.

On the second quote: from a utilitarian perspective, murder would generally be wrong, even if fun. There is the odd circumstance where it might be right, but it is so difficult to game the future that it is probably better just to outlaw it altogether than to raise the odds of anarchy. For instance, in Canada a head of state and abortionists have been targeted (though our head of state was ready to cave in the potential assassin's skull before the police finally apprehended him). In many developing countries it is much worse. Presumably the carnivore civilization would need a lot of luck just to industrialize, and would be more prosperous by fighting their murder urges.

Don't call them carnivores, call them Mugabe's Zimbabwe. We have an applied example of a militarily weak government in the process of becoming a tyranny, raping women and initiating anarchy. There are lessons that could be learned here. Britain has just proposed a 2,000-strong rapid-response military force; under what circumstances should it be used? (I like: regression from democracy, plus a plausible model of something better, plus lower quality of living, plus military weakness, plus acceptance of the invasion by a military alliance; if the African Union says no regime change, does that constitute a military alliance?) Does military weakness as a precursor condition do more harm than good by prompting nations to up-arm?

In Canada, there is a problem of how to deal with youths: at what age should they be treated as mentally competent adults? Brain science seems to show humans don't fully mature until about 25, so to me that is an argument to treat the span from the onset of puberty to 25 or so as an in-between category when judging. Is alcohol and/or alcoholism analogous to mental health problems? I'd guess no, but maybe childhood trauma is a mitigating factor to consider. How severe does mental illness have to be before it becomes a consideration? In Canada, an Afghanistan veteran used post-traumatic stress disorder as a mitigating factor in a violent crime. Is not following treatment, or the absence of treatment, something to consider? Can a mentally ill individual sue a government, or claim innocence, because it initiated $10 billion in tax cuts rather than a mental health programme? I'd guess only if it became clear how important such a programme was, say, if it worked very successfully in another nation and the government had the fiscal means to implement it. Should driving drunk itself be a crime? If so, why not driving with a radio, an infant, a cellphone... As intersection video camera surveillance catches more traffic offenders, should the fine be dropped proportionately to the increased level of surveillance?

See, courts know there are other individuals; the problems of mental illness, and of children not yet understanding that there are other people, don't prevent healthy adults from knowing other people are real. This reminds me of discussions about geopolitics on the WTA list, with seemingly progressive individuals unable to condemn torture and the indefinite detention of innocent people, simply because the forum was overrepresented with Americans (who still don't score that badly, just not as well as Europe and Canada when it comes to Human Rights).

(robin brandt wrote:) "But we already know why murder seems wrong to us. It's completely explained by a combination of game theory, evolutionary psychology, and memetics."

Sure, but the real question is why murder is wrong, not why it seems wrong. Murder is wrong because it destroys human brains. Generally, Transhumanists have a big problem (Minsky or Moravec or Vinge religion, despite evidence to the contrary) figuring out that human brains are conscious and calculators are not. I have a hard time thinking of any situation where murder could be justified by, among other things, proving the world is likely to be better off for it. I guess killing Hitler during the latter years of the Holocaust might have stopped it, if it was happening because of his active intervention. But kill him off too early and Stalin and Hitler don't beat the shit out of each other.

This conversation is stuck at some 6th-grade level. We could be talking about the death penalty, or income correlating with sentencing, or terrorism and Human Rights. Or the Human Rights of employees working with dangerous technologies (will future gene sequencers require a Top Secret level of security clearance?). Right now the baseline is to treat all very potentially dangerous future technologies with a high level of security clearance, I'm guessing. Does H+ have anything of value to add to existing security protocols? Have they even been analyzed? Nope.

If this is all just to brainstorm about how to teach an AGI ethics, no one here is taking it from that angle. I had a conversation with a Subhan-like friend as a teenager. If I were blogging about it, I'd do it under a forum titled Ethics for Dummies.

Sorry TGGP, I had to do it. Now replace the word "charity" with "taxes".

(Constant quoted from someone:) "What we know about the causal origins of our moral intuitions doesn't obviously give us reason to believe they are correlated with moral truth."

Yes, but to a healthy, intelligent individual not under duress, these causal origins (I'm assuming the reptilian or even mammalian brain centres are being referenced here) are much less a factor than the abstract knowledge garnered through education. I may feel on some basic level like killing someone who gives me the evil eye, but these impulses are easily subsumed by social conditioning and my own ideals of myself. Claiming there is a very small chance I'll commit evil is far different from claiming I'm a slave to my reptilian desires. Some people are slaves to those impulses; courts generally adjust for mental illness.

(denis bider wrote:) "Obert seems to be trying to find some external justification for his wants, as if it's not sufficient that they are his wants; or as if his wants depend on there being an external justification, and his mental world would collapse if he were to acknowledge that there isn't an external justification."

To me, this reads as saying that if solipsism were true, Obert would have to become a hedonist. Correct. Or are you claiming Obert needs some sort of status? I didn't read that at all. Patriotism doesn't always seek utilitarianism, as one's nation is only a small portion of the world's population. Morality does. Denis, are you claiming there is no way to commit acts that make others happy? Or are you claiming such an act is always out of self-interest? The former position is absurd; the latter runs into the problem that people who jump on grenades die.

I'm guessing there is a cognitive bias found in some or many of this blog's readers and thread starters: because they know they are in a position of power vis-a-vis the average citizen, they are looking for any excuse not to accept moral responsibility. This is wrong. A middle-class Western individual, all else equal, is Morally better donating conspicuous-consumption income to charity than exercising the Libertarian market behaviour of buying luxury goods. I'm not condemning the purchasing behaviour; I'm condemning the Orwellian justification of trying to take (ego) pleasure in not owning up to your own consumption. If you are smart enough to construct such doublethink, you can be smart enough to live with your conscience. Obert does not take the Morally correct position just to win the argument with the idiot Subhan. There are far deeper issues that could be debated on this blog, further up the Moral ladder. For instance, there are active legal precedents being formed in real-world law right now that could be influenced if this content avoided retracing what is already known.

"I think the meaning of "it is (morally) right" may be easiest to explain through game theory."

Game theory may be useful here, but it is only a low-level, efficient means to an end. It might explain social hierarchies in our past or in other species, it might explain the evolution of law, and it might be the highest rung on the Moral ladder that some stupid or mentally impaired individuals can achieve. For instance, a higher Morality system than waiting for individuals to turn selfish before punishing them is to ensure parents aren't abusive and that childhood cognitive-development opportunities exist. A basic pre-puberty (or pre-25) social safety net is an improvement on game theory for reaching that tiled max-morality place.

This no-morality line of reasoning might have some relevance if that happy place is a whole volume of different states. There are likely trade-offs between novel experiences and known preferences, quite apart from harvesting unknown or dangerous energy resources. I know someone who likes cop shows and takes sleeping pills; he can sometimes watch all his favourite Law & Order reruns as if they were original. Maybe I'm a little jealous here, in that I know every episode of Family Guy off by heart.

Just because you don't know whether there are Moral consequences doesn't mean there aren't any. The key question is whether you have the opportunity to easily learn about your Moral sphere of influence. An interesting complication mentioned is how to know whether what you think is a good act isn't really bad. In my forest example, cutting a forest into islands makes those islands more susceptible to invasive species, and suppressing a natural insect species might make forests less sustainable over the long term. But that is a question of scientific method and epistemology, not ontology. Asking whether setting fire to an orphanage is Morally equivalent to making a difficult JFK-esque judgement is silly. Assuming they are equivalent assumes that because you don't know the answer to a given question, everyone else doesn't know either. I'm sure they cover this at some point in the Oxford undergraduate curriculum.
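For concreteness, here is a toy sketch (my own illustration, not anything from the post or the quoted comment) of the game-theoretic story being appealed to: in a one-shot Prisoner's Dilemma defection dominates, but in repeated play mutual reciprocation far outscores mutual defection, which is one standard account of how anti-defection norms can arise among selfish agents. The payoffs and strategies below are the usual textbook placeholders.

```python
# Toy illustration (hypothetical payoffs): why reciprocity beats defection in repeated play.
# PAYOFF[(my_move, their_move)] = (my_payoff, their_payoff); C = cooperate, D = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # mutual cooperation: (600, 600)
print(play(tit_for_tat, always_defect))    # defector edges ahead (199 vs 204), both do poorly
print(play(always_defect, always_defect))  # mutual defection: (200, 200)
```

This only shows how the norm can emerge and persist among reciprocators; as argued above, it says nothing about whether game theory is the top rung of the Moral ladder.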

The difference between duty and desire is that some desires might harm other people, while duty (you can weaselly change the definition to mean Nazi duty, but then you are asking an entirely different question) always helps other people. "Terminal values" as defined are pretty weak. There are e=mc^2 co-ordinates that have maximized happiness values. Og may only be able to eat tubers, but most literate people are much higher on the ladder and thus have a greater duty. In the future, presumably, the standards will be even higher. At some point, assuming we don't screw it up, the universe will be tiled with happy people, depending on the energy resources of the universe and how accurately they can be safely charted. Subhan is at a lower level on the ladder of Morality. All else equal (it never is, as uploading is a delusion), Obert has a greater duty.

Wow, what a long post. Subhan doesn't have a clue. Tasting a cheeseburger like a salad isn't Morality. Morality refers to actions in the present that can initiate a future with preferred brain states (the weaselly response would be to ask what these are, as if torture and pleasure weren't known, and to initiate a conversation long enough to forget the initial question). So if you hypnotize yourself to make salad taste like cheeseburgers for health reasons, you are exercising Morality.

I've got a forestry paper open in the other window. It is very dry, but I'm hoping I can calculate a rate of spread for an invasive species to plan a logging timeline to try to stop it (a rough sketch of that kind of calculation appears below). There is also a football game on. Not a great game, but don't pull a Subhan and try to tell me I'm reading the forestry paper because I like it more than the football game. I'm reading it because I realize there are brain states of tourists and loggers and AGW-affected people who would rather see the forests intact than temporarily dead. That's really all it boils down to.

After gaining enough expertise over your own psyche sometime in childhood (i.e. most 10-year-olds would not waste time with this conversation; a developmental psychologist would know just when), you (a mentally healthy individual) realize there are other people who experience similar brain states. Yes, mirror neurons and the like are probably all evolutionary in origin; that doesn't change anything. There really are local universe configurations that are "happier" on net than other configurations. There is a ladder of morality, certainly not set in stone (torture me and all of a sudden I probably start valuing myself a lot more).

I'd guess the whole point of this is to teach an AGI where to draw the line in upgrading human brain architectures (either that or I really do enjoy reading forestry over watching a game, and really like salad over pizza and Chinese food). I don't see any reason why human development couldn't continue as it does now, voluntarily, the way human psyches are now developed (i.e., trying pizza and dirt, and noting the preference for pizza in the future). Everyone arguing against morality-as-given is saying salad tastes better than pizza, as if there weren't some other reason for eating salad. The other reasons (health, dating a vegetarian, personal finances) maybe deserve a conversation, but not one muddled with this. Honestly, if you follow Subhan's flawed reasoning methodology (as it seems Transhumanists and Libertarians are more likely to do than average, for whatever reason), you get to the conclusion consciousness doesn't exist. I think the AGI portion of this question depends a lot more on the energy resources of the universe than upon how to train an AGI to be a psychologist; unless there is some hurry to hand the teaching/counselling reins to an AGI, what's the rush?
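A minimal sketch of the kind of spread-rate arithmetic mentioned in the forestry aside above, assuming (purely hypothetically) a roughly constant radial spread rate for the infestation; every number here is a made-up placeholder, not data from any paper.

```python
# Hypothetical back-of-envelope: when does an invasive infestation front reach a stand,
# and how late can a containment/salvage cut start? All values are placeholders.
spread_rate_km_per_year = 12.0   # assumed radial spread rate of the infestation
current_radius_km = 30.0         # assumed current extent of the infestation
distance_to_stand_km = 90.0      # assumed distance from infestation centre to the stand
cut_lead_time_years = 3.0        # assumed years needed to plan and complete the cut

# Years until the infestation front reaches the stand of interest.
years_to_arrival = (distance_to_stand_km - current_radius_km) / spread_rate_km_per_year

# Latest start (years from now) for the logging work to finish before the front arrives.
latest_start = years_to_arrival - cut_lead_time_years

print(f"Front reaches the stand in ~{years_to_arrival:.1f} years")
print(f"Containment cut must start within ~{latest_start:.1f} years to finish in time")
```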
