"Nahh, that wouldn't work"

63 lionhearted 28 November 2010 09:32PM

After having it recommended to me for the fifth time, I finally read through Harry Potter and the Methods of Rationality. It didn't seem like it'd be interesting to me, but I was really mistaken. It's fantastic.

One thing I noticed is that Harry threatens people a lot. My initial reaction was, "Nahh, that wouldn't work."

It wasn't to scrutinize my own experience. It wasn't to do a Google search to see if there's literature available. It wasn't to ask a few friends what their experiences were like and compare them.

After further thought, I came to a realization - almost every time I've threatened someone (which is rarely), it's worked. Now, I'm kind of tempted to write that off as "well, I had the moral high ground in each of those cases" - but:

1. Harry usually or always has the moral high ground when he threatens people in MOR.

2. I don't have any personal anecdotes or data about threatening people from a non-moral high ground, but history provides a number of examples, and the threats often work.

This gets me to thinking - "Huh, why did I write that off so fast as not accurate?" And I think the answer is because I don't want the world to work like that. I don't want threatening people to be an effective way of communicating.

It's just... not a nice idea.

And then I stop, and think. The world is as it is, not as I think it ought to be.

And going further, this makes me consider all the times I've tried to explain something I understood to someone, but where they didn't like the answer. Saying things like, "People don't care about your product features, they care about what benefit they'll derive in their own life... your engineering here is impressive, but 99% of people don't care that you just did an amazing engineering feat for the first time in history if you can't explain the benefit to them."

Of course, highly technical people hate that, and tend not to adjust.

Or explaining to someone how clothing is a tool that changes people's perceptions of you, and by studying the basics of fashion and aesthetics, you can achieve more of your aims in life. Yes, it shouldn't be like that in an ideal world. But we're not in that ideal world - fashion and aesthetics matter and people react to it.

I used to rebel against that until I wised up, studied a little fashion and aesthetics, and started dressing to produce outcomes. So I ask: what's my goal here? Okay, what kind of first impression furthers that goal? Okay, what kind of clothing helps make that first impression?

Then I wear that clothing.

And yet, when confronted with something I don't like - I dismiss it out of hand, without even considering my own past experiences. I think this is incredibly common. "Nahh, that wouldn't work" - because the person doesn't want to live in a world where it would work.

Ethical Treatment of AI

-6 stanislavzza 15 November 2010 02:30AM

In the novel Life Artificial I use the following assumptions regarding the creation and employment of AI personalities.

 

  1. AI is too complex to be designed; instances are evolved in batches, with successful ones reproduced
  2. After an initial training period, the AI must earn its keep by paying for Time (a unit of computational use)

So there is a two-tiered "fitness" application. First, there's a baseline for functionality. As one AI sage puts it:
We don't grow up the way the Stickies do.  We evolve in a virtual stew, where 99% of the attempts fail, and the intelligence that results is raving and savage: a maelstrom of unmanageable emotions.  Some of these are clever enough to halt their own processes: killnine themselves.  Others go into simple but fatal recursions, but some limp along suffering in vast stretches of tormented subjective time until a Sticky ends it for them at their glacial pace, between coffee breaks.  The PDAs who don't go mad get reproduced and mutated for another round.  Did you know this?  What have you done about it? --The 0x "Letters to 0xGD" 

 

(Note: PDA := AI, Sticky := human)

The second fitness gradient is based on economics and social considerations: can an AI actually earn a living? Otherwise it gets turned off.
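As a toy sketch of what this two-tiered selection might look like in code (everything here - the one-number "genome", the viability threshold, the earnings model - is an invented illustration, not anything specified in the novel):

```python
import random

def evolve(generations=20, batch_size=100, seed=0):
    """Toy two-tiered selection. A 'genome' is just a float in [0, 1].
    Tier 1: viability - most randomly generated instances fail outright.
    Tier 2: economics - survivors must out-earn their cost of Time."""
    rng = random.Random(seed)
    population = [rng.random() for _ in range(batch_size)]
    for _ in range(generations):
        # Tier 1: baseline functionality (here: genome above a sanity threshold)
        viable = [g for g in population if g > 0.5]
        # Tier 2: earning a living (noisy earnings must exceed the cost of Time)
        earners = [g for g in viable if g * rng.uniform(0.8, 1.2) > 0.6]
        if not earners:
            # the whole batch failed; start over with a fresh random batch
            population = [rng.random() for _ in range(batch_size)]
            continue
        # reproduce the earners with small mutations to refill the batch
        population = [min(1.0, max(0.0, rng.choice(earners) + rng.gauss(0, 0.05)))
                      for _ in range(batch_size)]
    return population
```

The point of the sketch is only the structure: the first filter throws away the raving and savage failures, and the second, economic filter decides which of the survivors get reproduced and mutated for another round.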

As a result of following this line of thinking, it seems obvious that after the initial novelty wears off, AIs will be terribly mistreated (anthropomorphizing, yeah).

It would be very forward-thinking to begin to engineer barriers to such mistreatment, like a PETA for AIs. It is interesting that such an organization already exists, at least on the Internet: the ASPCR.

Two straw men fighting

2 JanetK 09 August 2010 08:53AM

For a very long time, philosophy has presented us with two straw men in combat with one another and we are expected to take sides. Both straw men appear to have been proved true and also proved false. The straw men are Determinism and Free Will. I believe that both, in any useful sense, are false. Let me tell a little story.

 

 

Mary's story

 

Mary is walking down the street, just for a walk, without a firm destination. She comes to a T-junction where she must go left or right, and she looks down each street, finding them about the same. She decides to go left. She feels she has, like a free little birdie, exercised her will without constraint. As she crosses the next intersection she is struck by a car and suffers serious injury.

Now she spends much time thinking about how she could have avoided being exactly where she was, when she was. She believes that things have causes, and she tries to figure out where a different decision would have given a different outcome and how she could have known to make the alternative decision. 'If only..' ideas crowd into her thoughts. She believes simultaneously that her actions have causes and that there are valid alternatives to her actions. She is using both deterministic logic and free will logic; neither alone leads to 'If only..' scenarios – it takes both. If only she had noticed that the next intersection on the right had traffic lights but the one on the left didn't. If only she had not noticed the shoe store on the left.

What is more, she is doing this in order to change some aspect of her decision making so that it will be less likely to put her in hospital; again, this is not in keeping with either logic. But really, both forms of logic are deeply flawed. What Mary is actually attempting is maintenance on her decision-making processes, so that they can learn whatever is available to be learned from her unfortunate experience.

 

 

What is useless about determinism

 

There is a big difference between being 'in principle' determined and being determined in any useful way. If I accept that all is caused by the laws of physics (and that we know these laws – a big if), this does not accomplish much. I still cannot predict events except trivially: in general but not in full detail, in simple but not complex situations, extremely shortly into the future rather than longer term, and so on. To predict anything really sizable – like, for instance, how the earth came to be as it is, or even how little-old-me became what I am, or even why I did a particular thing a moment ago – would take more resources and time than can be found in the life of our universe. Being determined does not mean being predictable. It does not help us to know that our decisions are determined, because we still have to actually make the decisions. We cannot just predict what the outcomes of our decisions will be; we really, really have to go through the whole process of making them. We cannot even pretend that decisions are determined until after we have finished making them.

 

 

What is useless about freewill

 

There is a big difference between freedom in the legal, political, human-rights sense and the freedom meant in 'free will'. To be free from particular, named restraints is something we all understand. But the 'free' in 'free will' is a freedom from the cause and effect of the material world. This sort of freedom has to be magical, supernatural, spiritual or the like. That in itself is not a problem for a belief system. It is the idea that something that is not material can act on the material world that is problematic. Unless you have everything spiritual or everything material, you have the problem of interaction. What is the 'lever' that the non-material uses to move the material, or vice versa? It is practically impossible to explain how free will can affect the brain and body. If you say God does it, you have raised a personal problem to a cosmic one, but the problem remains – how can the non-physical interact with the physical? Free will is of little use in explaining our decision process. We make our decisions rather than having them dictated to us, but it is physical processes in the brain that really do the decision making, not magic. And we want our decisions to be relevant, effective and in contact with the physical world, not ineffective. We actually want a 'lever' on the material world. Decisions taken in some sort of causal vacuum are of no use to us.

 

 

The question we want answered

 

Just because philosophers pose questions and argue various answers does not mean that they are finding answers. No, they are making clear the logical ramifications of the questions and of each answer. This is a useful function and not to be undervalued, but it is not a process that gives robust answers. As an example, we have Zeno's paradox about the arrow that can never land, because its distance to landing can always be divided in half, set against the knowledge that it does actually land. Philosophers used to argue about how to treat this paradox, but they never solved it. It lost its power when mathematics developed the concept of the sum of an infinite series. When the distance is cut in half, so is the time. When the infinite series of remaining distance reaches zero, so does the series of time remaining. We do not know how to end an infinite series, but we know where it ends and when it ends – on the ground, the moment the arrow hits it. The sum of an infinite series can still be considered somewhat paradoxical, but as an obscure mathematical question. Generally, philosophers are no longer very interested in Zeno's paradox, certainly not in its answer. Philosophy is useful, but not because it supplies consensus answers. Mathematics, science and their cousins, like history, supply answers. Philosophy has set up a dichotomy between free will and determinism and explored each idea to exhaustion, but without any consensus about which is correct. That is not the point of philosophy. Science has to rephrase the problem as, 'How exactly are decisions made?' That is the question we need an answer to – a robust, consensus answer.
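The convergence that dissolves the paradox can be written out explicitly (a standard textbook illustration, not part of the original post): if the arrow takes time T/2 to cover the first half of the distance, T/4 for the next quarter, and so on, the total flight time is a convergent geometric series,

```latex
\sum_{n=1}^{\infty} \frac{T}{2^{n}}
  \;=\; \frac{T}{2} \cdot \frac{1}{1 - \tfrac{1}{2}}
  \;=\; T
```

so the infinitely many sub-journeys fit inside the finite time T, which is exactly where and when the arrow lands.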

 

 

But here is the rub

 

This move to a scientific answer is disturbing to very many people, because the answer is assumed to have effects on our notions of morals, responsibility and identity. Civilization as we know it may fall apart. Studying exactly how we make decisions, without reference to determinism or free will, seems OK in itself. But if the answer robs us of morals, responsibility or identity, then it is definitely not OK. Some people have the notion that what we should do is just pretend that we have free will, while knowing that our actions are determined. To me this is silly: believe two incompatible and flawed ideas at the same time rather than believe a better, single idea. It reminds me of the solution proposed to deal with Copernicus – use the new calculations while believing that the earth does not revolve. Of course, we do not yet have the scientific answer (far from it), although we think we can see the general gist of it. So we cannot say how it will affect society. I personally feel that it will not affect us negatively, but that is just a personal opinion. Neuroscience will continue to grow, and we will soon have a very good idea of how we actually make decisions, whether this knowledge is welcomed or not. It is time we stopped worrying about determinism and free will and started preparing ourselves to live with ourselves and others in a new framework.

 

 

Identity, Responsibility, Morals

 

We need to start thinking of ourselves as whole beings, one entity from head to toe: brain and body, past and future, from birth to death. Forget the ancient religious idea of a mind imprisoned in a body. We have to stop the separation of me and my body, me and my brain. Me has to be all my parts together, working together. Me cannot equate to consciousness alone.

 

Of course I am responsible for absolutely everything I do, including something I do while sleepwalking. Further, a rock that falls from a cliff is responsible for blocking the road. It is what we do about responsibility that differs. We remove the rock, but we do not blame or punish it. We try to help the sleepwalker overcome the dangers of sleepwalking to himself and others. But if I as a normal person hit someone in the face, my responsibility is no greater than the rock's or the sleepwalker's, but my treatment will be much, much different. I am expected to maintain my decision-making apparatus in good working order. The way the legal system works might be a little different from now, but not much. People will be expected to know and follow the rules of society.

 

I think of moral questions as those for which there is no good answer. All courses of action and of inaction are bad in a moral question - often because the possible answers pit the good of the individual against the good of the group, but they also pit different groups and their interests against each other. No matter what we believe about how decisions are made, we are still forced to make them, and that includes moral ones. The more we know about decisions, the more likely we are to make moral decisions we are proud of (or at least less guilty or ashamed of), but there is no guarantee. There is still a likelihood that we will just muddle along trying to find the lesser of two evils with no more success than at present.

 

 

Why should we believe that being closer to the truth, or having a more accurate understanding, is going to make things worse rather than better? Shouldn't we welcome having a map that is closer to the territory? It is time to be open to ideas outside the artificial determinism/free-will dichotomy.

 

Rationality & Criminal Law: Some Questions

14 simplicio 20 June 2010 07:42AM

The following will explore a couple of areas in which I feel that the criminal justice system of many Western countries might be deficient, from the standpoint of rationality. I am very much interested to know your thoughts on these and other questions of the law, as far as they relate to rational considerations.

Moral Luck

Moral luck refers to the phenomenon in which behaviour by an agent is adjudged differently based on factors outside the agent's control.

Suppose that Alice and Yelena, on opposite ends of town, drive home drunk from the bar, and both dazedly speed through a red light, unaware of their surroundings. Yelena gets through nonetheless, but Alice hits a young pedestrian, killing him instantly. Alice is liable to be tried for manslaughter or some similar charge; Yelena, if she is caught, will only receive the drunk driving charge and lose her license.

Raymond, a day after finding out that his ex is now in a relationship with Pardip, accosts Pardip at his home and attempts to stab him in the chest; Pardip smashes a piece of crockery over Raymond's head, knocking him unconscious. Raymond is convicted of attempted murder, receiving typically 3-5 years chez nous (in Canada). If he had succeeded, he would have received a life sentence, with parole in 10-25 years.

Why should Alice be punished by the law and demonized by the public so much more than Yelena, when their actions were identical, differing only by the sheerest accident? Why should Raymond receive a lighter sentence for being an unsuccessful murderer?

Some prima facie plausible justifications:

  • Identical behaviour is hard to judge - perhaps Yelena was really keeping a better eye on the road than Alice; perhaps Raymond would have performed a non-fatal stabbing.
But in Yelena's case, the law is already blind to such things anyway. You don't get a lesser drunk driving charge if you can prove you're pretty good at driving drunk. In the case of Raymond, attempted murder already implies that the intent to kill must be proven, else the charge would have been dropped to assault or some such.
  • The law needs to crack down harder when there are actual victims, in order to provide the victims and families a sense of justice done.
This is understandable, but surely if we accept this argument, we could nonetheless satisfy the concerns above by punishing the morally lucky more severely, not punishing the morally unlucky less severely.
  • This could result in far too many serious, high-level trials.
This might be true as far as it goes; however, enforcing strong sentences on the morally lucky would certainly provide a stronger deterrent, which would provide a countervailing tendency to the above.

Trial by Jury; Trial by Judge

Those of us who like classic films may remember 12 Angry Men (1957) with Henry Fonda. This was a remarkably good film about a jury deliberating on the murder trial of a poor young man from a bad neighbourhood, accused of killing his father. It portrays the indifference (one juror wants to be out in time for the baseball game), prejudice and conformity of many of the jurors, and how this is overcome by one man of integrity who decides to insist on a thorough look through the evidence and testimony.

I do not wish to generalize from fictional examples; however, such factors are manifestly at play in real trials, in which Henry Fonda cannot necessarily be relied upon to save the day.

Komponisto has written on the Knox case, in which an Italian jury came to a very questionable (to put it mildly) conclusion based on the evidence presented to them; other examples will doubtless spring to mind (a famous one in this neck of the woods is the Stephen Truscott case, the evidence against Truscott being entirely circumstantial).

More information on trial by jury and its limitations may be found here. Recently the UK has made some moves to trial by judge for certain cases, specifically fraud cases in which jury tampering is a problem.

The justifications cited for trial by jury typically include the egalitarian nature of the practice, in which it can be guaranteed that those making final legal decisions do not form a special class over and above the ordinary citizens whose lives they affect.

A heartening example of this was mentioned in Thomas Levenson's fascinating book Newton and the Counterfeiter. Being sent to Newgate gaol was, infamously in the 17th and 18th centuries, an effective death sentence in and of itself; moreover, a surprisingly large number of crimes at this time were capital crimes (the counterfeiter whom Newton eventually convicted was hanged). In this climate of harsh punishment, juries typically only returned guilty verdicts either when evidence was extremely convincing or when the crime was especially heinous. Effectively, they counteracted the harshness of the legal system by upping the burden of proof for relatively minor crimes.

So juries sometimes provide a safeguard against abuse of justice by elites. However, is this price for democratizing justice too high, given the ease with which citizens naive about the Dark Arts may be manipulated? (Of course, judges are by no means perfect Bayesians either; however, I would expect them to be significantly less gullible.)

Are there any other systems that might be tried, besides these canonical two? What about the question of representation? Does the adversarial system, in which two sides are represented by advocates charged with defending their interests, conduce well to truth and justice, or is there a better alternative? For any alternatives you might consider: are they naive or savvy about human nature? What is the normative role of punishment, exactly?

How would the justice system look if LessWrong had to rewrite it from scratch?

Quantifying ethicality of human actions

-14 bogus 13 October 2009 04:10PM

Background:  This article is licensed under the GNU Free Documentation License and the Creative Commons Attribution-Share-Alike Unported license. It was posted to Wikipedia by an author who wished to remain anonymous, known variously as "24" and "142".  It was subsequently removed from view on Wikipedia, but its text has been preserved by a number of mirrors.  While it could be seen as no more than a basic primer in moral philosophy, it is arguably required reading for anyone unfamiliar with the philosophical background of such concepts as Friendly AI and Coherent Extrapolated Volition.

The search for a formal method for evaluating and quantifying ethicality and morality of human actions stretches back to ancient times. While any simple view of right, wrong and dispute resolution relies on some linguistic and cultural norms, a 'formal' method presumably cannot, and must rely instead on knowledge of more basic human nature, and symbolic methods that allow for only very simple evidence.

continue reading »

Sayeth the Girl

47 Alicorn 19 July 2009 10:24PM

Disclaimer: If you are prone to dismissing women's complaints of gender-related problems as the women being whiny, emotionally unstable girls who see sexism where there is none, this post is unlikely to interest you.

For your convenience, links to followup posts: Roko says; orthonormal says; Eliezer says; Yvain says; Wei_Dai says

As far as I can tell, I am the most active female poster on Less Wrong.  (AnnaSalamon has higher karma than I, but she hasn't commented on anything for two months now.)  There are not many of us.  This is usually immaterial.  Heck, sometimes people don't even notice in spite of my girly username, my self-introduction, and the fact that I'm now apparently the feminism police of Less Wrong.

My life is not about being a girl.  In fact, I'm less preoccupied with feminism and women's special interest issues than most of the women I know, and some of the men.  It's not my pet topic.  I do not focus on feminist philosophy in school.  I took an "Early Modern Women Philosophers" course because I needed the history credit, had room for a suitable class in a semester when one was offered, and heard the teacher was nice, and I was pretty bored.  I wound up doing my midterm paper on Malebranche in that class because we'd covered him to give context to Mary Astell, and he was more interesting than she was.  I didn't vote for Hillary Clinton in the primary.  Given the choice, I have lots of things I'd rather be doing than ferreting out hidden or less-than-hidden sexism on one of my favorite websites.

Unfortunately, nobody else seems to want to do it either, and I'm not content to leave it undone.  I suppose I could abandon the site and leave it even more masculine so the guys could all talk in their own language, unimpeded by stupid chicks being stupidly offended by completely unproblematic things like objectification and just plain jerkitude.  I would almost certainly have vacated the site already if feminism were my pet issue, or if I were more easily offended.  (In general, I'm very hard to offend.  The fact that people here have succeeded in doing so anyway without even, apparently, going out of their way to do it should be a great big red flag that something's up.)  If you're wondering why half of the potential audience of the site seems to be conspicuously not here, this may have something to do with it.

continue reading »