Optimizing Fuzzies And Utilons: The Altruism Chip Jar

95 orthonormal 01 January 2011 06:53PM

Related: Purchase Fuzzies and Utilons Separately

We genuinely want to do good in the world; but also, we want to feel as if we're doing good, via heuristics that have been hammered into our brains over the course of our social evolution. The interaction between these impulses (in areas like scope insensitivity, refusal to quantify sacred values, etc.) can lead to massive diminution of charitable impact, and can also suck the fun out of the whole process. Even if it's much better to write a big check at the end of the year to the charity with the greatest expected impact than it is to take off work every Thursday afternoon and volunteer at the pet pound, it sure doesn't feel as rewarding. And of course, we're very good at finding excuses to stop doing costly things that don't feel rewarding, or at least to put them off.

But if there's one thing I've learned here, it's that lamenting our irrationality should wait until we've properly searched for a good hack. And I think I've found one.

Not just that, but I've tested it out for you already.

This summer, I had just gone through the usual experience of being asked for money for a nice but inefficient cause, turning them down, and feeling a bit bad about it. I made a mental note to donate some money to a more efficient cause, but worried that I'd forget about it; it's too much work to make a bunch of small donations over the year (plus, if done by credit card, the fees take a bigger cut that way), and there's no way I'd remember every such occasion at the end of the year.

Unless, that is, I found some way to keep track of it.

So I made up several jars with the names of charities I found efficient (SIAI and VillageReach) and kept a bunch of poker chips near them. Starting then, whenever I felt like doing a good deed (and especially if I'd passed up an opportunity to do a less efficient one), I'd take a chip of an appropriate value and toss it in the jar of my choice. I have to say, this gave me much more in the way of warm fuzzies than if I'd just waited and made up a number at the end of the year.
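
For anyone who would rather keep the jar in software, the same bookkeeping fits in a few lines of Python. This is just a sketch of the scheme described above; the function name and the chip amounts are illustrative, not part of the original setup:

```python
from collections import defaultdict

# One "jar" per charity; each chip is a pledged dollar amount.
jars = defaultdict(list)

def toss_chip(charity, amount):
    """Record a pledge the moment the impulse to do a good deed strikes."""
    jars[charity].append(amount)

toss_chip("SIAI", 20)
toss_chip("VillageReach", 5)

# At year's end, make one lump donation per jar (and pay one card fee each).
for charity, chips in jars.items():
    print(charity, "$" + str(sum(chips)))
```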

And now I've added up and made my contributions: $1,370 to SIAI and $566 to VillageReach.

continue reading »

Defecting by Accident - A Flaw Common to Analytical People

86 lionhearted 01 December 2010 08:25AM

Related to: Rationalists Should Win, Why Our Kind Can't Cooperate, Can Humanism Match Religion's Output?, Humans Are Not Automatically Strategic, Paul Graham's "Why Nerds Are Unpopular"

The "Prisoner's Dilemma" refers to a game theory problem developed in the 1950s. Two prisoners are taken and interrogated separately. If either of them confesses and betrays the other person - "defecting" - they'll receive a reduced sentence, and their partner will get a greater sentence. However, if both defect, then they'll both receive higher sentences than if neither of them had confessed.

This presents each prisoner with a strange problem. The best solution individually is to defect. But if both take the individually best solution, then they'll both be worse off overall. This has wide-ranging implications for international relations, negotiation, politics, and many other fields.
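
To make the structure concrete, here is a minimal payoff table in Python. The sentence lengths are illustrative numbers of my own, chosen only to satisfy the standard ordering of outcomes:

```python
# Payoffs are (your years, partner's years) in prison; lower is better.
# "C" = stay silent (cooperate with your partner), "D" = confess (defect).
PAYOFFS = {
    ("C", "C"): (1, 1),
    ("C", "D"): (10, 0),
    ("D", "C"): (0, 10),
    ("D", "D"): (5, 5),
}

def best_response(partner_move):
    """The move that minimizes your own sentence, holding the partner fixed."""
    return min("CD", key=lambda mine: PAYOFFS[(mine, partner_move)][0])

# Defecting is individually best no matter what your partner does...
print(best_response("C"), best_response("D"))  # D D
# ...yet mutual defection (5, 5) leaves both worse off than mutual silence (1, 1).
```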

Members of LessWrong are incredibly smart people who tend to like game theory, and who debate and explore and try to understand problems like this. But does knowing game theory actually make you more effective in real life?

I think the answer is yes, with a caveat - you need the basic social skills to implement your game theory solution. The worst-case scenario in an interrogation would be to "defect by accident" - to blurt out something stupid because you didn't think it through before speaking. This might result in you and your partner both receiving higher sentences... a very bad situation. Game theory doesn't take over until the basic skill conditions are met - until you can actually execute whatever plan you come up with.

The Purpose of This Post: I think many smart people "defect" by accident. I don't mean in serious situations like a police investigation. I mean in casual, everyday situations, where they tweak and upset people around them by accident, due to a lack of reflection on desired outcomes.

Rationalists should win. Defecting by accident frequently results in losing. Let's examine this phenomenon, and ideally work to improve it.

Contents Of This Post

  • I'll define "defecting by accident."
  • I'll explain a common outcome of defecting by accident.
  • I'll give some recent, mild examples of accidental defections.
  • I'll give examples of how to turn accidental defections into cooperation.
  • I'll give some examples of how this can make you more successful at your goals.
  • I'll list some books I recommend if you decide to learn more on the topic.
continue reading »

Two straw men fighting

2 JanetK 09 August 2010 08:53AM

For a very long time, philosophy has presented us with two straw men in combat with one another and we are expected to take sides. Both straw men appear to have been proved true and also proved false. The straw men are Determinism and Free Will. I believe that both, in any useful sense, are false. Let me tell a little story.

Mary's story

Mary is walking down the street, just for a walk, without a firm destination. She comes to a T-junction where she must go left or right; she looks down each street and finds them about the same. She decides to go left. She feels she has, like a free little birdie, exercised her will without constraint. As she crosses the next intersection she is struck by a car and suffers serious injury. Now she spends much time thinking about how she could have avoided being exactly where she was, when she was. She believes that things have causes, and she tries to figure out where a different decision would have given a different outcome and how she could have known to make the alternative decision. 'If only...' ideas crowd into her thoughts. She believes simultaneously that her actions have causes and that there are valid alternatives to her actions. She is using both deterministic logic and free will logic; neither alone leads to 'If only...' scenarios – it takes both. If only she had noticed that the next intersection on the right had traffic lights but the one on the left didn't. If only she had not noticed the shoe store on the left. What is more, she is doing this in order to change some aspect of her decision making so that it will be less likely to put her in hospital; again, this is not in keeping with either logic alone. But really both forms of logic are deeply flawed. What Mary is actually attempting is maintenance on her decision-making processes, so that they can learn whatever is available to be learned from her unfortunate experience.

What is useless about determinism

There is a big difference between being 'in principle' determined and being determined in any useful way. If I accept that all is caused by the laws of physics (and that we know these laws – a big if), this does not accomplish much. I still cannot predict events except trivially: in general but not in full detail, in simple but not complex situations, a very short way into the future rather than longer term, and so on. To predict anything really sizable – like, for instance, how the earth came to be as it is, or even how little-old-me became what I am, or even why I did a particular thing a moment ago – would take more resources and time than can be found in the life of our universe. Being determined does not mean being predictable. It does not help us to know that our decisions are determined, because we still have to actually make the decisions. We cannot just predict what the outcomes of our decisions will be; we really, really have to go through the whole process of making them. We cannot even pretend that decisions are determined until after we have finished making them.

What is useless about freewill

There is a big difference between freedom in the legal, political, human-rights sense and the freedom in 'free will'. To be free from particular, named restraints is something we all understand. But the free in 'free will' is a freedom from the cause and effect of the material world. This sort of freedom has to be magical, supernatural, spiritual or the like. That in itself is not a problem for a belief system. It is the idea that something that is not material can act on the material world that is problematic. Unless you have everything spiritual or everything material, you have the problem of interaction. What is the 'lever' that the non-material uses to move the material, or vice versa? It is practically impossible to explain how free will can affect the brain and body. If you say God does it, you have raised a personal problem to a cosmic one, but the problem remains – how can the non-physical interact with the physical? Free will is of little use in explaining our decision process. We make our decisions rather than having them dictated to us, but it is physical processes in the brain that really do the decision making, not magic. And we want our decisions to be relevant and effective, in contact with the physical world. We actually want a 'lever' on the material world. Decisions taken in some sort of causal vacuum are of no use to us.

The question we want answered

Just because philosophers pose questions and argue various answers does not mean that they are finding answers. No, they make clear the logical ramifications of the questions and of each answer. This is a useful function and not to be undervalued, but it is not a process that gives robust answers. As an example, we have Zeno's paradox of the arrow that can never land, because its remaining distance can always be divided in half – set against the knowledge that it does actually land. Philosophers used to argue about how to treat this paradox, but they never solved it. It lost its power when mathematics developed the concept of the sum of an infinite series. When the distance is cut in half, so is the time. When the infinite series of remaining distances reaches zero, so does the series of remaining times. We do not know how to end an infinite series, but we know where it ends and when it ends – on the ground, the moment the arrow hits it. The sum of an infinite series can still be considered somewhat paradoxical, but as an obscure mathematical question. Generally, philosophers are no longer very interested in Zeno's paradox, certainly not in its answer. Philosophy is useful, but not because it supplies consensus answers. Mathematics, science and their cousins, like history, supply answers. Philosophy has set up a dichotomy between free will and determinism and explored each idea to exhaustion, but without reaching any consensus about which is correct. That is not the point of philosophy. Science has to rephrase the problem as: 'how exactly are decisions made?' That is the question we need an answer to – a robust, consensus answer.
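
The arrow example is worth making concrete. Each halving of the remaining distance also halves the remaining time, so the infinitely many stages sum to a finite flight time $T$:

$$\frac{T}{2} + \frac{T}{4} + \frac{T}{8} + \cdots = \sum_{n=1}^{\infty} \frac{T}{2^n} = T$$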

But here is the rub

This move to a scientific answer is disturbing to very many people, because the answer is assumed to have effects on our notions of morals, responsibility and identity. Civilization as we know it may fall apart. Studying exactly how we make decisions, without reference to determinism or free will, seems OK in itself. But if the answer robs us of morals, responsibility or identity, then it is definitely not OK. Some people have the notion that what we should do is just pretend that we have free will, while knowing that our actions are determined. To me this is silly: believe two incompatible and flawed ideas at the same time rather than believe a better, single idea. It reminds me of the solution proposed to deal with Copernicus – use the new calculations while believing that the earth does not revolve. Of course, we do not yet have the scientific answer (far from it), although we think we can see the general gist of it. So we cannot say how it will affect society. I personally feel that it will not affect us negatively, but that is just a personal opinion. Neuroscience will continue to grow, and we will soon have a very good idea of how we actually make decisions, whether this knowledge is welcomed or not. It is time we stopped worrying about determinism and free will and started preparing ourselves to live with ourselves and others in a new framework.

Identity, Responsibility, Morals

We need to start thinking of ourselves as whole beings, one entity from head to toe: brain and body, past and future, from birth to death. Forget the ancient religious idea of a mind imprisoned in a body. We have to stop the separation of me and my body, me and my brain. Me has to be all my parts together, working together. Me cannot equate to consciousness alone.

Of course I am responsible for absolutely everything I do, including something I do while sleepwalking. Further, a rock that falls from a cliff is responsible for blocking the road. It is what we do about responsibility that differs. We remove the rock, but we do not blame or punish it. We try to help the sleepwalker overcome the dangers of sleepwalking to himself and others. But if I as a normal person hit someone in the face, my responsibility is no greater than the rock's or the sleepwalker's, but my treatment will be much, much different. I am expected to maintain my decision-making apparatus in good working order. The way the legal system works might become a little different from now, but not much. People will still be expected to know and follow the rules of society.

I think of moral questions as those for which there is no good answer. All courses of action and of inaction are bad in a moral question, often because the possible answers pit the good of the individual against the good of the group, but also because they pit different groups and their interests against each other. No matter what we believe about how decisions are made, we are still forced to make them, and that includes moral ones. The more we know about decisions, the more likely we are to make moral decisions we are proud of (or at least less guilty or ashamed of), but there is no guarantee. There is still a likelihood that we will just muddle along trying to find the lesser of two evils with no more success than at present.

Why should we believe that being closer to the truth, or having a more accurate understanding, is going to make things worse rather than better? Shouldn't we welcome having a map that is closer to the territory? It is time to be open to ideas outside the artificial determinism/freewill dichotomy.

Is it rational to be religious? Simulations are required for an answer.

-13 Aleksei_Riikonen 11 August 2010 03:20PM

What must a sane person[1] think regarding religion? The naive first approximation is "religion is crap". But let's consider the following:

Humans are imperfectly rational creatures. Our faults include not being psychologically able to maximally operate according to our values. We can e.g. suffer from burn-out if we try to push ourselves too hard.

It is thus important for us to consider what psychological habits and choices contribute to our being able to work as diligently for our values as we want to (while being mentally healthy). It is a theoretical possibility, a hypothesis that could be experimentally studied, that the optimal[2] psychological choices include embracing some form of Faith, i.e. beliefs not resting on logical proof or material evidence.

In other words, it could be that our values mean that Occam's Razor should be rejected (in some cases), since embracing Occam's Razor might mean that we miss out on opportunities to manipulate ourselves psychologically into being more what we want to be.

To a person aware of The Simulation Argument, the above suggests interesting corollaries:

  1. Running ancestor simulations is the ultimate tool to find out what (if any) form of Faith is most conducive to us being able to live according to our values.
  2. If there is a Creator and we are in fact currently in a simulation being run by that Creator, it would have been rather humorous of them to create our world such that the above method would yield "knowledge" of their existence.

[1]: Actually, what I've written here assumes we are talking about humans. Persons-in-general may be psychologically different, and theoretically capable of perfect rationality.

[2]: At least for some individuals, not necessarily all.

Applying Behavioral Psychology on Myself

53 John_Maxwell_IV 20 June 2010 06:25AM

In which I attempt to apply findings from behavioral psychology to my own life.

Behavioral Psychology Finding #1: Habituation

The psychological process of "extinction" or "habituation" occurs when a stimulus is administered repeatedly to an animal, causing the animal's response to gradually diminish.  You can imagine that if you were to eat your favorite food for breakfast every morning, it wouldn't be your favorite food after a while.  Habituation tends to happen the fastest when the following three conditions are met:

  • The stimulus is delivered frequently
  • The stimulus is delivered in small doses
  • The stimulus is delivered at regular intervals

Source is here.

Applied Habituation

I had a project I was working on that was really important to me, but whenever I started working on it I would get demoralized.  So I habituated myself to the project: I alternated 2 minutes of work with 2 minutes of sitting in the yard for about 20 minutes.  This worked.
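
The schedule is simple enough to automate. Here is a minimal sketch in Python; the function and its defaults are mine, mirroring the 2-minutes-on, 2-minutes-off experiment above:

```python
import time

def habituation_session(work_min=2, rest_min=2, total_min=20):
    """Alternate short work and rest intervals: frequent, small, regular doses."""
    end = time.time() + total_min * 60
    working = True
    while time.time() < end:
        phase, minutes = ("work", work_min) if working else ("rest", rest_min)
        print(f"{phase} for {minutes} minutes")
        time.sleep(minutes * 60)
        working = not working

habituation_session()
```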

continue reading »

Only humans can have human values

34 PhilGoetz 26 April 2010 06:57PM

Ethics is not geometry

Western philosophy began at about the same time as Western geometry; and if you read Plato you'll see that he, and many philosophers after him, took geometry as a model for philosophy.

In geometry, you operate on timeless propositions with mathematical operators.  All the content is in the propositions.  A proof is equally valid regardless of the sequence of operators used to arrive at it.  An algorithm that fails to find a proof when one exists is a poor algorithm.

The naive way philosophers usually map ethics onto mathematics is to suppose that a human mind contains knowledge (the propositional content), and that we think about that knowledge using operators.  The operators themselves are not seen as the concern of philosophy.  For instance, when studying values (I also use "preferences" here, as a synonym differing only in connotation), people suppose that a person's values are static propositions.  The algorithms used to satisfy those values aren't themselves considered part of those values.  The algorithms are considered to be only ways of manipulating the propositions; and are "correct" if they produce correct proofs, and "incorrect" if they don't.

But an agent's propositions aren't intelligent.  An intelligent agent is a system, whose learned and inborn circuits produce intelligent behavior in a given environment.  An analysis of propositions is not an analysis of an agent.

I will argue that:

  1. The only preferences that can be unambiguously determined are the preferences people implement, which are not always the preferences expressed by their beliefs.
  2. If you extract a set of propositions from an existing agent, then build a new agent to use those propositions in a different environment, with an "improved" logic, you can't claim that it has the same values.
  3. Values exist in a network of other values.  A key ethical question is to what degree values are referential (meaning they can be tested against something outside that network); or non-referential (and hence relative).
  4. Supposing that values are referential helps only by telling you to ignore human values.
  5. You cannot resolve the problem by combining information from different behaviors, because the needed information is missing.
  6. Today's ethical disagreements are largely the result of attempting to extrapolate ancestral human values into a changing world.
  7. The future will thus be ethically contentious even if we accurately characterize and agree on present human values.
continue reading »

Single Point of Moral Failure

14 Alexandros 06 April 2010 10:44PM

I have been recently entertaining myself with a 3-day non-stop binge of theist vs. atheist debates. On the atheist side: Richard Dawkins, Christopher Hitchens, Daniel Dennett, Sam Harris, P.Z. Myers. In the theist corner: Dinesh D'Souza, William Lane Craig, Alister McGrath, Tim Keller, and (unfortunately) Nassim Nicholas Taleb. One of the interesting points that comes up, often from Hitchens, is what I call the "Bodycount Argument". The atheist will claim: "Look at all the deaths caused by religion: the Crusades, the Inquisition, Islamic fundamentalism, Japanese militarism, the conquests of the New World", and the list goes on and on. Then the theist will counter: "Well, look at the Nazis, the Fascists, the Soviets, the Khmer Rouge...". The atheist then tries to reverse some of that, e.g. that the Fascists were the Catholic right wing, that the SS were mostly confessing Catholics and Hitler had churches pray for him on his birthday, and, most tenuously, that the Soviets had the support of the Orthodox church and used the pre-existing structures set up by the Czar to establish their power.

Some of that retort is convincing, some not so much. You cannot really blame the Soviet, Cambodian and Chinese massacres solely on religion. While the atheists do at least manage to bring it to a tie, I suspect they follow this argument up suboptimally. My instinctive reaction would be: "OK, so you've shown that, besides religion, communism leads to mass slaughter too. I have no problem doing away with both." But the theists have a stronger form of their argument, in which they claim that the crimes of communism are -because- of atheism, so a simple one-line retort won't work in all cases. We need to lay a deeper foundation for that claim to be convincing.

Enter single points of failure. The rudimentary definition, usually given in terms of computer networks, is that a single point of failure is the component which takes down the entire system when it fails. While the term originated in computer science as far as I can tell, it can be applied to human networks as well. The strategy of Alexander the Great at the battle of Issus was, instead of trying to defeat the entire, vastly outnumbering Persian army in combat, to attack the Persian king Darius directly. When he was able to make Darius flee, the entire Persian army fell into disarray: one side executed an orderly retreat, but the left flank completely disintegrated while being pursued by Alexander's cavalry. So while the term is new, the concept has long been known and has been used to great effect.
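
The network definition can be checked mechanically. Here is a brute-force sketch in Python (the example graph and the names are illustrative, not from the post): a node is a single point of failure if removing it disconnects the surviving nodes.

```python
from collections import deque

def is_connected(nodes, adj):
    """BFS reachability: can every node in `nodes` reach every other?"""
    if not nodes:
        return True
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v in nodes and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen == nodes

def single_points_of_failure(adj):
    """Nodes whose removal disconnects the rest of the network."""
    nodes = set(adj)
    return [n for n in nodes if not is_connected(nodes - {n}, adj)]

# A hub-and-spoke network: the hub is the single point of failure.
network = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
print(single_points_of_failure(network))  # ['hub']
```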

What I want to argue is that all the examples cited by theists and atheists alike are instances of a single point of -moral- failure. Here, instead of the system disintegrating or ceasing to operate, it goes into a sequence of actions that, when examined by an outside human observer, or even by the participants themselves at a later date, seem immoral, irrational, and akin to madness. The common point in all the examples is that a central organization, supported by a specific fanaticizing ideology, ordered the massacres to occur, and the people at the lower ranks implemented those orders, despite perhaps individually knowing better.

My explanation of this is that the lower ranks had in effect outsourced their moral sense to their leadership. As with all centralised structures, when things go well, they go -really- well (assuming aligned incentives, greedy algorithms generally will not be as optimal as top-down ones), but when they go bad, they can be disastrous. The bigger the power of the network, the bigger the consequences. It is not hard to imagine why the outsourcing happened. Humans are tribal. I think very few, having observed the weekly rituals called 'football games' (whatever your definition of football is), would disagree. But humans are also moral. We have a rough set of rules that we tend to follow relatively consistently. What is of interest in these cases is that an individual's tribalism completely overrode that individual's personal morality. And this happened repeatedly and reliably, throughout the ranks of each of these human networks.

Coming back to the original argument: if indeed tribalism trumps morality, and the above gives us good reason to believe it does, then the theist argument that god put morality inside us comes into question. It does not explain why god saw fit to make our morality a less powerful motivator than our tribal instincts. But the biological explanation stands confirmed: if morality is a mechanism that was useful for intra-tribe interactions, then it would -have- to be suspended when the tribe was facing another. One can imagine the pacifist tribe being annihilated by the non-pacifist tribes around it or, lest I be accused of arguing for group selection, the individual pacifists being attacked by their own tribe or by the enemy tribe. Tribalists may disagree about who gets to live and who gets the resources, but they don't disagree about tribalism.

The mathematical universe: the map that is the territory

68 ata 26 March 2010 09:26AM

This post is for people who are not familiar with the Level IV Multiverse/Ultimate Ensemble/Mathematical Universe Hypothesis, people who are not convinced that there’s any reason to believe it, and people to whom it appears believable or useful but not satisfactory as an actual explanation for anything.

I’ve found that while it’s fairly easy to understand what this idea asserts, it is more difficult to get to the point where it actually seems convincing and intuitively correct, until you independently invent it for yourself. Doing so can be fun, but for those who want to skip that part, I’ve tried to write this post as a kind of intuition pump (of the variety, I hope, that deserves the non-derogatory use of that term) with the goal of leading you along the same line of thinking that I followed, but in a few minutes rather than a few years.

Once upon a time, I was reading some Wikipedia articles on physics, clicking links aimlessly, when I happened upon a page then titled “Ultimate Ensemble”. It described a multiverse of all internally-consistent mathematical structures, thereby allegedly explaining our own universe — it’s mathematically possible, so it exists along with every other possible structure.

Now, I was certainly interested in the question it was attempting to answer. It’s one that most young aspiring deep thinkers (and many very successful deep thinkers) end up at eventually: why is there a universe at all? A friend of mine calls himself an agnostic because, he says, “Who created God?” and “What caused the Big Bang?” are the same question. Of course, they’re not quite the same, but the fundamental point is valid: although nothing happened “before” the Big Bang (as a more naïve version of this query might ask), saying that it caused the universe to exist still requires us to explain what brought about the laws and circumstances allowing the Big Bang to happen. There are some hypotheses that try to explain this universe in terms of a more general multiverse, but all of them seemed to lead to another question: “Okay, fine, then what caused that to be the case?”

The Ultimate Ensemble, although interesting, looked like yet another one of those non-explanations to me. “Alright, so every mathematical structure ‘exists’. Why? Where? If there are all these mathematical structures floating around in some multiverse, what are the laws of this multiverse, and what caused those laws? What’s the evidence for it?” It seemed like every explanation would lead to an infinite regress of multiverses to explain, or a stopsign like “God did it” or “it just exists because it exists and that’s the end of it” (I’ve seen that from several atheists trying to convince themselves or others that this is a non-issue) or “science can never know what lies beyond this point” or “here be dragons”. This was deeply vexing to my 15-year-old self, and after a completely secular upbringing, I suffered a mild bout of spirituality over the following year or so. Fortunately I made a full recovery, but I gave in and decided that Stephen Hawking was right that “Why does the universe bother to exist?” would remain permanently unanswerable.

Last year, I found myself thinking about this question again — but only after unexpectedly making my way back to it while thinking about the idea of an AI being conscious. And the path I took actually suggested an answer this time. 

continue reading »

More thoughts on assertions

22 Yvain 25 March 2010 01:39AM

Response to: The "show, don't tell" nature of argument

Morendil says not to trust simple assertions. He's right, for the particular class of simple assertions he's talking about. But in order to see why, let's look at different types of assertions and see how useful it is to believe them.

Summary:
- Hearing an assertion can be strong evidence if you know nothing else about the proposition in question.
- Hearing an assertion is not useful evidence if you already have a reasonable estimate of how many people do or don't believe the proposition.
- An assertion by a leading authority is stronger than an assertion by someone else.
- An assertion plus an assertion that there is evidence makes no factual difference, but is a valuable signal.
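
A toy Bayesian rendering of the first two points (all numbers here are illustrative assumptions, not from the post): what an assertion is worth depends on how likely the speaker was to assert it if true versus if false.

```python
def posterior(prior, p_assert_if_true, p_assert_if_false):
    """Bayes update on hearing someone assert a proposition."""
    joint_true = prior * p_assert_if_true
    joint_false = (1 - prior) * p_assert_if_false
    return joint_true / (joint_true + joint_false)

# Knowing nothing else, people mostly assert things they believe true:
print(posterior(0.5, 0.8, 0.2))   # 0.8 -- a strong update

# If you already know roughly how many people believe it, one more assertion
# is about as likely either way, so it barely moves you:
print(posterior(0.5, 0.5, 0.48))  # ~0.51 -- nearly no update
```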

continue reading »

Necessary, But Not Sufficient

44 pjeby 23 March 2010 05:11PM

There seems to be something odd about how people reason in relation to themselves, compared to the way they examine problems in other domains.

In mechanical domains, we seem to have little problem with the idea that things can be "necessary, but not sufficient".  For example, if your car fails to start, you will likely know that several things are necessary for the car to start, but not sufficient for it to do so.  It has to have fuel, ignition, compression, and oxygen...  each of which in turn has further necessary conditions, such as an operating fuel pump, electricity for the spark plugs, electricity for the starter, and so on.

And usually, we don't go around claiming that "fuel" is a magic bullet for fixing the problem of car-not-startia, or argue that if we increase the amount of electricity in the system, the car will necessarily run faster or better.
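
The mechanical version of the reasoning is literally a conjunction. A trivial sketch, using the conditions from the car example above:

```python
# Each condition is necessary on its own; only the full conjunction is sufficient.
conditions = {"fuel": True, "ignition": True, "compression": False, "oxygen": True}

def car_starts(conditions):
    """The car starts only if every necessary condition holds."""
    return all(conditions.values())

print(car_starts(conditions))                               # False
print([name for name, ok in conditions.items() if not ok])  # ['compression']
```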

For some reason, however, we don't seem to apply this sort of necessary-but-not-sufficient thinking to systems above a certain level of complexity...  such as ourselves.

When I wrote my previous post about the akrasia hypothesis, I mentioned that there was something bothering me about the way people seemed to be reasoning about akrasia and other complex problems.  And recently, with taw's post about blood sugar and akrasia, I've realized that the specific thing bothering me is the absence of causal-chain reasoning there.

continue reading »
