Suppose HBD is True

-12 OrphanWilde 21 April 2016 01:34PM

Suppose, for the purposes of argument, that HBD is true.  (HBD, or Human Biodiversity, is the claim that distinct populations of humans - I will avoid the word "race" here insofar as possible - exist and have substantial genetic variance, which accounts for some difference in average intelligence from population to population.)  Suppose further that its proponents are correct in claiming that the politicization of science has buried this information.

I seek to ask the more interesting question: Would it matter?

1. Societal Ramifications of HBD: Eugenics

So, we now have some kind of nice, tidy explanation for different characteristics among different groups of people.  Okay.  We have a theory.  It has explanatory power.  What can we do with it?

Unless you're willing to commit to eugenics of some kind (be it restricting reproduction or genetic alteration), not much of anything.  And even if you are willing to commit to eugenics, HBD doesn't add anything: it doesn't change any of the arguments for eugenics - below-average people exist in every population group, and insofar as we regard below-average people as a problem, the genetic population they happen to belong to doesn't matter.  If the point is to raise the average, the population group doesn't matter.  If the point is to reduce the number of socially dependent individuals, the population group doesn't matter.

Worse, insofar as we use HBD as a determinant in eugenics, our eugenics are less effective.  HBD says your population group has a relationship with intelligence; but if we're interested in intelligence, we have no reason to look at your population group, because we can measure intelligence more directly.  There's no reason to use the proxy of population group if we're interested in intelligence, and indeed, every reason not to; it's significantly less accurate and politically and historically problematic.

Yet still worse for our eugenics advocate: insofar as population groups do have significant genetic diversity, using population groups instead of direct measurements of intelligence is far more likely to create disease-transmission risks.  (Genetic diversity is very important for population-level disease resistance.  Just look at bananas.)

2. Social Ramifications of HBD: Social Assistance

Let's suppose we're not interested in eugenics.  Let's suppose we're interested in maximizing our societal outcomes.

Well, again, HBD doesn't offer us anything new.  We can already test intelligence directly, and insofar as HBD is accurate, intelligence tests are more accurate still.  So if we aim to streamline society, we don't need HBD to do so.  HBD might offer an argument against affirmative action, in that we'd have different base expectations for different populations, but affirmative action already takes different base expectations into account (if you live in a city of 50% black people and 50% white people, but 10% of local lawyers are black, your local law firm isn't required to have 50% black lawyers, but 10%).  We might desire to adjust the way we engage in affirmative action, insofar as it might not lead to the best results, but if you're interested in the best results, you can argue on the basis of best results without needing HBD.

I have yet to encounter someone who argues HBD who also argues we should do something to HELP people on the basis of it, but that might actually be the more significant argument: if there are populations of people who are going to fall behind, that is a good argument for providing additional resources to those populations, particularly if there are geographic correspondences.  That is, if HBD is true, and if population groups are geographically segregated, individuals in those population groups will suffer disproportionately relative to their merits, because they won't have the local social capital that equally-capable people in other population groups would have.  (An average person in a poor region will do worse than an average person in a rich region.)  So HBD provides an argument for desegregation.

Curiously, HBD advocates have a tendency to argue that segregation would lead to the best outcome.  I'd welcome arguments that concentrating an -absence- of social capital is a good idea.

3. Scientific Ramifications of HBD

Well, if HBD were true, it would mean science is politicized.  This might be news to somebody, I guess.

4. Political Ramifications of HBD

We live in a meritocracy.  It's actually not an ideal thing, contrary to the views of some people, because it results in a systematic merit segregation that has completely deprived the lower classes of intellectual resources; talk to older people sometime, who remember, from when they worked in the coal mines (or whatever), the one guy you could trust to answer your questions and provide advice.  Our meritocracy has advanced to the point where we are systematically stripping the lower classes of everybody of value and redistributing them to the middle and upper classes.

HBD might be meaningful here.  Insofar as people take HBD to its absurd extremes, it might actually result in an -improvement- for some lower-class groups, because if we stop taking all the intelligent people out of poor areas, there will still be intelligent people in those poor areas.  But racism as a force of utilitarian good isn't something I care to explore in any great detail, mostly because if I'm wrong it would be a very bad thing, and also because none of its advocates actually suggest anything like this, being more interested in promoting segregation than desegregation.

It doesn't change much else, either.  With HBD we continually run into the same problem: as a theory, it's the product of measuring individual differences, and it doesn't add anything to the information we already had from those individual differences.

5. The Big Problem: Individuality

Which is the crucial fault with HBD, iterated multiple times here, in multiple ways: It literally doesn't matter if HBD is true.  All the information it -might- provide us with, we can get with much more accuracy using the same tests we might use to arrive at HBD.  Anything we might want to do with the idea, we can do -better- without it.

HBD might predict we get fewer IQ-115, IQ-130, and IQ-145 people from particular population groups, but it doesn't actually rule them out.  Insofar as this kind of information is useful, it's -more- useful to have more accurate information.  HBD doesn't say "Black people are stupid", instead it says "The average IQ of black people is slightly lower than the average IQ of white people".  But since "black people" isn't a thing that exists, but rather an abstract concept referring to a group of "black persons", and HBD doesn't make any predictions at the individual level we couldn't more accurately obtain through listening to a person speak for five seconds, it doesn't actually make any useful predictions.  It adds literally nothing to our model of the world.
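
To make the tail arithmetic concrete - a minimal sketch, with both populations and every parameter invented purely for illustration, not taken from any study:

```python
# Two hypothetical populations whose mean IQs differ slightly; all
# numbers here are assumptions for the sake of the sketch.
from statistics import NormalDist

pop_a = NormalDist(mu=100, sigma=15)  # hypothetical population A
pop_b = NormalDist(mu=95, sigma=15)   # hypothetical population B

for threshold in (115, 130, 145):
    above_a = 1 - pop_a.cdf(threshold)
    above_b = 1 - pop_b.cdf(threshold)
    print(f"IQ > {threshold}: A {above_a:.2%}, B {above_b:.2%}")

# A shifted mean thins the upper tail but never empties it; the group
# average tells you nothing decisive about any given individual.
```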

It's not the most important idea of the century.  It's not important at all.

If you think it's true - okay.  What does it -add- to your understanding of the world?  What useful predictions does it make?  How does it permit you to improve society?  I've heard people insist it's this majorly important idea that the scientific and political establishment is suppressing.  I'd like to introduce you to the aether, another idea that had explanatory power but made no useful predictions, and which was abandoned - not because anybody thought it was wrong, but because it didn't even rise to the level of wrong, because it was useless.

And that's what HBD is.  A useless idea.

And even worse, it's a useless idea that's hopelessly politicized.

Making My Peace with Belief

14 OrphanWilde 03 December 2015 08:36PM

I grew up in an atheistic household.

Almost needless to say, I was relatively hostile towards religion for most of my early life.  A few things changed that.

First, the apology of a pastor.  A friend of mine was proselytizing at me, and apparently discussed it with his pastor; the pastor apologized to my parents, and explained to my friend he shouldn't be trying to convert people.  My friend apologized to me after considering the matter.  We stayed friends for a little while afterwards, although I left that school, and we lost contact.

I think that was around the time that I realized that religion is, in addition to being a belief system, a way of life, and not necessarily a bad one.

The next was actually South Park's Mormonism episode, which pointed out that a belief system could be desirable on the merits of the way of life it represented, even if the beliefs themselves are stupid.  This tied into Douglas Adams's comment on Feng Shui, that "...if you disregard for a moment the explanation that's actually offered for it, it may be there is something interesting going on" - which is to say, the explanation for the belief is not necessarily the -reason- for the belief, and that stupid beliefs may actually have something useful to offer - which then requires us to ask whether the beliefs are, in fact, stupid.

Which is to say, beliefs may be epistemically irrational while being instrumentally rational.

The next peace I made with belief actually came from quantum physics, and reading about how there were several disparate and apparently contradictory mathematical systems, which all predicted the same thing.  It later transpired that they could all be generalized into the same mathematical system, but I hadn't read that far before the isomorphic nature of truth occurred to me; you can have multiple contradictory interpretations of the same evidence that all predict the same thing.

Up to this point, however, I still regarded beliefs as irrational, at least on an epistemological basis.

The next peace came from experiences living in a house that would have convinced most people that ghosts are real, which I have previously written about here.  I think there are probably good explanations for every individual experience even if I don't know them, but am still somewhat flummoxed by the fact that almost all the bizarre experiences of my life all revolve around the same physical location.  I don't know if I would accept money to live in that house again, which I guess means that I wouldn't put money on the bet that there wasn't something fundamentally odd about the house itself - a quality of the house which I think the term "haunted" accurately conveys, even if its implications are incorrect.

If an AI in a first person shooter dies every time it walks into a green room, and experiences great disutility for death, how many times must it walk into a green room before it decides not to do that anymore?  I'm reasonably confident on a rational level that there was nothing inherently unnatural about that house, nothing beyond explanation, but I still won't "walk into the green room."

That was the point at which I concluded that beliefs can be -rational-.  Disregard for a moment the explanation that's actually offered for them, and just accept the notion that there may be something interesting going on underneath the surface.

If we were to hold scientific beliefs to the same standard we hold religious beliefs - holding the explanation responsible rather than the predictions - scientific beliefs really don't come off looking that good.  The sun isn't the center of the universe; some have called this theory "less wrong" than an earth-centric model of the universe, but that's because the -predictions- are better; the explanation itself is still completely, 100% wrong.

Likewise, if we hold religious beliefs to the same standard we hold scientific beliefs - holding the predictions responsible rather than the explanations - religious beliefs might just come off better than we'd expect.

Dark Arts: Defense in Reputational Warfare

1 OrphanWilde 03 December 2015 03:03PM

First, the Dark Arts are, as the name implies, an art, not a science.  Likewise, defending against them is an art.  An artful attacker can utilize expected defenses against you; if you can be anticipated, you can be defeated.  The rules, therefore, are guidelines.  I'm going to stage the rules in a narrative form; they don't need to be in one, however, because life doesn't follow a narrative.  The narrative exists to give them context, to give the reader a sense of the purpose of each rule.

Rule #0: Never follow the rules if they would result in a worse outcome.  

 


 

Now, generally, the best defense is to never get attacked in the first place.  Security through obscurity is your first line of defense.  Translations of Sun Tzu vary somewhat, but your ideal form is to be formless, by which I mean, do not be a single point of attack, or defense.  If there's a mob in your vicinity, the ideal place is neither outside it, nor leading it, but a faceless stranger among it.  Even better is to be nowhere near a mob.  This is the fundamental basis of not being targeted; the other two rules derive from this one.

Rule #1: Do not stand out.

 

Sometimes you're picked out.  There's a balancing art with this next piece; you don't want to stand out, to be a point of attack, but if somebody is picking faces, you want to look slightly more dangerous than your neighbor, you want to look like a hard target.  (But not when somebody is looking for hard targets.  Obviously.)

Rule #2: Look like an unattractive target.

 

The third aspect of this is somewhat simpler, and I'll borrow the phrasing from HPMoR:

Rule #3: "I will not go around provoking strong, vicious enemies" - http://hpmor.com/chapter/19

 

The first triplet of rules, by and large, are about -not- being attacked in the first place.  These are starting points; Rule #1, for example, culminates in not existing at all.  You can't attack what doesn't exist.  Rule #1 is the fundamental strategy of Anonymous.  Rule #2 is about encouraging potential attackers to look elsewhere; Rule #1 is passive, and this is the passive-aggressive form of Rule #1.  It's the fundamental strategy of home security - why else do you think security companies put signs in the yard saying the house is protected?  Rule #3 is obvious.  Don't make enemies in the first place, and particularly don't make dangerous enemies.  It has critical importance beyond its obvious nature, however - enemies might not care if they get hurt in the process of hurting you.  That limits your strategies for dealing with them considerably.

 


 

You've messed up the first three rules.  You're under attack.  What now?  Manage the Fight.  Your attacker starts with the home-field advantage - they attacked you under the terms they are most comfortable with.  Change the terms, immediately.  Do not concede that advantage.  Like Rule #1, Rule #4 is the basis of your First Response, and of Rule #5 and Rule #6.  The simplest approach is the least obvious - immediate surrender, but on your terms.  If you're accused of something, admit to the weakest and least harmful version of whatever is true (be specific, and deny as necessary), and say you're aware of your problem and working on improving.  This works regardless of whether there's an audience, but works best if there is one.

Rule #4: Change the terms of the fight to favor yourself, or disfavor your opponent.

 

Sometimes, the best response to an attack is no response at all.  Is anybody (important) going to take it seriously?  If not, then the very worst thing you can do is respond, because that validates the attack.  If you do need to respond, respond as lightly as possible; do not respond as if the accusation is serious or matters, because that lends weight to the accusation.  If there's no audience, or a limited audience, responding gives your attacker an opportunity to continue the attack.  If there's a risk of them physically assaulting you, ignoring them is probably a bad idea; a polite non-response is ideal in that situation.  (For crowds that pose a risk of physical assault... you need more rules than I'm going to write here.)

Rule #5: Use the minimum force necessary to respond.

 

It's tempting to attack back: Don't.  You're going to escalate the situation, and escalation is going to favor whoever is better at this; worse, in a public Dark Arts battle, even the better person is going to take some hits.  Nobody wins.  Instead, mine the battlefield, and make sure your opponent sees you mining the battlefield.  If you're accused of something, suggest that both you and your opponent know the accused thing isn't as uncommon as generally represented.  Hint at shared knowledge.  Make it clear you'll take them out with you.  If they're actually good at this, they'll get the hint.  (This is why it's critically important not to make enemies.  You really, really don't want somebody around who doesn't mind going down with you; against such a person, this strategy becomes difficult.)

Rule #6: Make escalation prohibitively costly.

 

You might recognize some elements of martial arts here.  There are similarities, enough that one is useful to the other, but they are not the same.

 


 

You're in a fight, and your opponent is persistent, or you messed up and now things are serious.  What now?  First, continue to Manage the Fight.  Your goal now is to end the fight; the total damage you're going to suffer is a function of both the amplitude of escalation and the length of the fight.  You've failed to manage the amplitude; manage the length.

Rule #7: End fights fast.

 

At this point you've been reasonable and defensive, and that hasn't worked.  Now you need to go on the offensive.  Your defense should be light and easy, continuing to react with the lightest necessary touch, continuing to ignore anything you don't need to react to; your attack should be brutal, and put your opponent on the defensive immediately.  Attack them on the basis of their harassment of you, first, and then build up to any personal attacks you've been holding back on - your goal is to impart a tone of somebody who has been put-upon and had enough.

Rule #8: Hit hard.

 

And immediately stop.  If you've pulled off your counterattack right, they'll offer up defenses.  Just quit the battle.  Do not be tempted by a follow-up attack; you were angry, you vented your anger, you're done.  By not following up on the attack, by not attacking their defenses, you're leaving them no reasonable way to respond.  Any continuing attacks can be safely ignored; they will look completely pathetic going forward.

Rule #9: Recognize when you've won, and stop.

 

Defense follows different rules than attack.  In defense, you aren't trying to inflict wounds, you're trying to avoid them.  Ending the fight quickly is paramount to this.

Omega's Idiot Brother, Epsilon

3 OrphanWilde 25 November 2015 07:57PM

Epsilon walks up to you with two boxes, A and B, labeled in rather childish-looking handwriting written in crayon.

"In box A," he intones, sounding like he's trying to be foreboding, which might work better when he hits puberty, "I may or may not have placed a million of your human dollars."  He pauses for a moment, then nods.  "Yes.  I may or may not have placed a million dollars in this box.  If I expect you to open Box B, the million dollars won't be there.  Box B will contain, regardless of what you do, one thousand dollars.  You may choose to take one box, or both; I will leave with any boxes you do not take."

You've been anticipating this.  He's appeared to around twelve thousand people so far.  Out of the eight thousand people who accepted both boxes, eighty found the million dollars missing, and walked away with $1,000; the other seven thousand nine hundred and twenty walked away with $1,001,000.  Out of the four thousand people who opened only box A, only four found it empty.

The agreement is unanimous: Epsilon is really quite bad at this.  So, do you one-box, or two-box?


There are some important differences here from the original problem.  First, Epsilon won't let you open either box until you've decided whether to open one or both, and will leave with the other box.  Second, while Epsilon's false positive rate in identifying two-boxers is quite impressive - he wrongly flags one-boxers only 0.1% of the time - his false negative rate is quite unimpressive: he catches only 1% of those who two-box.  Whatever heuristic he's using, he clearly prefers letting two-boxers slide to accidentally punishing one-boxers.
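
Taking those frequencies at face value - a minimal sketch, treating the observed rates as your own conditional probabilities - the naive expected values come out as follows:

```python
# Empirical expected values from the post's observed outcomes.
two_boxers, two_box_caught = 8000, 80    # caught: box A was empty
one_boxers, one_box_misread = 4000, 4    # misread: box A was empty

ev_two_box = ((two_boxers - two_box_caught) * 1_001_000
              + two_box_caught * 1_000) / two_boxers
ev_one_box = (one_boxers - one_box_misread) * 1_000_000 / one_boxers

print(f"two-boxing: ${ev_two_box:,.0f}")  # $991,000
print(f"one-boxing: ${ev_one_box:,.0f}")  # $999,000
```

On the raw numbers, the guaranteed extra $1,000 doesn't cover the expected $10,000 forfeited by the 1% of two-boxers Epsilon does catch.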

I'm curious to know whether anybody would two-box in this scenario and why, and particularly curious in the reasoning of anybody whose answer is different between the original Newcomb problem and this one.

The Winding Path

6 OrphanWilde 24 November 2015 09:23PM

The First Step

The first step on the path to truth is superstition.  We all start there, and should acknowledge that we start there.

Superstition is, contrary to our immediate feelings about the word, the first stage of understanding.  Superstition is the attribution of unrelated events to a common (generally unknown or unspecified) cause - it could be called pattern recognition.  The "supernatural" component generally included in the definition is superfluous, because "supernatural" merely refers to that which isn't part of nature - which is to say, reality - which is an elaborate way of saying something whose relationship to nature is not yet understood, or else nonexistent.  If we discovered that ghosts are real, and identified an explanation - overlapping entities in a many-worlds universe, say - they'd cease to be supernatural and would merely be natural.

Just as the supernatural refers to unexplained or imaginary phenomena, superstition refers to unexplained or imaginary relationships, without the necessity of cause.  If you designed an AI in a game which, after five rounds of being killed whenever it went into rooms with green-colored walls, started avoiding rooms with green-colored walls, you've developed a good AI.  It is engaging in superstition - it has developed an incorrect understanding of the issue.  But it hasn't gone down the wrong path; there is no wrong path in understanding, there is only the mistake of stopping.  Superstition, like all belief, is only useful if you're willing to discard it.
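
A toy version of that AI - a minimal sketch, with the game, the rooms, and the learning rule all invented for illustration - needs nothing more than a tally of outcomes per observed feature:

```python
# A superstitious agent: it tracks deaths per wall color and avoids
# colors with a bad record. It has no causal model, only a correlation -
# which is exactly the sense of "superstition" used above.
from collections import Counter

deaths, visits = Counter(), Counter()

def record(color, died):
    visits[color] += 1
    deaths[color] += died

def avoid(color, threshold=0.5):
    if visits[color] == 0:
        return False  # no history yet, so no superstition yet
    return deaths[color] / visits[color] > threshold

for _ in range(5):               # five fatal trips into green rooms
    record("green", died=True)
record("blue", died=False)

print(avoid("green"))  # True: wrong about the mechanism, right to flee
print(avoid("blue"))   # False
```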

The Next Step

Incorrect understanding is the first - and necessary - step to correct understanding.  It is, indeed, every step towards correct understanding.  Correct understanding is a path, not an achievement, and it is pursued, not by arriving at the correct conclusion in the first place, but by testing your ideas and discarding those which are incorrect.

No matter how intelligent you are, you cannot skip the "incorrect understanding" step of knowledge, because that is every step of knowledge.  You must come up with wrong ideas in order to get at the right ones - which will always be one step further.  You must test your ideas.  And again, the only mistake is stopping, in assuming that you have it right now.

Intelligence is never your bottleneck.  The ability to think faster isn't necessarily the ability to arrive at the right answer faster, because the right answer requires many wrong ones, and more importantly, identifying which answers are indeed wrong, which is the slow part of the process.

Better answers are arrived at by the process of invalidating wrong answers.

The Winding Path

The process of becoming Less Wrong is the process of being, in the first place, wrong.  It is the state of realizing that you're almost certainly incorrect about everything - but working on getting incrementally closer to an unachievable "correct".  It is a state of anti-hubris, and requires a delicate balance between the idea that one can be closer to the truth, and the idea that one cannot actually achieve it.

The art of rationality is the art of walking this narrow path.  If ever you think you have the truth - discard that hubris, for three steps from here you'll see it for superstition, and if you cannot see that, you cannot progress, and there your search for truth will end.  That is the path of the faithful.

But worse, the path is not merely narrow, but winding, with frequent dead ends requiring frequent backtracking.  If ever you think you're closer to the truth - discard that hubris, for it may inhibit you from leaving a dead end, and there your search for truth will end.  That is the path of the crank.

The path of rationality is winding and directionless.  It may head towards beauty, then towards ugliness; towards simplicity, then complexity.  The correct direction isn't the aesthetic one; those who head towards beauty may create great art, but do not find truth.  Those who head towards simplicity might open new mathematical doors and find great and useful things inside - but they don't find truth, either.  Truth is its own path, found only by discarding what is wrong.  It passes through simplicity, it passes through ugliness; it passes through complexity, and also beauty.  It doesn't belong to any one of these things.

The path of rationality is a path without destination.

 


 

Written as an experiment in the aesthetic of Less Wrong.  I'd appreciate feedback on the aesthetic interpretation of Less Wrong, rather than on the sense of deep wisdom emanating from it (unless the deep wisdom damages the aesthetic).

In Defense of the Fundamental Attribution Error

9 OrphanWilde 03 June 2015 06:46PM

The Fundamental Attribution Error

Also known, more accurately, as "Correspondence Bias."

http://lesswrong.com/lw/hz/correspondence_bias/

The "more accurately" part is pretty important; bias -may- result in error, but need not -necessarily- do so, and in some cases may result in reduced error.

A Simple Example

Suppose I write a stupid article that makes no sense and rambles on without any coherent point.  There might be a situational cause of this; maybe I'm tired.  Correcting for correspondence bias means that more weight should be given to the situational explanation than the dispositional explanation, that I'm the sort of person who writes stupid articles that ramble on.  The question becomes, however, whether or not this increases the accuracy of your assessment of me; does correcting for this bias make you, in fact, less wrong?

In this specific case, no, it doesn't.  A person who belongs to the class of people who write stupid articles is more likely to write stupid articles than a person who doesn't belong to that class - I'd be surprised if I ever saw Gwern write anything that wasn't well-considered, well-structured, and well-cited.  If somebody like Gwern or Eliezer wrote a really stupid article, we have sufficient evidence that he's not a member of that class of people to make that conclusion a poor one; the situational explanation is better - he's having some kind of off day.  However, given an arbitrary stupid article written by somebody for whom we have no prior information, the distribution is substantially different.  The inference from "a randomly chosen person X wrote this article" and "this article is bad" to "X is a bad writer of articles" rests on very different priors than the inference from "well-known author Y wrote this article" and "this article is bad" to "Y is a bad writer of articles".
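
To put rough numbers on that - a minimal sketch of the Bayesian update, with the priors and likelihoods invented for illustration rather than measured:

```python
# How much one bad article should move you toward "bad writer" depends
# almost entirely on your prior about the author; numbers are illustrative.
def posterior_bad_writer(prior, p_bad_given_bad=0.8, p_bad_given_good=0.1):
    evidence = p_bad_given_bad * prior + p_bad_given_good * (1 - prior)
    return p_bad_given_bad * prior / evidence

print(f"unknown author, prior 0.30: {posterior_bad_writer(0.30):.2f}")   # ~0.77
print(f"known-good author, prior 0.02: {posterior_bad_writer(0.02):.2f}")  # ~0.14
```

The same evidence and the same update rule, but the dispositional conclusion is reasonable in one case and not the other.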

Getting to the Point

The FAE is putting emphasis on internal factors rather than external.  It's jumping first to the conclusion that somebody who just swerved is a bad driver, rather than first considering the possibility that there was an object in the road they were avoiding, given only the evidence that they swerved.  Whether or not the FAE is an error - whether it is more wrong - depends on whether or not the conclusion you jumped to was correct, and more importantly, whether, on average, that conclusion would be correct.

It's very easy to produce studies in which the FAE results in people making incorrect judgements.  This is not, however, the same as the FAE resulting in an average of more incorrect judgements in the real world.

Correspondence Bias as Internal Rationalization

I'd suggest the major issue with correspondence bias is not, as commonly presented, incorrectly interpreting the behavior of other people - rather, the major issue is incorrectly interpreting your own behavior.  The error is not in how you interpret other people's behaviors, but in how you interpret your own.

Turning to Eliezer's example in the linked article, if you find yourself kicking vending machines, maybe the answer is that -you- are a naturally angry person, or, as I would prefer to phrase it, you have poor self-control.  The "floating history" Eliezer refers to sounds more to me like rationalizations for poor behavior than anything approaching "good" reasons for expressing your anger through violence directed at inanimate objects.  I noticed -many- of those rationalizations cropping up when I quit smoking - "Oh, I'm having a terrible day, I could just have one cigarette to take the edge off."  I don't walk by a smoker and assume they had a terrible day, however, because those were -excuses- for a behavior that I shouldn't be engaging in.

It's possible, of course, that Eliezer's example was simply a poorly chosen one; the examples in studies certainly seem better, such as assuming the authors of articles held the positions they wrote about.  But the examples used in those studies are also extraordinarily artificial, at least in individualistic countries, where it's assumed, and generally true, that people writing articles do have the freedom to write what they agree with, and infringements of this (say, in the context of a newspaper asking a columnist to change a review to be less hostile to an advertiser) are regarded very harshly.

Collectivist versus Individualist Countries

There's been some research done, comparing collectivist societies to individualist societies; collectivist societies don't present the same level of effect from the correspondence bias.  A point to consider, however, is that in collectivist societies, the artificial scenarios used in studies are more "natural" - it's part of their society to adjust themselves to the circumstances, whereas individualist societies see circumstance as something that should be adapted to the individual.  It's -not- an infringement, or unexpected, for the state-owned newspaper to require everything written to be pro-state.

Maybe the differing levels of effect are less a matter of "collectivist societies are more sensitive to environment" than of a heuristic that is accurately calibrated in both cultures - just calibrated to different test cases.

Conclusion

I don't have anything conclusive to say here, merely a position: the Correspondence Bias is a bias that, on the whole, helps people arrive at more accurate, rather than less accurate, conclusions, and it should be corrected with an eye to improving accuracy and correctness, rather than to the mere elimination of bias.

Visions and Mirages: The Sunk Cost Dilemma

-8 OrphanWilde 20 May 2015 08:56PM

Summary

How should a rational agent handle the Sunk Cost Dilemma?

Introduction

You have a goal, and set out to achieve it.  Step by step, iteration by iteration, you make steady progress towards completion - but never actually get any closer.  You're deliberately not engaging in the sunk cost fallacy - at no point does the perceived cost of completion get higher.  But at each step, you discover another step you didn't originally anticipate, and had no priors for anticipating.

You're rational.  You know you shouldn't count sunk costs in the total cost of the project.  But you're now in for twice as much effort as you would originally have invested, and you've done everything you originally thought you'd need to do, yet you have just as much work ahead of you as when you started.

Worse, each additional step is novel; the additional five steps you discovered after completing step 6 did nothing to predict the additional twelve steps you added after completing step 19.  And after step 35, when you discovered another step, you updated your priors to account for your incorrect original estimate - and the project is still worth completing.  Over and over.  All you can conclude is that your original priors were unreliable.  Each update to your priors, however, doesn't change the fact that the remaining cost is always worth paying to complete the project.

You are starting to feel like you are caught in a penny auction for your time.
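
To make the trap concrete - a minimal sketch with invented numbers, in which "remaining cost is less than the value of finishing" stays true at every re-plan while the total spend climbs without bound:

```python
# At each re-plan you finish the work you estimated, then discover a
# fresh batch of steps costing about as much again. Ignoring sunk costs,
# "continue" is locally correct every single time.
VALUE = 100     # what completing the project is worth to you
ESTIMATE = 60   # estimated remaining cost after every re-plan

spent = 0
for replan in range(1, 6):
    spent += ESTIMATE  # do the work you planned...
    # ...and uncover unforeseen steps that restore the estimate.
    print(f"re-plan {replan}: sunk={spent}, remaining={ESTIMATE}, "
          f"continue? {ESTIMATE < VALUE}")

# Only a policy that counts total outlay (sunk + remaining vs. VALUE),
# or a precommitted budget, ever stops this loop.
```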

When do you give up your original goal as a mirage?  At what point do you give up entirely?

Solutions

The trivial option is to just keep going.  Sometimes this is the only viable strategy - if your goal is mandatory and there are no alternative solutions to consider.  There's no guarantee you'll finish in any finite amount of time, however.

One option is to precommit; set a specific level of effort you're willing to engage in before stopping progress, and possibly starting over from scratch if relevant.  When bugfixing someone else's code on a deadline, my personal policy is to set aside enough time at the end of the deadline to write the code from scratch and debug that (the code I write is not nearly as buggy as that which I'm usually working on).  Commitment of this sort can work in situations in which there are alternative solutions or when the goal is disposable.

Another option is to discount sunk costs, but include them; updating your priors is one way of doing this, but isn't guaranteed to successfully navigate you through the dilemma.

Unfortunately, there isn't a general solution.  If there were, IT would be a very different industry.

Summary

The Sunk Cost Fallacy is best described as a frequently-faulty heuristic.  There are game-theoretic ways of extracting value from those who follow a strict policy of never counting sunk costs, and they happen all the time in IT - frequent requirement changes to fixed-cost projects are a good example (which can cut both ways, actually, depending on how the contract and requirements are structured).  It is best to always have an exit policy prepared.

Related Less Wrong Post Links

http://lesswrong.com/lw/at/sunk_cost_fallacy/ - A description of the Sunk Cost Fallacy

http://lesswrong.com/lw/9si/is_sunk_cost_fallacy_a_fallacy/ - Arguments that the Sunk Cost Fallacy may be misrepresented

http://lesswrong.com/lw/9jy/sunk_costs_fallacy_fallacy/ - The Sunk Cost Fallacy can be easily used to rationalize giving up

ETA: Post Mortem

Since somebody has figured out the game now, an explanation: Everybody who spent time writing a comment insisting you -could- get the calculations correct, and the imaginary calculations were simply incorrect?  I mugged you.  The problem is in doing the calculations -instead of- trying to figure out what was actually going on.  You forgot there was another agent in the system with different objectives from your own.  Here, I mugged you for a few seconds or maybe minutes of your time; in real life, that would be hours, weeks, months, or your money, as you keep assuming that it's your own mistake.

Maybe it's a buggy open-source library that has a bug-free proprietary version you pay for - it gets you in the door, then charges you money when it's more expensive to back out than to continue.  Maybe it's somebody who silently and continually moves work to your side of the fence on a collaborative project, when it's more expensive to back out than to continue.  Not counting all your costs opens you up to exploitative behaviors which add costs at the back end.

In this case I was able to mug you in part because you didn't like the hypothetical, and fought it.  Fighting the hypothetical will always reveal something about yourself - in this case, fighting the hypothetical revealed that you were exploitable.

In real life I'd be able to mug you because you'd assume someone had fallen prone to the Planning Fallacy, as you assumed must have happened in the hypothetical.  In the case of the hypothetical, an evil god - me - was deliberately manipulating events so that the project would never be completed (Notice what role the -author- of that hypothetical played in that hypothetical, and what role -you- played?).  In real life, you don't need evil gods - just other people who see you as an exploitable resource, and will keep mugging you until you catch on to what they're doing.

Subsuming Purpose, Part II: Solving the Solution

5 OrphanWilde 14 May 2015 07:25PM

Summary: It's easy to get caught up in solving the wrong problems, solving the problems with a particular solution instead of solving the actual problem.  You should pay very careful attention to what you are doing and why.

I'll relate a seemingly purposeless story about a video game to illustrate:

I was playing Romance of the Three Kingdoms some years ago, and was trying to build the perfect city.  (The one city I ruled, actually.)  Enemies kept attacking, and the need to recruit troops was slowing my population growth (not to mention deliberate sabotage by my enemies), so eventually I came to the conclusion that I would have to conquer the map in order to finish the job.  So I conquered the map.  And then the game ending was shown, after which, finally, I could return to improving cities.

The game ending, however, startled me out of continuing to play: My now emperor was asked by his people to improve the condition of things (as things were apparently terrible), and his response was that he needed to conquer the rest of Asia first, to ensure their security.

My initial response was outrage at how the game portrayed events, but I couldn't find a fault in "his" response; it was exactly what I had been doing.  Given the rest of Asia, indeed the rest of the world, that is exactly what I would have done had the game continued past that point, given that threats to the peace I had established still existed.  I had already conquered enemies who had never offered me direct threat, on the supposition that they would, and because they held tactically advantageous positions.

It was an excellent game, which managed to point out that I had failed in my original purpose in playing it.  My purpose was subsumed by itself - or more particularly, by a subgoal.  I didn't set out to conquer the map.  I lost the game.  I achieved the game's victory conditions, yes, but failed my own.  The ending, the exact description of how I had failed and of how my reasoning led to a conclusion I would have dismissed as absurd when I began, was so memorable it still sticks in my mind, years later.

My original purpose was subsumed.  By what, exactly, however?

By the realities of the game I was playing, I could say, if I were to rationalize my behavior; I wanted to improve all the cities I owned, but at no point until I had conquered the entire map could I afford to.  At each point in the game, there was always one city that couldn't be reliably improved.  The AI didn't share my goals; responding to force with force, to sabotage with sabotage, offered no penalties to the AI or its purposes, only to mine.  But nevertheless, I had still abandoned my original goals.  The realities of the game didn't subsume my purpose, which was still achievable within its constraints.

The specific reasons my means subsumed my ends may be illustrative: I inappropriately generalized.  I reasoned as if my territory were an atomic unit.  The risks incurred at my borders were treated as being incurred across the whole of my territory.  I devoted my resources - in particular my time - into solving a problem which afflicted an ever-decreasing percentage of that territory.  But even realizing that I was incorrectly generalizing wouldn't have stopped me; I'd have reasoned that the edge cities would still be under the same threat, and I couldn't actually finish my task until I finished my current task first.

Maybe, once my imaginary video game emperor had finally finished conquering the world, he'd have finally turned to the task of improving things.  Personally, I imagine he tripped and died falling down a flight of stairs shortly after conquering imaginary-China, and all of his work was undone in the chaos that ensued, because it seems the more poetic end to me.

A game taught me a major flaw in my goal-oriented reasoning.

I don't know the name for this error, if it has a name; internally, I call it incidental problem fixation, getting caught up in solving the sub-problems that arise in trying to solve the original problem.  Since playing, I've been very careful, each time a new challenge comes up in the course of solving an overall issue, to re-evaluate my priorities, and to consider alternatives to my chosen strategy.  I still have something of an issue with this; I can't count the number of times I've spent a full workday on a "correct" solution to a technical issue (say, a misbehaving security library) that should have taken an hour.  But when I notice that I'm doing this, I'll step away, and stop working on the "correct" solution, and return to solving the problem I'm actually trying to solve, instead of getting caught up in all the incidental problems that arose in the attempt to implement the original solution.

ETA: Link to part 1: http://lesswrong.com/lw/e12/subsuming_purpose_part_1/

Does the Utility Function Halt?

3 OrphanWilde 28 January 2015 04:08AM

Suppose, for a moment, that somebody has written the Utility Function.  It takes, as its input, some Universe State, runs it through a Morality Modeling Language, and outputs a number indicating the desirability of that state relative to some baseline and, more importantly, relative to other Universe States we might care to compare it to.

Can I feed the Utility Function the state of my computer right now, as it is executing a program I have written?  And is a universe in which my program halts superior to one in which my program wastes energy executing an endless loop?

If you're inclined to argue that's not what the Utility Function is supposed to be evaluating, I have to ask what, exactly, it -is- supposed to be evaluating.  We can reframe the question in terms of the series of keys I press as I write the program, if that's an easier problem to solve than what my computer is going to do.
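
The difficulty lurking underneath is the halting problem: if the Utility Function consistently scored "my program halts" above "my program loops forever", it would double as a halting oracle.  A sketch of that reduction - every name here is a hypothetical stand-in for the post's constructs, not a real API:

```python
def U(universe_state) -> float:
    """The hypothetical Utility Function."""
    raise NotImplementedError

def universe_running(program) -> object:
    """Hypothetical encoding of 'my computer is executing this program'."""
    raise NotImplementedError

LOOPING_BASELINE = object()  # a universe where the program never halts

def halts(program) -> bool:
    # If U ranks halting universes above looping ones, U decides halting.
    return U(universe_running(program)) > U(LOOPING_BASELINE)

def contrary():
    # Standard diagonalization: contrary() halts exactly when halts()
    # says it doesn't, so no computable U can make the comparison
    # correctly for every program.
    if halts(contrary):
        while True:
            pass
```

So whatever the Utility Function evaluates, it can at best approximate judgments like "this program wastes energy forever"; it cannot render them exactly.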

Emotional Basilisks

-2 OrphanWilde 28 June 2013 09:10PM

Suppose it is absolutely true that atheism has a negative impact on your happiness and lifespan.  Suppose, furthermore, that you are the first person in your society of relatively happy theists to happen upon the idea of atheism, that you found absolute proof of its correctness, and that you quietly studied its effects on a small group of people kept isolated from the general population, discovering that it has negative effects on happiness and lifespan.  Suppose that it -does- free people from a considerable amount of time wasted - from your perspective as a newfound atheist - in theistic theater.

Would you spread the idea?

This is, in our theoretical society, the emotional equivalent of a nuclear weapon; the group you tested it on is now comparatively crippled with existentialism and doubt, and many are beginning to doubt that the continued existence of human beings is even a good thing.  This is, for all intents and purposes, a basilisk, the mere knowledge of which causes its knower severe harm.  Is it, in fact, a good idea to go around talking about this revolutionary new idea, which makes everybody who learns it slightly less happy?  Would it be a -better- idea to form a secret society to go around talking to bright people likely to discover it themselves to try to keep this new idea quiet?

(Please don't fight the hypothetical here.  I know the evidence that atheism does in fact cause harm is nowhere near so perfect; all the studies I've personally seen which suggest as much have methodological flaws.  This is merely a question of whether "That which can be destroyed by the truth should be" is, in fact, a useful position to take, in view of ideas which may actually be harmful.)
