You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Newcomb versus dust specks

-1 Post author: ike 12 May 2016 03:02AM

You're given the option to torture everyone in the universe, or inflict a dust speck on everyone in the universe. Either you are the only one in the universe, or there are 3^^^3 perfect copies of you (far enough apart that you will never meet.) In the latter case, all copies of you are chosen, and all make the same choice. (Edit: if they choose specks, each person gets one dust speck. This was not meant to be ambiguous.)

As it happens, a perfect and truthful predictor has declared that you will choose torture iff you are alone.

What do you do?

How does your answer change if the predictor made the copies of you conditional on their prediction?

How does your answer change if, in addition to that, you're told you are the original?

Comments (104)

Comment author: gjm 12 May 2016 11:05:46AM *  -1 points [-]

Meta: I thought posting in Main was disabled?

[EDITED: so far as I know, it still is; this post is in fact in Discussion rather than Main; I was misled by a quirk of the LW user interface. I think following that link may also illustrate what confused me.]

Comment author: ike 12 May 2016 11:46:35AM 0 points [-]

I posted this to discussion.

Comment author: gjm 12 May 2016 02:23:46PM -1 points [-]

Hmm, so you did. It turns out that

  • for any Discussion-section post under /r/discussion/lw/[code]/[title], the exact same post appears at /lw/[code]/[title] (and, when viewed there, looks as if it is in Main because the "MAIN" at the top is boldfaced and the "DISCUSSION" is not);
  • when you get a reply to a comment on a post in the Discussion section, in your inbox the "Context" link attached to that reply goes to the "Main-ified" version.

... which is how I came to think your post was in Main rather than Discussion. Sorry about that.

Comment author: Furcas 13 May 2016 03:51:44PM 0 points [-]

IMO since people are patterns (and not instances of patterns), there's still only one person in the universe regardless of how many perfect copies of me there are. So I choose dust specks. Looks like the predictor isn't so perfect. :P

Comment author: woodchopper 17 May 2016 08:59:17AM *  1 point [-]

This doesn't seem very coherent.

As it happens, a perfect and truthful predictor has declared that you will choose torture iff you are alone.

OK. Then that means if I choose torture, I am alone. If I choose the dust specks, I am not alone. I don't want to be tortured, and don't really care about 3^^^3 people getting dust specks in their eyes, even if they're all 'perfect copies of me'. I am not a perfect utilitarian.

A perfect utilitarian would choose torture though, because one person getting tortured is technically not as bad from a utilitarian point of view as 3^^^3 dust specks in eyes.
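The total-utilitarian arithmetic gestured at above can be sketched as a toy calculation. 3^^^3 cannot be represented in any computer, so `N` below is a purely hypothetical stand-in, and the disutility values are likewise made up; the argument only needs the product of a huge count and any strictly positive per-speck cost to dominate.

```python
# A sketch of the total-utilitarian comparison. All numbers here are
# illustrative assumptions, not anything stipulated in the thread.

TORTURE_DISUTILITY = 10**15    # assumed badness of the torture
SPECK_DISUTILITY = 10**-6      # assumed tiny but strictly positive badness

N = 10**30   # stand-in for 3^^^3, which dwarfs any number we can write out

print(N * SPECK_DISUTILITY > TORTURE_DISUTILITY)   # True: totals favour torture
```

The point of the sketch is only that under a total view, any fixed torture disutility is eventually exceeded once the speck count is large enough.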

Comment author: OrphanWilde 12 May 2016 05:25:05PM 6 points [-]

3^^^3 dust specks in everybody's eye?

So basically we're talking about turning all sentient life into black holes, or torturing everybody?

I mean, it depends on how bad the torture we're talking about is, and how long it will last. If it's permanent and unchanging, eventually people will get used to it/evolve past it and move on. If it's short-term, eventually people will get past it. So in either of those cases, torture is the obvious choice.

If, on the other hand, it's permanent and adaptive such that all life is completely and totally miserable for perpetuity, and there is nothing remotely good about living, oblivion seems the obvious choice.

Comment author: HungryHobo 12 May 2016 10:36:33AM 3 points [-]

This seems like a weird mishmash of other hypotheticals on the site; I'm not really seeing the point of parts of your scenario.

Comment author: gjm 12 May 2016 11:02:06AM *  -1 points [-]

I think the point may be: LW orthodoxy, in so far as there is such a thing, says to choose SPECKS over TORTURE [EDITED to add:] ... no, wait, I mean the exact opposite, TORTURE over SPECKS ... and ONE BOX over TWO BOXES, and that combining these in ike's rather odd scenario leads to the conclusion that we should prefer "torture everyone in the universe" over "dust-speck everyone in the universe" in that scenario, which might be a big enough bullet to bite to make some readers reconsider their adherence to LW orthodoxy.

My own view on this, for what it's worth, is that all my ethical intuitions -- including the one that says "torture is too awful to be outweighed by any number of dust specks" and the one that says "each of these vastly-many transitions by which we get from DUST SPECKS to TORTURE is a strict improvement" -- have been formed on the basis of experiences (my own, my ancestors', those of earlier people in the civilization I'm a part of) that come nowhere near to this sort of scenario, and I don't trust myself to extrapolate. If some incredibly weird sequence of events actually requires me to make such a choice for real then of course I'll have to make it (for what it's worth, I think I would choose TORTURE and ONE BOX in the separate problems and DUST SPECKS in this one, the apparent inconsistency notwithstanding, not least because I don't think I could ever actually have enough evidence to know something was a truly perfect truthful predictor) but I think its ability to tell me anything insightful about my values, or about the objective moral structure of the universe if it has one, is very very very doubtful.

Comment author: polymathwannabe 12 May 2016 09:49:38PM *  1 point [-]

LW orthodoxy, in so far as there is such a thing, says to choose SPECKS over TORTURE

No, Eliezer and Hanson are anti-specks.

Comment author: gjm 12 May 2016 10:58:09PM -1 points [-]

Wow, did I really write that? It's the exact opposite of what I meant. Will fix.

Comment author: 9eB1 12 May 2016 03:28:30PM 1 point [-]

I think your explanation may be correct, but I don't understand why torture would be the intuitive answer even so. First, if I select torture, everyone in the universe gets tortured, which means I get tortured. If instead I select dust speck, I get a dust speck, which is vastly preferable. Second, I would prefer a universe with a bunch of me to one with just me, because I'm pretty awesome so more me is pretty much just better. Basically I just fail to see a downside to the dust speck scenario.

Comment author: gjm 12 May 2016 05:34:17PM -1 points [-]

The downside to the dust speck scenario is that lots and lots and lots of you get dust-specked. But yes, I think the thought experiment is seriously impaired by the fact that the existence of more copies of you is likely a bigger deal than whether they get dust-specked.

Perhaps we can fix it, as follows: Omega has actually set up two toy universes, one with 3^^^3 of you who may or may not get dust-specked, one with just one of you who may or may not get tortured. Now Omega tells you the same as in ike's original scenario, except that it's "everyone sharing your toy universe" who will be either tortured or dust-specked.

Comment author: ike 12 May 2016 06:28:08PM 0 points [-]

But yes, I think the thought experiment is seriously impaired by the fact that the existence of more copies of you is likely a bigger deal than whether they get dust-specked.

The idea was that your choice doesn't change the number of people, so this shouldn't affect the answer.

Comment author: gjm 12 May 2016 07:58:09PM -1 points [-]

That seems, if you don't mind my saying so, an odd thing to say when discussing a version of Newcomb's problem. ("Your choice doesn't change what's in the boxes, so ...")

Comment author: ike 12 May 2016 09:02:59PM *  0 points [-]

In the first version, there's no causal relation between your choice and the number of people in the world. In the third, there is, and in the middle one, anthropics must also be considered.

I gave multiple scenarios to make this point.

If the predictor in Newcomb doesn't touch the boxes but merely tells you that they predict your choice will match what's in the box, it turns into the smoking lesions scenario.

Comment author: ike 12 May 2016 03:41:23PM *  0 points [-]

Specks is supposed to be the intuitive answer.

Second, I would prefer a universe with a bunch of me to one with just me, because I'm pretty awesome so more me is pretty much just better.

That's why I gave scenarios where your choice doesn't causally affect the number of people, which is where Newcomblike scenarios come in.

Comment author: ArisKatsaris 12 May 2016 07:04:39AM 2 points [-]

Well I personally don't want to be tortured, so I choose the dust speck.

Even if I wasn't personally involved, and I was to decide on morality alone rather than personal interest, average utilitarianism tells me that I should choose the dust speck. (Better that 100% of all people suffer from a dust speck, than 100% of all people suffer from torture)

Comment author: gjm 12 May 2016 11:05:14AM -1 points [-]

Do you generally endorse average utilitarianism? E.g., if you can press a button to create a new world, completely isolated from all others, containing 10^10 people 10x happier than typical present-day Americans, do you press it if what currently exists is a world with 10^10 people only 9x happier than typical present-day Americans and refrain from pressing it if it's 11x instead?
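The button example can be made concrete with a toy calculation. The population figure and happiness levels are just the hypothetical's stipulations, and everyone in a world is assumed equally happy for simplicity.

```python
# Sketch of gjm's button example: average and total utilitarianism
# give different verdicts depending on how happy the existing world is.

POP = 10**10   # population of each world, as stipulated

def press_verdicts(existing_happiness, new_happiness=10):
    total_before = existing_happiness * POP
    total_after = total_before + new_happiness * POP
    avg_before = existing_happiness                 # uniform existing world
    avg_after = total_after / (2 * POP)             # both worlds counted
    return {"average_says_press": avg_after > avg_before,
            "total_says_press": total_after > total_before}

print(press_verdicts(9))    # average utilitarianism says press (avg 9 -> 9.5)
print(press_verdicts(11))   # average says refrain (avg 11 -> 10.5); total still says press
```

Total utilitarianism says press in both cases (any happy new world adds utility), while the average view flips its verdict based solely on how the new world compares to everyone who already exists.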

Comment author: ArisKatsaris 13 May 2016 07:45:20PM *  0 points [-]

The answer is complex:

  • First of all, the creation of people is a complex moral decision. Whether you espouse average utilitarianism, total utilitarianism, or any other moral theory, if you ask someone "Would you press a button that would create a person?", they'd normally be HESITANT, no matter whether you said it would be a very happy person or a moderately happy person. We tend to think of creating people as a big deal, one that brings a big responsibility.

  • Secondly, my average utilitarianism is about the satisfaction of preferences, not happiness. This may seem a nitpick, though.

  • Thirdly, I can't help but notice that you're using the example of the creation of a world that in reality would increase average utility, even as you're using a hypothetical that states that in that particular case it would decrease average utility. This feels like a scenario designed to confuse the moral intuition into giving the wrong answer.

So using the current reality instead (rather than the one where people are 9x happier): Would I choose to create another universe happier than this one? In general, yes. Would I choose to create another universe, half as happy as this one? In general, no, not unless there's some additional value that the presence of that universe would provide to us, enough so that it would make up for the loss in average utility.

Comment author: gjm 13 May 2016 11:31:54PM -1 points [-]

the creation of people is a complex moral decision

True enough. But it seems to me that hesitation in such cases is usually because of uncertainty either about whether the new people would really have good lives or about their effect on others around them. In the scenarios I described, everyone involved gets a good life when all their interactions with others are taken into account. So yeah, creating lives is complex, but I don't see that that invalidates my question at all.

preferences, not happiness

That happens to be my, er, preference too. I do think it's a nitpick; we can just take "10x happier" as a sort of shorthand for some corresponding statement about preferences.

designed to confuse the moral intuition

I promise I had absolutely no such intention. I took the levels higher than typical ones in our world to avoid distracting digressions about whether the typical life in our world is in fact better than nothing. (Note that this isn't the same question as whether it's worth continuing such a life once it's already in progress.)

Your example of a world half as happy as this seems like it has a similar but opposite problem: depending on what "half as happy" actually means, you might be describing a change that would be rejected by total utilitarianism as well as average. That's the problem I was trying to avoid.

Comment author: Jiro 13 May 2016 08:51:55PM 1 point [-]

Would I choose to create another universe happier than this one? In general, yes.

Okay, now I reveal that just yesterday we discovered yet another universe which already exists and is a lot happier than the one you would choose to create. In fact it's so much happier that creating that universe would now drive the average down instead of up.

If you're using average utility, then whether this discovery has been made affects whether you want to create that other universe. Is that correct?

Comment author: ArisKatsaris 14 May 2016 02:52:12PM 0 points [-]

If you're using average utility, then whether this discovery has been made affects whether you want to create that other universe. Is that correct?

With the standard caveats, yes, that seems reasonable. Given the existence of that ultrahappy universe, an average human life will be more likely to exist in happy circumstances than it would in the multiversal reality I'd create if I chose to add that less-than-averagely-happy universe.

Same way as I'd not take 20% of actual existing happy people and force them to live less happy lives.

Think about all sentient lives as if they were part of a single mind, called "Sentience". We design portions of Sentience's life. We want as large a proportion of Sentience's existence as possible to be happy, satisfying Sentience's preferences.

Comment author: RowanE 14 May 2016 04:53:38PM 0 points [-]

The way the problem reads to me, choosing dust specks means I live in a universe where 3^^^3 of me exist, and choosing torture means 1 of me exist. I prefer that more of myself exist than not, so I should choose specks in this case.

In a choice between "torture for everyone in the universe" and "specks for everyone in the universe", the negative utility of the former obviously outweighs that of the latter, so I should choose specks.

I don't see any incongruity or reason to question my beliefs? I suppose it's meant to be implied that it's other selves that exist because of the size of the universe, so there's either one of "everyone in the universe" or 3^^^3 copies of everyone, but in that case my other selves are too far outside my light-cone for "iff you are alone" to be a prediction that makes sense.

Comment author: OrphanWilde 13 May 2016 03:34:19PM 0 points [-]

For the case that dust specks aren't additive, assuming we treat copies of me as distinct entities with distinct moral weight, 3^^^3 copies of me is either a net negative - as a result of 3^^^3 lives not worth living - or a net positive - as a result of an additional 3^^^3 lives worth living. The point of the dust speck is that it has only a negligible effect; the weight of the dust speck moral issue is completely subsumed by the weight of the duplicate people issue.

If we don't treat them as distinct moral entities, well, the duplication and the dust speck doesn't enter into it.

I don't think your conceptual problem sufficiently isolates whatever moral quandary you're trying to express; there's just too much going on here.

Comment author: ike 13 May 2016 05:10:06PM -1 points [-]

3^^^3 copies of me is either a net negative - as a result of 3^^^3 lives not worth living - or a net positive - as a result of an additional 3^^^3 lives worth living. The point of the dust speck is that it has only a negligible effect; the weight of the dust speck moral issue is completely subsumed by the weight of the duplicate people issue.

If you smoke in the smoking lesions scenario, then you shouldn't choose your action here based on how many people would exist, because they would exist anyway. (At least in the first of three cases.)

Comment author: OrphanWilde 13 May 2016 05:33:05PM -1 points [-]

Either you misunderstand the smoking lesions scenario and the importance of the difference between a correlation and a perfect predictor, or you're just trolling the board by throwing every decision theory edge case you can think of into a single convoluted mess.

Comment author: ike 13 May 2016 07:33:27PM *  0 points [-]

I may be misunderstanding something, but isn't the standard LW position on smoking to smoke even if the gene's correlation to smoking and cancer is 1?

As long as the predictor doesn't cause anything but merely informs, they're equivalent to the gene. The reason why one-boxing is correct is because your choice causes the money, while the reason smoking is correct is because your choice doesn't cause cancer.
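The contrast being drawn here can be put into a toy expected-value calculation. This is only a sketch of the textbook framing of Newcomb's problem, with the usual payoffs assumed; `accuracy` is the predictor's reliability, and the CDT version holds the (already fixed) box contents constant at some probability.

```python
# Toy EDT vs. CDT evaluation of Newcomb's problem. Payoffs and the
# framing are standard stipulations, not anything from this thread.

BOX_A = 1_000_000   # opaque box, filled iff one-boxing was predicted
BOX_B = 1_000       # transparent box, always present

def edt_value(action, accuracy=1.0):
    # EDT conditions on the action: choosing it is evidence about the prediction.
    p_predicted_one_box = accuracy if action == "one-box" else 1 - accuracy
    expected_a = p_predicted_one_box * BOX_A
    return expected_a if action == "one-box" else expected_a + BOX_B

def cdt_value(action, p_box_filled):
    # CDT treats the box contents as fixed regardless of the action taken.
    expected_a = p_box_filled * BOX_A
    return expected_a if action == "one-box" else expected_a + BOX_B

print(edt_value("one-box") > edt_value("two-box"))            # True: EDT one-boxes
print(cdt_value("two-box", 0.5) > cdt_value("one-box", 0.5))  # True: CDT two-boxes
```

Under CDT the two-box option dominates for any fixed probability that the box is filled, which is exactly why the dispute turns on whether the choice should be modelled as affecting the prediction.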

Comment author: entirelyuseless 14 May 2016 12:42:45AM -2 points [-]

We've had this discussion before. When you one-box, your choice does not cause the money. The money is already there or it is not. Causality does not go backwards in time.

In other words, Newcomb and the smoking lesion are identical in logical form.

Comment author: ArisKatsaris 14 May 2016 02:37:17PM 0 points [-]

When you one-box, your choice does not cause the money.

Your decision algorithm will cause the choice. The prediction of that choice, by someone who knows your decision algorithm, will have caused the money.

If you want the money you should therefore be a decision algorithm that makes the choice whose prediction will cause the money.
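The "be the algorithm" idea can be sketched as a simulation in which Omega fills the box by running the agent's own decision procedure. All names and payoffs here are illustrative assumptions.

```python
# Sketch of "be a decision algorithm whose prediction causes the money":
# Omega decides whether to fill the opaque box by simulating the agent.

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

def play_newcomb(algorithm):
    prediction = algorithm()                  # Omega simulates the agent
    box_a = 1_000_000 if prediction == "one-box" else 0
    choice = algorithm()                      # the agent then actually chooses
    return box_a if choice == "one-box" else box_a + 1_000

print(play_newcomb(one_boxer))   # 1000000: the prediction caused the money
print(play_newcomb(two_boxer))   # 1000
```

Because the same procedure generates both the prediction and the eventual choice, being the one-boxing algorithm is what gets the million, which is the point being made above.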

Comment author: entirelyuseless 14 May 2016 03:20:59PM -1 points [-]

You cannot make yourself into a certain decision algorithm, just as you cannot make yourself have or not have a lesion.

Comment author: ArisKatsaris 15 May 2016 11:56:42AM 1 point [-]

You cannot make yourself into a certain decision algorithm

What, is this some sort of objection where you believe that determinism means we don't make 'real' choices?

You could be convinced by my words and make yourself into a person who chooses to one-box. Or you could refuse to be convinced and remain a person who chooses to two-box.

Granted, by being "convinced" or "not convinced" it means that you're already the decision algorithm that would make that choice. So what? Whether you'll be convinced or not still affects your decision algorithm from then on.

Comment author: entirelyuseless 15 May 2016 02:18:29PM -1 points [-]

No, I don't believe that determinism means we don't make real choices. But it is also true, as you note yourself, that if I am convinced by your words, then I was already the kind of person who would be convinced, and I did not make myself into that sort of person. And likewise for the opposite case.

But I am consistent: I believe we make real choices even if Omega predicts our actions, and I also believe we make real choices even if a lesion causes them. The people arguing against my position are saying we don't make real choices in the second case, so they are the ones raising the determinism objection.

Comment author: ArisKatsaris 17 May 2016 07:29:52PM 0 points [-]

Okay, can you just state clearly whether you one-box or two-box, and whether you smoke or not-smoke in the smoking lesion problem, so that I understand what your position is, before trying to understand why it is?

Comment author: ike 14 May 2016 12:47:51AM 0 points [-]

I'm referring to TDT, which disagrees.

Comment author: entirelyuseless 14 May 2016 02:35:11PM *  -1 points [-]

Eliezer disagrees, but no formal decision theory disagrees, because the two situations are formally identical.

Comment author: ike 14 May 2016 05:24:29PM 0 points [-]

They're formally identical only if you consider the choice to not counterfactually affect the outcome. Asserting that counterfactuals don't go backwards in time makes the choice not affect it, but that's just question begging.

It hasn't been formalized because we don't know how to deal with logical uncertainty fully yet.

Comment author: entirelyuseless 14 May 2016 09:25:49PM *  0 points [-]

If I have the 100% version of the lesion, it is true to say, "If I had decided not to smoke, I would not have had the lesion," because that is the only way I could have decided not to smoke, in the same way that in Newcomb it is true to say, "If I had picked one-box, I would have been a one-boxer," because that is the only way I could have picked one box.

Comment author: ike 14 May 2016 09:54:27PM 0 points [-]

In one there's counterfactual dependence and in the other there isn't. If your model doesn't take into account counterfactuals then you can't even tell the difference between smoking lesions and the case where smoking really does cause cancer.

Comment author: OrphanWilde 16 May 2016 04:03:10PM 0 points [-]

I may be misunderstanding something, but isn't the standard LW position on smoking to smoke even if the gene's correlation to smoking and cancer is 1?

If the mutual correlation to both is 1, you will smoke if and only if you have the gene, and you will have the gene if and only if you smoke, and in which case you shouldn't smoke. At the point at which the gene is a perfect predictor, if you have a genetic test and you don't have the gene, and then smoke - then the genetic test produced a false negative. Perfect predictors necessarily make a mess of causality.

Comment author: ike 16 May 2016 05:09:27PM 0 points [-]

you will smoke if and only if you have the gene, and you will have the gene if and only if you smoke, and in which case you shouldn't smoke

This implicitly assumes EDT.

At the point at which the gene is a perfect predictor, if you have a genetic test and you don't have the gene, and then smoke

But that's not what CDT counterfactuals do. You cut off previous nodes. As the choice to smoke doesn't causally affect the gene, smoking doesn't counterfactually contradict the prediction. If you would actually smoke, then yes, but counterfactuals don't imply there's any chance of it happening in reality.
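The "cut off previous nodes" move corresponds to the difference between conditioning on an action and intervening on it in a causal graph (gene → smoke, gene → cancer). A minimal sketch, with made-up numbers and the correlation-1 version of the lesion assumed:

```python
# Conditioning vs. intervening in the smoking-lesion graph. The
# probabilities are illustrative, not from the thread.

P_GENE = 0.2
P_SMOKE_GIVEN_GENE = {True: 1.0, False: 0.0}    # correlation-1 version
P_CANCER_GIVEN_GENE = {True: 1.0, False: 0.0}

def p_cancer_given_smoke_observed():
    # Conditioning (EDT-style): observing smoking is evidence of the gene.
    p_smoke = sum(P_SMOKE_GIVEN_GENE[g] * (P_GENE if g else 1 - P_GENE)
                  for g in (True, False))
    p_gene_given_smoke = P_SMOKE_GIVEN_GENE[True] * P_GENE / p_smoke
    return P_CANCER_GIVEN_GENE[True] * p_gene_given_smoke

def p_cancer_given_do_smoke():
    # Intervening (CDT-style): do(smoke) severs the gene -> smoke arrow,
    # so the gene keeps its prior and the cancer risk is unchanged.
    return sum(P_CANCER_GIVEN_GENE[g] * (P_GENE if g else 1 - P_GENE)
               for g in (True, False))

print(p_cancer_given_smoke_observed())   # 1.0: smoking is perfect evidence of the gene
print(p_cancer_given_do_smoke())         # 0.2: the intervention leaves risk at the prior
```

This is the formal content of "you cut off previous nodes": under the intervention the counterfactual smoker's cancer risk equals the prior, even though every actual smoker in this toy world has the gene.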

Comment author: OrphanWilde 16 May 2016 06:30:43PM 2 points [-]

This implicitly assumes EDT.

No it doesn't. It assumes a "perfect predictor" is what it is. I don't give a damn about evidence - we're specifying properties of a universe here.

But that's not what CDT counterfactuals do.

CDT assumes causality makes sense in the universe. Your hypotheticals don't take place in a universe with the kind of causality causal decision theory depends upon.

You cut off previous nodes. As the choice to smoke doesn't causally affect the gene, smoking doesn't counterfactually contradict the prediction.

In the case of a perfect predictor, yes, smoking specifies which gene you have. You don't get to say "Everybody who smokes has this gene" as a property of the universe, and then pretend to be an exception to a property of the universe because you have a bizarre and magical agency that gets to bypass properties of the universe. You're a part of the universe; if the universe has a law (which it does, in our hypotheticals), the law applies to you, too.

We have a perfect predictor. We do something the perfect predictor predicted we wouldn't. There is a contradiction there, in case you didn't notice; either it's not, in fact, the perfect predictor we specified, or we didn't do the thing. One or the other. And our hypothetical universe is constructed such that the perfect predictor is a perfect predictor; therefore, we don't get to violate its predictions.

Comment author: ike 16 May 2016 06:41:31PM 0 points [-]

No it doesn't. It assumes a "perfect predictor" is what it is. I don't give a damn about evidence - we're specifying properties of a universe here.

You said "you shouldn't smoke", which is a decision-theoretical claim, not a specification. It's consistent with EDT, but not CDT.

You don't get to say "Everybody who smokes has this gene" as a property of the universe, and then pretend to be an exception to a property of the universe because you have a bizarre and magical agency that gets to bypass properties of the universe.

In other words, you're denying the exact thing that CDT asserts.

There is a contradiction there

Which is what a counterfactual is.

Whatever your theory is, it is denying core claims that CDT makes, so you're denying CDT (and implicitly assuming EDT as the method for making decisions, your arguments literally map directly onto EDT arguments).

Comment author: OrphanWilde 16 May 2016 07:20:13PM 2 points [-]

You said "you shouldn't smoke", which is a decision-theoretical claim, not a specification. It's consistent with EDT, but not CDT.

No it isn't, it's a statement about the universe: If you smoke, you'll get lesions. It's written into the specification of the universe; what decision theory you use doesn't change the characteristics of the universe.

In other words, you're denying the exact thing that CDT asserts.

No. You don't get to specify a universe without the kind of causality that the kind of CDT we use in our universe depends on, and then claim that this says something significant about decision theory. Causality in our hypothetical works differently.

Which is what a counterfactual is.

No it isn't.

Whatever your theory is, it is denying core claims that CDT makes, so you're denying CDT (and implicitly assuming EDT as the method for making decisions, your arguments literally map directly onto EDT arguments).

No it isn't. In terms of CDT, we can say that smoking causes the gene; this isn't wrong, because, according to the universe, anybody who smokes has the gene; if they didn't, they do now, because the correlation is guaranteed by the laws of the universe. No matter how much work you prepared to ensure you didn't have the gene in advance of smoking, the law of the universe says you have it now. No matter how many tests you ran, they were all wrong.

It may seem unintuitive and bizarre, because our own universe doesn't behave this way - but when you find yourself in an alien universe, stomping your foot and insisting that the laws of physics should behave the way you're used to them behaving is a fast way to die. Once you introduce a perfect predictor, the universe must bend to ensure the predictions work out.

Comment author: ike 16 May 2016 08:18:36PM 0 points [-]

You don't get to specify a universe without the kind of causality that the kind of CDT we use in our universe depends on, and then claim that this says something significant about decision theory.

What kind of causality is this, given that you assert that the correct thing to do in smoking lesions is refrain from smoking, and smoking lesions is one of the standard things where CDT says to smoke?

"A causes B, therefore B causes A" is a fallacy no matter what arguments you put forward.

In terms of CDT, we can say that smoking causes the gene

CDT asserts the opposite, and so if you claim this then you disagree with CDT.

You don't understand what counterfactuals are.

Comment author: entirelyuseless 16 May 2016 04:23:36PM 0 points [-]

"If you have a genetic test and you don't have the gene, and then smoke - then the genetic test produced a false negative."

If Omega makes the mistake of telling someone else that he predicted that you will one-box, and that person tells you, so you then take both boxes, knowing that the million is already there, then Omega's prediction was wrong.

Omega can be a perfect predictor, but he cannot tell you his prediction, at least not if you work the way normal humans do. Likewise, a gene could be a perfect predictor, but not if you know about it, at least not if you work the way normal humans do.

Comment author: OrphanWilde 16 May 2016 06:14:01PM 0 points [-]

Trial problem:

Omega appears before you, and gives you a pencil. He tells you that, in universes in which you break this pencil in half in the next twenty seconds, the universe ends immediately. Not as a result of your breaking the pencil - it's pure coincidence that in all universes in which you break the pencil, the universe ends, and in all universes in which you don't, it doesn't.

Do you break the pencil in half? It's not like you're changing anything by doing so, after all; some set of universes will end, some set won't, and you aren't going to change that.

You're just deciding which set of universes you happen to occupy. Which implies something.

Comment author: entirelyuseless 16 May 2016 07:54:44PM 0 points [-]

I don't break the pencil. But I already pointed out in Newcomb and in the Smoking Lesion that I don't care if I can change anything or not. So I don't care here either.

Comment author: Luke_A_Somers 12 May 2016 10:00:01PM *  0 points [-]

It makes a huge difference whether the dust speck choices add up or not. If they do, OrphanWilde's objection applies and the only path to survival is to be tortured.

If they don't, so each one of me gets one dust speck total, then dust specks for sure. All of the copies of me (whether there are one or 3^^^3 of us) are experiencing what amounts to a choice between individually being dust-specked or individually being tortured. We get what we ask for either way, and no one else is actually impacted by the choice.

There's no need to drag average utilitarianism in.

Comment author: HungryHobo 13 May 2016 01:06:35PM 0 points [-]

Computational theory of identity, so some large number of exact copies of the same individual experiencing the same thing don't sum; they only count as one instance?

Comment author: Luke_A_Somers 13 May 2016 04:55:03PM 0 points [-]

That too. But my reasoning holds in the more general case, where instead of it being 3^^^3 copies of me, it was 3^^^3 entities from the pool of people who would choose specks.

Comment author: CronoDAS 12 May 2016 04:57:06PM 0 points [-]

I choose torture if and only if I'm alone. Otherwise the predictor would be wrong, contrary to the assumptions of the hypothetical. But I'd rather be in the world where dust specks gets chosen.

Comment author: ike 12 May 2016 05:34:39PM 1 point [-]

You don't know whether you're alone.

Comment author: CronoDAS 14 May 2016 04:04:24PM 0 points [-]

Doesn't matter - I'll still end up doing it, regardless of what algorithm I try to implement!

Comment author: entirelyuseless 14 May 2016 09:42:40PM 0 points [-]

It is true in general that you will end up implementing the algorithm that you actually are. That doesn't mean you don't have to make a decision.