Comment author: Houshalter 17 July 2015 06:31:36AM 0 points [-]

I guess I meant it's not iid with respect to the distribution you really wanted to sample: the hypothetical training set of all possible pictures of tanks. You just sampled the ones that were taken during daytime.

Comment author: ThisSpaceAvailable 17 July 2015 08:05:15PM 1 point [-]

I'm not sure you understand what "iid" means. It means that each sample is drawn from the same distribution, and each sample is independent of the others. The term "iid" isn't doing any work in your statement; you could just say "It's not from the distribution you really want to sample", and it would be just as informative.

Comment author: Houshalter 17 July 2015 03:31:05AM 3 points [-]

This isn't an example of overfitting, but of the training set not being iid. You wanted a random sample of pictures of tanks, but you instead got a highly biased sample that is drawn from a different distribution than the test set.
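
To make the distinction concrete, here is a minimal sketch (hypothetical numbers; "daytime" stands in for whatever variable the collection process was conditioned on):

```python
import random

# The population we wanted to sample from: photos of tanks and non-tanks,
# taken at all hours.
population = [{"tank": random.random() < 0.5,
               "daytime": random.random() < 0.5}
              for _ in range(10_000)]

# An iid draw from the target distribution: this is what the test set
# (and the intended training set) looks like.
test_set = random.sample(population, 1_000)

# What the tank story describes: tank photos collected only in daytime,
# non-tank photos only at night, so label and lighting are perfectly
# correlated in training but not in testing.
train_set = [x for x in population if x["tank"] == x["daytime"]][:1_000]
```

Each biased example is still an iid draw from some distribution, just not from the one the test set is drawn from, which is exactly the distinction at issue in this exchange.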

Comment author: ThisSpaceAvailable 17 July 2015 06:07:16AM 1 point [-]

"This isn't an example of overfitting, but of the training set not being iid."

Upvote for the first half of that sentence, but I'm not sure how the second applies. The set of tanks is iid; the issue is that the creators of the training set allowed tank/not-tank to be correlated with an extraneous variable. It's like having a drug trial where the placebos are one color and the real drug is another.

Comment author: buybuydandavis 16 July 2015 11:38:27PM 2 points [-]

I see this failure in analysis all the time.

When people want to change the behavior of others, they find some policy and incentive that would encourage the change they desire, but never stop to ask how else people might react to that change in incentives.

Anyone ever come across any catchy name or formulation for this particular failure mode?

Comment author: ThisSpaceAvailable 17 July 2015 05:55:54AM 7 points [-]

Perverse incentives.

Comment author: ThisSpaceAvailable 13 June 2015 02:55:56AM 6 points [-]

I realize that no analogy is perfect, but I don't think your sleeper cell hypothetical is analogous to AI. It would be a more accurate analogy if someone were to point out that, gee, a sleeper cell would be quite effective, and it's just a matter of time before the enemy realizes this and establishes one. There is a vast amount of Knightian uncertainty that exists in the case of AI, and does not exist in your hypothetical.

Comment author: OrphanWilde 05 June 2015 01:06:04AM -2 points [-]

Wait. You paid a karma toll to comment on one of my most unpopular posts yet to... move the goalposts from "You don't know what you're talking about" to "The only correct definition of what you're talking about is the populist one"? Well, I guess we'd better redefine evolution to mean "Spontaneous order arising out of chaos", because apparently that's how we're doing things now.

Let's pull up the definition you offered.

in fact it refers to the opposite: subtracting expenses already paid from future expected expenses.

You're not even getting the -populist- definition of the fallacy right. Your version, as written, implies that the cost for a movie ticket to a movie I later decide I don't want to see is -negative- the cost of that ticket. See, I paid $5, and I'm not paying anything else later, so 0 - 5 = -5; a negative cost is a positive inflow, which means: Yay, free money?

Why didn't I bring that up before? Because I'm not here to score points in an argument. Why do I bring it up now? Because I'm a firm believer in tit-for-tat - and you -do- seem to be here to score points in an argument, a trait which I think is overemphasized and over-rewarded on Less Wrong. I can't fix that, but I can express my disdain for the behavior: Your games of trivial social dominance bore me.

I believe it's your turn. You're slated to deny that you're playing any such games. Since I've called your turn, I've changed it, of course; it's a chaotic system, after all. I believe the next standard response is to insult me. Once I've called that, usually -my- turn is to reiterate that it's a game of social dominance, and that this entire thing is what monkeys do, and then to say that by calling attention to it, I've left you in confusion as to what game you're even supposed to be playing against me.

We could, of course, skip -all- of that, straight to: What exactly do you actually want out of this conversation? To impart knowledge? To receive knowledge? Or do you merely seek dominance?

Comment author: ThisSpaceAvailable 06 June 2015 04:53:27AM 1 point [-]

You paid a karma toll to comment on one of my most unpopular posts yet

My understanding is that the karma toll is charged only when responding to downvoted posts within a thread, not when responding to the OP.

to... move the goalposts from "You don't know what you're talking about" to "The only correct definition of what you're talking about is the populist one"?

I didn't say that the only correct definition is the most popular one; you are shading my position to make it more vulnerable to attack. My position is merely that if, as you yourself said, "everybody" uses a different definition, then that is the definition. You said "everybody is silently ignoring what the fallacy actually refers to". But what a term "refers to" is, by definition, what people mean when they say it. The literal meaning (and I don't take kindly to people engaging in wild hyperbole and then accusing me of being hyperliteral when I take them at their word, in case you're thinking of trying that gambit) of your post is that in the entire world, you are the only person who knows the "true meaning" of the phrase. That's absurd. At the very least, your use is nonstandard, and you should acknowledge that.

Now, as to "moving the goalposts": what I suspected you of not knowing was the standard meaning of the phrase "sunk cost fallacy", so the goalposts are pretty much where they were in the beginning, with the only difference being that I have gone from strongly suspecting that you don't know what you're talking about to being pretty much certain.

Well, I guess we'd better redefine evolution to mean "Spontaneous order arising out of chaos", because apparently that's how we're doing things now.

I don't know of any mainstream references defining evolution that way. If you see a parallel between these two cases, you should explain what it is.

You're not even getting the -populist- definition of the fallacy right.

Ideally, if you are going to make claims, you would actually explain what basis you see for those claims.

Your version, as written, implies that the cost for a movie ticket to a movie I later decide I don't want to see is -negative- the cost of that ticket. See, I paid $5, and I'm not paying anything else later, so 0 - 5 = -5; a negative cost is a positive inflow, which means: Yay, free money?

Presumably, your line of thought is that what you just presented is absurd, and therefore it must be wrong. I have two issues with that. The first is that you didn't actually present what your thinking was. That shows a lack of rigorous thought, as you failed to make your argument explicit, leaving me to articulate both your argument and mine, which is rather rude. The second problem is that your syllogism "this is absurd, therefore it is false" is severely flawed. It's called the Sunk Cost Fallacy; the fact that it is illogical doesn't disqualify it from being a fallacy, because being illogical is what makes it a fallacy.

Typical thinking is, indeed, that if one has a ticket for X that is priced at $5, then doing X is worth $5. For the typical mind, failing to do X would mean immediately realizing a $5 loss, while doing X would avoid realizing that loss (at least, not immediately). Therefore, when contemplating X, the $5 is considered as being positive, with respect to not doing X (that is, doing X is valued higher than not doing X, and the sunk cost is the cause of the differential).
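
As a worked version of that accounting (a sketch; the dollar values besides the ticket price are hypothetical):

```python
ticket_price = 5           # already paid: a sunk cost
value_of_seeing_movie = 3  # in dollars, to this particular person
value_of_alternative = 4   # best other use of the evening

# Rational accounting: the $5 is gone either way, so it appears on
# neither side of the comparison. Skipping wins (4 > 3).
rational_go, rational_skip = value_of_seeing_movie, value_of_alternative

# Typical-mind accounting as described above: skipping "realizes" the $5
# loss, so the sunk cost becomes a differential in favor of going.
# Going "wins" (3 > -1), even though it is the worse choice.
typical_go = value_of_seeing_movie
typical_skip = value_of_alternative - ticket_price
```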

Why didn't I bring that up before? Because I'm not here to score points in an argument.

And if you were here to score points, you would think that "You just described X as being a fallacy, and yet X doesn't make sense. Hah! Got you there!" would be a good way of doing so? I am quite befuddled.

Why do I bring it up now? Because I'm a firm believer in tit-for-tat - and you -do- seem to be here to score points in an argument

I sincerely believe that you are using the phrase "sunk cost fallacy" in a way that is contrary to the standard usage, and that your usage impedes communication. I attempted to inform you of my concerns, and you responded by accusing me of simply trying to "score points". I do not think that I have been particularly rude, and, absent prioritizing your feelings over clear communication, I don't see how I could have avoided your accusation of playing "games of trivial social dominance".

"Once I've called that, usually -my- turn is to reiterate that it's a game of social dominance, and that this entire thing is what monkeys do"

Perceiving an assertion of error as being a dominance display is indeed something that the primate brain engages in. Such discussions cannot help but activate our social brains, but I don't think that means that we should avoid ever expressing disagreement.

We could, of course, skip -all- of that, straight to: What exactly do you actually want out of this conversation? To impart knowledge? To receive knowledge? Or do you merely seek dominance?

My immediate motive is to impart knowledge. I suppose if one follows the causal chain down, it's quite possible that humans' desire to impart knowledge stems from our evolution as social beings, but that strikes me as overly reductionist.

Comment author: Lumifer 02 June 2015 04:57:34AM 0 points [-]

You can't do an exhaustive search on an infinite set.

I haven't seen any infinite sets in reality.

Comment author: ThisSpaceAvailable 04 June 2015 10:30:42PM -1 points [-]

The set of possible Turing Machines is infinite. Whether you consider that to satisfy your personal definition of "seen" or "in reality" isn't really relevant.

Comment author: OrphanWilde 02 June 2015 01:08:49PM *  0 points [-]

Taking them into account is exactly what the sunk cost fallacy is: including sunk costs with prospective costs for the purposes of making decisions.

I think you confuse the most commonly used examples of the sunk cost fallacy with the sunk cost fallacy itself.

(And it would be e.g. there, strictly speaking.)

ETA: So if I'm arguing against a straw man, it's because everybody is silently ignoring what the fallacy actually refers to in favor of something related to the fallacy but not the fallacy entire.

Comment author: ThisSpaceAvailable 04 June 2015 10:28:39PM 1 point [-]

If you think that everyone is using a term for something other than what it refers to, then you don't understand how language works. And a discussion of labels isn't really relevant to the question of whether it's a straw man. Also, your example shows that what you're referring to as a sunk cost fallacy is not, in fact, a fallacy.

Comment author: benkuhn 03 June 2015 10:48:52PM 2 points [-]

To increase p'-p, prisons need to incarcerate prisoners who are less prone to recidivism than predicted. Given that past criminality is an excellent predictor of future criminality, this leads to a perverse incentive towards incarcerating those who were unfairly convicted (wrongly convicted innocents or over-convicted lesser offenders).

If past criminality is a predictor of future criminality, then it should be included in the state's predictive model of recidivism, which would fix the predictions. The actual perverse incentive here is for the prisons to reverse-engineer the predicted model, figure out where it's consistently wrong, and then lobby to incarcerate (relatively) more of those people. Given that (a) data science is not the core competency of prison operators; (b) prisons will make it obvious when they find vulnerabilities in the model; and (c) the model can be re-trained faster than the prison lobbying cycle, it doesn't seem like this perverse incentive is actually that bad.
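
A sketch of what including it in the model would look like (hypothetical features and data; scikit-learn's `LogisticRegression` is assumed as one stand-in for the state's predictive model):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per prisoner: prior convictions, age at release,
# offense severity. Past criminality goes in as an ordinary feature.
X = np.array([[3, 24, 2],
              [0, 45, 1],
              [5, 19, 3],
              [1, 38, 1],
              [4, 22, 2],
              [0, 50, 1]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = reoffended within the window

model = LogisticRegression().fit(X, y)

# The predicted p already reflects prior convictions, so a prison gains
# nothing by selecting inmates with long records: their higher risk is
# priced into the prediction they will be measured against.
predicted_p = model.predict_proba(X)[:, 1]
```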

Comment author: ThisSpaceAvailable 04 June 2015 10:20:31PM 2 points [-]

(a) Prison operators are not currently incentivized to be experts in data science. (b) Why? And will that fix things? There are plenty of examples of industries taking advantage of vulnerabilities without those vulnerabilities being fixed. (c) How will it be retrained? Will there be a "We should retrain the model" lobby group, and will it act faster than the prison lobby?

Perhaps we should have a futures market in recidivism. When a prison gets a new prisoner, they buy the associated future at the market rate, and once the prisoner has been out of prison sufficiently long without committing further crimes, the prison can redeem the future. And, of course, there would be laws against prisons shorting their own prisoners.
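
A toy sketch of the proposed cash flows (the prices, face value, and payoff rule are all invented for illustration):

```python
def prison_return(market_price, face_value, reoffended):
    """Prison buys the recidivism future at intake and may redeem it
    at face value once the ex-prisoner has stayed clean long enough."""
    payout = 0 if reoffended else face_value
    return payout - market_price

# The market price encodes the crowd's predicted risk, so the prison
# profits only by beating that prediction; a ban on shorting its own
# prisoners stops it from profiting when they reoffend.
prison_return(market_price=6_000, face_value=10_000, reoffended=False)  #  4000
prison_return(market_price=6_000, face_value=10_000, reoffended=True)   # -6000
```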

Comment author: Houshalter 01 June 2015 09:33:01PM 0 points [-]

I don't really understand the question. Or what "sense" means in this context.

Are you asking how close the approximation is to the ideal? I would say that depends on the amount of computing power available, and that it approaches the ideal in the limit. But it gives reasonable answers on realistic computers, whereas SI does not.

There is also some loss based on the information required to build a finite Turing machine of the right size, as opposed to the infinite number of other structures you can build with logic gates (e.g. a machine that is exactly like a finite Turing machine, except that the 15th memory cell is corrupted, etc.).

I don't think this problem is unsolvable, though. There are, for example, Neural Turing Machines, which give the model access to an infinite differentiable memory.

Comment author: ThisSpaceAvailable 02 June 2015 04:08:31AM 1 point [-]

An example of a sense would be to define some quantification of how good an algorithm is, and then show that a particular algorithm has a large value for that quantity compared to SI. In order to rigorously state that X approaches Y "in the limit", you have to have some index n and some metric M such that |M(X_n) - M(Y_n)| -> 0 as n -> infinity. Otherwise, you're simply making a subjective statement that you find X to be "good". So, for instance, if you can show that the loss in utility from using your algorithm rather than SI goes to zero as the size of the dataset goes to infinity, that would be an objective sense in which your algorithm approximates SI.
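
Spelled out as a formula (a sketch; n is the dataset size and M is whatever performance metric the claim is quantified over, with Y taken to be SI):

```latex
% "X approaches SI in the limit" made precise:
\[
  \lim_{n \to \infty} \bigl|\, M(X_n) - M(\mathrm{SI}_n) \,\bigr| = 0
\]
```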

Comment author: Lumifer 02 June 2015 02:48:35AM 0 points [-]

It's true that it's only possible to find local optima, but that's true with any algorithm.

Whaaaat? Exhaustive search is an algorithm; it will find you the global optimum anywhere. For many structures of the search space, it's not hard to find the global optimum with appropriate algorithms.
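
A minimal sketch of that claim on a toy finite search space:

```python
import math

def exhaustive_argmax(candidates, objective):
    """Check every candidate; on a finite set this is guaranteed to
    return a global optimum, whatever the local structure looks like."""
    return max(candidates, key=objective)

# Toy objective with a local optimum near x = -3 and the global optimum
# at x = 2; hill climbing can stall at the former, exhaustive search cannot.
grid = [i / 100 for i in range(-500, 501)]
best = exhaustive_argmax(grid, lambda x: -(x - 2) ** 2
                                         + 10 * math.exp(-10 * (x + 3) ** 2))
```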

Everything approximates Bayesian inference; it's just a matter of how ideal the approximation is. If you have enough data, maximum likelihood approaches Bayesian inference.

Huh?

Comment author: ThisSpaceAvailable 02 June 2015 04:00:44AM 1 point [-]

You can't do an exhaustive search on an infinite set.
