Comment author: HungryHobo 18 April 2016 02:30:22PM 0 points [-]

The first theory is falsifiable as long as you're willing to let enough people die. Collect blood samples from everyone before they take the medicine. Sequence their full exome and put it on file.

Once you have a few thousand dead and a few thousand survivors, you should be able to narrow the candidates down to a few dozen genes.

Make predictions about who will die out of the next few hundred who take the pill, bam.

Turns out it's an eye color gene having some weird effect on a vital pathway that the drug is linked to.
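
As a toy sketch of that narrowing-down step (everything here is made up: the number of genes, the fully deterministic death rule, and a bare frequency comparison standing in for a proper statistical association test):

```python
import random

random.seed(0)

N_GENES = 200          # hypothetical candidate genes from the exome
CAUSAL_GENE = 42       # the (unknown) gene that actually determines survival

# Simulate exomes: each person carries a "broken" (1) or "working" (0)
# variant of each gene, independently at random.
def simulate_person():
    return [random.randint(0, 1) for _ in range(N_GENES)]

people = [simulate_person() for _ in range(2000)]
# In this toy model, death is fully determined by the causal gene.
died = [p[CAUSAL_GENE] == 1 for p in people]

# Crude association score: how differently is the broken variant
# distributed among the dead versus the survivors?
def association(gene):
    dead_broken = sum(p[gene] for p, d in zip(people, died) if d)
    alive_broken = sum(p[gene] for p, d in zip(people, died) if not d)
    n_dead = sum(died)
    n_alive = len(people) - n_dead
    return abs(dead_broken / n_dead - alive_broken / n_alive)

best = max(range(N_GENES), key=association)
print(best)  # should recover CAUSAL_GENE
```

With the causal gene identified, you can then make the testable prediction about who will die out of the next few hundred who take the pill.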

Alternatively, suppose it's not genetic at all: if single members of identical-twin pairs taking the drug died at rates inconsistent with the expected number of mutations between twins, then we could be pretty sure the cause isn't genetic.

Or perhaps it's only partially genetic; again, twins and siblings would let us work this out.

Seems pretty falsifiable.

Comment author: CynicalOptimist 22 April 2016 10:24:32PM *  0 points [-]

Yes, that's definitely true. If you know a little, or a lot, about genetics, then the theory is falsifiable.

I think it still works just fine as an example, though. The goal was to explain the meaning and importance of falsifiability. Spottiswood's theory, as presented and as it was being used, wasn't making any useful predictions. No one was looking at familial comparisons, and I implied that Spottiswood wasn't making any effort to identify the gene, so the only observations coming in were "person lives" or "person dies". Within that context, Spottiswood's theory can explain any observation and makes no useful predictions.

If that's not an example of an unfalsifiable theory, then it's still an example that helps explain the key elements of unfalsifiability, and helps explain why they're important.

If an audience member then points out what you pointed out? Then that's brilliant. We can agree with them, and talk about how this new consideration shows that the theory can be falsifiable after all.

But then we also get to point out how this falsifiability is what makes a theory much more useful... and the example still works because (QED) that's exactly the point we were trying to demonstrate.

Comment author: lerjj 06 April 2015 08:37:12PM 0 points [-]

In reality, not very surprised. I'd probably be annoyed/infuriated depending on whether the actual stakes are measured in billions of human lives.

Nevertheless, that merely reflects the fact that I am not 100% certain of my reasoning. I still maintain that rationality in this context definitely implies trying to maximise utility (even if you don't literally define rationality this way, any version of it that doesn't try to maximise when actually handed a payoff matrix is not worthy of the term). So we should expect that Clippy faces a similar decision to ours, but simply favours paperclips over human lives. If we translate from lives and clips into actual utility, we get the normal prisoner's dilemma matrix; we don't need to make any assumptions about Clippy.

In short, I feel that the requirement that both agents are rational is sufficient to rule out the asymmetrical options as possible, and clearly sufficient to show (C,C) > (D,D). I get the feeling this is where we're disagreeing and that you think we need to make additional assumptions about Clippy to assure the former.
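
For what it's worth, the translation into the standard payoff matrix can be written out explicitly (the numbers below are the usual illustrative ones; only their ordering matters):

```python
# Standard prisoner's dilemma payoffs as (row player, column player)
# utilities. Any numbers with the same ordering would do.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# Mutual cooperation beats mutual defection for both players...
assert PAYOFFS[("C", "C")][0] > PAYOFFS[("D", "D")][0]
assert PAYOFFS[("C", "C")][1] > PAYOFFS[("D", "D")][1]

# ...but whatever the other player does, defecting pays more,
# which is exactly the tension the thread is arguing about.
for other in ("C", "D"):
    assert PAYOFFS[("D", other)][0] > PAYOFFS[("C", other)][0]
print("(C,C) beats (D,D) for both, yet D strictly dominates C")
```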

Comment author: CynicalOptimist 17 April 2016 04:02:22PM 0 points [-]

It's an appealing notion, but I think the logic doesn't hold up.

In simplest terms: if you apply this logic and choose to cooperate, then the machine can still defect. Defecting nets more paperclips for the machine, so it's hard to claim that the machine's action is irrational.

Appealing as your logic is, it doesn't explain why the machine can't defect while you cooperate.

You said that if both agents are rational, then option (C,D) isn't possible. The corollary is that if option (C,D) is selected, then one of the agents isn't being rational. And in that case, the machine hasn't been irrational (it receives its best possible result), so the conclusion is that you, in choosing to cooperate, were the one being irrational.

You've successfully explained that (C,D) and (D,C) are impossible for rational agents, but you seem to have implicitly assumed that (C,C) was possible for rational agents. That's precisely the point we're hoping to prove, so it's a case of circular logic.

Comment author: mamert 14 April 2016 11:12:14AM 0 points [-]

Breaking #24 of the Evil Overlord List makes me wince too, even if it's a jester doing it. I'm not sure that's the main point, though. None of the proposed explanations for how the king could pull his "riddle" off without at any point lying feel entirely right to me, so unless someone offers to help me, I shall have to take your advice and not let myself get entangled in the "complex and detailed logic", when the answer might as well be "BS".

Comment author: CynicalOptimist 17 April 2016 02:46:00PM 1 point [-]

There's a lot of value in that. Sometimes it's best not to go down the rabbit hole.

Whatever the technicalities might be, the jester definitely followed the normal, reasonable rules of this kind of puzzle, and by those rules he got the right answer. The king set it up that way, and set the jester up to fail.

If he'd done it to teach the jester a valuable lesson about the difference between abstract logic and real life, then it might have been justified. But he's going to have the jester executed, so that argument disappears.

I think we can all agree: the King is definitely a dick.

Comment author: johnlawrenceaspden 14 April 2016 05:01:37PM 0 points [-]

The world is fair.

Hmm. Then we shouldn't be able to find someone who was rubbish at maths and music at the same time. Or good at both. Easily falsifiable.

Comment author: CynicalOptimist 17 April 2016 02:08:43PM 0 points [-]

I don't really recommend talking to a bunch of children and deliberately spreading the message "some of you just suck at most things".

There are positive and valuable ways to teach the lesson that people aren't all equally "good at stuff", but it's a tough one to communicate well. It's not a good thing to bring up casually as an example when you're talking about something else.

Comment author: johnlawrenceaspden 14 April 2016 04:36:18PM 0 points [-]

Couldn't Spottiswood make a gene-detector by feeding the medicine in tiny, tiny amounts and seeing whether you just died a bit? Could be way useful.

Comment author: CynicalOptimist 17 April 2016 01:58:03PM 0 points [-]

Incidentally, I think that you're proposing a test for susceptibility to the medicine. The relevant theory here is that any person who would be killed by a full dose would also be harmed, but not killed, by a much smaller dose. That's a perfectly testable, falsifiable theory, but I don't think it would directly test the claim that the cause is genetic.

A better test for genetic causes is to look at family relationships. If we believe the cause is genetic, then we predict that people who are more closely related to each other are more likely to have the same reaction to the medicine, and that identical twins will always have the exact same reaction.
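
That twin prediction is simple enough to write down as a checkable rule (the data here is purely illustrative):

```python
# Hypothetical records of identical-twin pairs: did each twin react
# badly to the medicine? (Made-up observations, for illustration only.)
twin_pairs = [
    (True, True),    # both reacted badly
    (False, False),  # both fine
    (True, True),
]

# A purely genetic cause predicts perfect concordance in identical twins,
# so a single discordant pair falsifies the "fully genetic" theory.
def fully_genetic_theory_survives(pairs):
    return all(a == b for a, b in pairs)

print(fully_genetic_theory_survives(twin_pairs))  # not yet falsified
```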

The original poster was looking for a very easy example that children could follow, without needing to understand any maths or probability theory, so I wanted to keep it simple. That's why I didn't mention the idea of improving the original scientist's theory.

Comment author: CynicalOptimist 17 April 2016 01:48:52PM 0 points [-]

Absolutely.

If the first scientist can come up with a way to test his theory, then it would probably make his theory more useful. It would also make it more falsifiable.

Comment author: CynicalOptimist 13 April 2016 07:30:41PM 7 points [-]

I think it would be great to start with a theory that sounds very scientific, but is unfalsifiable, and therefore useless. Then we modify the theory to include an element that is falsifiable, and the theory becomes much more useful.

For example, we have a new kind of medicine, and it is very good for some people, but when other people take the medicine it kills them. Naturally, we want to know who would be killed by the medicine, and who would be helped by it.

A scientist has a theory. He believes there is a gene that he calls the "Spottiswood gene". Anyone who has the proper form of the Spottiswood gene will be safe; they can take the medicine freely. But some people have a broken version of the Spottiswood gene, and they die when they take the medicine. Unfortunately, the scientist has no way of detecting the Spottiswood gene, so he can't tell you whether you have it or not.

Now this theory sounds very scientific and it's got lots of scientific words in it, but it isn't very useful. The scientist doesn't know how to detect the gene, so he can't tell you whether you are going to live or whether you are going to die. He can't tell you whether it is safe to take the medicine. If you take the pill and you survive, then the scientist will say that you had the working version of the gene. If you take the pill and you die, the scientist will say that you have the broken version of the gene. But he cannot say what will happen to you until after it has already happened, so his theory is useless. He can explain anything, but he can't make predictions in advance.

Now another scientist has a different theory. She thinks that the medicine is related to eye color. She thinks anyone with blue eyes will die if they take the medicine, and that anyone with brown eyes will be okay. She's not sure why this happens, but she plans to do more research and find out. Even if she doesn't do any more research, her theory is much more useful than the first scientist's theory. If she's right, then blue-eyed people will know that they should avoid the medicine, and brown-eyed people will know that they can take it safely. She has made predictions. She predicts that no brown-eyed person will die after taking the medicine, and she predicts that no blue-eyed person will live.

Of course, the second scientist might be wrong. But the interesting thing is that if she's wrong, then we can prove that she's wrong. She predicted that no one with brown eyes will die after taking the medicine, so if lots of people with brown eyes die, then we will know that she's wrong.

If her theory is wrong, then we should be able to prove that it's wrong. And then if the results don't prove that she's wrong, we accept that she's probably right. That's called falsifiability.
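
Her theory can even be written as a tiny predictive rule that observations either pass or fail (the observations below are made up for illustration):

```python
# The second scientist's theory as a predictive rule:
# blue eyes -> dies, brown eyes -> survives.
def predict(eye_color):
    return "dies" if eye_color == "blue" else "survives"

# Hypothetical (eye color, observed outcome) records.
observations = [
    ("brown", "survives"),
    ("blue", "dies"),
    ("brown", "survives"),
]

# The theory is falsified the moment any observation contradicts
# its prediction; until then, it has survived testing.
falsified = any(predict(eyes) != outcome for eyes, outcome in observations)
print(falsified)  # False so far
```

The first scientist's theory can't be written this way at all: with no gene detector, there is no `predict` function to check observations against.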

But the first scientist doesn't have falsifiability. We know that even if he's wrong, we'll never be able to prove it, and that means we'll never know if he's wrong or right. More importantly, even if he is right, his theory still wouldn't do anybody any good.

Comment author: Dagon 11 April 2016 06:40:02PM 0 points [-]

I'm not sure this is a single fallacy. It's more a mix of affective fallacy (things I don't like are false) and strawmanning an argument so you can disagree with the easy part.

Mixed in with the human tribal instinct to reinforce one's own conclusions rather than look for reasons to change (confirmation bias), this leads to these unpersuasive arguments. Politics isn't about policy: most people making these bad arguments aren't actually planning, or even hoping, to persuade. They're hoping to reinforce their position.

Hmm. Maybe I'm saying "this isn't a fallacy". It's not an actual false belief that anyone has - almost nobody has a reflective belief that this is the reason someone on the other side disagrees. It's more a bias - a mode of behavior based on heuristics and unstated goals, rather than a common reasoning falsehood.

Comment author: CynicalOptimist 13 April 2016 06:46:09PM 1 point [-]

I think you're saying that all the cases described above could be expressed as a mix of other fallacies, and therefore it's not a distinct fallacy in its own right?

I think a better question is: "If we think of this class of mistake as a specific named fallacy, will it help us to spot errors of reasoning that we would otherwise have missed? Or alternatively, will it help us to talk about errors of reasoning that we've noticed?"

If it can be expressed in terms of other fallacies, but these mistakes aren't immediately obvious as examples of those fallacies, then it can be worth giving them their own label as philh suggests.

Ultimately, different people will find that different tools and explanations work well for them. While two explanations might be logically equivalent, some people will find that one makes more sense to them, and some people will find that the other makes more sense.

It seems like a useful fallacy to me (so to speak), and I intend to keep an eye out for it.

Comment author: kephasp 30 March 2016 01:19:07PM 2 points [-]

We have a pretty stupid banking system if you can cancel a transaction after the target has had time to make a transaction back to you. Or it should be straightforward and fee-less to cancel that second transaction as a consequence.

Comment author: CynicalOptimist 13 April 2016 06:19:42PM 3 points [-]

"We have a pretty stupid banking system if you can..."

Yes, we do.

It's a complicated system that developed slowly, piece by piece, influenced by legislation, commercial pressures, other (contradictory) commercial pressures, and customers' needs. The need for backwards compatibility makes it impossible to rip up the old system and start again, and no one person is in charge of designing it. Naturally it's messed up and has inconsistencies.

---Meta comment: At first I was writing this with the intention of saying, basically: "Duh! Isn't that obvious?" Now I realize that that's really unkind and unfair.

You've encountered something that you hadn't known before, and you "noticed you were surprised". That's a good thing, and it's good that you expressed it so that other people can realize the same thing.

Comment author: Viliam_Bur 30 March 2016 07:44:38PM *  0 points [-]

I am not an American, and the American ways of transferring money are mysterious to me. When I want to send money from point A to point B, I log into a web page, fill in the required data, confirm the data, and in a day or two the money is there. If I understand it correctly, the American way to do this is to personally go to the bank, take a paper form, write the data on the paper, deliver the paper to the target, and the target must take the paper to their bank.

It was a huge surprise to learn this, because I automatically assumed that the American ways of dealing with money must be more advanced and more convenient, just because of having more experience with the internet and capitalism in general. But now I guess that the American system is simply a victim of its own inertia: these methods were invented and became the norm before the internet, and now people are resistant to change, because no one wants to experiment with new methods when their own money is involved.

Still, I agree that the second transaction should be cancellable after the first transaction has been cancelled. I'm not sure what the trick is here. Maybe the scammer wants part of the money returned using a different method (one that does not allow cancelling, or has shorter deadlines). Maybe the plan is that most people will not notice the cancelling of the first transaction, or will be busy enough that they miss the deadline for cancelling the second one. Maybe there is some psychological trick preventing the victim from cancelling. Really, I don't know (and, not being familiar with the American system, even if I read an explanation there is a chance I would misunderstand it).

Comment author: CynicalOptimist 13 April 2016 06:10:13PM 2 points [-]

Suffice to say: There are many different methods for sending money. Some of them will involve paper forms, some will not. Some of them involve the internet, some will not. And each one has its own rules.

"Maybe the scammer wants the part of their money returned using a different method (one that does not allow cancelling, or has shorter deadlines)"

This is essentially correct. I've read about similar scams, and I believe this was how they worked.
