How can I give a simple example of the requirement of falsifiability in the scientific method to a novice audience?

5 Val 11 April 2016 09:26PM

(I once posted this question on academia.stackexchange, but it was deemed off topic there. I hope it will be more on-topic here.)


I would like to introduce the basics of the scientific method to an audience unfamiliar with its real meaning, without making it hard to understand.

As the suspected knowledge level of the intended audience is of the type which commonly thinks that to "prove something scientifically" means "use modern technological gadgets to measure something, then interpret the results as we wish", my major topics would be the selection of an experimental method and the importance of falsifiability. Wikipedia lists "all swans are white" as an example of a falsifiable statement, but it is not practical enough: proving that all swans are white would require observing every swan in the world. I'm searching for a simple example which uses the scientific method to determine the workings of an unknown system, starting by forming a good hypothesis.

A good example I found is the 2-4-6 game, culminating in the very catchy phrase "if you are equally good at explaining any outcome, you have zero knowledge". This would be one of the best examples to illustrate the most important part of the scientific method, which a lot of people imagine incorrectly. It has just one flaw: for best effect it has to be interactive, and if I make it interactive, it has a non-negligible chance of failing, especially with a broader audience.

Is there any simple, non-interactive example to illustrate the problem underlying the 2-4-6 game? (For example: "if we had used this naive method to formulate our hypothesis, we would have failed.")
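For reference, the trap in the 2-4-6 game can even be sketched non-interactively in a few lines. This is only an illustrative sketch assuming the classic version of the game: the hidden rule is "any strictly increasing triple", and the probe sequences are my own invented choices.

```python
def hidden_rule(triple):
    """The experimenter's secret rule: the three numbers strictly increase."""
    a, b, c = triple
    return a < b < c

# A player who hypothesizes "numbers increasing by 2" and only tests
# triples that FIT that hypothesis gets nothing but "yes" answers,
# and so never discovers the real, much broader rule.
confirming_probes = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]

# A player who also tries to REFUTE the hypothesis learns much more:
# (1, 2, 3) and (2, 4, 100) pass despite not increasing by 2,
# and (3, 2, 1) fails, mapping out the rule's actual boundary.
falsifying_probes = [(1, 2, 3), (2, 4, 100), (3, 2, 1)]

for probe in confirming_probes:
    print(probe, "->", hidden_rule(probe))  # always True: hypothesis "confirmed"

for probe in falsifying_probes:
    print(probe, "->", hidden_rule(probe))  # True, True, False: hypothesis refuted
```

The point is visible without any interactivity: the confirming player can run tests forever and never be wrong, yet learns nothing, while the falsifying player's third probe already disproves the naive hypothesis.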

I know the above example is mostly used in discussions of fallacies like confirmation bias, but it nevertheless seems to me a good way of grasping the most important aspects of the scientific method.

I've seen several good posts about the importance of falsifiability, some of them in this very community, but I have not yet seen any example simple enough that people unfamiliar with how scientists work can also understand it. A good working example would be one where we want to study a familiar concept, but by forgetting to take falsifiability into account, we arrive at an obviously wrong (and preferably humorous) conclusion.

(How do I imagine such an example working? My favorite example on a different topic is the egg-laying dog. A dog enters a room where we placed ten sausages and ten eggs, and when it leaves, we observe that the ratio of eggs to sausages has increased, so we conclude that the dog must have produced eggs. It's easy to spot the mistake here, because the image of a dog laying eggs is absurd. Now replace the dog with an effective medicine against heart disease: someone notices that the chance of dying of cancer in the next ten years increased for patients treated with it, so they declare the medicine carcinogenic even though it isn't (people are not immortal, so if they don't die of one disease, they die later of another). In this case, many people will accept that it's carcinogenic without a second thought. This is why the egg-laying dog can be so useful in illustrating the problem. Note that the egg-laying dog is not itself a good example for raising awareness of the importance of falsifiability; I presented it as a good and useful style for an effective example any layman can understand.)

 

Comment author: SquirrelInHell 09 April 2016 01:59:00AM 0 points [-]

One of my friends, whose meta beliefs about religion etc. match pretty closely with mine, goes on calling herself "Christian". There's literally nothing Christian about her, just the label.

And it works.

She is getting all the social benefits of actually being Christian, without believing any of the bullshit.

This blows my mind, and yet it is how social groups work.

Comment author: Val 10 April 2016 09:14:02AM *  0 points [-]

Not necessarily. One might sincerely believe in the core values promoted by Christianity ("Do unto others as you would have them do unto you") without being a biblical literalist. Christianity includes a wide spectrum of views, not only how some people define it, which might even be just a parody of Christianity.

To summarize: I don't know her, so I cannot judge whether she's lying for social benefit or not, but I find it plausible that she might not be lying, or might not behave like this solely as a facade for a social benefit.

Comment author: gjm 06 April 2016 01:51:50PM *  1 point [-]

When you write "polyhacking", do you actually mean "bihacking"? If not, what you say you fear seems to me a very odd thing to fear.

Actually, I would be quite surprised if (within, let's say, the next 40 years, and assuming no huge technological changes that would affect this) heterosexuality + unwillingness to try to become bi were enough to get anyone widely labelled as homophobic. (I'm sure there are already people who would apply that label, but not enough to have much impact.)

[EDITED to add:] Just to clarify, the point of the second paragraph is that I find Val's fear not-terribly-plausible even if "bihacking" is what s/he meant.

Comment author: Val 06 April 2016 07:58:21PM *  1 point [-]

You are right, I meant bihacking, my mistake.

My concern was based on observing how the word phobia (especially in the cases of homophobia and xenophobia) is increasingly applied to cases of mild dislike, or even to cases of failing to show open support.

Comment author: Fluttershy 04 April 2016 07:48:10AM 1 point [-]

Many people are aware of Alicorn's post on polyhacking. There are a few things which have been written on bihacking, though I haven't seen bihacking discussed within the rationalist community as widely as polyhacking has been. Bihacking is the process of actively trying to become bisexual.

First, there are a couple sources which suggest that people can have "epiphanies", after which they become bisexual, or perhaps just recognize their latent bisexuality. This may be due to the fact that they are able to tell themselves different stories about their feelings towards others after having an epiphany. Here are two relevant links:

  • Ozy's Notes on the Success of Bihacking is the first post I'd recommend to anyone interested in bihacking.
  • This discussion also supports the idea that the stories people tell themselves about their feelings are more important than their feelings are in determining attraction.

Secondly, some people have had mild successes with working towards bisexuality by slowly starting to explore new experiences:

  • This highly upvoted comment strongly encourages this strategy.
  • This comment does too.
  • Both of the above two links focused on bihacking with online material. However, it may be easier to bihack via establishing a comfortable level of intimacy with your dispreferred gender of people (e.g. via cuddling a whole bunch of people), than it is to bihack via material.
Comment author: Val 04 April 2016 03:01:21PM *  4 points [-]

I fear a time will come when people who don't want to try <del>polyhacking</del> bihacking will be labeled as homophobic. And that will just further dilute the term.

Comment author: Huluk 26 March 2016 12:55:37AM *  26 points [-]

[Survey Taken Thread]

By ancient tradition, if you take the survey you may comment saying you have done so here, and people will upvote you and you will get karma.

Let's make these comments a reply to this post. That way we continue the tradition, but keep the discussion a bit cleaner.

Comment author: Val 01 April 2016 02:33:15PM *  23 points [-]

Besides saying that I have taken the survey...

I would also like to mention that predicting probabilities for unobservable concepts was the hardest part for me. Of course, there are some I believe in more than others, but still, any probability besides 0% or 100% seems really strange to me. For something like being in a simulation, saying 99% if I believed it but had some doubts, or 1% if I didn't believe it but was open to it, seems so arbitrary and odd. 1% is really huge in the scope of very probable or very improbable concepts which cannot be tested yet (and some may never be).

... so before losing my sanity trying to choose percentages I would still find plausible even a few minutes later, I filled them in based on my current gut feelings instead of Fermi-estimation-like calculations.

Comment author: Val 01 April 2016 02:23:02PM 1 point [-]

Please explain what you mean by saying "it is easier to...".

Judging by the examples, for me the opposite seems to be much easier, if we define easiness as how easy it is to identify with a view, select a view, or represent a view among other people.

Do you instead use the term as "it will be more useful for me"? For the average person, it is much easier to identify with a label, because it signifies loyalty to a well-defined group of people, which can lead to benefits within that group.

Saying "I'm a democrat" or "I'm a liberal" or "I'm a conservative" makes it much easier for other people who also identify with that group to give you recognition, while saying "I am a seeker of accurate world-models, whatever those turn out to be" will probably lead to confusion or even misunderstandings.

Even if we are not talking about expressing your views to others but about formulating your views for yourself, for most people labels still seem much easier than coming up with their own definitions of beliefs. If we talk about easiness, it's much easier to choose from existing templates than to define a custom one.

However, it might happen that I just misunderstood you because of how we interpret the meaning of "easiness".

Comment author: Val 31 March 2016 10:12:36PM *  7 points [-]

Insurance for small consumer products is not rational for the buyer, for the very reasons presented in the question. If you can afford the loss of the item, it's better not to buy insurance and just buy the item again if it is lost or destroyed. The reason insurance companies still make money from extended warranties on consumer products is that they have good marketing and people are not perfectly rational. Gambling, lotteries, etc. exist for the same reasons, despite having a negative expected value.

However, if you cannot afford the loss, it is advantageous to buy insurance. There are things which people own but cannot replace on short notice, and they suffer greatly if they lose them: for example, houses or business-crucial items. You can afford to pay the insurance, but cannot afford to lose the item in question. Taking out a loan to replace it might be much more expensive than the insurance.

There are situations where losing something might cost you much more than its monetary value. Losing your house might make you homeless. Losing your car, if you need it for your job, might cost you your job. If you make your living with an expensive machine, losing it might put you out of business. And not being able to afford an expensive operation might cost you your life if you don't have the health insurance which would have paid for it.
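The expected-value reasoning above can be made concrete with a toy calculation. All the numbers here are invented purely for illustration:

```python
# Toy comparison: insure a small consumer item, or self-insure?
item_value = 500.0  # cost to replace the item outright
p_loss = 0.05       # assumed yearly probability of loss or destruction
premium = 40.0      # assumed yearly insurance premium

# Uninsured: you occasionally pay the full replacement cost.
expected_cost_uninsured = p_loss * item_value

# Insured: you always pay the premium; the insurer covers the loss.
expected_cost_insured = premium

print(expected_cost_uninsured)  # 25.0
print(expected_cost_insured)    # 40.0
```

With these numbers the premium exceeds the expected loss, so the policy has negative expected value for the buyer; that is exactly why the insurer profits. The calculation flips in practical terms only when the loss would be catastrophic (a house, a business-crucial machine), where the cost of an uncovered loss is far more than its sticker price.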

Comment author: Lyyce 14 March 2016 11:35:30AM *  1 point [-]

One major difference between left and right is the stance on personal responsibility.

Leftist intellectuals tend to think societal influence trumps individual capabilities, so people are not responsible for their misfortunes and deserve to be helped, whereas rightists have the opposite view (related).

This seems trivial, especially in hindsight. But I hardly ever see it mentioned, and in most discussions the right treats the left as foolish and irrational, while the left thinks people on the right are self-interested and evil, rather than simply having a different philosophical opinion.

I guess this is part of the bigger picture of political discourse: it is always easier to dehumanise an opponent than to admit their point is as valid as ours.

Comment author: Val 15 March 2016 04:35:27AM 1 point [-]

Still, it would be very wrong to describe rightists as thinking that everyone who can't support themselves should starve. Many people on the political right also practice and/or believe in charity.

Comment author: elharo 06 March 2016 01:08:28AM -2 points [-]

It is comfortable for richer people to think they are richer because of the moral failings of the poor. And that justifies a paternalistic approach to poverty relief using vouchers and in-kind support. But the big reason poor people are poor is because they don’t have enough money, and it shouldn’t come as a huge surprise that giving them money is a great way to reduce that problem—considerably more cost-effectively than paternalism.

-- Charles Kenney, "For Fighting Poverty, Cash Is Surprisingly Effective", Bloomberg News, June 3, 2013

Comment author: Val 07 March 2016 03:58:03PM 3 points [-]

For a counter-example, see the story of almost every lottery winner ever, who was poor before winning the lottery, and ended up poor again soon enough.

Comment author: Val 04 March 2016 06:18:01PM 0 points [-]

Let's assume such an AI could be created perfectly.

Wouldn't there be a danger of freezing human values forever to the values of the society which created it?

Imagine that the Victorians (using steampunk technology or whatever) had somehow managed to build such an AI, and that AI would forever enforce their values. Would you be happy with every single value it enforced?
