Explaining information theoretic vs thermodynamic entropy?
What is the best way to go about explaining the difference between these two different types of entropy? I can see the difference myself and give all sorts of intuitive reasons for how the concepts work and how they kind of relate. At the same time I can see why my (undergraduate) physicist friends would be skeptical when I tell them that no, I haven't got it backwards and a string of all '1's has nearly zero entropy while a perfectly random string is 'maximum entropy'. After all, if your entire physical system degenerates into a mush with no order that you know nothing about then you say it is full of entropy.
How would I make them understand the concepts before nerdy undergraduate arrogance turns off their brains? Preferably giving them the kind of intuitive grasp that would last rather than just persuading them via authoritative speech, charm and appeal to authority. I prefer people to comprehend me than to be able to repeat my passwords. (Except where having people accept my authority and dominance will get me laid in which case I may have to make concessions to practicality.)
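One concrete way to make the information-theoretic sense vivid for them is to compute the Shannon entropy of a string's empirical symbol distribution. This is a minimal sketch; the 1000-symbol strings and the fixed random seed are my own choices for reproducibility.

```python
import math
import random
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Entropy in bits per symbol of the string's empirical symbol distribution."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

uniform = "1" * 1000
random.seed(0)
noisy = "".join(random.choice("01") for _ in range(1000))

print(shannon_entropy(uniform))  # zero: an all-'1's string is perfectly predictable
print(shannon_entropy(noisy))    # close to 1 bit per symbol: maximally unpredictable
```

Running this shows the point numerically: the "orderly" string carries no information per symbol, while the random one is at the maximum for a binary alphabet, which is exactly the claim the physicists find backwards.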
Preference For (Many) Future Worlds
Followup to: Quantum Russian Roulette; The Domain of Your Utility Function
The only way to win is cheat
And lay it down before I'm beat
and to another give my seat
for that's the only painless feat.
Suicide is painless
It brings on many changes
and I can take or leave it if I please.
-- M.A.S.H.
Let us pretend, for the moment, that we are rational Expected Utility Maximisers. We make our decisions with the intention of achieving outcomes that we judge to have high utility. Outcomes that satisfy our preferences. Since developments in physics have led us to abandon the notion of a simple single future world, our decision-making process must now grapple with the fact that some of our decisions will result in more than one future outcome. Not simply the possibility of more than one future outcome, but multiple worlds, each with different events occurring. In extreme examples we can consider the possibility of staking our very lives on the toss of a quantum die, figuring that we are going to live in one world anyway!
How do preferences apply when making decisions with Many Worlds? The description I’m giving here will be obvious to the extent of being trivial to some, confusing to others and, I expect, considered outright wrong by still others. But it is the post that I want to be able to link to whenever the question “Do you believe in quantum immortality?” comes up. Because it is a wrong question!
Is Politics the Mindkiller? An Inconclusive Test
Or is the convention against discussing politics here silly?
I propose a test. I'm going to try to lay down some rules on voting on comments for the test here (not that I can force anybody to abide by them):
1.) Top-level comments should introduce arguments (or ridicule me and/or this test); responses should be responses to those arguments.
2.) Upvote and downvote based on whether or not you find an argument convincing in the context in which it was raised. That is, upvote if it's a good argument against the argument it is responding to, regardless of whether there's a good or obvious counterargument to it; if you have a good counterargument, raise it. If an argument is convincing, and the counterargument is also convincing, upvote both. If both arguments are unconvincing, downvote both.
3.) Try not to downvote particular comments excessively, if they're legitimate lines of argument. A faulty line of argument provides opportunity for rebuttal, and so for our test has value even then; that is, I want some faulty lines of argument here. If you disagree, please downvote me, instead of the faulty comments, because this post is what you want less of, not those comments. This necessarily implies, for balance, that we not excessively upvote comments. I'd suggest fairly arbitrary limits of 3/-3?
Edit: 4.) A single argument per comment would be ideal; as MixedNuts points out here, it's otherwise hard to distinguish between one good and one bad argument, which makes the upvoting/downvoting difficult to evaluate. (My apologies about missing this, folks.)
I'm going to try really hard not to get personally involved, except to lay down a leading comment posing an argument against abortion (a position I don't hold, for the record). The core of the argument isn't disingenuous; I hold that the basic argument is true, it just doesn't lead me to oppose abortion, because I don't hold the moral axiom by which the basic argument gets extended into an argument against abortion. I'm playing devil's advocate to help keep myself from getting sucked into the argument while providing an initial point of discussion.
Which leads me to the next point: If you see a hole in an argument, even if it's an argument for a perspective you agree with, poke through it. The goal is to see whether we can have a constructive political argument here.
The fact that this is a test, and known to be a test, means this isn't a blind study. Uh, try to act as if you're not being tested?
After it's gone on a little while, if this post hasn't been hopelessly downvoted and ridiculed (and thus the premise and test discarded as undesirable to begin with), we can put up a poll to see whether people found the political debates helpful, not helpful, and so on.
The Mere Cable Channel Addition Paradox
The following is a dialogue intended to illustrate what I think may be a serious logical flaw in some of the conclusions drawn from the famous Mere Addition Paradox.
EDIT: To make this clearer, the interpretation of the Mere Addition Paradox this post is intended to criticize is the belief that a world consisting of a large population full of lives barely worth living is the optimal world. That is, I am disagreeing with the idea that the best way for a society to use the resources available to it is to create as many lives barely worth living as possible. Several commenters have argued that another interpretation of the Mere Addition Paradox is that a sufficiently large population with a lower quality of life will always be better than a smaller population with a higher quality of life, even if such a society is far from optimal. I agree that my argument does not necessarily refute this interpretation, but think the other interpretation is common enough that it is worth arguing against.
EDIT: On the advice of some of the commenters I have added a shorter summary of my argument in non-dialogue form at the end. Since it is shorter I do not think it summarizes my argument as completely as the dialogue, but feel free to read it instead if pressed for time.
Bob: Hi, I'm with R&P cable. We're selling premium cable packages to interested customers. We have two packages to start out with that we're sure you'll love. Package A+ offers a larger selection of basic cable channels and costs $50. Package B offers a larger variety of exotic channels for connoisseurs; it costs $100. If you buy Package A+, however, you'll get a 50% discount on B.
Alice: That's very nice, but looking at the channel selection, I just don't think that it will provide me with enough utilons.
Bob: Utilons? What are those?
Alice: They're the unit I use to measure the utility I get from something. I'm really good at shopping, so if I spend my money on the things I usually spend it on I usually get 1.5 utilons for every dollar I spend. Now, looking at your cable channels, I've calculated that I will get 10 utilons from buying Package A+ and 100 utilons from buying Package B. Obviously the total is 110, significantly less than the 150 utilons I'd get from spending $100 on other things. It's just not a good deal for me.
Bob: You think so? Well it so happens that I've met people like you in the past and have managed to convince them. Let me tell you about something called the "Mere Cable Channel Addition Paradox."
Alice: Alright, I've got time, make your case.
Bob: Imagine that the government is going to give you $50. Sounds like a good thing, right?
Alice: It depends on where it gets the $50 from. What if it defunds a program I think is important?
Bob: Let's say that it would defund a program that you believe is entirely neutral. The harms the program causes are exactly outweighed by the benefits it brings, leaving a net utility of zero.
Alice: I can't think of any program like that, but I'll pretend one exists for the sake of the argument. Yes, defunding it and giving me $50 would be a good thing.
Bob: Okay, now imagine the program's beneficiaries put up a stink, and demand the program be re-instituted. That would be bad for you, right?
Alice: Sure. I'd be out $50 that I could convert into 75 utilons.
Bob: Now imagine that the CEO of R&P Cable Company sleeps with an important senator and arranges a deal. You get the $50, but you have to spend it on Package A+. That would be better than not getting the money at all, right?
Alice: Sure. 10 utilons is better than zero. But getting to spend the $50 however I wanted would be best of all.
Bob: That's not an option in this thought experiment. Now, imagine that after you use the money you received to buy Package A+, you find out that the 50% discount for Package B still applies. You can get it for $50. Good deal, right?
Alice: Again, sure. I'd get 100 utilons for $50. Normally I'd only get 75 utilons.
Bob: Well, there you have it. By a mere addition I have demonstrated that a world where you have bought both Package A+ and Package B is better than one where you have neither. The only difference between the hypothetical world I imagined and the world we live in is that in one you are spending money on cable channels. A mere addition. Yet you have admitted that that world is better than this one. So what are you waiting for? Sign up for Package A+ and Package B!
And that's not all. I can keep adding cable packages to get the same result. The end result of my logic, which I think you'll agree is impeccable, is that you purchase Package Z, a package where you spend all the money other than that you need for bare subsistence on cable television packages.
Alice: That seems like a pretty repugnant conclusion.
Bob: It still follows from the logic. For every world where you are spending your money on whatever you have calculated generates the most utilons there exists another, better world where you are spending all your money on premium cable channels.
Alice: I think I found a flaw in your logic. You didn't perform a "mere addition." The hypothetical world differs from ours in two ways, not one. Namely, in this world the government isn't giving me $50. So your world doesn't just differ from this one in terms of how many cable packages I've bought, it also differs in how much money I have to buy them.
Bob: So can I interest you in a special form of the package? This one is in the form of a legally binding pledge. You pledge that if you ever make an extra $50 in the future you will use it to buy Package A+.
Alice: No. In the scenario you describe the only reason buying Package A+ has any value is that it is impossible to get utility out of that money any other way. If I just get $50 for some reason it's more efficient for me to spend it normally.
Bob: Are you sure? I've convinced a lot of people with my logic.
Alice: Like who?
Bob: Well, there were these two customers named Michael Huemer and Robin Hanson who both accepted my conclusion. They've both mortgaged their homes and started sending as much money to R&P cable as they can.
Alice: There must be some others who haven't.
Bob: Well, there was this guy named Derek Parfit who seemed disturbed by my conclusion, but couldn't refute it. The best he could do is mutter something about how the best things in his life would gradually be lost if he spent all his money on premium cable. I'm working on him though, I think I'll be able to bring him around eventually.
Alice: Funny you should mention Derek Parfit. It so happens that the flaw in your "Mere Cable Channel Addition Paradox" is exactly the same as the flaw in a famous philosophical argument he made, which he called the "Mere Addition Paradox."
Bob: Really? Do tell?
Alice: Parfit posited a population he called "A" which had a moderately large population with large amounts of resources, giving them a very high level of utility per person. Then he added a second population, which was totally isolated from the other population. How they were isolated wasn't important, although Parfit suggested maybe they were on separate continents and can't sail across the ocean or something like that. These people don't have nearly as many resources per person as the other population, so each person's level of utility is lower (their lack of resources is the only reason they have lower utility). However, their lives are still just barely worth living. He called the two populations "A+."
Parfit asked if "A+" was a better world than "A." He thought it was, since the extra people were totally isolated from the original population they weren't hurting anyone over there by existing. And their lives were worth living. Follow me so far?
Bob: I guess I can see the point.
Alice: Next Parfit posited a population called "B," which was the same as A+, except that the two populations had merged together. Maybe they got better at sailing across the ocean; it doesn't really matter how. The people share their resources. The result is that everyone in the original population had their utility lowered, while everyone in the second had it raised.
Parfit asked if population "B" was better than "A+" and argued that it was because it had a greater level of equality and total utility.
Bob: I think I see where this is going. He's going to keep adding more people, isn't he?
Alice: Yep. He kept adding more and more people until he reached population "Z," a vast population where everyone had so few resources that their lives were barely worth living. This, he argued, was a paradox, because he argued that most people would believe that Z is far worse than A, but he had made a convincing argument that it was better.
Bob: Are you sure that sharing their resources like that would lower the standard of living for the original population? Wouldn't there be economies of scale and such that would allow them to provide more utility even with less resources per person?
Alice: Please don't fight the hypothetical. We're assuming that it would for the sake of the argument.
Now, Parfit argued that this argument led to the "Repugnant Conclusion," the idea that the best sort of world is one with a large population with lives barely worth living. That confers on people a duty to reproduce as often as possible, even if doing so would lower the quality of their and everyone else's lives.
He claimed that the reason his argument showed this was that he had conducted "mere addition." The populations in his paradox differed in no way other than their size. By merely adding more people he had made the world "better," even if the level of utility per person plummeted. He claimed that "For every population, A, with a high average level of utility there exists another, better population, B, with more people and a lower average level of utility."
Do you see the flaw in Parfit's argument?
Bob: No, and that kind of disturbs me. I have kids, and I agree that creating new people can add utility to the world. But it seems to me that it's also important to enhance the utility of the people who already exist.
Alice: That's right. Normal morality tells us that creating new people with lives worth living and enhancing the utility of people that already exist are both good things to use resources on. Our common sense tells us that we should spend resources on both those things. The disturbing thing about the Mere Addition Paradox is that it seems at first glance to indicate that that's not true, that we should only devote resources to creating more people with barely worthwhile lives. I don't agree with that, of course.
Bob: Neither do I. It seems to me that having a large number of worthwhile lives and a high average utility are both good things and that we should try to increase them both, not just maximize one.
Alice: You're right, of course. But don't say "having a high average utility." Say "use resources to increase the utility of people who already exist."
Bob: What's the difference? They're the same thing, aren't they?
Alice: Not quite. There are other ways to increase average utility than enhancing the utility of existing people. You could kill all the depressed people, for instance. Plus, if there was a world where everyone was tortured 24 hours a day, you could increase average utility by creating some new people who are only tortured 23 hours a day.
Bob: That's insane! Who could possibly be that literal-minded?
Alice: You'd be surprised. The point is, a better way to phrase it is "use resources to increase the utility of people who already exist," not "increase average utility." Of course, that still leaves some stuff out, like the fact that it's probably better to increase everyone's utility equally, rather than focus on just one person. But it doesn't lead to killing depressed people, or creating slightly less tortured people in a Hellworld.
Bob: Okay, so what I'm trying to say is that resources should be used to create people, and to improve people's lives. Also equality is good. And that none of these things should completely eclipse the other, they're each too valuable to maximize just one. So a society that increases all of those values should be considered more efficient at generating value than a society that just maximizes one value. Now that we're done getting our terminology straight, will you tell me what Parfit's mistake was?
Alice: Population "A" and population "A+" differ in two ways, not one. Think about it. Parfit is clear that the extra people in "A+" do not harm the existing people when they are added. That means they do not use any of the original population's resources. So how do they manage to live lives worth living? How are they sustaining themselves?
Bob: They must have their own resources. To use Parfit's example of continents separated by an ocean; each continent must have its own set of resources.
Alice: Exactly. So "A+" differs from "A" both in the size of its population, and the amount of resources it has access to. Parfit was not "merely adding" people to the population. He was also adding resources.
Bob: Aren't you the one who is fighting the hypothetical now?
Alice: I'm not fighting the hypothetical. Fighting the hypothetical consists of challenging the likelihood of the thought experiment happening, or trying to take another option than the ones presented. What I'm doing is challenging the logical coherence of the hypothetical. One of Parfit's unspoken premises is that you need some resources to live a life worth living, so by adding more worthwhile lives he's also implicitly adding resources. If he had just added some extra people to population A without giving them their own continent full of extra resources to live on then "A+" would be worse than "A."
Bob: So the Mere Addition Paradox doesn't confer on us a positive obligation to have as many children as possible, because the amount of resources we have access to doesn't automatically grow with them. I get that. But doesn't it imply that as soon as we get some more resources we have a duty to add some more people whose lives are barely worth living?
Alice: No. Adding lives barely worth living uses the extra resources more efficiently than leaving Parfit's second continent empty for all eternity. But, it's not the most efficient way. Not if you believe that creating new people and enhancing the utility of existing people are both important values.
Let's take population "A+" again. Now imagine that instead of having a population of people with lives barely worth living, the second continent is inhabited by a smaller population with the same very high level of resources and utility per person as the population of the first continent. Call it "A++." Would you say "A++" was better than "A+"?
Bob: Sure, definitely.
Alice: How about a world where the two continents exist, but the second one was never inhabited? The people of the first continent then discover the second one and use its resources to improve their level of utility.
Bob: I'm less sure about that one, but I think it might be better than "A+."
Alice: So what Parfit actually proved was: "For every population, A, with a high average level of utility there exists another, better population, B, with more people, access to more resources and a lower average level of utility."
And I can add my own corollary to that: "For every population, B, there exists another, better population, C, that has the same access to resources as B, but a smaller population and higher average utility."
Bob: Okay, I get it. But how does this relate to my cable TV sales pitch?
Alice: Well, my current situation, where I'm spending my money on normal things is analogous to Parfit's population "A." High utility, and very efficient conversion of resources into utility, but not as many resources. We're assuming, of course, that using resources to both create new people and improve the utility of existing people is more morally efficient than doing just one or the other.
The situation where the government gives me $50 to spend on Package A+ is analogous to Parfit's population A+. I have more resources and more utility. But the resources aren't being converted as efficiently as they could be.
The situation where I take the 50% discount and buy Package B is equivalent to Parfit's population B. It's a better situation than A+, but not the most efficient way to use the money.
The situation where I get the $50 from the government to spend on whatever I want is equivalent to my population C. A world with more access to resources than A, but more efficient conversion of resources to utility than A+ or B.
Bob: So what would a world where the government kept the money be analogous to?
Alice: A world where Parfit's second continent was never settled and remained uninhabited for all eternity, its resources never used by anyone.
Bob: I get it. So the Mere Addition Paradox doesn't prove what Parfit thought it did? We don't have any moral obligation to tile the universe with people whose lives are barely worth living?
Alice: Nope, we don't. It's more morally efficient to use a large percentage of our resources to enhance the lives of those who already exist.
Bob: This sure has been a fun conversation. Would you like to buy a cable package from me? We have some great deals.
Alice: NO!
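Alice's arithmetic throughout the dialogue can be checked in a few lines. This is a trivial sketch: the numbers are hers, the variable names are mine.

```python
UTILONS_PER_DOLLAR = 1.5  # Alice's usual rate for ordinary spending

package_a_plus = 10   # utilons Alice gets from Package A+ ($50)
package_b = 100       # utilons from Package B ($100, or $50 with the discount)

cable_total = package_a_plus + package_b      # 110 utilons for $100 of cable
ordinary_spending = 100 * UTILONS_PER_DOLLAR  # 150 utilons for the same $100

# Bob's scenario only works because the $50 is earmarked: 10 utilons beats 0,
# and discounted Package B (100 utilons for $50) beats ordinary spending of
# that $50 (75 utilons). Unrestricted money is still better spent normally.
discounted_b_rate = package_b / 50            # 2.0 utilons per dollar
```

This makes Alice's point concrete: each step of Bob's chain is an improvement only relative to the constrained alternative, not relative to spending freely.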
SUMMARY:
My argument is that Parfit’s Mere Addition Paradox doesn’t prove what it seems to. The argument behind the Mere Addition Paradox is that you can make the world a better place by the “mere addition” of extra people, even if their lives are barely worth living. In other words: "For every population, A, with a high average level of utility there exists another, better population, B, with more people and a lower average level of utility." This supposedly leads to the Repugnant Conclusion, the belief that a world full of people whose lives are barely worth living is better than a world with a smaller population where the people lead extremely fulfilled and happy lives.
Parfit demonstrates this by moving from world A, consisting of a population of people with lots of resources and high average utility, to world A+. World A+ has an additional population of people who are isolated from the original population and not even aware of its existence. The extra people live lives just barely worth living. Parfit argues that A+ is a better world than A because everyone in it has a life worth living, and the additional people aren’t hurting anyone by existing, because they are isolated from the original population.
Parfit then moves from World A+ to World B, where the populations are merged and share resources. This lowers the standard of living for the original people and raises it for the newer people. Parfit argues that B must be better than A+, because it has higher total utility and more equality. He then keeps adding people until he reaches Z, a world where everyone’s life is barely worth living and the population is vast. He argues that this is a paradox because most people would agree that Z is not a desirable world compared to A.
I argue that the Mere Addition Paradox is a flawed argument because it does not just add people, it also adds resources. The fact that the extra people in A+ do not harm the original people of A by existing indicates that their population must have a decent amount of resources to live on, even if it is not as many per person as the population of A. For this reason what the Mere Addition Paradox proves is not that you can make the world better by adding extra people, but rather that you can make it better by adding extra people and resources to support them. I use a series of choices about purchasing cable television packages to illustrate this in concrete terms.
I further argue for a theory of population ethics that values both using resources to create lives worth living, and using resources to enhance the utility of already existing people, and considers the best sort of world to be one where neither of these two values totally dominate the other. By this ethical standard A+ might be better than A because it has more people and resources, even if the average level of utility is lower. However, a world with the same amount of resources as A+, but a lower population and the same, or higher average utility as A is better than A+.
The main unsatisfying thing about my argument is that while it avoids the Repugnant Conclusion in most cases, it might still lead to it, or something close to it, in situations where creating new people and getting new resources are, as one commenter noted, a “package deal.” In other words, a situation where it is impossible to obtain new resources without creating some new people whose utility levels are below average. However, even in this case, my argument holds that the best world of all is one where it would be possible to obtain the resources without creating new people, or creating a smaller amount of people with higher utility.
In other words, the Mere Addition Paradox does not prove that: "For every population, A, with a high average level of utility there exists another, better population, B, with more people and a lower average level of utility." Instead what the Mere Addition Paradox seems to demonstrate is that: "For every population, A, with a high average level of utility there exists another, better population, B, with more people, access to more resources and a lower average level of utility." Furthermore, my own argument demonstrates that: "For every population, B, there exists another, better population, C, which has the same access to resources as B, but a smaller population and higher average utility."
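The diagnosis above, that the step from A to A+ changes two variables rather than one, can be illustrated with a toy model. All the numbers below are my own illustrative assumptions, not Parfit's, and the "utility equals resources per person" rule is a deliberately crude simplification.

```python
def avg_utility(population, resources):
    """Crude toy assumption: utility per person is just resources per person."""
    return resources / population

# World A: one continent.
a_people, a_resources = 100, 1000

# World A+: the same continent plus an isolated second continent that
# necessarily brings its own (smaller) resource base.
extra_people, extra_resources = 200, 400

# World B: the two continents merge and pool everything.
b_people = a_people + extra_people
b_resources = a_resources + extra_resources

print(avg_utility(a_people, a_resources))          # high utility on A's continent
print(avg_utility(extra_people, extra_resources))  # barely-worth-living utility
print(avg_utility(b_people, b_resources))          # merged world B sits in between
```

The key observation is that `b_resources` exceeds `a_resources`: the "mere addition" raised total resources as well as population, which is exactly the hidden second difference the dialogue points at.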
In Defense of Tone Arguments
Suppose, for a moment, you're a strong proponent of Glim, a fantastic new philosophy of ethics that will maximize truth, happiness, and all things good, just as soon as 51% of the population accepts it as the true way; once it has achieved majority status, careful models in game theory show that Glim proponents will be significantly more prosperous and happy than non-proponents (although everybody will benefit on average, according to its models), and it will take over.
Glim has stalled, however; it's stuck at 49% belief, and a new countermovement, antiGlim, has arisen, claiming that Glim is a corrupt moral system with fatal flaws which will destroy the country if it has its way. Belief is starting to creep down, and those who accepted the ideas as plausible but weren't ready to commit are starting to turn away from the movement.
In response, a senior researcher of Glim ethics has written a scathing condemnation of antiGlim as unpatriotic, evil, and determined to keep the populace in a state of perpetual misery to support its own hegemony. He vehemently denies that there are any flaws in the moral system, and refuses to entertain antiGlim in a public debate.
In response to this, belief creeps slightly up, but acceptance goes into a freefall.
You immediately ascertain that the negativity was worse for the movement than the criticisms; you write a response, and are accused of attacking the tone and ignoring the substance of the arguments. Glim and antiGlim leadership proceed into protracted and nasty arguments, until both are highly marginalized and ignored by the general public. Belief in Glim continues, but when the leaders of antiGlim and Glim finally arrive at a bitterly agreed-upon conclusion (the arguments having centered on an actual error in the original formulation of Glim philosophy), they're unable either to get their remaining supporters to cooperate or to get any of the public to listen. Truth, happiness, and all things good never arise, and things get slightly worse, as a result of the error.
Tone arguments are not necessarily logical errors; they may be invoked by those who agree with the substance of an argument who nevertheless may feel that the argument, as posed, is counterproductive to its intended purpose.
I have stopped recommending Dawkins's work to people who are on the fence about religion. The God Delusion utterly destroyed his effectiveness at convincing people against religion. (In a world in which they couldn't do an internet search on his name, it might not matter; we don't live in that world, and I assume other people are as likely to investigate somebody as I am.) It doesn't even matter whether his facts are right or not; the way he presents them will put most people on the intellectual defensive.
If your purpose is to convince people, it's not enough to have good arguments, or good facts; these things can only work if people are receptive to those arguments and those facts. Your first move is your most important - you must try to make that person receptive. And if somebody levels a tone argument at you, your first consideration should not be "Oh! That's DH2, it's a fallacy, I can disregard what this person has to say!" It should be - why are they leveling a tone argument at you to begin with? Are they disagreeing with you on the basis of your tone, or disagreeing with the tone itself?
Or, in short, the categorical assessment of "Responding to Tone" as either a logical fallacy or a poor argument is incorrect, as it starts from an unfounded assumption that the purpose of a tone response is, in fact, to refute the argument. In the few cases I have seen responses to tone which were utilized against an argument, they were in fact ad-hominems, of the formulation "This person clearly hates [x], and thus can't be expected to have an unbiased perspective." Note that this is a particularly persuasive ad-hominem, particularly for somebody who is looking to rationalize their beliefs against an argument - and that this inoculation against argument is precisely the reason you should, in fact, moderate your tone.
What are you counting?
Eliezer's post How To Convince Me That 2 + 2 = 3 has an interesting consideration - if putting two sheep in a field, and putting two more sheep in a field, resulted in three sheep being in the field, would arithmetic hold that two plus two equals three?
I want to introduce another question. What exactly are you counting?
Imagine one sheep in one field, and another sheep in another. Now put them together. Do you now have two sheep?
"Of course!"
Ah, but is that -all- you have?
"What?"
Two sheep are more than twice as complex as a single sheep. It takes more than twice as many bits to describe two sheep as it takes to describe a single sheep, because, in addition to those two sheep, you now also have to describe their relationship to one another.
Or, to phrase it slightly differently, does 1+1=2?
Well, the answer is, it depends on what you're counting.
If you're counting the number of discrete sheep, 1+1=2. However, why is the number of discrete sheep meaningful?
If you're a hunter counting, not herded sheep, but prey - two sheep is, roughly, twice as much meat as one sheep. 1+1=2. If you're a herder, however, two sheep could be a lot more valuable than one - two sheep can turn into three sheep, if one is female and one is male. The value of two sheep can be more than twice the value of a single sheep. And if you're a hypercomputer running Solomonoff Induction to try to describe sheep positional vectors, two sheep will have a different complexity than twice the complexity of a single sheep.
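The hunter/herder contrast above can be made concrete with a toy model. All the numbers here (meat per sheep, lambing rates, seasons) are assumptions invented for illustration; the point is only the shape of the two value functions.

```python
MEAT_PER_SHEEP = 25.0  # kg; hypothetical figure

def hunter_value(n_sheep):
    # A hunter counts meat: value is linear in sheep, so 1 + 1 = 2.
    return n_sheep * MEAT_PER_SHEEP

def herder_value(n_sheep, seasons=3, lambs_per_pair=1):
    # A herder counts future sheep: each breeding pair adds lambs
    # every season, so the flock compounds.
    flock = n_sheep
    for _ in range(seasons):
        flock += (flock // 2) * lambs_per_pair
    return flock

# For the hunter, two sheep are exactly twice one sheep...
assert hunter_value(1) + hunter_value(1) == hunter_value(2)

# ...but for the herder, a lone sheep never breeds, while a pair
# grows: two sheep are worth more than twice one sheep.
assert herder_value(2) > 2 * herder_value(1)
```

Same two sheep, two different answers to "how much is 1 + 1?" - because the two functions are counting different things.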
Which is not to say that one plus one does not equal two. It is, however, to say that one plus one may not be meaningful as a concept outside a very limited domain.
Would an alien intelligence have arrived at arithmetic? Depends on what it counts. Is arithmetic correct?
Well, does a set of two sheep contain only two sheep, or does it also contain their interactions? Depends on your problem domain; 1+1 might just equal 2+i.
Challenge: change someone's mind
Pick one (or several) of the following. I've used specific examples, but anything similar still counts.
1. You have a friendly new acquaintance who is pretty much an average person. He is a theist and doesn't believe in evolution; you have already had a polite debate about that. Convince him to believe in the truth*.
2. One of your friends is very deeply religious - he has already invested a lot of his life in religion. Unexpectedly, he is also highly rational (as a personality) and very intelligent: he is studying for a technical degree (and enjoys it), he has read books about critical thinking (he even knows a little about biases), and he says that he will stop believing if you disprove his religion. Debating with him so far hasn't helped (he also isn't very sophisticated about it - he isn't aware of expected value and similar ideas). For his own good, convince him to change his mind in the direction of the truth. He is wasting huge potential, and that's bad not only for him but also for humanity. Also, he will feel more comfortable in his new, more sensible beliefs.
3. Your brother dislikes you because of an impression of you that formed several years ago and was never updated to reflect the changes in your personality. You easily make impressions on other people that are vastly different from his impression of you. Change his impression, so that he sees you as you really are.
[I have removed 4., because it wasn't about changing the mind of someone who isn't a rationalist, but about coming up with a good psychological mechanism - it deserves an entirely new thread. I suspect that 3 might be too different from 1 and 2, but it's too late to make such a big change to the thread.]
I know at least one person for each category, and I haven't been able to change anybody's mind. Have you succeeded in a similar situation? Regardless of whether you have, what strategies do you think would be winning in these situations? If some of them sound good, I might even try them out and share the results. I'm especially curious about how to approach #3, because if there is a way, it would come from low-level psychology, which is something I adore.
So, the aim of this thread is for the participants to try and change someone's mind and then tell the story.
(Also, I'm willing to accept other templates for classic situations similar to these - in fact I think I had one or two more ideas, but I can't seem to recall them.)
*Needless to say, if at any point, anyone proves to you that his direction is in fact the truth, it would be better to change yourself in that direction instead, but that's outside of the scope of the thread.
Critical Thinking in Global Challenges - free Coursera class
"develop and enhance your ability to think critically, assess information and develop reasoned arguments in the context of the global challenges facing society today."
starts 28 January 2013
cf https://www.coursera.org/course/criticalthinking
see also http://lesswrong.com/lw/dni/a_beginners_guide_to_irrational_behavior_free/
and http://lesswrong.com/lw/d3w/coursera_behavioural_neurology_course/
A Beginner's Guide to Irrational Behavior - free Coursera class
"learn about some of the many ways in which people behave in less than rational ways, and how we might overcome these problems."
starts 25 March 2013
cf https://www.coursera.org/course/behavioralecon
see also http://lesswrong.com/lw/d3w/coursera_behavioural_neurology_course/
Adding up to normality
I think that the idea of ‘adding up to normality’ is incoherent, but maybe I don’t understand it. There is a rule of thumb that, in general, a theory or explanation should ‘save the phenomena’ as much as possible. But Egan’s law is presented in the sequences as something stricter than a rule of thumb that admits exceptions. I’m going to try to explain and formalize Egan’s law as I understand it so that, once it’s been made clear, we can talk about how we would argue for it.
If a theory adds up to normality in the strict sense, then there are no true sentences in normal language which do not have true counterparts in a theory. Thus, if it is true to say that the apple is green, a theory which adds up to normality will contain a sentence which describes the same phenomenon as the normal language sentence, and is true (and false if the normal language sentence is false). For example: if an apple is green, then light of such and such wavelength is predominantly reflected from its surface while other visible wavelengths are predominantly absorbed. Let’s call this the Egan property of a theory. A theory would fail to add up to normality either if it denied the truth of true sentences in normal language (e.g. ‘the apple isn’t really green’) or if it could make nothing of the phenomenon of normal language at all (e.g. nothing really has color).
t has the property E iff: for all a in n, there is an α in t such that (a if and only if α)
Here t is a theoretical language and ‘α’ is a sentence within it; n is the normal language and ‘a’ is a sentence within it. E is the Egan property. Now that we’ve defined the Egan property of a theory, we can move on to Egan’s law.
The way Egan’s law is articulated in the sequences, it seems to be an a priori necessary but insufficient condition on the truth of a theory. So it is necessary that, if a theory is true, it has the Egan property.
If α₁, α₂, α₃, ..., then Et.
Or alternatively: If t is true, then Et.
That’s Egan’s law, so far as I understand it. Now, how do we argue for it? There’s an inviting, but I think troublesome, Tarskian way to argue for Egan’s law. Tarski’s semantic definition of truth is such that some sentence β is true in language L if and only if b, where b is a sentence in a metalanguage. Following this, we could say that for any theory t to be true, all its sentences α must be true, and what it means for any α to be true is that a, where a is a sentence in the metalanguage we call normal language. But this would mean that a and α are strictly translations of one another in two different languages. If a theory is going to be explanatory of phenomena, then sentences like “light of such and such wavelength is predominantly reflected from the apple’s surface while other visible wavelengths are predominantly absorbed” have to have more content than “the apple is green”. If they mean the same thing, as sentences in Tarski’s definition of truth must, then theories can’t do any explaining.
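The Tarskian argument above can be written out schematically. This is my own formalization of the post's reasoning, using its symbols, not anything from Tarski directly:

```latex
% Tarski's T-schema: \beta is true in L iff b,
% where b translates \beta into the metalanguage.
\mathrm{True}_L(\beta) \iff b

% Take normal language n as the metalanguage for theory t:
t \text{ is true} \iff \forall \alpha \in t:\ \mathrm{True}(\alpha),
\qquad \mathrm{True}(\alpha) \iff a

% Together these pair every a with an \alpha such that (a \iff \alpha),
% which is just Et -- but only because a and \alpha are mere
% translations, leaving \alpha no extra content with which to explain a.
```

So the T-schema delivers Egan's law for free, but at the cost that makes it troublesome: the biconditional holds only when the theoretical sentence says exactly what the normal sentence says.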
So how else can we argue for Egan’s law?