Theism, Wednesday, and Not Being Adopted
(Disclaimer: This post is sympathetic to a certain subset of theists. I am not myself a theist, nor have I ever been one. I do not intend to justify all varieties of theism, nor do I intend to justify much in the way of common theistic behavior.)
I'm not adopted. You all believe me, right? How do you think I came by this information, that you're confident in my statement? The obvious and correct answer is that my parents told me so[1]. Why do I believe them? Well, they would be in a position to know the answer, and they have been generally honest and sincere in their statements to me. A false belief on the subject could be hazardous to me, if I report inaccurate family history to physicians, and I believe that my parents have my safety in mind. I know of the existence of adopted people; the possibility isn't completely absent from my mind - but I believe quite confidently that I am not among those people, because my parents say otherwise.
Building Weirdtopia
Followup to: Eutopia is Scary
"Two roads diverged in the woods. I took the one less traveled, and had to eat bugs until Park rangers rescued me."
—Jim Rosenberg
Utopia and Dystopia have something in common: they both confirm the moral sensibilities you started with. Whether the world is a libertarian utopia of the non-initiation of violence and everyone free to start their own business, or a hellish dystopia of government regulation and intrusion—you might like to find yourself in the first, and hate to find yourself in the second; but either way you nod and say, "Guess I was right all along."
So as an exercise in creativity, try writing them down side by side: Utopia, Dystopia, and Weirdtopia. The zig, the zag and the zog.
I'll start off with a worked example for public understanding of science:
- Utopia: Most people have the equivalent of an undergrad degree in something; everyone reads the popular science books (and they're good books); everyone over the age of nine understands evolutionary theory and Newtonian physics; scientists who make major contributions are publicly adulated like rock stars.
- Dystopia: Science is considered boring and possibly treasonous; public discourse elevates religion or crackpot theories; stem cell research is banned.
- Weirdtopia: Science is kept secret to avoid spoiling the surprises; no public discussion but intense private pursuit; cooperative ventures surrounded by fearsome initiation rituals because that's what it takes for people to feel like they've actually learned a Secret of the Universe and be satisfied; someone you meet may only know extremely basic science, but they'll have personally done revolutionary-level work in it, just like you. Too bad you can't compare notes.
Polyhacking
This is a post about applied luminosity in action: how I hacked myself to become polyamorous over (admittedly weak) natural monogamous inclinations. It is a case history about me and, given the specific topic, my love life, which means gooey self-disclosure ahoy. As with the last time I did that, skip the post if it's not a thing you desire to read about. Named partners of mine have given permission to be named.
1. In Which Motivation is Acquired
When one is monogamous, one can only date monogamous people. When one is poly, one can only date poly people.[1] Therefore, if one should find oneself with one's top romantic priority being to secure a relationship with a specific individual, it is only practical to adapt to the style of said individual, presuming that's something one can do. I found myself in such a position when MBlume, then my ex, asked me from three time zones away if I might want to get back together. Since the breakup he had become polyamorous and had a different girlfriend, who herself juggled multiple partners; I'd moved, twice, and on the way dated a handful of people to no satisfactory clicking/sparking/other sound effects associated with successful romances. So the idea was appealing, if only I could get around the annoying fact that I was not, at that time, wired to be poly.
Everything went according to plan: I can now comfortably describe myself and the primary relationship I have with MBlume as poly. <bragging>Since moving back to the Bay Area I've been out with four other people too, one of whom he's also seeing; I've been in my primary's presence while he kissed one girl, and when he asked another for her phone number; I've gossiped with a secondary about other persons of romantic interest and accepted his offer to hint to a guy I like that this is the case; I hit on someone at a party right in front of my primary. I haven't suffered a hiccup of drama or a twinge of jealousy to speak of and all evidence (including verbal confirmation) indicates that I've been managing my primary's feelings satisfactorily too.</bragging> Does this sort of thing appeal to you? Cross your fingers and hope your brain works enough like mine that you can swipe my procedure.
Pinpointing Utility
Followup to: Morality is Awesome. Related: Logical Pinpointing, VNM.
The eternal question, with a quantitative edge: A wizard has turned you into a whale; how awesome is this?
"10.3 Awesomes"
Meditate on this: What does that mean? Does that mean it's desirable? What does that tell us about how awesome it is to be turned into a whale? Explain. Take a crack at it for real. What does it mean for something to be labeled as a certain amount of "awesome" or "good" or "utility"?
What is This Utility Stuff?
Most of us agree that the VNM axioms are reasonable, and that they imply that we should be maximizing this stuff called "expected utility". We know that expectation is just a weighted average, but what's this "utility" stuff?
Well, to start with, it's a logical concept, which means we need to pin it down with the axioms that define it. For the moment, I'm going to conflate utility and expected utility for simplicity's sake. Bear with me. Here are the conditions that are necessary and sufficient to be talking about utility:
1. Utility can be represented as a single real number.
2. Each outcome has a utility.
3. The utility of a probability distribution over outcomes is the expected utility.
4. The action that results in the highest utility is preferred.
5. No other operations are defined.
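For concreteness, here is what those conditions look like as a minimal Python sketch (the outcome names, probabilities, and numbers are all invented for illustration):

```python
# A toy expected-utility maximizer, following conditions 1-4.

def expected_utility(lottery, utility):
    # Condition 3: the utility of a probability distribution over
    # outcomes is the probability-weighted average of their utilities.
    return sum(p * utility[outcome] for outcome, p in lottery.items())

def choose(actions, utility):
    # Condition 4: prefer the action whose lottery has the highest
    # expected utility.
    return max(actions, key=lambda a: expected_utility(actions[a], utility))

# Conditions 1 and 2: each outcome gets a single real number.
utility = {"whale_day": 10.3, "normal_day": 0.0}

# Each action is a lottery: a dict mapping outcomes to probabilities.
actions = {
    "annoy_wizard": {"whale_day": 0.5, "normal_day": 0.5},
    "stay_home": {"normal_day": 1.0},
}

print(choose(actions, utility))  # -> annoy_wizard
# Condition 5: nothing else in this program does anything with the numbers.
```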
I hope that wasn't too esoteric. The rest of this post will be explaining the implications of those statements. Let's see how they apply to the awesomeness of being turned into a whale:
- "10.3 Awesomes" is a real number.
- We are talking about the outcome where "A wizard has turned you into a whale".
- There are no other outcomes to aggregate with, but that's OK.
- There are no actions under consideration, but that's OK.
- Oh. Not even taking the value?
Note condition 5 especially. You can probably look at the number without causing trouble, but if you try to treat it as meaningful for anything beyond what conditions 3 and 4 require, even accidentally, that's a type error.
Unfortunately, you do not have a finicky compiler that will halt and warn you if you break the rules. Instead, your error will be silently ignored, and you will go on, blissfully unaware that the invariants in your decision system no longer pinpoint VNM utility. (Uh oh.)
Unshielded Utilities, and Cautions for Utility-Users
Let's imagine that utilities are radioactive: if we are careful with our containment procedures, we can safely combine and compare them, but if we interact with an unshielded utility, it's over; we've committed a type error.
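If you like, you can make the type error literal. Here is a toy Python sketch of a "shielded" utility: a wrapper that permits only the defined operations (comparison and aggregation) and raises on everything else. The class and its method names are my invention:

```python
class Utility:
    def __init__(self, value):
        self._value = value  # shielded; not meant to be read directly

    def __lt__(self, other):
        return self._value < other._value  # comparison is allowed

    @staticmethod
    def expect(lottery):
        # Aggregation is allowed; `lottery` is a list of (probability, Utility).
        return Utility(sum(p * u._value for p, u in lottery))

    # Everything else is a type error.
    def __add__(self, other):
        raise TypeError("numerology: tried to combine utilities directly")

    def __float__(self):
        raise TypeError("numerology: looked at an unshielded utility")

whale, normal = Utility(10.3), Utility(0.0)
print(Utility.expect([(0.5, whale), (0.5, normal)]) < whale)  # True
# float(whale) would raise TypeError: radiation poisoning.
```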
To even get a utility to manifest itself in this plane, we have to do a little ritual. We have to take the ratio between two utility differences. For example, if we want to get a number for the utility of being turned into a whale for a day, we might take the difference between that scenario and what we would otherwise expect to do, and then take the ratio between that difference and the difference between a normal day and a day where we also get a tasty sandwich. (Make sure you take the absolute value of your unit, or you will reverse your utility function, which is a bad idea.)
So the form that the utility of being a whale manifests as might be "500 tasty sandwiches better than a normal day". We have chosen "a normal day" for our datum, and "tasty sandwiches" for our units. Of course we could have just as easily chosen something else, like "being turned into a whale" as our datum, and "orgasms" for our units. Then it would be "0 orgasms better than being turned into a whale", and a normal day would be "-400 orgasms from the whale-day".
You say: "But you shouldn't define your utility like that, because then you are experiencing huge disutility in the normal case."
Wrong, and radiation poisoning, and type error. You tried to "experience" a utility, which is not in the defined operations. Also, you looked directly at the value of an unshielded utility (also known as numerology).
We summoned the utilities into the real numbers, but they are still utilities, and we still can only compare and aggregate them. The summoning only gives us a number that we can numerically do those operations on, which is why we did it. This is the same situation as time, position, velocity, etc., where we have to select units and datums to get actual quantities that mathematically behave like their ideal counterparts.
Sometimes people refer to this relativity of utilities as "positive affine structure" or "invariant up to a scale and shift". That phrasing confuses me: it makes me think of an equivalence class of utility functions with numbers coming out, which don't agree on the actual numbers but can be made to agree with a linear transform, rather than of a utility function as a space I can measure distances in. I'm an engineer, not a mathematician, so I find it much more intuitive and less confusing to think in terms of units and datums, even though it's basically the same thing. This way, the utility function can scale and shift all it wants, and my numbers will always be the same. Equivalently, all agents that share my preferences will always agree that a day as a whale is "400 orgasms better than a normal day", even if they use another basis themselves.
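Here is the ritual in a few lines of Python, along with a check that the measured number survives scale and shift. The raw values are an arbitrary internal representation that I made up; only the measured ratios mean anything:

```python
def measure(u, scenario, datum, unit_hi, unit_lo):
    # Utility of `scenario` relative to `datum`, in units of the
    # difference between two reference outcomes. The absolute value
    # keeps us from accidentally reversing the utility function.
    return (u[scenario] - u[datum]) / abs(u[unit_hi] - u[unit_lo])

u = {"whale_day": 13.0, "normal_day": 3.0, "sandwich_day": 3.02}

# "500 tasty sandwiches better than a normal day":
print(measure(u, "whale_day", "normal_day", "sandwich_day", "normal_day"))

# An agent with the same preferences but a scaled-and-shifted
# representation reports the same number (up to float rounding):
v = {k: 7.0 * x - 20.0 for k, x in u.items()}
print(measure(v, "whale_day", "normal_day", "sandwich_day", "normal_day"))
```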
So what does it mean that being a whale for a day is 400 orgasms better than a normal day? Does it mean I would prefer 400 orgasms to a day as a whale? Nope. Orgasms don't add up like that; I'd probably be quite tired of it by 15. (Remember that "orgasms" were defined as the difference between a day without an orgasm and a day with one, not as the utility of a marginal orgasm in general.) What it means is that I'd be indifferent between a normal day with a 1/400 chance of being a whale, and a normal day with a guaranteed extra orgasm.
That is, utilities are fundamentally about how your preferences react to uncertainty. For example, you don't have to think that each marginal year of life is as valuable as the last, if you don't think you should take a gamble that will double your remaining lifespan with 60% certainty and kill you otherwise. After all, all that such a utility assignment even means is that you would take such a gamble. In the words of VNM:
We have practically defined numerical utility as being that thing for which the calculus of mathematical expectations is legitimate.
But suppose there are very good arguments that have nothing to do with uncertainty for why you should value each marginal life-year as much as the last. What then?
Well, "what then" is that we spend a few weeks in the hospital dying of radiation poisoning, because we tried to interact with an unshielded utility again (utilities are radioactive, remember? The specific error is that we tried to manipulate the utility function with something other than comparison and aggregation. Touching a utility directly is just as much an error as observing it directly.
But if the only way to define your utility function is with thought experiments about what gambles you would take, and the only use for it is deciding what gambles you would take, then isn't it doing no work as a concept?
The answer is no, but this is a good question because it gets us closer to what exactly this utility function stuff is about. The utility of utility is that defining how you would behave in one gamble puts a constraint on how you would behave in some other related gambles. As with all math, we put in some known facts, and then use the rules to derive some interesting but unknown facts.
For example, if we have decided that we would be indifferent between a tasty sandwich and a 1/500 chance of being a whale for tomorrow, and that we'd be indifferent between a tasty sandwich and a 30% chance of sun instead of the usual rain, then we should also be indifferent between a certain sunny day and a 1/150 chance of being a whale.
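A quick check of that arithmetic, using the tasty sandwich as the unit and a normal (rainy) day as the datum:

```python
sandwich = 1.0          # by choice of unit
whale = 500 * sandwich  # indifferent: sandwich vs. a 1/500 chance of whale-day
sun = sandwich / 0.3    # indifferent: sandwich vs. a 30% chance of sun

# For a certain sunny day vs. a p chance of the whale-day,
# indifference requires p * whale == sun:
p = sun / whale
print(p)  # 0.00666... = 1/150
```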
Monolithicness and Marginal (In)Dependence
If you are really paying attention, you may be a bit confused, because it seems to you that money or time or some other consumable resource can force you to assign utilities even if there is no uncertainty in the system. That issue is complex enough to deserve its own post, so I'd like to delay it for now.
Part of the solution is that, as we defined them, utilities are monolithic. This is the implication of "each outcome has a utility". What this means is that you can't add and recombine utilities by decomposing and recombining outcomes. To be specific, you can't take a marginal whale from one outcome and staple it onto another outcome and expect the marginal utilities to be the same. For example, maybe the other outcome has no oceans for your marginal whale.
For a bigger example, what we have said so far about the relative value of sandwiches and sunny days and whale-days does not necessarily imply that we are indifferent between a 1/250 chance of being a whale and any of the following:
- A day with two tasty sandwiches. (Remember that a tasty sandwich was defined as a specific difference, not a marginal sandwich in general, which has no reason to have a consistent marginal value.)
- A day with a 30% chance of sun and a certain tasty sandwich. (Maybe the tasty sandwich and the sun at the same time is horrifying for some reason. Maybe someone drilled into you as a child that "bread in the sun" was bad bad bad.)
- Etc. You get the idea. Utilities are monolithic and fundamentally associated with particular outcomes, not marginal outcome-pieces.
However, as in probability theory, where each possible outcome technically has its very own probability, in practice it is useful to talk about a concept of independence.
So for example, even though the axioms don't guarantee in general that it will ever be the case, it may work out in practice that, given some conditions (like there being nothing special about bread in the sun, and my happiness not being near saturation), the utility of a marginal tasty sandwich is independent of a marginal sunny day. That means sun+sandwich is as much better than just sun as just a sandwich is better than baseline, which ultimately means that I am indifferent between {50%: sunny+sandwich; 50%: baseline} and {50%: sunny; 50%: sandwich}, and other such bets. (We need a better solution for rendering probability distributions in prose.)
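Here is that indifference checked numerically, assuming an additive utility over the sun and sandwich margins (which makes them independent by construction; the numbers themselves are invented):

```python
import math

baseline, sandwich_gain, sun_gain = 0.0, 1.0, 3.5

def u(sun, sandwich):
    # Additive, hence independent, marginal utilities.
    return baseline + sun * sun_gain + sandwich * sandwich_gain

# {50%: sunny+sandwich; 50%: baseline} vs. {50%: sunny; 50%: sandwich}
lottery_a = 0.5 * u(1, 1) + 0.5 * u(0, 0)
lottery_b = 0.5 * u(1, 0) + 0.5 * u(0, 1)
print(math.isclose(lottery_a, lottery_b))  # True: indifferent
```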
Notice that the independence of marginal utilities can depend on conditions and that independence is with respect to some other variable, not a general property. The utility of a marginal tasty sandwich is not independent of whether I am hungry, for example.
There is a lot more to this independence thing (and linearity, and risk aversion, and so on), so it deserves its own post. For now, the point is that the monolithicness thing is fundamental, but in practice we can sometimes look inside the black box and talk about independent marginal utilities.
Dimensionless Utility
I liked this quote from the comments of Morality is Awesome:
Morality needs a concept of awfulness as well as awesomeness. In the depths of hell, good things are not an option and therefore not a consideration, but there are still choices to be made.
Let's develop that second sentence a bit more. If all your options suck, what do you do? You still have to choose. So let's imagine we are in the depths of hell and see what our theories have to say about it:
Day 78045. Satan has presented me with three options:
1. Go on a date with Satan Himself. This will involve romantically torturing souls together, subtly steering mortals towards self-destruction, watching people get thrown into the lake of fire, and some very unsafe, very nonconsensual sex with the Adversary himself.
2. Satan's court wizard will turn me into a whale and release me into the lake of fire, to roast slowly for the next month, kept alive by twisted black magic.
3. Satan will set a paperclipper loose on the mortal world, at the cost of a couple hundred billion lives.
Wat do?
They all seem pretty bad, but "pretty bad" is not a utility. We could quantify paperclipping as a couple hundred billion lives lost. Being a whale in the lake of fire would be awful, but a bounded sort of awful. A month of endless horrible torture. The "date" is having to be on the giving end of what would more or less happen anyway, and then getting savaged by Satan. Still none of these are utilities.
Coming up with actual utility numbers for these in terms of tasty sandwiches and normal days is hard; it would be like measuring the microkelvin temperatures of your physics experiment with a Fahrenheit kitchen thermometer; in principle it might work, but it isn't the best tool for the job. Instead, we'll use a different scheme this time.
Engineers (and physicists?) sometimes transform problems into a dimensionless form that removes all redundant information from the problem. For example, for a heat conduction problem, we might define an isomorphic dimensionless temperature so that real temperatures between 78 and 305 C become dimensionless temperatures between 0 and 1. Transforming a problem into dimensionless form is nearly always helpful, often in really surprising ways. We can do this with utility too.
Back to the depths of hell. The date with Satan is clearly the best option, so it gets dimensionless utility 1. The paperclipper gets 0. On that scale, I'd say roasting in the lake of fire is like 0.999 or so, but that might just be scope insensitivity. We'll take it for now.
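The transformation itself is one line of arithmetic. In the sketch below, the raw values are invented placeholders for some arbitrary basis; only their ordering and relative spacing carry information:

```python
raw = {"satan_date": -1.0, "whale_month": -50.0, "paperclipper": -5.0e12}

lo, hi = min(raw.values()), max(raw.values())
dimensionless = {k: (v - lo) / (hi - lo) for k, v in raw.items()}
print(dimensionless)
# paperclipper -> 0.0, satan_date -> 1.0, and whale_month lands just
# under 1.0 (about 1 - 1e-11 with these placeholders; the 0.999 in the
# text is a gut estimate, not computed from these numbers).

# The probability/utility connection: a p chance of the date (else the
# paperclipper) is exactly as good as the whale-month when p equals the
# whale-month's dimensionless utility.
```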
The advantages with this approach are:
- The numbers are more intuitive. -5e12 QALYs, -1 QALY, and -50 QALYs from a normal day, or the equivalent in tasty sandwiches, just doesn't have the same feeling of clarity as 0, 1 and .999. (For me at least. And yes, I know those numbers don't quite match.)
- Not having to relate the problem quantities to far-away datums, or drastically misappropriate units (tasty sandwiches, for this problem), makes the numbers easier and more direct to come up with. Also, we have to come up with fewer of them. The problem is self-contained.
- If defined right, the connection between probability and utility becomes extra-clear. For example: what chance of a Satan-date, versus a paperclipper, would make me indifferent to the lake-of-fire-whale-month? 0.999! Unitless magic!
- All confusing redundant information (like negative signs) is removed, which makes it harder to accidentally do numerology or commit a type error.
- All redundant information is removed, which means you find many more similarities between problems. The value of this in general cannot be overstated. Just look at the generalizations made about Reynolds number! "[Vortex shedding] occurs for any fluid, size, and speed, provided that Re is between ~40 and 10^3." What! You can just say that in general? Magic! I haven't actually done enough utility problems to know that we'll find stuff like that, but I trust dimensionless form.
Anyways, it seems that going on that date is what I ought to do. So did we need a concept of awfulness? Did it matter that all the options sucked? Nope; the decision was isomorphic in every way to choosing lunch between a BLT, a turkey club, and a handful of dirt.
There are some assumptions in that lunch bit, and they're worth discussing. It seems counterintuitive, or even wrong, to say that your decision-process when faced with lunch should be the same as when faced with a decision involving torture, rape, and paperclips. The latter seems somehow more important. Where does that come from? Is it right?
This may deserve a bigger discussion, but basically, if you have finite resources (thought-power, money, energy, stress) that are conserved or even related across decisions, you get coupling of "different" decisions in a way that we didn't have here. Your intuitions are calibrated for that case. Once you have decoupled the decisions by coming up with the actual candidate options, the depths-of-hell decision and the lunch decision really are totally isomorphic. I'll probably address this properly later, if I discuss the instrumental utility of resources.
Anyways, once you put the problem in dimensionless form, a lot of decisions that seemed very different become almost the same, and a lot of details that seemed important or confusing just disappear. Bask in the clarifying power of a good abstraction.
Utility is Personal
So far we haven't touched the issue of interpersonal utility. That's because that topic isn't actually about VNM utility! There was nothing in the axioms above about there being a utility for each {person, outcome} pair, only for each outcome.
It turns out that if you try to compare utilities between agents, you have to touch unshielded utilities, which means you get radiation poisoning and go to type-theory hell. Don't try it.
And yet, it seems like we ought to care about what others prefer, and not just about our own self-interest. But that caring belongs inside the utility function, in moral philosophy, not out here in decision theory.
VNM has nothing to say on the issue of utilitarianism besides the usual preference-uncertainty interaction constraints, because VNM is about the preferences of a single agent. If that single agent cares about the preferences of other agents, that goes inside the utility function.
Conversely, because VNM utility is out here, axiomatized for the sovereign preferences of a single agent, we don't much expect it to show up in there, in a discussion of utilitarian preference aggregation. In fact, if we do encounter it in there, it's probably a sign of a failed abstraction.
Living with Utility
Let's go back to how much work utility does as a concept. I've spent the last few sections hammering on the work that utility does not do, so you may ask "It's nice that utility theory can constrain our bets a bit, but do I really have to define my utility function by pinning down the relative utilities of every single possible outcome?".
Sort of. You can take shortcuts. We can, for example, wonder all at once whether, for all possible worlds where such is possible, you are indifferent between saving n lives and {50%: saving 2*n; 50%: saving 0}.
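That rule is exactly a linearity constraint on your utility function. A small sketch, with two hypothetical candidate utilities over lives saved:

```python
import math

candidates = {
    "linear": lambda n: n,
    "sqrt (risk-averse)": lambda n: math.sqrt(n),
}

n = 100
for name, u in candidates.items():
    sure = u(n)                           # save n lives for certain
    gamble = 0.5 * u(2 * n) + 0.5 * u(0)  # {50%: save 2n; 50%: save 0}
    print(name, math.isclose(sure, gamble))
# linear -> True (the rule holds); sqrt -> False (prefers the sure thing)
```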
If that seems reasonable and doesn't break in any case you can think of, you might keep it around as heuristic in your ad-hoc utility function. But then maybe you find a counterexample where you don't actually prefer the implications of such a rule. So you have to refine it a bit to respond to this new argument. This is OK; the math doesn't want you to do things you don't want to.
So you can save a lot of small thought experiments by doing the right big ones, like above, but the more sweeping of a generalization you make, the more probable it is that it contains an error. In fact, conceptspace is pretty huge, so trying to construct a utility function without inside information is going to take a while no matter how you approach it. Something like disassembling the algorithms that produce your intuitions would be much more efficient, but that's probably beyond science right now.
In any case, in the interim, before we figure out how to formally reason the whole thing out in advance, we have to get by with some good heuristics and our current intuitions, with a pinch of last-minute sanity checking against the VNM rules. Ugly, but better than nothing.
The whole project is made quite a bit harder in that we are not just trying to reconstruct an explicit utility function from revealed preference; we are trying to construct a utility function for a system that doesn't even currently have consistent preferences.
At some point, either the concept of utility isn't really improving our decisions, or it will come in conflict with our intuitive preferences. In some cases it's obvious how to resolve the conflict, in others, not so much.
But if VNM contradicts our current preferences, why do we think it's a good idea at all? Surely it's not wise to be tampering with our very values?
The reason we like VNM is that we have a strong meta-intuition that our preferences ought to be internally consistent, and VNM seems to be the only way to satisfy that. But it's good to remember that this is just another intuition, to be weighed against the rest. Are we ironing out garbage inconsistencies, or losing valuable information?
At this point I'm dangerously out of my depth. As far as I can tell, the great project of moral philosophy is an adult problem, not suited for mere mortals like me. Besides, I've rambled long enough.
Conclusions
What a slog! Let's review:
- Maximize expected utility, where utility is just an encoding of your preferences that ensures a sane reaction to uncertainty.
- Don't try to do anything else with utilities, or demons may fly out of your nose. This especially includes looking at the sign or magnitude, and comparing between agents. I call these things "numerology" or "interacting with an unshielded utility".
- The default for utilities is that they are monolithic and inseparable from the entire outcome they are associated with. It takes special structure in your utility function to be able to talk about the marginal utility of something independently of particular outcomes.
- We have to use the difference-and-ratio ritual to summon the utilities into the real numbers. Record utilities using explicit units and datums, and use dimensionless form for your calculations, which will make many things much clearer and more robust.
- If you use a VNM basis, you don't need a concept of awfulness, just awesomeness.
- If you want to do philosophy about the shape of your utility function, make sure you phrase it in terms of lotteries, because that's what utility is about.
- The desire to use VNM is just another moral intuition in the great project of moral philosophy. It is conceivable that you will have to throw it out if it causes too much trouble.
- VNM says nothing about your utility function. Consequentialism, hedonism, utilitarianism, etc. are up to you.
Right for the Wrong Reasons
One of the few things that I really appreciate having encountered during my study of philosophy is the Gettier problem. Paper after paper has been published on this subject, starting with Gettier's original "Is Justified True Belief Knowledge?" In brief, Gettier argues that knowledge cannot be defined as "justified true belief", because there are cases where people have a justified true belief that is true only by luck: the reasons that justify the belief are not the reasons it turns out to be true.
For instance, Gettier cites the example of two men, Smith and Jones, who are applying for a job. Smith believes that Jones will get the job, because the president of the company told him that Jones would be hired. He also believes that Jones has ten coins in his pocket, because he counted the coins in Jones's pocket ten minutes ago (Gettier does not explain this behavior). Thus, he forms the belief "the person who will get the job has ten coins in his pocket."
Unbeknownst to Smith, though, he himself will get the job, and further he himself has ten coins in his pocket that he was not aware of-- perhaps he put someone else's jacket on by mistake. As a result, Smith's belief that "the person who will get the job has ten coins in his pocket" was correct, but only by luck.
While I don't find the primary purpose of Gettier's argument particularly interesting or meaningful (much less the debate it spawned), I do think Gettier's paper does a very good job of illustrating the situation that I refer to as "being right for the wrong reasons." This situation has important implications for prediction-making and hence for the art of rationality as a whole.
Simply put, a prediction that is right for the wrong reasons isn't actually right from an epistemic perspective.
If I predict, for instance, that I will win a 15-touch fencing bout, implicitly believing this will occur when I strike my opponent 15 times before he strikes me 15 times, and I in fact lose fourteen touches in a row, only to win by forfeit when my opponent intentionally strikes me many times in the final touch and is disqualified for brutality, my prediction cannot be said to have been accurate.
Where this gets more complicated is with predictions that are right for the wrong reasons, but the right reasons still apply. Imagine the previous example of a fencing bout, except this time I score 14 touches in a row and then win by forfeit when my opponent flings his mask across the hall in frustration and is disqualified for an offense against sportsmanship. Technically, my prediction is again right for the wrong reasons-- my victory was not thanks to scoring 15 touches, but thanks to my opponent's poor sportsmanship and subsequent disqualification. However, I likely would have scored 15 touches given the opportunity.
In cases like this, it may seem appealing to credit my prediction as successful, as it would be successful under normal conditions. However, we have to resist this impulse and instead simply work on making more precise predictions. If we start crediting predictions that are right for the wrong reasons, even if it seems like the "spirit" of the prediction is right, this seems to open the door for relying on intuition and falling into the traps that contaminate much of modern philosophy.
What we really need to do in such cases seems to be to break down our claims into more specific predictions, splitting them into multiple sub-predictions if necessary. My prediction about the outcome of the fencing bout could better be expressed as multiple predictions, for instance "I will score more points than my opponent" and "I will win the bout." Some may notice that this is similar to the implicit justification being made in the original prediction. This is fitting-- drawing out such implicit details is key to making accurate predictions. In fact, this example itself was improved by tabooing[1] "better" in the vague initial sentence "I will fence better than my opponent."
In order to make better predictions, we must cast out those predictions that are right for the wrong reasons. While it may be tempting to award such efforts partial credit, this flies against the spirit of the truth. The true skill of cartography requires forming both accurate and reproducible maps; lucking into accuracy may be nice, but it speaks ill of the reproducibility of your methods.
[1] I strongly suggest that you make tabooing a five-second skill, and better still that you learn to recognize when you need to apply it to your own processes. It pays great dividends in terms of precise thought.
37 Ways That Words Can Be Wrong
Followup to: Just about every post in February, and some in March
Some reader is bound to declare that a better title for this post would be "37 Ways That You Can Use Words Unwisely", or "37 Ways That Suboptimal Use Of Categories Can Have Negative Side Effects On Your Cognition".
But one of the primary lessons of this gigantic list is that saying "There's no way my choice of X can be 'wrong'" is nearly always an error in practice, whatever the theory. You can always be wrong. Even when it's theoretically impossible to be wrong, you can still be wrong. There is never a Get-Out-Of-Jail-Free card for anything you do. That's life.
Besides, I can define the word "wrong" to mean anything I like - it's not like a word can be wrong.
Personally, I think it quite justified to use the word "wrong" when:
- A word fails to connect to reality in the first place. Is Socrates a framster? Yes or no? (The Parable of the Dagger.)
- Your argument, if it worked, could coerce reality to go a different way by choosing a different word definition. Socrates is a human, and humans, by definition, are mortal. So if you defined humans to not be mortal, would Socrates live forever? (The Parable of Hemlock.)
- You try to establish any sort of empirical proposition as being true "by definition". Socrates is a human, and humans, by definition, are mortal. So is it a logical truth if we empirically predict that Socrates should keel over if he drinks hemlock? It seems like there are logically possible, non-self-contradictory worlds where Socrates doesn't keel over - where he's immune to hemlock by a quirk of biochemistry, say. Logical truths are true in all possible worlds, and so never tell you which possible world you live in - and anything you can establish "by definition" is a logical truth. (The Parable of Hemlock.)
- You unconsciously slap the conventional label on something, without actually using the verbal definition you just gave. You know perfectly well that Bob is "human", even though, on your definition, you can never call Bob "human" without first observing him to be mortal. (The Parable of Hemlock.)
- The act of labeling something with a word disguises a challengeable inductive inference you are making. If the last 11 egg-shaped objects drawn have been blue, and the last 8 cubes drawn have been red, it is a matter of induction to say this rule will hold in the future. But if you call the blue eggs "bleggs" and the red cubes "rubes", you may reach into the barrel, feel an egg shape, and think "Oh, a blegg." (Words as Hidden Inferences.)
Don't Build Fallout Shelters
Related: Circular Altruism
One thing that many people misunderstand is the concept of personal versus societal safety. These concepts are often conflated despite the appropriate mindsets being quite different.
Simply put, personal safety is personal.
In other words, the appropriate actions to take for personal safety are whichever actions reduce your chance of being injured or killed within reasonable cost boundaries. These actions are largely based on situational factors because the elements of risk that two given people experience may be wildly disparate.
For instance, if you are currently a young computer programmer living in a typical American city, you may want to look at eating better, driving your car less often, and giving up unhealthy habits like smoking. However, if you are currently an infantryman about to deploy to Afghanistan, you may want to look at improving your reaction time, training your situational awareness, and practicing rifle shooting under stressful conditions.
One common mistake is to attempt to preserve personal safety in extreme circumstances such as nuclear wars. Some individuals invest sizeable amounts of money into fallout shelters, years' worth of emergency supplies, etc.
While it is certainly true that a nuclear war would kill or severely disrupt you if it occurred, this is not necessarily a fully convincing argument in favor of building a fallout shelter. One has to consider the cost of building a fallout shelter, the chance that your fallout shelter will actually save you in the event of a nuclear war, and the odds of a nuclear war actually occurring.
Further, one must consider the quality of life reduction that one would likely experience in a post-nuclear war world. It's also important to remember that, in the long run, your survival is contingent on access to medicine and scientific progress. Future medical advances may even extend your lifespan very dramatically, and potentially provide very large amounts of utility. Unfortunately, full-scale nuclear war is very likely to impair medicine and science for quite some time, perhaps permanently.
Thus even if your fallout shelter succeeds, you will likely live a shorter and less pleasant life than you would otherwise. In the end, building a fallout shelter looks like an unwise investment unless you are extremely confident that a nuclear war will occur shortly-- and if you are, I want to see your data!
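To make that concrete, here is a toy expected-value comparison. Every number in it is an assumption pulled out of thin air for illustration; the point is the structure of the calculation, not the values:

```python
p_war = 0.005           # chance of full-scale nuclear war (assumed)
p_shelter_works = 0.25  # chance the shelter actually saves you (assumed)
value_of_life = 10.0e6  # dollar-equivalent value of your life (assumed)
shelter_cost = 50_000.0

p_bike_death = 0.0005   # chance a helmet-preventable crash kills you (assumed)
helmet_cost = 50.0

shelter_ev = p_war * p_shelter_works * value_of_life - shelter_cost
helmet_ev = p_bike_death * value_of_life - helmet_cost

print(shelter_ev)  # -37500.0 with these numbers: a net loss
print(helmet_ev)   # 4950.0 with these numbers: cheap insurance
# (This also ignores the reduced quality of life even if the shelter
# works, which only makes the shelter look worse.)
```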
When taking personal precautionary measures, worrying about such catastrophes is generally silly, especially given the risks we all take on a regular basis-- risks that, in most cases, are much easier to avoid than nuclear wars. Societal disasters are generally extremely expensive for the individual to protect against, and carry a large amount of disutility even if protections succeed.
To make matters worse, if there's a nuclear war tomorrow and your house is hit directly, you'll be just as dead as if you fall off your bike and break your neck. Dying in a more dramatic fashion does not, generally speaking, produce more disutility than dying in a mundane fashion does. In other words, when optimizing for personal safety, focus on accidents, not nuclear wars; buy a bike helmet, not a fallout shelter.
The flip side to this, of course, is that if there is a full-scale nuclear war, hundreds of millions-- if not billions-- of people will die and society will be permanently disrupted. If you die in a bike accident tomorrow, perhaps a half dozen people will be killed at most. So when we focus on non-selfish actions, the big picture is far, far, far more important. If you can reduce the odds of a nuclear war by one one-thousandth of one percent, more lives will be saved on average than if you can prevent hundreds of fatal accidents.
When optimizing for overall safety, focus on the biggest possible threats that you can have an impact on. In other words, when dealing with societal-level risks, your projected impact will be much higher if you try to focus on protecting society instead of protecting yourself.
In the end, building fallout shelters is probably silly, but attempting to reduce the risk of nuclear war sure as hell isn't. And if you do end up worrying about whether a nuclear war is about to happen, remember that if you can reduce the risk of said war-- which might be as easy as making a movie-- your actions will have a much, much greater overall impact than building a shelter ever could.
Find yourself a Worthy Opponent: a Chavruta
You've been on Less Wrong for a while. You've become very good at a lot of stuff. Specifically, arguing. You win arguments. All the time. Effortlessly. And the worst part is, you often win for the wrong reasons. Perhaps there were counters to your propositions. Perhaps you failed to mention a very important, non-trivial premise, and your audience accepted your shaky proposition with as much enthusiasm as if it had been rock-solid, if not more.
They have failed you. You now know that, if you want to remain objective, to keep your grip on reality, to keep your mind sharp and your guard high, you need a Worthy Opponent: someone who's on the same level as you, who's as different in ideology and character from you as possible, who will not hesitate to point out any and every flaw in your propositions, and who would in fact go out of their way to contradict you, just for fun. This intellectual sparring will strengthen you both, and make you more careful in actual debate, in the public arena, whether you choose to use the Dark Arts or not.
Quoth JoshuaZ: In many forms of Judaism one often studies with a chavruta, with whom one will debate and engage the same texts. Such individuals are generally chosen to be about the same background level and intelligence, often for precisely the sort of reason you touch upon [I paraphrased that in the first two paragraphs] (as well as it helping encourage them to each try their hardest).
A couple of interesting excerpts from the Wikipedia article:
Unlike conventional classroom learning, in which a teacher lectures to the student and the student memorizes and repeats the information back in tests, and unlike an academic academy, where students do individual research,[5] chavruta learning challenges the student to analyze and explain the material, point out the errors in his partner's reasoning, and question and sharpen each other's ideas, often arriving at entirely new insights into the meaning of the text.[1][6]
A chavruta helps a student stay awake, keep his mind focused on the learning, sharpen his reasoning powers, develop his thoughts into words, and organize his thoughts into logical arguments.[7] This type of learning also imparts precision and clarity into ideas that would otherwise remain vague.[8] Having to listen to, analyze and respond to another's opinion also inculcates respect for others. It is considered poor manners to interrupt one's chavruta.[9]
In the yeshiva setting, students prepare for and review the shiur (lecture) with their chavrutas during morning, afternoon, and evening study sessions known as sedarim.[2] On average, a yeshiva student spends ten hours per day learning in chavruta.[11] Since having the right chavruta makes all the difference between having a good year and a bad year, class rebbis may switch chavrutas eight or nine times in a class of 20 boys until the partnerships work for both sides.[12] If a chavruta gets stuck on a difficult point or needs further clarification, they can turn to the rabbis, lecturers, or a sho'el u'mashiv (literally, "ask and answer", a rabbi who is intimately familiar with the Talmudic text being studied) who are available to them in the study hall during sedarim. In women's yeshiva programs, teachers are on hand to guide the chavrutas.[13]
Chavruta learning tends to be loud and animated, as the study partners read the Talmudic text and the commentaries aloud to each other and then analyze, question, debate, and defend their points of view to arrive at a mutual understanding of the text. In the heat of discussion, they may wave their hands or even shout at one another.[14] Depending on the size of the yeshiva, dozens or even hundreds of chavrutas can be heard discussing and debating each other's opinions.[15][16] One of the skills of chavruta learning is the ability to block out all other discussions in the study hall and focus on one's study partner alone.[2]
In the yeshiva world, the brightest students are highly desirable as chavrutas.[17] However, there are pros and cons to learning with chavrutas who are stronger, weaker, or equal in knowledge and ability to the student. A stronger chavruta will correct and fill in the student's knowledge and help him improve his learning techniques, acting more like a teacher. With a chavruta who is equal in knowledge and ability, the student is forced to prove his point with logic rather than by right of seniority, which improves his ability to think logically, analyze other people's opinions objectively, and accept criticism. With a weaker chavruta, who often worries over and questions each step, the student is forced to understand the material thoroughly, refine and organize his thoughts in a logical structure, present his viewpoint clearly, and be ready to justify each and every point. The stronger chavruta helps the student acquire a great deal of information, but the weaker chavruta helps the student learn how to learn. Yeshiva students are usually advised to have one of each of these three types of chavrutas in order to develop on all three levels.[7]
Given the pattern their interactions have followed online in the past, one could easily classify Yudkowsky and Hanson's relationship as an informal chavruta. And perhaps we should follow their example: Endoself expressed the desire for such a companion, and suggested that we at Less Wrong establish some similar institution.
Honestly, I don't just think this institution should be introduced into Less Wrong. I think it needs to be introduced into every educational system. The way the article is written (though I suspect bias, since there isn't even the slightest criticism), it sounds like the most freaking awesome way of studying ever.
Now, here on Less Wrong, we can usually count on each other to read the arguments properly and point out any faults there may be. It's kind of a collective effort. Therefore, I'm not quite sure we need such an institution on the site proper, since we seem to function like a huge hydra of a chavruta right now. Which we shall demonstrate right now, as usual, in the comments section, where I'll be impatiently waiting for feedback from both Jews and Gentiles.
Einstein's Arrogance
Prerequisite: How Much Evidence Does It Take?
In 1919, Sir Arthur Eddington led expeditions to Brazil and to the island of Principe, aiming to observe the May 29 solar eclipse and thereby test an experimental prediction of Einstein's novel theory of General Relativity. A journalist asked Einstein what he would do if Eddington's observations failed to match his theory. Einstein famously replied: "Then I would feel sorry for the good Lord. The theory is correct."
It seems like a rather foolhardy statement, defying the trope of Traditional Rationality that experiment above all is sovereign. Einstein seems possessed of an arrogance so great that he would refuse to bend his neck and submit to Nature's answer, as scientists must do. Who can know that the theory is correct, in advance of experimental test?
Of course, Einstein did turn out to be right. I try to avoid criticizing people when they are right. If they genuinely deserve criticism, I will not need to wait long for an occasion where they are wrong.
And Einstein may not have been quite so foolhardy as he sounded...
Pluralistic Moral Reductionism
Part of the sequence: No-Nonsense Metaethics
Disputes over the definition of morality... are disputes over words which raise no really significant issues. [Of course,] lack of clarity about the meaning of words is an important source of error… My complaint is that what should be regarded as something to be got out of the way in the introduction to a work of moral philosophy has become the subject matter of almost the whole of moral philosophy...
If a tree falls in the forest, and no one hears it, does it make a sound? If by 'sound' you mean 'acoustic vibrations in the air', the answer is 'Yes.' But if by 'sound' you mean an auditory experience in the brain, the answer is 'No.'
We might call this straightforward solution pluralistic sound reductionism. If people use the word 'sound' to mean different things, and people have different intuitions about the meaning of the word 'sound', then we needn't endlessly debate which definition is 'correct'.[1] We can be pluralists about the meanings of 'sound'.
To facilitate communication, we can taboo and reduce: we can replace the symbol with the substance and talk about facts and anticipations, not definitions. We can avoid using the word 'sound' and instead talk about 'acoustic vibrations' or 'auditory brain experiences.'
Still, some definitions can be wrong:
Alex: If a tree falls in the forest, and no one hears it, does it make a sound?
Austere MetaAcousticist: Tell me what you mean by 'sound', and I will tell you the answer.
Alex: By 'sound' I mean 'acoustic messenger fairies flying through the ether'.
Austere MetaAcousticist: There's no such thing. Now, if you had asked me about this other definition of 'sound'...
There are other ways for words to be wrong, too. But once we admit to multiple potentially useful reductions of 'sound', it is not hard to see how we could admit to multiple useful reductions of moral terms.
Many Moral Reductionisms
Moral terms are used in a greater variety of ways than sound terms are. There is little hope of arriving at the One True Theory of Morality by analyzing common usage or by triangulating from the platitudes of folk moral discourse. But we can use stipulation, and we can taboo and reduce. We can use pluralistic moral reductionism[2] (for austere metaethics, not for empathic metaethics).
Example #1:
Neuroscientist Sam Harris: Which is better? Religious totalitarianism or the Northern European welfare state?
Austere Metaethicist: What do you mean by 'better'?
Harris: By 'better' I mean 'that which tends to maximize the well-being of conscious creatures'.
Austere Metaethicist: Assuming we have similar reductions of 'well-being' and 'conscious creatures' in mind, the evidence I know of suggests that the Northern European welfare state is more likely to maximize the well-being of conscious creatures than religious totalitarianism.
Example #2:
Philosopher Peter Railton: Is capitalism the best economic system?
Austere Metaethicist: What do you mean by 'best'?
Railton: By 'best' I mean 'would be approved of by an ideally instrumentally rational and fully informed agent considering the question "How best to maximize the amount of non-moral goodness?" from a social point of view in which the interests of all potentially affected individuals are counted equally'.
Austere Metaethicist: Assuming we agree on the meaning of 'ideally instrumentally rational' and 'fully informed' and 'agent' and 'non-moral goodness' and a few other things, the evidence I know of suggests that capitalism would not be approved of by an ideally instrumentally rational and fully informed agent considering the question "How best to maximize the amount of non-moral goodness?" from a social point of view in which the interests of all potentially affected individuals were counted equally.