I realise it is over a year later, but can I ask how it went, or whether anyone has advice for someone in a similar position? I felt a similar existential terror when reading The Selfish Gene and realising that, on one level, I'm just a machine that will someday break down, leaving nothing behind. How do you respond to something like that? I get that you need to strike a balance: be aware enough of your fragility and mortality to drive yourself to do things that matter (ideally supporting measures that will reduce said human fragility), but not so aware that you obsess over it and become depressed. That can be a tricky balance to strike, though, especially if you are temperamentally inclined towards obsessiveness, negativity and akrasia.
Donation tradeoffs in conscientious objection
Suppose that you believe larger scale wars than current US military campaigns are looming in the next decade or two (this may be highly improbable, but let's condition on it for the moment). If you thought further that a military draft or other forms of conscription might be used, and you wanted to avoid military service if that situation arose, what steps should you take now to give yourself a high likelihood of being declared a conscientious objector?
I don't have numbers to back any of this up, but I am in the process of compiling them. My general thought is to break down the problem like so: Pr(serious injury or death | conscription) * Pr(conscription | my conscientious objector behavior & geopolitical conditions ripe for war) * Pr(geopolitical conditions ripe for war), assuming some conscientious objector behavior (or mixture distribution over several behaviors).
If I feel that Pr(serious injury or death | conscription) and Pr(geopolitical conditions ripe for war) are sufficiently high, then I might be motivated to pay some costs in order to drive Pr(conscription | my conscientious objector behavior) very low.
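The decomposition above can be sketched in a few lines of Python. All of the numbers below are invented placeholders, not estimates; the point is just to show how driving Pr(conscription | behavior) down propagates through the product.

```python
# Hypothetical placeholder probabilities (not estimates of anything real)
p_war = 0.05                         # Pr(geopolitical conditions ripe for war)
p_injury_given_conscription = 0.10   # Pr(serious injury or death | conscription)

def p_bad_outcome(p_conscription_given_behavior):
    """Pr(serious injury or death) under a given conscientious-objector behavior,
    following the chain Pr(injury|conscription) * Pr(conscription|behavior) * Pr(war)."""
    return (p_injury_given_conscription
            * p_conscription_given_behavior
            * p_war)

# Compare doing nothing against a behavior that drives conscription risk way down
baseline = p_bad_outcome(0.50)       # no CO signal in place
with_signal = p_bad_outcome(0.05)    # strong CO signal in place
risk_reduction = baseline - with_signal
```

Whether the donation is worth it then comes down to comparing `risk_reduction` (times the disvalue of the bad outcome) against the cost of the signal.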
There's a funny bit in the American version of the show The Office where the manager, Michael, is concerned about his large credit card debt. The accountant, Oscar, mentions that declaring bankruptcy is an option, and so Michael walks out into the main office area and yells, "I DECLARE BANKRUPTCY!"
In a similar vein, I don't think that draft boards will accept the "excuse" that a given person has "merely" frequently expressed pacifist views. So if someone wants to robustly signal that she or he is a conscientious objector, what to do? In my ~30 minutes of searching, I've found a few organizations that, on first glance, look worthy of further investigation and perhaps regular donations.
Here are the few I've focused on most:
The problems I'm thinking about along these lines include:
1) Whether or not the donation cost is worth it. There's no Giving What We Can-style measure for this as far as I can tell, and even though I know from family experience that veteran mental illness can be very bad, I'm not convinced that donations to the above organizations provide a lot of QALY bang for the buck.
2) Another component of bang for the buck is how credibly the donation signals that I actually am a serious conscientious objector. If I donate and a draft board then chooses to ignore it, it would be totally wasted. But if I think that 'going to war' is highly correlated with very significant negative outcomes, then, just as with cryonics, I might feel that such costs are worth it even for a small probability of avoiding a combat environment.
3) Even assuming that I resolve 1 & 2, there's the problem of trading off these donations against other donations that I make. On a self-interested line of thinking, I might forego my current donations to places like SIAI or Against Malaria because, good as those are, they may not offer the same shorter-term benefits to me as purchasing a conscientious objector signal.
I'm curious if others have thought about this. Good literature references are welcome. My plan is to compile statistics that let me make reasonable estimates of the different conditional probabilities.
Addendum
Several people seem very concerned with the signal-faker aspect of this question. I don't understand the preoccupation with this, and I'm tired of trying to justify the question to people who only care about that aspect. So I'll just add this copy of one of my comments from below. Hopefully it gives some additional perspective, though I don't expect it to change anyone's mind. I still stand by the post as-is: it asks a conditional question premised on sincere belief. Even if the answer would also interest fakers, that alone doesn't make the faker explanation more likely, and even if it were more likely, it wouldn't make the question unworthy of thoughtful answers.
Here's the promised comment:
... my question is conditional. Assume that you already sincerely believe in conscientious objection, in the sense of personal ideology such that you could describe it to a draft board. Now that we're conditioning on that, and we assume already that your primary goal is to avoid causing harm or death... then further ask what behaviors might be best to generate the kinds of signals that will work to convince a draft board. Merely having actual pacifist beliefs is not enough. Someone could have those beliefs but then do actions that poorly communicate them to a draft board. Someone else could have those beliefs and do behaviors that more successfully communicate them to draft boards. And to whatever extent there are behaviors outside of the scope of just giving an account of one's ideology I am asking to analyze the effectiveness.
I really think my question is pretty simple. Assume your goal is genuine pacifism but that you're worried this won't convince a draft board. What should you do? Is donation a good idea? Yes, these could be questions a faker would ask. So what? They could also be questions a sincere person would ask, and I don't see any reason for all the downvoting or questions about signal faking. Why not just do the thought experiment where you assume that you are first a sincere conscientious objector and second a person concerned about draft board odds?
Stated another way:
1) Avoiding combat where I cause harm or death is the first priority, so if I have to go to jail or shoot myself in the foot to avoid it, so be it and if it comes to that, it's what I'll do. This is priority number one.
2) I can do things to improve my odds of never needing to face the situation described in (1) and to the extent that the behaviors are expedient (in a cost-benefit tradeoff sense) to do in my life, I'd like to do them now to help improve odds of (1)-avoidance later. Note that this in no way conflicts with being a genuine pacifist. It's just common sense. Yes, I'll avoid combat in costly ways if I have to. But I'd also be stupid to not even explore less costly ways to invest in combat-avoidance that could be better for me.
3) To the extent that (2) is true, I'd like to examine certain options for their efficacy, like donating to charities that assist with the legal issues of conscientious objection, or that provide mental health support to affected veterans. There is still a cost to these things, and given my conscientious objection preferences, I ought to weigh that cost.
There's lots to say, but I'll reserve it for a full discussion post soon, and I'll come back here and post a link.
Now I'm thoroughly confused about your position. Here are some claims to which you appear to have committed yourself:
(1) You can only talk about probability distribution over the microstates of a system if you treat that system as a sub-system of some larger system that includes an observer.
(2) Entropy is just a measure of subjective uncertainty, which means it is (presumably) a property of a probability distribution.
(3) You can talk about the entropy of a system without including the observer but this is just an idealization and it does not involve a probability distribution over the microstates of the system.
To me, this third claim is just flat-out in contradiction with the first and second claims. How can you talk about the entropy of something from a stat. mech. point of view without it being a property of a probability distribution? Is there really some completely different concept of entropy that comes into play when you exclude the observer from your analysis?
I will also note that the approach I talked about in my original comment does not deny that probability is in the mind. Probability can be "in the mind" without just being subjective uncertainty. Furthermore, accepting that probability is in the mind does not mean that one cannot attribute probability distributions to systems without explicitly representing the system as a subsystem of a supersystem containing an observer.
I appreciate your patience with me and your help in getting me to confront my confusions about the topic. Your answer is still unsatisfying to me, though this could well be my own ignorance at work. However, I cannot understand how your answer is sustainable given the comments at both the Stack Exchange post and the John Baez link.
I think you've misunderstood me when you articulated the 3 positions listed above, but you've definitely hit upon my confusion so I need to think about it more carefully and do a better job saying what I want to say. I will think on it and write again when I get a chance this weekend.
Again, I do appreciate the patience in helping me understand it.
As it happens, if you want to draw the boundary to exclude me, then the two-gas-system-without-pump also happens to be entropy increasing...
This is the part I dispute: I don't see how you can get that result if you treat entropy as subjective uncertainty while also assuming that the only way to update subjective uncertainty is Bayesian conditionalization. Perhaps you can explain how the two-gas system turns out to be entropy increasing on that viewpoint if you draw the boundary to exclude the observer. How does the entropy of the probability distribution describing the system increase?
"The entropy of the probability distribution describing the system" only has meaning if there is an observer to actually hold that probability distribution. Since probability is in the mind, there is no fixed external thing that just "is" the probability distribution of the system.
There are two distinct things; one is "the system" and the other is "the probability distribution over states of the system." If you make an idealization and do math just on "the system" then the distributions in those idealizations are entropy increasing (if you exclude any observer or external stuff to the system). That does not correspond to reality (because the system's not truly closed), but is often a useful approximation for describing "the arrow of time."
If you want to talk about "the probability distribution over states of the system" then you must also be including some observer with a mind of some sort, or else the notion of there being a probability distribution (as opposed to just whatever the deterministic eventuality of whatever does in fact occur) doesn't make semantic sense.
So, to speak about the "probability distribution of the system" there has to be a Maxwell's demon sitting there having that distribution in its mind (e.g. some observer), and whatever entity it is that is dissipating waste heat while doing physical processes to update its beliefs must be increasing entropy.
If this is your standard for entropy increase, then surely it shouldn't matter what you are observing. If you are observing a refrigerator, your brain is pumping just as much heat into the environment as when you are observing spontaneous equilibration. Yet in the case of the refrigerator we usually say, in contrast to the two-gas system I described, that it is entropy decreasing. But if the basis for calling the two-gas system entropy increasing is the heat output by the observer's brain, why doesn't the refrigerator qualify as entropy increasing as well?
Or are you disagreeing that we would (or should) call a two-gas system exhibiting spontaneous thermal equilibration an entropy-increasing system? I think I'm not getting your view exactly right.
This is the semantic problem that you dismissed. When I talk about the refrigerator, it's clear that I mean to draw an imaginary boundary around the refrigerator only and pretend for a second that that is all there is anywhere. Then the entropy is decreasing. If I talk about the process by which I acquired that knowledge, then I have to expand my imaginary boundary to include the source of the photons that bounced off the refrigerator, for instance, and the waste heat my brain produced to acquire this knowledge. That process, the acquiring of the knowledge, was entropy increasing even if what it revealed to me was a less entropic distribution over states of the refrigerator.
The refrigerator is the two gas system with a pump attached. Learning anything about either system is an entropy increasing proposition (if the boundary is drawn around me plus the system). As it happens, if you want to draw the boundary to exclude me, then the two-gas-system-without-pump also happens to be entropy increasing, while drawing a boundary around the refrigerator is entropy decreasing.
This seriously is just Maxwell's demon.
There is no re-definition of "arrow of time" going on here. Shalizi is using the phrase in its standard thermodynamic sense, describing the fact that a number of macroscopic processes are thermodynamically irreversible.
Consider a specific example: two boxes of gas initially at different temperatures are brought into contact through a diathermal barrier. I check the temperature of these gases periodically using a thermometer. I observe that over time the temperature difference vanishes. The gases spontaneously equilibrate.
What would you say about what's going on here? The standard story is that the thermal equilibriation takes place due to the Second Law of Thermodynamics. Heat transfer from the hotter gas to the colder one leads to entropy increase. From a (Boltzmannian) statistical mechanical perspective, the region of phase space corresponding to both gases having the same temperature is larger than the region of phase space corresponding to them having different temperatures. So a distribution that is uniform over the former region (and vanishes elsewhere) will have a higher entropy than a distribution that is uniform over the latter region. Note that none of this requires any appeal to the entropy associated with the observer. The entropy increase in this case has nothing to do with the observer's memory. It has to do with heat flowing from one box of gas to the other.
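The Boltzmannian point can be illustrated with a toy calculation: the entropy of a uniform distribution over a phase-space region is the log of the region's size, so the larger "equal temperatures" region automatically carries higher entropy. The microstate counts below are made up purely for illustration.

```python
import math

def boltzmann_entropy(n_microstates):
    """Entropy of a macrostate, in units of k_B: log of the number of
    microstates compatible with it (uniform distribution over that region)."""
    return math.log(n_microstates)

# Hypothetical counts: the "equal temperatures" region of phase space is
# vastly larger than the "unequal temperatures" region.
W_unequal = 10**20
W_equal = 10**30

# Relaxation toward equilibrium carries the system into the bigger region,
# and hence to higher entropy
delta_S = boltzmann_entropy(W_equal) - boltzmann_entropy(W_unequal)
```

Nothing in this bookkeeping mentions the observer; the entropy difference is fixed by the relative sizes of the two regions.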
Now it seems like the guy you link to objects that we can't say that the Second Law applies to this two-gas system because the system is not completely isolated. But this ignores two things. First, the Second Law has productively been used a huge number of times in the past to describe the behavior of systems exactly like this. Second, by this standard the Second Law does not apply to any system. There is no actual system that is completely isolated, except the universe as a whole.
The thing is, the two-gas system is "isolated enough". There is no significant mechanical work being performed on or by it (discounting the negligible amount of work required to raise the mercury in the thermometer I use), as there is in the case of a refrigerator. Observing a system's state does involve some exchange of energy with it, but it need not involve doing work on the system.
Now, Shalizi's point is that if we are strict Bayesians about the state of the system, then the entropy of the distribution we associate with it will not increase, so we would say that the entropy of the system is decreasing. But this is wrong! The entropy of the system is increasing. Not the entropy of the system+observer combo, the entropy of the system itself. If your approach to statistical mechanics tells you it is not, then you are the one flying in the face of orthodox thermodynamics, not Shalizi.
Now, Shalizi's point is that if we are strict Bayesians about the state of the system, then the entropy of the distribution we associate with it will not increase, so we would say that the entropy of the system is decreasing. But this is wrong! The entropy of the system is increasing. Not the entropy of the system+observer combo, the entropy of the system itself. If your approach to statistical mechanics tells you it is not, then you are the one flying in the face of orthodox thermodynamics, not Shalizi.
This is the part I take issue with. Everything else is fine. The entropy of the distribution that we associate with the system will decrease, at the expense of pumping our ignorance as waste heat into our mind's surrounding environment. The entropy of my beliefs about the system is not the same thing as the entropy of the system and it's not covered under the umbrella of orthodox physics to act like my state of ignorance regarding the state of the system is the same as the state of the system. When I learn things by pumping entropy into my surroundings with (at the very least) my brain's waste heat, that is not at all like observing a backward arrow of time, because everything else around me is running down, reaching thermal equilibrium, even if I am pinching up some local ignorance-removal regarding the state of some different, fixed other system.
Two things to say here:
(1) The view articulated in that answer, that the Second Law only applies to systems that are genuinely closed, would render the Law empirically useless. There are no systems of this sort, except for the entire universe. But we appeal to the Second Law all the time to account for the time-directedness of systems that aren't completely closed (such as ice melting in a glass of water, or gas spreading through a room). We're really working with an approximate sense of closure, one that allows us to describe reasonably insulated systems as closed (with what counts as "reasonably" insulated depending on context), even though technically they are exchanging some amount of energy with their environments. If we go by the standards in that post, then yes, no system we observe would be governed by the Second Law. But by the same token, the "system plus observer" supersystem wouldn't be governed by the Second Law either, since this supersystem isn't closed. So then I don't see the point of defending the Second Law by including the observer in the system.
(2) The "begging the question" charge I raised in my post is not merely hypothetical. Shalizi is genuinely skeptical of Landauer's principle, the claim that information erasure must have an entropic cost. So invoking Landauer's principle won't fly against him. I think the right response to the sort of problems he raises with the principle (best captured in the John Norton paper linked in his post) is a view of the sort I recommend above. I'd probably need to say a lot more to make this obvious, but I won't unless you're specifically interested.
ETA: Also worth noting: All competent defenses of Landauer's principle that I have read assume that the observer is governed by the Second Law. The usual argument involves pointing out that erasure involves a reduction of the information theoretic entropy of the data stored by the observer. Since the Second Law holds, this reduction of entropy must be compensated by an increase of entropy in the non-information-bearing degrees of freedom, which usually amounts to the observer releasing heat into the environment. But if we go by the reasoning in the answer to which you link, we have no warrant for assuming the observer is governed by the Second Law unless the observer counts as a genuinely closed system. Of course, no actual observer would qualify. So the poster's own reasoning undermines his appeal to Landauer's principle.
There appears to be a semantic problem with this (I am not a physicist, so please bear with me).
If "the arrow of time" is re-defined to just mean "superficial appearance of decreases in entropy to some observer", then I agree with Shalizi and I also believe the result of his paper is not a 'paradox' and doesn't cast any doubt on validity of Bayesian methods. In local situations, a system might be sufficiently "closed" such that to the observer it looks like the system is spontaneously becoming more complex... that is, the degree of ignorance in the observer's mind might decrease quickly.
But, consistent with the physical laws, somewhere within the observer-system metasystem, that entropy is being accounted for. In order to zoom out and re-apply Shalizi's idea to the meta-system, you have to start talking about some new meta-observer whose states of ignorance are only relevant to the first observer-system metasystem.
So to me, it seems like if your approach accurately describes Shalizi's argument, then all he is doing is redefining "arrow of time" such that he gets the result he wants... but no one has to care about that version of "arrow of time" nor believe that it corresponds to the same "arrow of time" that is discussed in almost all discourse on thermodynamics. And even less should anyone think this is genuine reason to be skeptical of fully Bayesian updating.
I've just skimmed Shalizi's paper, so I might be wrong, but it seems to me his argument can be summarized as follows:
If we suppose that entropy is a measure of subjective uncertainty, then it would only increase if the subject lost information about the state of the system as it evolves. If the dynamical laws governing the microscopic evolution of the system are information-preserving, then this loss of information can only come from the way in which the subject updates his/her beliefs about the system's state. But if the subject updates by simply conditionalizing on the system's new macroscopic state, then this cannot happen. Bayesian conditionalization can only add information; it cannot subtract information. So, generically, updating one's beliefs about the system by conditionalization will lead to decrease in uncertainty about the system and therefore a decrease in the system's entropy.
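The "conditionalization cannot subtract information" step can be checked with a toy calculation. If the observed macrostate is a deterministic function of the microstate, then averaged over possible observations the posterior entropy is at most the prior entropy (the identity H(X|M) = H(X) - H(M)). A minimal sketch with a made-up prior:

```python
import math

def H(p):
    """Shannon entropy in bits of a probability vector."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Hypothetical prior over six microstates (placeholder numbers)
prior = [0.30, 0.20, 0.15, 0.15, 0.10, 0.10]

# A deterministic macrostate partition: which macrostate each microstate belongs to
macro = [0, 0, 0, 1, 1, 1]

# Conditionalize on each possible macrostate observation and average the
# posterior entropies, weighted by how likely each observation is
expected_posterior_H = 0.0
for m in set(macro):
    p_m = sum(p for p, g in zip(prior, macro) if g == m)
    posterior = [p / p_m if g == m else 0.0 for p, g in zip(prior, macro)]
    expected_posterior_H += p_m * H(posterior)

# On average, conditioning on the macrostate can only lower uncertainty,
# by exactly the entropy of the macrostate distribution: H(X|M) = H(X) - H(M)
```

So if microscopic dynamics preserve the distribution's entropy and updates only ever condition on macrostates, the subjective-uncertainty entropy is non-increasing, which is Shalizi's point.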
I don't think points (1) and (3) in Eliezer's comment are an adequate response to this argument. Point (1) says that when the observer measures the system in order to conditionalize, the entropy of the observer's memory registers increases, which I guess is supposed to compensate for the decrease in system entropy induced by measurement. But this is a non-response. When we do statistical mechanics, we are not usually interested in the entropy of the system plus the observer; we are just interested in the entropy of the system, and it is this entropy that is observed to increase. Also, the response seems to beg the question. On what grounds does Eliezer claim that measurement increases the entropy of the observer's memory? Couldn't Shalizi's argument just be re-applied at this level?
Eliezer's point 3 (as far as I can make sense of it) is that in a quantum universe, from a within-a-branch perspective, the system's evolution will not be unitary (and therefore not information-preserving) because the system will have decohered. This is the same point jimrandomh makes here. Fair enough, but I don't think the Bayesian should be happy attributing entropy increase solely to quantum world-splitting. Statistical mechanics originated with the assumption that the underlying laws are classical, and in the majority of applications this assumption is retained for computational convenience. If the Bayesian position amounts to rejecting a majority of the work done in statistical mechanics, that is a pretty big bullet to bite.
Eliezer's point 2 is ultimately where I think the action's at. We don't update statistical distributions simply by conditionalization. Every statistical mechanics text points out that there is a coarse-graining step. When we update our distribution, we coarse-grain over the fine details of the distribution, "smoothing" it out. It is this step that accounts for entropy increase. Now Shalizi's response is that if you are a Bayesian then adding this non-Bayesian step is epistemically incoherent. One way to respond to this is as Eliezer does: Yup, none of us are perfect Bayesians. We are not even close to logically omniscient, so we are doomed to incoherence.
I think there's another response, which is that the best way to think about the probability distributions in statistical mechanics is not as accurate representations of our degrees of belief. The distributions are constructed to remove distinctions between microscopic states that are irrelevant to our macroscopic interactions with the system. Suppose I pour a blob of milk into a cup of coffee on the right side of the cup and then stir. Eventually the milk will be completely mixed with the coffee. If I had poured the blob on the left side of the cup, the milk would also eventually have ended up in a mixed state. Now, technically, my state of knowledge about the microstate of the mixed cup is different in these two cases. In the first case I know that the microstate must be one that evolves from the milk being poured on the right. In the second case I know it must be one that evolves from the milk being poured on the left. If the dynamics of the cup are information-preserving, then these are disjoint subsets of phase space. If I was updating as a Bayesian, the distributions would be totally different from one another.
But the thing is, the original position of the blob of milk makes no difference to my practical ability to interact with the milk and coffee system now that the milk is mixed. I might remember this original position, but I cannot now use that information to extract work from the system. My causal capacities are not sufficiently fine-grained to allow me to do that. So the information is irrelevant to how I now treat the system, from a thermodynamic point of view. To conserve computational resources, I might as well pick a distribution that ignores this information. That distribution will not be the distribution that best represents my knowledge of the system, but it will be the distribution that most effectively allows me to plan interactions with the system.
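The coarse-graining step can be illustrated with a made-up eight-state "phase space": the right-pour and left-pour posteriors occupy disjoint regions, and smoothing over that practically irrelevant distinction raises the entropy (by concavity of Shannon entropy, a mixture's entropy is at least the average of the entropies of its components).

```python
import math

def H(p):
    """Shannon entropy in bits of a probability vector."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Toy "phase space" of 8 microstates. Under information-preserving dynamics,
# the right-pour and left-pour histories evolve into disjoint sets of microstates.
right_pour = [0.25, 0.25, 0.25, 0.25, 0.0, 0.0, 0.0, 0.0]
left_pour  = [0.0, 0.0, 0.0, 0.0, 0.25, 0.25, 0.25, 0.25]

# Coarse-graining: smooth away the thermodynamically irrelevant record of
# which half of phase space the state came from (uniform over all 8 states)
coarse = [1 / 8] * 8

entropy_gain = H(coarse) - H(right_pour)  # one extra bit from smoothing
```

The coarse distribution is a worse representation of what I know, but it is the one that matches what I can actually do with the system.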
So I guess ultimately I agree with Shalizi. Thinking of thermodynamic entropy as the same thing as subjective uncertainty is wrong. This doesn't mean it doesn't have a lot to do with subjective uncertainty, though, since our uncertainty about systems is a very important constraint on our ability to interact with them.
Can you articulate how your response interfaces with this answer on Cross-Validated?
I agree that the signal being sent is coarse-grained.
I agree that finegraining it is a lovely thing for people to do if they can do it in a way that's low-cost to everyone else.
I disagree with your implicit separation of signalling community (dis)approval on the one hand and reputation costs on the other. The reputation in this case is precisely a function of community (dis)approval; I don't see how you can sensibly separate them. If I endorse explicitly signalling community (dis)approval at all (which I do), I can't help but endorse explicitly raising/lowering reputation.
My only concern with the FAQ approach is the question of what an individual voter whose reasons for (dis)approval don't align with the FAQ ought to do. If I'm understanding you, your idea is that the FAQ trumps the actual preferences of people in the community -- that is, I'm expected to vote in accordance with the FAQ rather than my own preferences. That makes the existence and contents of the FAQ an implicit power structure, and such things are best approached with caution.
That said, I don't object to it if so approached.
I don't understand the disagreement with splitting the reputation. For example, a really trivially easy way to do it would be like this: On every post, have a thumbs-up/thumbs-down vote button that is specific just to that post, and then have a separate thumbs-up/thumbs-down button that appears next to the name of the user who made the post.
If you just dislike that particular post because it is off-topic, but you think the poster had the intention that it was on-topic (you just dispute that they were correct in their intention), then just downvote the question and not the user. Then the user voting is a signal of an individual's favor in the community and the post voting is a signal of the community's preferences for topical content.
I'm not advocating that we go through the trouble of doing it that way, but it would be an easy way to decouple the second order effect by which a user can feel personally discouraged if a post he or she thought was relevant and interesting is not seen that way by others. Their reputation as a contributing member may remain unchanged; but that particular post is signaled as uninteresting/noisy.
I would like a FAQ that functions much the way the guidelines function at the Stack Exchange websites. Without any guidelines, downvotes are chaotic and lose meaning. If a typical user doesn't like a post, but the reason for dislike is not covered by the FAQ, they can still write a comment, or make a post in one of the Stack Exchange meta sites (to argue constructively for getting their preference category into the FAQ/guidelines). These signal the information and successfully decouple it from what the community says it wants in the FAQ.
Or you can just join a Quaker meeting.
That's not a bad idea. But it does carry some social cost, since I don't sincerely believe Quaker doctrines and don't want to signal to others that I do or might. It looks like there's a reasonable contingent of nontheist Quakers, though; maybe I could get the same signaling benefit from affiliation with the American Humanist Association.
But the point is well-taken. This would be a money-cheap way to signal pacifism, but for me it is a socially-expensive way to attempt it. Paying for donations to orgs would probably be cheaper overall in my preference ordering.