In response to Trust in Math
Comment author: Brandon_Reinhart 16 January 2008 02:11:42AM 0 points [-]

Hey, so, I figure this might be a good place to post a slightly on topic question. I'm currently reading "Scientific Reasoning: The Bayesian Approach" by Howson and Urbach. It seemed like a good place to start to learn Bayesian reasoning, although I don't know where the "normal" place to start would be. I'm working through the proofs by hand, making sure I understand each conclusion before moving to the next.

My question is "where do I go next?" What's a good book to follow up with?

Also, after reading this and "0 and 1 are not probabilities" I ran into exactly the cognitive dissonance Eliezer alluded to with his statement that "this would upset probability theorists is that we would need to rederive theorems previously obtained by assuming that we can marginalize over a joint probability by adding up all the pieces and having them sum to 1." The material teaches that "P(t) == 1" is a fundamental theorem of the probability calculus and that "P(a) >= 0", and these are then used in all succeeding derivations. After re-reading, I came to understand Eliezer's practical disagreement with a theoretical method.

So another question is: has anyone gone through the exercise of re-deriving the probability calculus, perhaps using "0 < P(a) < 1" or something similar, instead of the two previous rules?
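Not a full re-derivation, but here is a minimal sketch (in Python, with a hypothetical `log_odds` helper of my own) of the transform Eliezer points to in "0 and 1 are not probabilities": in log-odds, every probability strictly between 0 and 1 maps to a finite value, while 0 and 1 sit at negative and positive infinity, unreachable by any finite amount of evidence.

```python
import math

def log_odds(p):
    """Map a probability in (0, 1) to log-odds, the representation
    suggested in "0 and 1 are not probabilities"."""
    return math.log(p / (1 - p))

print(log_odds(0.5))   # 0.0: even odds
print(log_odds(0.9))   # ~2.197: finite, reachable by finite evidence

# The endpoints are unreachable: p = 1 divides by zero, p = 0 takes log(0).
for p in (0.0, 1.0):
    try:
        log_odds(p)
    except (ValueError, ZeroDivisionError):
        print(f"log_odds({p}) diverges")
```

As far as I understand it, this is why the "0 < P(a) < 1" version changes so little in practice: certainty becomes an unattainable limit rather than a value you can hold, while all the everyday arithmetic on intermediate probabilities stays the same.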

In response to My Strange Beliefs
Comment author: Brandon_Reinhart 31 December 2007 05:05:05AM 1 point [-]

The whole libertarian vs socialism thing is one area where transhumanism imports elements of cultishness. If you are already a libertarian and you become familiar with transhumanism, you will probably import your existing arguments against socialism into your transhumanist perspective. Same for socialism. So you see various transhumanist organizations having political leadership struggles between socialist and libertarian factions who would probably be having the same struggles if they were a part of an international Chess club or some such other group.

The whole thing becomes entrenched in debates about things like Transition Guides and what amounts to "how to implement transhumanist policy in a [socialist/libertarian] way that's best for everyone." I always thought these discussions were what amounted to discourse at the "fandom" level of the transhumanist community, but after reading some of Eliezer's posts about his own experiences at transhumanist/singularitarian events I see that it happens at all levels.

Half-formed thought I need to pursue more offline but I'll write it down now: If you say "I am a transhumanist" and you say "I am a libertarian" and then you try to find libertarian ways to meet transhumanist goals you have made your transhumanism subservient to your libertarianism. I think it is better to find transhumanist ways to meet libertarian goals. The fact that a group of transhumanists would derail a debate by getting into politics seems to express to me that the group has made transhumanism the subservient value. Which seems inelegant given that transhumanism is probably the simpler value. Seems like there's a possible post for my own blog brewing in there, but I have to think about it some.

In response to My Strange Beliefs
Comment author: Brandon_Reinhart 31 December 2007 04:46:51AM 0 points [-]

There are a large number of transhumanists who are socialists, not libertarians. In fact, as far as I can tell "libertarian transhumanism" is a distinctly American phenomenon. Saying that most transhumanists _you know_ are libertarians may be true, but assuming your own experience defines the entire span of transhumanist belief would be an invalid generalization from too little evidence.

Comment author: Brandon_Reinhart 18 December 2007 05:09:00PM 3 points [-]

Great post. You nailed my main issues with Objectivism. I think the material is still worth reading. Rand considered herself a philosopher and seemed to feel there was a lot to be gained from telling people to read more philosophy and broaden their horizons, but when it came to scientific works she never expressed much awareness of the "state of the art" of her time. In fact, her epistemology makes assumptions about the operation of the brain (in behaviorism and learning) that I'm not sure could have been made correctly given the state of neuroscience and related disciplines at the time.

Comment author: Brandon_Reinhart 10 December 2007 07:38:44PM 1 point [-]

Reminds me of Watchmen, a comic book in which the superheroes conspire to create the illusion of an alien invasion so that humanity will unite (under the heroes as leaders) against the unknown external threat.

Comment author: Brandon_Reinhart 09 December 2007 04:35:28AM 2 points [-]

Comparing the lives lost in 9/11 to motorcycle accidents is a kind of moral calculus that fails to respect the deeper human values involved. I would expect people who die on motorcycles to generally understand the risks. They are making a choice to risk their lives in an activity. Their deaths are tragic, but not as tragic. The people who died in the WTC did not make a choice to risk their lives, unless you consider going to work in a high rise in America to be a risky choice. If you're doing moral calculus, you need to multiply in a factor for "not by known/accepted risk" to the deaths in the attack.

Tragedy of Death: (by Known / Accepted Risk) < (by Unknown Risk) < (by Aggressor Who Offers No Choice)

My last post, though, since The More I Post, The More I'm Probably Wrong.

Comment author: Brandon_Reinhart 09 December 2007 04:15:21AM 0 points [-]

"We will be safer after we conquer every potential enemy."

There are limits on our physical and moral capacity for making war. My post was simply pointing out that failing to respond to someone who actually attacks you can have increasingly dangerous results over time. That enemy leeches at your resources and learns how to become better at attacking you, while you gain nothing. There are plenty of potential enemies out there who aren't attacking us and may never attack us. They aren't gaining actual experience at attacking us. Their knowledge is only academic. As long as they don't attack us and we don't attack them, we may find our mutual interests transforming us into allies.

So while we could launch a crusade against the world, it doesn't seem to make sense if it has no chance of succeeding and would likely cost us everything we value. At the same time, though, we have to defend ourselves from the potential of an attack and plan for potential responses. Once one of those enemies actively attacks us, we have to defend ourselves (obviously) and then respond by counter-attacking, if capable, to discourage future attacks.

Arguing for a violent response to an actual attack is not an argument for pre-emptively attacking all potential enemies. There are many lines in the sand: resource limitations, economic limitations, moral limitations, etc.

You do hit on the core question: when is it right to preemptively attack another state? Also: what do we mean by 'right'? Strategically correct? Morally acceptable? It seems to me that popular wars will be morally acceptable wars and those will be wars of defense and wars against aggressors. Wars of aggression against non-aggressors would rarely be popular, except in cases of "revanchism" or by non-liberal states that control their population through nationalism. You would expect liberal states to generally not pursue wars of aggression.

If we follow that we cast a bit of light on why the "spreading democracy" meme has been popular among some. "Democracy" as a system has been conflated with classical liberalism. The idea being: conquer non-liberal states and institute democracies. The world then becomes safer, because liberal states prefer to resolve differences in ways that aren't physically violent. The flaw being that simply creating a democracy doesn't guarantee that the values of classical liberalism will be ... ah ... valued.

So yeah. I don't support knocking down the walls of potential enemies "just because."

Comment author: Brandon_Reinhart 09 December 2007 02:34:40AM 2 points [-]

Some very vehement responses.

If you believe invading Afghanistan was a correct choice then I'm not sure how you could say Iraq was a complete mistake. The invasion of Afghanistan was aimed at eliminating a state that offered aid and support to an enemy who would use that aid and support to project power to the US and harm her citizens or the citizens of other western states. Denying that aid and support would hope to achieve the purpose of reducing or eliminating the ability of the enemy to project power.

Any other state that might offer aid and support to the enemy would enable the enemy to rebuild their ability to project power. Iraq was one possible source of aid and support. Any Sunni state with sufficient reason to wish harm upon the west, with the desire to support organizations that might bring about that harm, and with the ability to provide aid and support to that end was (or is) a threat.

al Qaeda is now largely holed up in regions that do not offer much by way of aid and support, at least for now. al Qaeda may still be able to project limited power, but its ability to strike at the US in such a coordinated way has been significantly hampered.

The harms of 9/11 cannot be measured by the harms of the event alone. The economic damage and the lives lost are only a small part of a complete justification for a vigorous response. If we merely rebuilt the towers and moved on, we would have done nothing to deny an enemy the power to strike again. We would have done nothing to deny the enemy their ability to develop their offensive capacity. Without our interference and no change in the demeanor of the enemy, a second attack would likely have been larger and more damaging, as the enemy would have continued to develop offensive capacity and support while we stood aside.

Additionally, toppling two governments sends a strong message to other states that might harbor the enemy that they will be pursued and punished. Although it did not serve Russia or China politically to openly support US actions in the Middle East, it seems likely that both states had reason to desire an outcome in which the extremist groups were heavily disrupted. Of course, their ideal outcome would also involve a significant loss of prestige, financial power, and influence by the US as well.

If you allow an enemy to batter your gates, you could sleep easily knowing that you built your gates to be strong and withstand such assaults. Eventually, however, your enemy will learn the weaknesses of your gates and batter them down or circumvent them. You would have failed: not in the construction of your defenses, but by failing to hunt down your enemy and deny them the opportunity of future assaults.

It is just as unfortunate for the strategists that hatred and emotional fervor clouded the discussion of how to respond. No right-minded military commander wishes to expend resources on a purposeless campaign. While a clearly reasoned discussion might not have led to as extensive a response, I believe that leaving the gates to attack those harboring the enemy would still have been considered strategically sound.

In response to Fake Selfishness
Comment author: Brandon_Reinhart 08 November 2007 09:16:45PM 1 point [-]

My understanding is that the philosophy of rational self-interest, as forwarded by the Objectivists, contains a moral system founded first on maintaining a high degree of "conceptual" volitional consciousness and freedom as a human being. Anything that robs one's life or one's essential humanity is opposed to that value. The Objectivist favor of capitalism stems from a belief that capitalism is a system that does much to preserve this value (the essential freedom and humanity of individuals). Objectivists are classical libertarians, but not Libertarians (and in fact make much of their opposition to that party).

I believe that an Objectivist would welcome the challenges posed in the post above, but might not consider them a strong challenge to his beliefs simply because they aren't very realistic scenarios. Objectivists generally feel that ethics need not be crafted to cover every scenario under the sun, but instead act as a general guide to a principled life that upholds the pursuit of freedom and humanity.

> "If you're genuinely selfish, then why do you want me to be selfish too? Doesn't that make you concerned for my welfare? Shouldn't you be trying to persuade me to be more altruistic, so you can exploit me?"

In the long run, exploiting others seems likely to end up a dead-end road. It might be rational and rewarding in the short term, but ultimately it is destructive. Furthermore, it seems to be a violation of principle. If I believe in my own freedom and would not want to be misled, I should not attempt to rob others of their freedom or mislead them without significant compelling reason. Otherwise, I'm setting one standard for my own rights and another for others'. By my own example, then, there would be no objective ethical standard by which I could object to someone attempting to mislead or exploit me. After all, if I set a subjective standard for behavior, why shouldn't they? But this isn't rigorous logic and smacks of the rationalization referenced here:

> "But what I really want to know is this: Did you start out by thinking that you wanted to be selfish, and then decide this was the most selfish thing you could possibly do? Or did you start out by wanting to convert others to selfishness, then look for ways to rationalize that as self-benefiting?"

The problem seems to be more general: argument with the intent of converting. That intent alone seems to cast suspicion on the proceedings. A rational person would, it seems to me, be willing to lay his arguments on the table for review and criticism and discussion. If, at some point in the future, others agree they are rational arguments and adopt them as beliefs then everyone should be happy because the objectives of truth and learning have been fulfilled. But "converting" demands immediate capitulation to the point of discussion. No longer is the discussion about the sharing of ideas: reward motivators have entered the room.

Self-edification that one's own view has been adopted by another seems to be a reward motive. Gratification that a challenge has been overcome seems to be a reward motive. Those motives soil the discussion.

> And the one said, "You may be right about that last part," so I marked him down as intelligent.

The man is intelligent, not because he agreed with Eli's point, but because he was reviewing his beliefs in light of new information. His motive was not (at least not entirely) conversion, but genuine debate and learning.

"Intelligence is a dynamic system that takes in information about the world, abstracts regularities from that information, stores it in memories, and uses its knowledge about the world to form goals, make plans and implement them."

The speaker is doing just that. He might later choose to reject the new information, but at this time he is indicating that the new information is being evaluated.

In response to Fake Selfishness
Comment author: Brandon_Reinhart 08 November 2007 03:02:16PM 3 points [-]

Is providing answers to questions like "Would you do incredible thing X if condition Y were true" really necessary if thing X is something neither person would likely ever be able to do and condition Y is simply never going to happen? It is easy to construct impossible moral challenges to oppose a particular belief, but why should beliefs be built around impossible moral edge cases? Shouldn't a person be able to develop a rational set of beliefs that fail under extreme moral edge cases but still hold a perfectly strong, non-contradictory position everywhere else?
