All of AndySimpson's Comments + Replies

"...natural selection built the brain to survive in the world and only incidentally to understand it at a depth greater than is needed to survive. The proper task of scientists is to diagnose and correct the misalignment." -- E. O. Wilson

"Fanatics may suppose, that dominion is founded on grace, and that saints alone inherit the earth; but the civil magistrate very justly puts these sublime theorists on the same footing with common robbers, and teaches them by the severest discipline, that a rule, which, in speculation, may seem the most advantageous to society, may yet be found, in practice, totally pernicious and destructive." -- David Hume

More of an anti-fanaticism quotation, but it seems to belong.

Pain is broadly not preferred. That is to say, an absence of cognition is preferred to the cognition of pain. This makes the question easy for a preference utilitarian, who holds that nothing overrides the value of subjects' preferences: Badness attaches to pain when a subject would rather not be feeling it. When a subject prefers pain for whatever reason, there is nothing wrong with it. For objective moral systems outside of preference utilitarianism, the question is a little more threatening.

I have no idea what I'm wading into here, but a few things occurred to me reading this:

Taking offense to something relies on status and perhaps more significantly on interpellation. Interpellation and its inherent insistence on dignity create barriers to what I'll call effective communication and introduce a rhetoric of respect. If we wish to be rationalists, really and truly, it seems like we must have a discourse that avoids insisting on respect for anyone or anything. We must all get thick skins, be willing to hear ourselves treated as objects of outs... (read more)

5hirvinen
As kpreid already said, that's pretty much Crocker's Rules, but few people can manage them, so assuming them or expecting people to declare them is a bad idea.
kpreid100

You've invented Crocker's Rules.

The Pope is a good neutral third party. He has taken the consolation prize of being the World's Most Moral Man because he can't be Vladimir Putin or Barack Obama, both of whom have more friends and more power.

Two corollary explanations come to mind. First, writing uses a wider variety of registers and styles than spoken language. Forms and usages that would sound exaggerated or affected in spoken language are socially appropriate in writing. Writing is constructed over time and predominantly "for the record," so it uses precise, unforgiving language that suits the specific context of the writing. This is why the first line of a Wikipedia article on some topic in math, poetry, or physics is often indecipherable to a lay reader, even an educated one... (read more)

Here is my question: Is there any payoff whatsoever for everyone drawing?

Understood. I should've made it clear I was responding specifically to

A large part of the satisfaction of motorcycle work that Crawford describes comes from the fact that such work requires one to confront reality, however harsh it may be. Reality cannot be placated by hand-waving, Powerpoint slides, excuses, or sweet talk. But the very harshness of the challenge means that when reality yields to the finesse of a craftsman, the reward is much greater.

4RobinZ
Nearly a year late, but the reality that motorcycle mechanics must adhere to is the selfsame reality which justifies their job: the continued operation of the motorcycle. In contrast, the politician must adhere to many factors - public opinion, party loyalty, fundraising, etc. - which are only weakly related to the reality which justifies their job: public well-being.

Reality is not very harsh when all you're dealing with is a broken motorcycle or a program that won't compile. When you're dealing with public policy, which even in its best form is usually social triage, deciding who gets what and who will be left unemployed, poor, sick, in debt, unfunded, oppressed, or dead, the facts have a much greater sting.

And, as MichaelVassar points out, political success is usually pretty clear cut, at least in the long run. Just ask Walter Mondale or John McCain.

4jimrandomh
It's not the severity of the consequences that matters, but the distance. If a program or a motorcycle is broken, you can see that almost immediately. If a public policy is broken, it may take years for the problems to become clear, by which time the thought processes that lead to the bad decision will be long forgotten and cannot be connected to their consequences.

This seems to broaden the discussion considerably from works of art with fandoms to anything with a following. I think you'll agree that there's a noticeable difference between the attitude of otaku toward anime and F1 followers toward F1 cars and races.

0David_Rotor
Perhaps my error ... I didn't read anything in Bond's article that suggested he was only referring to fans of fiction and movies. Are there differences between otaku and tifosi? What are they?

This strikes me as the right answer. Things like Star Trek and Tolkien are incredibly powerful for very small subsets of the population because their creators make risky aesthetic and narrative choices. It isn't so much that fans feel they must come to the defense of their preferred works, but that those works speak to them in rare and intense ways that are really distasteful to most people. So fans bask in the uncommon power of their fan-objects and disregard prevailing opinion. People aren't as fanatical about things like Indiana Jones or Animal Farm... (read more)

2Vichy
"Things like Star Trek and Tolkien are incredibly powerful for very small subsets of the population because their creators make risky aesthetic and narrative choices." I would say there is some truth to this, for example I don't mind diplomacy scenes that take up 2/3rds of the episode since I'm an exposition sort-of person to begin with, but a lot of people really hate that.

As other commenters have suggested, what is moral is not reducible to what is natural. This assumption, which underlies the entire post, is left totally un-addressed. I understand that genetic fitness is relevant to morality because people must endure, but this doesn't seem to demand that the extent of morals be fitness. I would love a post that explains morality as inherently and solely about fitness.

This post flies from one topic to another very quickly, and I can't understand all the connections between topics. Why is the human designer of transhuman... (read more)

-3PhilGoetz
You are about as wrong as it is possible to be. The point of the post is that there is a parameter which goal-optimization provides a setting for, but which also has moral implications. If I believed that what was natural was moral, there would be no issue. You would simply set that parameter in a way that is best for goal-seeking, and be done with it. Now you're the one saying that what is natural is moral. See, as I said, that's what the post is about. If what is natural is moral, then your comment would be the obvious conclusion.

Also, organisms are always adaptation-executors rather than direct fitness-maximizers.

1timtyler
What are you guys talking about, exactly? Phil describes evolution as an optimisation process - which seems fair enough to me. Are you three "adaptation-execution" folk trying to deny that evolution acts as an optimisation process? If not, what does all this have to do with Phil's original post?

On first glance, the answer that came to mind was accidental death or serious injury due to sheer incompetence, like walking off a cliff. Something that has a massive survival cost and only communicates failure seems like it couldn't be signaling. Mistakes are revealing, after all. But this kind of signaling happens all the time, mostly as a flawed means of signaling courage or simply drawing attention.

It struck me then that the question of what is "least signaling" may not be useful for determining states of mind, that every behavior can be a... (read more)

Colonel F suggests the worst kind of compromise between the optimal and the real. Political actors must not overlook reality, as many of the great revolutionaries of history did, but neither should they bend their agendas to it, as Chamberlain, Kerensky, and so many tepid liberals and social democrats did. To do so is to surrender without even fighting. This is especially true for political actors with a true upper hand, like Eisenhower or MacArthur after World War II. They must control the conversation, they must push the Overton window away from compe... (read more)

The thing is, I think Wikipedia beat you to the punch on this one. They may not be Yudkowskian, big-R Rationalists, but they are, broadly speaking, rational. And they do an incredibly effective job of pooling, assessing, summarizing, and distributing the best available version of the truth already. Even people of marginal source-diligence can get a clear view of things from Wikipedia, because extensive arguments have already distilled what is clearly true, what is accepted, what is speculation, and what is on the fringe.

I encourage you to bring the clar... (read more)

4byrnema
Agreed, we shouldn't duplicate anything that Wikipedia already does. However, Wikipedia is an encyclopedia of general information and, explicitly, doesn't want the role I am advocating here. While users try to expand the role of Wikipedia, the mediators want a narrower role for Wikipedia and would probably appreciate a complementary site for the purpose of analyzing information. Wikipedia: I would be open to petitioning for some kind of "WikiAnalysis" sister site, but that would do little for R-outreach (Is R-outreach something we are interested in?) and we'd be able to do it better.

NPOV does not stand for "No point of view." Nor does it mean "balance between competing points of view." Check out this and this. NPOV requires that Wikipedia take the view of an uninvolved observer, and it is supplemented by verifiability, which requires that Wikipedia take an empirical, secondary point of view that credits established academia.

So content disputes are usually settled by evaluating claims as true or false through verification. Those who continue to object to a claim once it has been established do not have to be incl... (read more)

0Alexandros
I don't think I said anything about 'no' point of view. I just claimed that the current policy of Wikipedia is to reach for general consensus rather than the truth-seeking standards of this community. You could probably find a few examples of topics where the beliefs held here are not mirrored in the corresponding Wikipedia page. This would seem to indicate that the two communities have different reasoning mechanisms. The examples you mentioned belong in the overlap between the two, simply because consensus on these matches the rational viewpoint, despite vocal opposition. However, I can think of other articles where there would be quite a significant difference (think of the list of topics in the comments here, for instance).

So what lesson does a rationalist draw from this? What is best for the Bayesian mathematical model is not best in practice? Conserving information is not always "good"?

Also,

I will simply rationalize some other explanation for the destruction of my apartment.

This seems distinctly contrary to what an instrumental rationalist would do. It seems more likely he'd say "I was wrong, there was actually an infinitesimal probability of a meteorite strike that I previously ignored because of incomplete information/negligence/a rounding error."

On the whole, we're agreed, but I still don't know how I'm supposed to choose values.

This fact is often obscured by the tendency for political disputes to impute 'bad' values to opponents rather than to recognize the actual disagreement, a tactic that ironically only works because of the wide agreement over the set of core values, if not the priority ordering.

I think this tactic works best when you're dealing with a particular constituency that agrees on some creed that they hold to be objective. Usually, when you call your opponent a bad person, you're playing to your base, not trying to grab the center.

I think we are close. Do you think enjoyment and pain can be reduced to or defined in terms of preference? We have an explanation of preference in evolutionary psychology, but to my mind, a justification of its significance is necessary also. Clearly, we have evolved certain intuitive goals, but our consciousness requires us to take responsibility for them and modulate them through moral reasoning to accept realities beyond what our evolutionary sense of purpose is equipped for.

To me, preference is significant because it usually underlies the start of ... (read more)

0mattnewport
Certainly close enough to hope to agree on a set of rules, if not completely on personal values/preferences. I don't really recognize a distinction here. The explanation explains why preferences are their own justification in my view. I think I at least partially agree - sometimes we should override our immediate moral intuitions in light of a deeper understanding of how following them would lead to worse long term consequences. This is what I mean when I talk about recognizing contradictions within our value system and consciously choosing priorities. This looks like the utilitarian position and is where I would disagree to some extent. I don't believe it's necessary or desirable for individuals to prefer 'aggregated' utility. If forced to choose I will prefer outcomes that maximize utility for myself and my family and friends over those that maximize 'aggregate' utility. I believe that is perfectly moral and is a natural part of our value system. I am however happy to accept constraints that allow me to coexist peacefully with others who prefer different outcomes. Morality should be about how to set up a system that allows us to cooperate when we have an incentive to defect.

Peaceful coexistence is not something I object to. Nor does anything oblige agents to perfectly align their values; each is free to choose. I strongly endorse people with wildly different values cooperating in areas of common interest: I'm firmly in Anton LaVey's corner on civil liberties, for instance. It should be recognized, though, that some are clearly more wrong than others, because some people get poor information and others reason poorly through akrasia or inability. Anton LaVey was not trying hard enough. I think the question is worth asking, because it is the basis of building the minimal framework of rules from each person's judgement: How are we supposed to choose values?

1mattnewport
It seems to me that most problems in politics and other attempts to establish cooperative frameworks stem not from confusion over terminal values but from differing priorities placed on conflicting values and most of all on flawed reasoning about the best way to structure a system to best deliver results that satisfy our common preferences. This fact is often obscured by the tendency for political disputes to impute 'bad' values to opponents rather than to recognize the actual disagreement, a tactic that ironically only works because of the wide agreement over the set of core values, if not the priority ordering.

Why do you think it needs to be confronted? ... I don't however feel the need to 'prove' that my underlying preference for preserving the lives of myself and my family and friends (and to a lesser extent humans in general) is a fundamental principle - I simply take it as a given.

I think it needs to be confronted because simply taking things as given leads to sloppy moral reasoning. Your preference for self-preservation seems to be an impulse like any other, no more profound than a preference for chocolate over vanilla. What needs to be confronted is w... (read more)

0mattnewport
I'm very interested in those questions and have read a lot on evolutionary psychology and the evolutionary basis for our sense of morality. I feel I have a reasonably satisfactory explanation for the broad outlines of why we have many of the goals we do. My curiosity can itself be explained by the very forces that shaped the other goals I have. Based on my current understanding I don't however see any reason to expect to find or to want to find a more fundamental basis for those preferences. Our goals are what they are because they were the kind of goals that made our ancestors successful. They're the kind of goals that lead to people like us with just those kind of goals... There doesn't need to be anything more fundamental to morality. To try to explain our moral principles by appealing to more fundamental moral principles is to make the same kind of mistake as to try to explain complex entities with a more fundamental complex creator of those entities. Hopefully we can all agree on that.

In theory, the westerners would just be sending their money to desperately poor people.

I'm not an economist, but I think you could model that as a kind of demand. And I don't think I stipulated to there being a transfer of wealth.

Unless you believe in objective morality, then a policy of utilitarianism, pure selfishness, or pure altruism all may be instrumentally rational, depending on your terminal values.

For me, the interesting question is how one goes about choosing "terminal values." I refuse to believe that it is arbitrary or th... (read more)

2knb
Yes that was my point. I go on to say that aggregate demand would not decrease. I recommend Eliezer's essay regarding the objective morality of sorting pebbles into correct heaps. http://www.overcomingbias.com/2008/08/pebblesorting-p.html
2knb
Short answer? We don't. Not really. Human beings have an evolved moral instinct. These evolutionary moral inclinations lead to us assigning a high value to human life and well-being. The closest thing to an internally coherent ethical structure seems to be utilitarianism. (It sounds bad for a rationalist to admit "I value all human life equally, except I value myself and my children somewhat more.") But we are not really utilitarians. Our mental architecture doesn't allow most of us to really treat every stranger on earth as though they are as valuable as ourselves or our own children.
0mattnewport
I'm interested in a system that allows a John Stuart Mill and an Anton LaVey to peacefully coexist without attempting to judge who is more 'objectively' moral. I wish to be able to choose my own terminal values without having to perfectly align them with every other agent. Morality and ethics are then the minimal framework of agreed rules that allows us all to pursue our own ends without all 'defecting' (the prisoner's dilemma is too simple to be a really representative model but is a useful analogy). The extent and nature of that minimal framework is an open question and is what I'm interested in establishing.

Ok, here is what I don't agree with:

Choosing those goals is not something that rationality can help much with - the best it can do is try to identify where goals are not internally consistent.

I think rationality absolutely must confront the question of purpose, and head-on. How else are we to confront it? Shouldn't we try to pin down and either discard or accept some version of "purpose," as a sort of first instrumental rationality?

I mention objectivity because I don't think you can have any useful ethics without some static measure of comp... (read more)

0mattnewport
Why do you think it needs to be confronted? I know there are many things that I want (though some of them may be mutually exclusive when closely examined) and that there are many similarities between the things that I want and the things that other humans want. Sometimes we can cooperate and both benefit, in other cases our wants conflict. Most problems in the world seem to arise from conflicting goals, either internally or between different people. I'm primarily interested in rationality as a route to better meeting my own goals and to finding better resolutions to conflicts. I have no desire to change my goals except to the extent that they are mutually exclusive and there is a clear path to a more self consistent set of goals. To the extent that we share a common evolutionary history our goals as humans overlap to a sufficient extent that cooperation is beneficial more often than not. Even where goals conflict, there is mutual benefit to agreeing rules for conflict resolution such that not everything is permitted. It is in our collective interest not to permit murder, not because murder is 'wrong' in some abstract sense but simply because most of us can usually agree that we prefer to live in a society where murder is forbidden, even at the cost of giving up the 'freedom' to murder at will. That equilibrium can break down and I'm interested in ways to robustly maintain the 'good' equilibrium rather than the 'bad' equilibrium that has existed at certain times and in certain places in history. I don't however feel the need to 'prove' that my underlying preference for preserving the lives of myself and my family and friends (and to a lesser extent humans in general) is a fundamental principle - I simply take it as a given.

It's hard to reconcile any western lifestyle with traditional utilitarianism though so if that's your main concern with cryonics perhaps you need to reconsider your ethics rather than worry about cryonics.

One of the beauties of utilitarianism is that its ethics can adapt to different circumstances without losing objectivity. I don't think every "western lifestyle" is necessarily reprobate under utilitarianism. First off, if westerners abandoned their western lifestyles, humanity would be sunk: next to the collapse of aggregate demand that wo... (read more)

3Nick_Tarleton
Universalizability arguments like this are non-utilitarian; it's the marginal utility of your decision (modulo Newcomblike situations) that matters. It definitely seems to me that refraining from these things is so much less valuable than making substantial effective charitable contributions (preferably to existential risk reduction, of course, but still true of e.g. the best aid organizations), probably avoiding factory-farmed meat, and probably other things as well.
1knb
Interesting. I'm not certain, but I think this isn't quite right. In theory, the westerners would just be sending their money to desperately poor people, so aggregate demand wouldn't necessarily decline, it would move around. Consumption really doesn't create wealth. Of course rational utilitarian westerners would recognize the transfer costs and also wouldn't completely neglect their own happiness. Unless you believe in objective morality, then a policy of utilitarianism, pure selfishness, or pure altruism all may be instrumentally rational, depending on your terminal values. If you have no regard for yourself then pursue pure altruism. Leave yourself just enough that you can keep producing more wealth for others. Study Mother Teresa. If you have no regard for others, then a policy of selfishness is for you. Carefully plan to maximize your total future well-being. Leave just enough for others that you aren't outed as a sociopath. Study Anton LaVey. If you have equal regard for the happiness of yourself and others, pursue utilitarianism. Study Rawls or John Stuart Mill. Most people aren't really any of the above. I, like most people, am somewhere between LaVey and Mill. Of course defending utilitarianism sounds better than justifying egoism, so we get more of that.
0mattnewport
I don't think objectivity is an important feature of ethics. I'm not sure there's such a thing as a rationalist ethics. Being rational is about optimally achieving your goals. Choosing those goals is not something that rationality can help much with - the best it can do is try to identify where goals are not internally consistent. I gave a rough exposition of what I see as a possible rationalist ethics in this comment but it's incomplete. If I ever develop a better explanation I might make a top level post.

This may be a naïve question, but could someone make or link me to a good case for cryonics?

I know there's a fair probability that we could each be revived in the distant future if we sign up for cryonics, and that is worth the price of admission, but that always struck me as a misallocation of resources. Wouldn't it be better, for the time being, if we dispersed all the resources used on cryonics to worthwhile causes like iodized salt, clean drinking water, or childhood immunization and instead gave up our organs for donation after death? Isn't the c... (read more)

1mattnewport
I'd agree that signing up for cryonics and being a traditional utilitarian (valuing all human life equally) aren't really compatible. I'm not a utilitarian so that's not my problem with cryonics but it does seem to be hard to reconcile the two positions. It's hard to reconcile any western lifestyle with traditional utilitarianism though so if that's your main concern with cryonics perhaps you need to reconsider your ethics rather than worry about cryonics.

Why a 0.3 chance? Is that totally arbitrary? Also, it seems like a "boo" button would quickly become a means for people to indulge in inappropriate down-voting and feel insulated from responsibility for the outcome. It would also be a tempting false compromise between actually down-voting and doing nothing. Usually, one or the other is the right choice.

I really do think we're all getting too worked up over the minutia of the karma system.

Agreed, but:

This isn't a game.

We must admit that to a great extent, it is. We are all attempting to make ourselves appear more useful to the community, and karma is the only quantitative way to tell if we're making progress. Like so many things, it feels like it trivializes but it is there for a purpose.

0Mulciber
That gets to the heart of why I don't think the karma system is worth too much emphasis. Shouldn't we instead be attempting to make ourselves more useful to the community? That's true. I do think we're better off with it than we would be without it, but it shouldn't get attention disproportionate to its purpose. It's a means to an end, nothing more.

An important, so-often-useful distinction. This reminds me of the Buddhist notion of fetters. Fetters are personal features that impair your attainment of enlightenment and bind you to suffering. You can cast them off, but in order to do so, you have to cut the crap and practice doing without them, with the full knowledge that it may take many lifetimes to free yourself. It is not sufficient to announce your adhesion to the creed of enlightenment. The only things that make you do better are the things that make you do better. Everything else is wind... (read more)

I used to be worried about this, too. Then I found this beautifully concise term that resolves the whole question and ends semantic arguments over this arbitrary, imaginary distinction: agnostic atheist. This correctly describes me and I think it describes most other people who would call themselves agnostic or atheist. I encourage you to spread the term, and, when it's necessary or convenient, collapse the term into what you mean: atheist, which signifies only a lack of positive theism.

Also, Bertrand Russell explored this question thoroughly in his essa... (read more)

Which was terrible and sitting at -1? I don't understand. All I was trying to indicate is that I've noticed a pronounced deviation from standard upvoting and downvoting practices in this thread, mostly towards downvoting.

2MrHen
This comment has been fluctuating between -1 and -4 for a while. As of now it is at -3. I was using it as an example of people upvoting a comment that really was not doing anything. Since it is back to -3, I suppose I have no valid point left. So, yeah, you could be right.

Really must set up my LessWrong dev environment so I can add a patch to show both upvotes and downvotes!

Indeed. If that is the only change to this site's system or ethic that comes out of this discussion, it will have been worth it.

Agreed. What seems to be happening, funnily enough, is an echo chamber. Eliezer said "you must downvote bad comments liberally if you want to survive!" and so everyone's downvoting everyone else's comments on this thread.

1MrHen
Except they are not. The complete irony is that my comment about downvoting dropped to -4 and has been climbing ever since. It displayed the exact behavior I was complaining about in my other comment. I expected this comment to drop like a rock and now it is sitting with all of my other bad and mediocre posts at a useless -1. My comment was terrible. It should be downvoted. (Edit) Oh, I guess I should voice my agreement with infotropism. I think downvoting "more" is just overcompensating.

I have the same apprehension. I'm somewhere between "complete poser" and "well-established member of the community": I only found out about this movement some 50 days ago, started reading things and lurking, and then started posting. When I read the original post, I felt a little pang of guilt. Am I a fool running through your garden?

I'm doing pretty well for myself in the little Karma system, but I find that often I will post things that no one responds to, or that get up-voted or down-voted once and then left alone. I fi... (read more)

gjm asks wisely:

What would you think of a musician who decided to give a public performance without so much as looking at the piece she was going to play? Would you not be inclined to say: "It's all very well to test yourself, but please do it in private"?

The central thrust of Eliezer's post is a true and important elaboration of his concept of improper humility, but doesn't it overlook a clear and simple political reality? There are reputational effects to public failure. It seems clear that those reputational effects often outweigh whatev... (read more)

But in this case, someone with a degree of astronomical knowledge comparable to yours, acting in good faith, has come up to you and has said "I'm 99% confident that a meteor will hit your house today. You should leave." Why not investigate his claim before dismissing it?

5matt
The original post specifies that even taking account of the other doctor's opinion, we're still 99% sure. This seems pretty unlikely, unless we know that the other doctor is really very rationally deficient, but it's the scenario we're discussing.
5Furcas
What if I'm wrong? Well, what if my house gets hit by a meteor today, and I get seriously wounded? Should I then regret not having left my house today? I could wish I had left, but regretting my decision would be silly. We can only ever make decisions with the information that's available to us at the moment. Right now I have every reason to believe my house will not get hit by a meteor, and I feel like staying at home, so that's the best decision. Likewise, in the OP's scenario I have every reason to believe the disease is malaria, so getting my hands on as much malaria medication as I can is the best decision. That's all there is to it.

Facts do not cease to exist because they are ignored.

--Aldous Huxley

Reality is that which, when you stop believing in it, doesn't go away.

-- Philip K. Dick

Life is short, and truth works far and lives long: let us speak the truth.

--Arthur Schopenhauer

Before we study Zen, the mountains are mountains and the rivers are rivers. While we are studying Zen, however, the mountains are no longer mountains and the rivers are no longer rivers. But then, when our study of Zen is completed, the mountains are once again mountains and the rivers once again rivers.

-- Buddhist saying

It seems like you assume implicitly that there's an equal probability of the other doctor defecting: (0 + 10,000)/2 < (5,000 + 15,000)/2. That makes sense in the original prisoner's dilemma, but given that you can communicate, why assume this?

1Furcas
It doesn't make a difference. I'm better off defecting no matter what the other doctor does. Like I said, I'll try to convince him to cooperate and then I'll break our agreement. If I succeed, good for me; if I fail, at least I'll have saved 5,000 people. That's only if there's a single iteration of this dilemma, of course. If I have reason to believe there will be three iterations and if I'm pretty sure I managed to convince the other doctor, I should cooperate (10,000 * 3 > 15,000 + 5,000 + 5,000).
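The arithmetic in this thread can be checked with a quick sketch. The payoff numbers come from the comments above; the three-round comparison assumes (as the commenter does) that a detected defector gets mutual defection in every remaining round:

```python
# Lives saved by "me", per round, indexed by (my move, other's move).
# Numbers are the ones used in the thread above.
payoff = {
    ("C", "C"): 10_000,  # both cooperate
    ("C", "D"): 0,       # I cooperate, other defects
    ("D", "C"): 15_000,  # I defect, other cooperates
    ("D", "D"): 5_000,   # both defect
}

# Single round: defection strictly dominates, whatever the other doctor does.
assert payoff[("D", "C")] > payoff[("C", "C")]
assert payoff[("D", "D")] > payoff[("C", "D")]

# Three rounds: always-cooperate vs. defect once and lose the other's trust.
cooperate_three = 3 * payoff[("C", "C")]
defect_then_punished = payoff[("D", "C")] + 2 * payoff[("D", "D")]
print(cooperate_three, defect_then_punished)  # 30000 25000
```

With repetition, the one-shot dominance argument no longer settles the question, which is exactly the commenter's caveat.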

In utilitarianism, sometimes some animals can be more equal than others. It's just that their lives must be of greater utility for some reason. I think sentimental distinctions between people would be rejected by most utilitarians as a reason to consider them more important.

That is a good question for a statistician, and I am not a statistician.

One thing that leaps to mind, however, is two-boxing on Newcomb's Problem using assumptions about the prior probability of box B containing $1,000,000. Some new work using math that I don't begin to understand suggests that either response to Newcomb's problem is defensible using Bayesian nets.

There could be more trivial cases, too, where a person inputs unreasonable prior probabilities and uses cargo-cult statistics to support some assertion.

Also, it's struck me that a frequentist st... (read more)
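The "unreasonable priors" failure mode mentioned above can be made concrete with a toy sketch. The numbers here are invented for illustration: Bayes' theorem is applied correctly in both cases, but a dogmatic prior lets the user "support" nearly any conclusion regardless of the evidence:

```python
def posterior(prior, likelihood_if_true, likelihood_if_false):
    """P(H | E) via Bayes' theorem, for a binary hypothesis."""
    num = prior * likelihood_if_true
    return num / (num + (1 - prior) * likelihood_if_false)

# Modest evidence (4:1 likelihood ratio) against an even prior
# moves us to 80% confidence.
print(posterior(0.5, 0.8, 0.2))  # → 0.8

# The same strength of evidence, pointing the other way, barely
# dents a dogmatic prior of 0.9999: the posterior stays above 0.999.
print(posterior(0.9999, 0.2, 0.8))
```

The theorem itself is blameless; the abuse lies in the inputs, which is the cargo-cult pattern the comment describes.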

I'm not sure this is always a bad thing.

It may be useful shorthand to say "X is good", but when we forget the specific boundaries of that statement and only remember the shorthand, it becomes a liability. When we decide that the statement "Bayes' Theorem is valid, true, and useful in updating probabilities" collapses into "Bayes' Theorem is good," we invite the abuse of Bayes' Theorem.

So I wouldn't say it's always a bad thing, but I'd say it introduces unnecessary ambiguity and contributes to sub-optimal moral reasoning.

3janos
Do you have some good examples of abuse of Bayes' theorem?

What army of free-market mercenaries could seriously hope to drive the modern US Armed Forces, augmented by a draft, to capitulation? Perhaps more relevantly, what army of free-market mercenaries could overcome the fanatical, disciplined mass of barbarians?

What I'm inferring from your comment is that a rational society could defend itself using market mechanisms, not central organization, if the need ever arose. Those mechanisms of the market might do well in supplying soldiers to meet a demand for defense, but I'm skeptical of the ability of the blind m... (read more)

0[anonymous]
Big ones.
2mattnewport
Plenty of private corporations seem to do quite well at grand strategy and defeating enemies in market competition. It doesn't seem a huge stretch to imagine them achieving similar success in battle. Much of military success comes down to logistics and I think a reasonable case can be made that private corporations already demonstrate greater competence in that area than most government enterprises.

This is a thoughtful, thorough analysis of some of the inherent problems with organizing rational, self-directing individuals into a communal fighting force. What I don't understand is why you view it as a special problem that needs a special consideration.

Society is an agreement among a group of people to cooperate in areas of common concern. The society as one body defends the personal safety and livelihood of its component individuals and it furnishes them with certain guarantees of livability and fair play. In exchange, the component individuals ple... (read more)

3matt
"social contract" [shudders], I don't remember signing that one. A "social contract" binding individuals to make self-sacrificing decisions doesn't seem necessary for a healthy civilization. See David D. Friedman's Machinery of Freedom for details; for a very (very) brief sketch consider that truck drivers rationally risk death on the roads for pay and that mercenaries face a higher risk of death for more pay - and that merchants will pay both truck drivers and soldiers for their services. Soldiery doesn't have to be a special case requiring different rational rules.

Rationalism isn't exclusively or even necessarily empirical. Just ask Descartes.

I think coming to agreement on terms through a dialectic is something most everyone can agree to engage in, and I don't think it's offensive to or beyond the scope of rationality. Socrates' way is the sort of meta-winning way, the way that, if fully pursued, will arrive at the conclusion of rationality.

For instance, in any one of those cases, I could start with a dialectic about problem-solving in everyday life, or at least general cases, and proceed to the principle that rationality is the best way. I'd try to come to agreement about the methods we us... (read more)
