
Follow-up to: So you say you're an altruist

The responses to So you say you're an altruist indicate that people have split their values into two categories:

  1. values they use to decide what they want
  2. values that are admissible for moral reasoning

(where 2 is probably a subset of 1 for atheists, and probably nearly disjoint from 1 for Presbyterians).

You're reading Less Wrong.  You're a rationalist.  You've put a lot of effort into education, and learning the truth about the world.  You value knowledge and rationality and truth a lot.

Someone says you should send all your money to Africa, because this will result in more human lives.

What happened to the value you placed on knowledge and rationality?

There is little chance that any of the people you save in Africa will get a good post-graduate education and then follow that up by rejecting religion, embracing rationality, and writing Less Wrong posts.

Here you are, spending a part of your precious life reading Less Wrong.  If you spend 10% of your life on the Web, you are saying that that activity is worth at least 1/10th of a life, and that lives with no access to the Web are worth less than lives with access.  If you value rationality, then lives lived rationally are more valuable than lives lived irrationally.  If you think something has a value, you have to give it the same value in every equation.  Not doing so is immoral.  You can't use different value scales for everyday and moral reasoning.

Society tells you to work to make yourself more valuable.  Then it tells you that when you reason morally, you must assume that all lives are equally valuable.  You can't have it both ways.  If all lives have equal value, we shouldn't criticize someone who decides to become a drug addict on welfare.  Value is value, regardless of which equation it's in at the moment.

How do you weigh rationality, and your other qualities and activities, relative to life itself?  I would say that life itself has zero value; the value of a life is the sum of the values of things done and experienced during that life.  But society teaches the opposite: that mere life has a tremendous value, and anything you do with your life has negligible additional value.  That's why it's controversial to execute criminals, but not controversial to lock them up in a bare room for 20 years.  We have a death-penalty debate in the US, which has consequences for less than 100 people per year.  We have a few hundred thousand people serving sentences of 20 years and up, but no debate about it.  That shows that most Americans place a huge value on life itself, and almost no value on what happens to that life.
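One rough way to put the contrast in symbols (the notation here is mine, not the post's): on the view argued for above, a life's value is just the sum of the values of its contents, while the view attributed to society adds a large constant for mere life and gives the contents negligible weight.

```latex
% Illustrative only; e_i stands for the things done and experienced during a life.
V_{\text{post}}(\text{life}) = \sum_i v(e_i)
% The view attributed to society: a large constant V_0 for mere life,
% plus near-zero increments for anything done with it.
V_{\text{society}}(\text{life}) = V_0 + \sum_i \epsilon_i, \qquad V_0 \gg \sum_i \epsilon_i
```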

I think this comes from believing in the soul, and binary thought in general.  People want a simple moral system that classifies things as good or bad, allowable or not allowable, valuable or not valuable.  We use real values in deciding what to do on Saturday, but we discretize them on Sunday.  Killing people is not allowable; locking them up forever is.  Killing enemy soldiers is allowable; killing enemy civilians is not.  Killing enemy soldiers is allowable; torturing them is not.  Losing a pilot is not acceptable; losing a $360,000,000 plane is.  The results of this binarized thought include millions of lives wasted in prison, and hundreds of thousands of lives lost or ruined and economies wrecked, because we fight wars in a way intended to avoid violating the boundary constraints of a binarized value system rather than to maximize our values.

The idea of the soul is the ultimate discretizer.  Saving souls is good.  Losing souls is bad.  That is the sum total of Christian pragmatic morality.

The religious conception is that personal values that you use for deciding what to do on Saturday are selfish, whereas moral values are unselfish.  It teaches that people need religion to be moral, because their natural inclination is to be selfish.  Rather than having a single set of values that you can plug into your equations, you have two completely different systems of logic which counterbalance each other.  No wonder people act schizophrenic on moral questions.

What that worldview is really saying is that people are at the wrong level of rationality.  Rationality is a win for the rational agent.  But in many prisoners-dilemma and tragedy-of-the-commons scenarios, having rational agents is not a win for society.  Religion teaches people to replace rational morality with an irrational dual-system morality under the (hidden) theory that rational morality leads to worse outcomes.

That teaching isn't obviously wrong.  It isn't obviously irrational.  But it is opposed to rationalism, the dogma that rationality always wins.  I use the term "rationalism" to mean not just the reasonable assertion that rationality is the best policy for an agent, but also the dogmatic belief that rational agents are the best thing for society.  And I think this blog is about giving fanatical rationalism a chance.

So, if you really want to be rational, you should throw away your specialized moral logic, and use just one logic and one set of values for all decisions.  If you decide to be a fanatic, you should tell other people to do so, too.

EDIT: This is not an argument for or against aid to Africa.  It's an observation on an error that I think people made in reasoning about aid to Africa.

First, I agree with the main thrust of your argument: that our current morality is riddled to the core with things derived from supernatural beliefs, and we're going to have to rebuild it from the ground up. And that we need to stop using "life" as a primitive unit.

But I disagree with you about your specific example. I don't think those of us who want to donate money to poor countries are trying to create more human lives, or even save more human lives - see Parfit's Repugnant Conclusion. We're trying to alleviate suffering.

If the best way to alleviate suffering is to buy condoms for starving Africans to prevent the birth of another generation of starving Africans, I am all for this even though it decreases the number of Africans. If the best way to alleviate suffering is to try to improve the African economy through programs like microfinance, I am all for this even though it holds the number of Africans constant. If the best way to alleviate suffering is by curing malaria, I am all for this even though it increases the number of Africans - as long as the total amount of suffering including those new Africans is less than it was before.

I draw a sharp distinction between about twenty different meanings of the word "value". Value on instrumental grounds is one of them. When I say that drug addicts are less valuable than other people, I probably mean they're less useful to society. That doesn't mean it's morally more okay to torture a drug addict than it is to torture Bill Gates (except insofar as torturing Bill Gates would disrupt his various societally useful activities). But there are a whole lot of values that aren't value to society - even someone who's valueless to society may well have a lot of value to himself.

I don't know if this is exactly what you're proposing, but I'll argue against it anyway - I reject a "multiplier" theory of value. That is, if Person A is twice as good a person as Person B (more intelligent, prettier, whatever) then that doesn't mean that torturing Person B is twice as acceptable as torturing Person A. A unit of suffering is a unit of suffering. Person A deserves credit for all the great things he does, but that doesn't change the ethical calculus. The exception is that it's better to kill an unhappy person than a happy person, because it's a better change to the joy/suffering balance. You still can't go around killing unhappy people willy-nilly though because of precedent reasons.

Although I value rationality, I have to admit that I value it mostly as an instrumental value. Although I value it as a terminal value a little, I don't think it has quite as much power for me as the joy vs. suffering value. I can't think of any non-trivial amount of torture I would inflict on Person A that would be justified if it caused Person B to read a book on Bayesian statistics. That's why I'd prefer to spend my money on starving people in Africa than anything else. It seems like the cheapest way to alleviate the most suffering, and alleviating suffering is top priority in my ethical system right now (I include in this really indirect ways to help people in Africa, like the Singularity Institute).

My comment isn't as clear as I think it should be, but I'm not even sure to what degree we disagree so I won't fret too much about it (I might just be rounding you off to the nearest cliche, as one person put it). One thing, though: you do accept that even if you value education more than saving starving Africans, John Maxwell's argument still holds, right? You just need to donate all that money to educational charities. The argument holds as long as there's something, anything, you value more than your own convenience.

Question: am I the only person who (unless I considered immortality likely) would vastly, VASTLY prefer a death sentence to twenty years in prison?

Yes, John Maxwell's basic argument still holds.

You shouldn't be mixing "rational agents win" with "rational societies lose". If you one-box on Newcomb's Problem (as motivated by "rational agents win") then you probably cooperate on the Prisoner's Dilemma with similar agents; these are widely regarded as almost the same problem.

What happened to the value you placed on knowledge and rationality?

It was an instrumental value of saving lives in the first place... he said untruthfully; but still, you see the point.

But society teaches the opposite: that mere life has a tremendous value, and anything you do with your life has negligible additional value. That's why it's controversial to execute criminals, but not controversial to lock them up in a bare room for 20 years.

What if the criminals you execute might otherwise stand a decent chance of living forever?

Society tells you to work to make yourself more valuable. Then it tells you that when you reason morally, you must assume that all lives are equally valuable. You can't have it both ways.

I disagree with a lot of this, but finally upvoted just for that one argument.

In real life, rational agents routinely fail to coordinate on PD problems. Perhaps they would coordinate, if they were more rational. In that case, there is a valley of bad rationality between religion and PD-satisficing rationality.

What if the criminals you execute might otherwise stand a decent chance of living forever?

I was inferring the values of the majority of the population from their actions. The majority of the population doesn't think people have a decent chance of living forever in this world.

there is a valley of bad rationality between religion and PD-satisficing rationality

I hadn't seen this insight expressed so clearly before, thank you.

The majority of the population doesn't think people have a decent chance of living forever in this world.

If we're reasoning from the values of the majority, the majority are religious, and are hoping that there is a non-zero chance that during those years in jail, you might be saved, and wind up spending eternity in heaven rather than hell. Of course, most prisons are...shall we say less than optimally designed for this purpose.

Probably, though, we should assume that evolution built people to cooperate about the right amount for their ancestral environment, neither too much nor too little, and that cultures then promoted excess cooperation from a gene's-eye view, because your tendency towards cooperation has larger benefits to me than costs to you, so I will pay more to create it than you will to avoid it.

In that case, there is a valley of bad rationality between religion and PD-satisficing rationality.

Perhaps this should be a major goal for our community -- hopefully to shepherd people safely from one end of the valley to the other, but even failing that, simply to stand on the other side, waving and yelling, so that people know the opposite ledge exists, and have motivation in their journey.

Someone says you should send all your money to Africa, because this will result in more human lives.

To which I would respond "Okay, but is that necessarily a good thing?"

Here you are, spending a part of your precious life reading Less Wrong. If you spend 10% of your life on the Web, you are saying that that activity is worth at least 1/10th of a life, and that lives with no access to the Web are worth less than lives with access. If you value rationality, then lives lived rationally are more valuable than lives lived irrationally. If you think something has a value, you have to give it the same value in every equation. Not doing so is immoral. You can't use different value scales for everyday and moral reasoning.

I think many of the reasons I disagree with your post as a whole have kernels in this paragraph.

If I spend 10% of my life on the web, it doesn't necessarily mean I value going on the web at least as much as 1/10th of my life. I think the truth is closer to "10% of my time awake, I was bored and happened to have a computer with internet access nearby". If you offered someone the deal "we will extend your life by 10%, but you may never access the web again"... well, I wouldn't accept the deal, but I'm sure there exist people who would; people who have spent 10% of their lives on the web.

And just because I spend 10% of my time on the web doesn't mean I value lives that go on the web more than lives which don't. This argument is as much of a non-sequitur as saying "People who spend 10% of their time masturbating value lives involving masturbation more than lives that don't".

And I can easily give lives different values, while remaining self-consistent and rational. Perhaps I'm selfish and I value my life more than anyone else's. If the king gave me the choice of either I die or one other person dies, I'd choose for the other person to die. If it was me against two other people, I'd probably still choose myself, though I'd feel guiltier about it. When it becomes ten is when I really start to hesitate, and when it becomes a million, I guess I really have to give up and allow myself to die.

Unless, of course, the world is already suffering through an overpopulation crisis, in which case by allowing those million to die, I could claim to be acting in the greater good of the world.

I was with you right up to the last three paragraphs, and again feel like we might need to taboo "rational" and "rationalist".

Maybe that should have been a separate post.

I don't know how different people define "rationalism". But I do think it's important to be aware whether you believe that rationality is the best policy for an agent, or that, in addition, rational agents are the best thing for a society. The latter is probably not true as a universal rule for all types of agents.

Disbelief in the latter is of course why SIAI exists. The latter may however be true for human rationalists with average personalities. Even if it was only true for average personalities AMONG those who became rationalists, this community would make sense.

How'd you manage the dual post? Should this be a bug report?

I clicked on "comment"; the browser (Firefox) hung for a few minutes trying to submit it; I clicked on it again.

Society tells you to work to make yourself more valuable. Then it tells you that when you reason morally, you must assume that all lives are equally valuable.

Nitpicking: I don't think this is a good way of framing the issue. "Society" doesn't tell you to do anything. There are societal structures in place that reward certain actions, but you are not told to do anything one way or another. I only mention this because you are not the first to do so.

As far as your ethics are concerned, you are assuming that a rationalist will be able to deduce the best possible action at the outset of his life, instead of experimenting with various strategies and updating his beliefs. In a probabilistic environment, reward matching is the best strategy.

I find your African aid example jarring, and my back-of-the-envelope calculations suggest it is backwards.

Many aid organisations exist that focus their spending on funding education directly, or improving educational infrastructure. Educated children are more likely to escape peasant-hood, and more likely to ensure that their own children are educated. It seems probable to me that the potential net rationality (measured in rations or some such unit) produced from small donations is positive. Assuming we want to maximize humanity's mean rationality score, this may be an example of comparative advantage at work.

The net value of an extra $50 in my pocket on Friday is negative; it will probably be spent on beer, takeaway, maybe a new game I can waste time playing. I already spent all day reading papers and writing code; the chance of me spending that $50 to level up my rationality again is negligible compared to the chance of my $50 hangover cutting into my Saturday morning research time. The net value (in rations) of posting that $50 to Plan or some such organisation, to spend it providing primary school education to girls who have a non-zero probability of going on to become biotech researchers, is positive.

I'd even be inclined to suggest that the value of a potential small-r rationalist in an intellectually backward country is higher than a small time-fraction of a rationalist in an educated society. You get to decide which of Africa or the States is intellectually more backward...


That's a good reply. But most people didn't make it in the thread on African aid. They just waltzed right into the "a life is a life is a life" assumption without even pointing it out.

An attempt to seriously address the dilemma of African aid is much more than I can do in a blog post. My own reasoning on the matter has not hit bottom yet. Please don't interpret my post as being against aid to Africa, or as being my final rejection of aid to Africa.

On one part of your article, and maybe not to its main point: no, I believe in the value of human life as an absolute and minimal starting point, independent of its economic or hedonistic or whatever other value, above which you may be free to add further value for reason X or Y. I believe this because I'm scared as hell that if I'm not efficient enough, my value on the market (or by any other way of assessing my value) will drop, possibly low enough to finally do me in.

Not for any other reason than to assure the safety of my own human person. Being human is being fragile, and we live a precarious life. We need to be protected, and to have some sort of value in and by ourselves. I also extend that idea to others because at this point it doesn't cost me anything significant to do so, and because if I don't, then there's less incentive for other humans to extend it towards me. Also, out of genuine intellectual honesty and fairness, I don't see why the absolute value of another human life should be any less than mine.

This doesn't mean that on a personal, relative level, I don't value mine more anyway. I expect the other guy to value his as well, though, so once again I expect both to have to cooperate and to empathize with someone who's like me, because I simply can't help it once I realize how it feels to be in the other's shoes.

In philosophy there is an objection to utilitarianism called the Repugnant Conclusion, which goes something like this:

"For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living."

I don't think the conclusion is repugnant. The reason it appears repugnant is because of some verbal sleight of hand that's going on: when we think of lives that are "barely worth living", the lives we actually think of are those that aren't worth living (for example, an extremely depressed person who will spend the rest of their life in a padded cell). The reason we do this is because, as you explain, people make the mistake of putting a huge value on life itself.

Rationality is a win for the rational agent. But in many prisoners-dilemma and tragedy-of-the-commons scenarios, having rational agents is not a win for society.

It's not just social dilemmas that can favor "irrational" morality; an imperfect consequentialist agent may do better according to its own values by adopting non-consequentialist ethics. I question that this is actually "irrational", per my interpretation of "rational agents win".

If you decide to be a fanatic, you should tell other people to do so, too.

What if I believe that an individual's utility-maximizing level of fanaticism is proportional to their reasoning ability (or something similar), and that my reasoning ability is much higher than average?

What if I believe that an individual's utility-maximizing level of fanaticism is proportional to their reasoning ability (or something similar), and that my reasoning ability is much higher than average?

Then you keep it to yourself, because the people to whom that applies can figure it out for themselves, and everyone else will just get angry at you.

I think that giving aid to Africa just makes us "feel better." While, say, bed nets may not make us feel warm and fuzzy, they may be what Africa needs. The proper action may be to tell emotive stories about every person saved by bed nets.

Life, as such, is valueless.

Every person's life has value to that person (we can reasonably assume - or he would commit suicide, either actively or passively).

I have long believed that the rational value of a person's life to other people is the contribution that person makes to the welfare of others - either directly, to friends and family, or to strangers through society (via paid work). I stress paid work, since there is no way to judge whether someone's unpaid actions are a net contribution or not - at least if someone is willing to pay for something, it is a benefit to that person.

Part of the problem you discuss is a confusion between a person's self-valuation and his value to others.

I don't really buy that. As a wise man once said "We were put here to serve others. Why others were put here is beyond me."

If the only value is to help others, and the others' only value is to help you, then isn't the whole system ultimately valueless?

Sorry for the confusion. A person's primary value should be himself. His value to others is what he contributes. I don't expect you to value me, except for what I may contribute, through my work or writing or whatever, to whatever you value.

Same question as I asked scientism on the other thread: if I started torturing a randomly selected 8-year old child, would this bother you? Assume the child has never contributed anything important, and that my torture will stop before it kills him or scars him so badly that he can't contribute in the future.

What you describe is similar to my own position. I made a short note of it in the "closet survey" thread: I don't think any life has inherent value. However, there's another problem with morality I want to draw attention to, and that's the idea that people could somehow straightforwardly accumulate value by increasing virtue or reducing vice.

I find utilitarian ethical notions such as "alleviating suffering", "increasing happiness" and even "increasing rationality" incoherent. These aren't things you can pour into a bucket. Pain and happiness are not cumulative. Experiencing 40 years of uninterrupted happiness will not lead to Nirvana and will most likely not be particularly different, at the end of the 40 years, from having experienced a mixed life. (The degree to which suffering does have a lasting effect is probably due to long-term consequences to health and behavior rather than the accumulation of the negative experiences themselves.)

To me, what is accumulated has to be a something that is genuinely cumulative, which I believe can only be the gross empirical knowledge of human society as a whole (I think it can be argued that science is the only truly cumulative human activity; everything else is fad). Everybody who is contributing to the advancement of knowledge, whether directly or indirectly, has value. Their value would be a function of how important they are to a society focused on the pursuit of empirical knowledge. Everybody else has negative value. (I don't believe it's possible to be neutral; human beings require a lot of resources to merely exist.)

I don't find your agreement reassuring. Imagine a world controlled by a singleton with those values.

But your comment about pain and happiness not being additive is right. Or, rather, happiness is not a linear function of good things in your life. It's more like the derivative of good things in your life.
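In symbols, and purely as my paraphrase of the claim above: if G(t) is the accumulated stock of good things in your life at time t, the suggestion is roughly

```latex
% Paraphrase only; G(t) = accumulated good things in your life at time t.
\text{happiness}(t) \propto \frac{dG}{dt} \quad \text{rather than} \quad \text{happiness}(t) \propto G(t)
```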

We have a death-penalty debate in the US, which has consequences for less than 100 people per year. We have a few hundred thousand people serving sentences of 20 years and up, but no debate about it. That shows that most Americans place a huge value on life itself, and almost no value on what happens to that life.

Another reason might be the possibility of judicial errors. You can release a locked-up convict and compensate him somehow, but you can't resurrect a dead convict.

Edit: my comments below this line are irrelevant to the point of this discussion, please disregard them.

Killing enemy soldiers is allowable; killing enemy civilians is not.

One can't win a war without killing enemy soldiers, but one can win a war without killing enemy civilians (modern wars tend to somewhat blur the line between combatants and civilians). Also, killing civilians motivates enemy soldiers and stimulates the spontaneous formation of militia.

Killing enemy soldiers is allowable; torturing them is not.

Torturing enemy soldiers doesn't help you win a war and may motivate enemy soldiers to fight to the death, or to torture your own soldiers in return (let alone PR disasters that may arise out of this).

Losing a pilot is not acceptable; losing a $360,000,000 plane is.

Not sure what plane you are talking about. Also, the total costs of training a pilot may well be in the millions. When you officially devalue the pilot's life, you're sending a message to other pilots, who will then tend to avoid dangerous missions, or just drop the career. (Also, unmanned flight will make the problem obsolete).

My personal position: any life is valuable. Murder is not acceptable.

You can release a locked-up convict and compensate him somehow, but you can't resurrect a dead convict.

In Virginia, if you're convicted of murdering someone and sentenced to life in prison, and that person shows up alive and healthy more than 21 days after your conviction, that is not grounds for an appeal of your sentence. So there's little interest in releasing innocent people. In many states, you can be sentenced to life in prison if you are convicted of any 3 felonies. So there just isn't this concern over prison sentences being too harsh, as long as they don't kill.

And you can't compensate someone for 15 years in prison.

One can't win a war without killing enemy soldiers, but one can win a war without killing enemy civilians.

Torturing enemy soldiers doesn't help you win a war and may motivate enemy soldiers to fight to the death, or to torture your own soldiers in return (let alone PR disasters that may arise out of this).

You're not thinking any of these things through; but I'm not going to think them through for you here, because I would have to say things more socially unacceptable than I'm willing to say.

Losing a pilot is not acceptable; losing a $360,000,000 plane is.

Not sure what plane are you talking about.

The F-22 Raptor.

In general, you are seizing on individual examples and using them as a justification to ignore my point.

In Virginia, if you're convicted of murdering someone and sentenced to life in prison, and that person shows up alive and healthy more than 21 days after your conviction, that is not grounds for an appeal of your sentence.

I don't believe you. Or rather, I believe that you have been confused by legal terminology. If a lower court reviews new evidence and decides to reverse a conviction, then that is not an appeal, because appeals can only be granted by a higher court, but it leads to the person being released anyways. You have confused a statement about legal procedure for a statement about outcomes, leading you to an obviously absurd conclusion about the value of innocent peoples' freedom.

Phil is stuck in the 20th century. In 2001, "biological" evidence became admissible after the 21-day window. This is supposed to mean DNA, but might cover a living "victim." In 2004, more evidence became admissible. But those who pled guilty are stuck. http://truthinjustice.org/VAevidence.htm

In Virginia, until 2001, neither a lower court, nor any other court, could review new evidence after 21 days. In 2001, an exception was made for DNA evidence. In 2004, an exception was made for evidence that could not possibly have been discovered within 21 days, that would have led any "reasonable" person to a verdict of innocent, for people who pled innocent. See http://www.vadp.org/21day.htm, http://truthinjustice.org/VAevidence.htm .

You're not thinking any of these things through; but I'm not going to think them through for you here

In general, you are seizing on individual examples and using them as a justification to ignore my point.

Sorry for that. That's what happens when I post things at 3AM in the morning. (Makes a note -- actually three -- to himself). I agree that my comments above are irrelevant to your main point. However, I'd keep my first point (about judicial errors) as a relevant side note.

Regarding the point of the original post. I've re-read it twice, trying to understand what you're saying, but I kept stumbling on things that struck me as plain wrong and made me want to snip them out of context and post an angry reply -- e.g. "If I spend 20% of my life on growing petunias, am I thus saying that anyone who doesn't grow petunias is less valuable?", or, "does this imply that a life of a newborn baby -- my baby! -- is worth exactly zero, because she hasn't experienced anything except her mother's womb?"

Either I grossly misunderstood what you're trying to say (perhaps due to our cultural differences), or the point you're making is too hard for me to digest. I'll keep my mouth shut in this thread until I've spent more time with the questions.

(I'll add a note to my comment above to reflect my current position).

I am often dismayed to learn the logical results of my ideas. :)

"If I spend 20% of my life on growing petunias, am I thus saying that anyone who doesn't grow petunias is less valuable?"

I think so. You might believe that other people have different value systems that are equally valid. I think that the question of how to compare or combine the values of different people is a different question.

In most situations, if you weigh your life vs. other people's lives, it would seem you should assign your life a much higher value. The stranger your values are, the higher you should value your life wrt other people. That's because your life is directed towards your values, and other people's lives are not. So altruism, of the type explained by kin selection, is immoral. But this is countered by the fact that, the stranger your values are, the more you should discount your own values wrt the values of others. You'd probably have to formalize it to figure out which factor predominates.
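A minimal sketch of the kind of formalization gestured at here, with notation that is entirely my own invention rather than anything from the comment:

```latex
% Illustrative notation only (nothing here is from the post or the comment):
%   r          = how "strange" (rare) your values are
%   u_self(r)  = the value of your life as judged by your own values; increasing in r,
%                since a stranger value system is advanced almost only by your own life
%   d(r)       = how heavily you weight your own values against others'; decreasing in r
%   u_others   = the value of your life as judged by other people's values
% The all-things-considered weight on your own life might then look like
W_{\text{self}}(r) \approx d(r)\, u_{\text{self}}(r) + \bigl(1 - d(r)\bigr)\, u_{\text{others}},
% and whether it rises or falls with r (which factor predominates) depends on the
% shapes of d and u_self.
```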

"does this imply that a life of a newborn baby -- my baby! -- is worth exactly zero, because she hasn't experienced anything except her mother's womb?"

The way I said it does. I was being sloppy. But even if I revise it so that the infant has some non-zero value, it wouldn't be satisfactory, because parents have it biologically programmed into them to assign extra value to their own children. We would have to address the problem of combining different people's values. And I'm not going to address that problem now.

Sure, the set of arguments for these positions is not the empty set, but are they actually right?

The point isn't that torturing soldiers or killing civilians is necessarily good, but that you actually have to think about the problem first. How many planes is a pilot's life worth to you? How many is it worth to him?

If these numbers aren't the same, how do you explain this?

Also, killing civilians motivates enemy soldiers and stimulates the spontaneous formation of militia.

This is circular (if it is indeed true). It supposedly motivates soldiers for the same reason: in the abstract, they consider the death of their fellow civilians worse than the death of their fellow soldiers.

Arguments about ethics here, and in the discussion below, seem narrowly focused and peripheral relative to a bigger question: is there any guarantee, a priori, that perfect rationality would result in something that I (and you) would consider ethical? Is there some argument that there is a rational solution and that it will be acceptable? My head goes in circles when I think about how rationality must lead to the truth, so it must be acceptable; but then I read the comments below and I think, "how can humans be trusted on this, individual by individual?"

Perhaps there's a post written on this already ...?