Comment author: Risto_Saarelma 14 February 2011 02:02:04PM *  5 points [-]

"Health reasons," if someone presses it. Ambiguous, but interpretable in a way that doesn't look like you passing judgment to the others who do choose to drink.

Comment author: rohern 26 February 2011 08:01:18AM 3 points [-]

When I lived in China, drinking as a group over dinner was a common social interaction. The one acceptable excuse, on which no one would press you, was to claim that your doctor had forbidden it, which is another form of "health reasons". If people do press you on it, give them a quick cold glance that says "you are being rude" and then get back to the conversation.

Comment author: rohern 26 February 2011 07:59:09AM 3 points [-]

I do not mean this to be flippant, but Richard Feynman -- who quit drinking when he thought he might be showing early signs of alcoholism and did not want to risk damaging his brain -- had a wife who would ask you this:

What do you care what other people think?

If you are at a bar or a party and you determine that other people are looking down on you for not drinking, why should you care about such silliness? It's your body and your health and damn people who cannot respect that.

Good on you for not drinking.

Comment author: James_Miller 25 February 2011 08:58:38PM *  5 points [-]

The way most people can best contribute to society is to make as much money as possible and donate much of it to a charity that offers a high social return per dollar.

If you contribute to a charity that increases by one part in a trillion the probability of mankind surviving the next century, and if, conditional on this survival, mankind will colonize the universe and create a trillion times a trillion sentient lifeforms, then your donation will on average save a trillion lives.
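The expected-value arithmetic behind this claim can be checked directly. A minimal sketch, using the comment's own hypothetical figures (these are illustrative numbers from the argument, not estimates):

```python
# Expected lives saved = (probability increase) * (lives at stake)
probability_increase = 1e-12        # one part in a trillion
future_lives = 1e12 * 1e12          # a trillion times a trillion sentient lifeforms
expected_lives_saved = probability_increase * future_lives
print(expected_lives_saved)         # 1000000000000.0, i.e. one trillion
```

The tiny probability and the enormous payoff cancel to leave an expected value of 10^-12 × 10^24 = 10^12 lives.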

Comment author: rohern 26 February 2011 07:52:42AM 4 points [-]

Should we not have at least some good evidence that the world has been measurably changed by charitable actions before positing this? Can we also establish that making as much money as possible does not itself have costs and do damage?

It can be easily, even sleepily argued that many of the popular vehicles for becoming wealthy are quite destructive. We can happily found charities to ameliorate this damage, but what of it?

You may have excellent arguments to support this charity statement, but these are not at all apparent to me. Please do enumerate them if you have a moment.

To give my own answer, I think the single best contribution that a person can make to society is to raise a child (genetically related or adopted) educated in the sciences and in reason, and with mind strong and nimble and ready to apply this knowledge in any field she finds to be interesting.

Comment author: rohern 26 February 2011 07:38:28AM 1 point [-]

If you think that people working in synthetic biology and bioengineering are doing worthwhile work (and I entirely agree that they are), then go help them. Why the ennui? Set yourself to spend a month investigating these fields and find if you are able to suss out interesting ideas that might (how can you know?) be of use. If your imagination is sparked, then you should find a job in a lab on a trial basis and take your investigations further. I would encourage anyone with a good mind to go into this area of research, as it will doubtless benefit me (I cannot speak to society).

I think your arguments against the utility of mathematics can be applied generally to any science, which is why I reject them. However, the weakness of my objection (it relies on unstable induction) is also the weakness of your argument. Look, sure, you cannot KNOW that what you are doing is going to result in something useful. But I see no evidence at all that anyone who has made worthwhile discoveries knew otherwise. It just is not true -- we have no evidence for it -- that Newton set out to lay down the mathematical foundations of physics for the benefit of anyone. He seems to have done it for reasons of curiosity and perhaps ambition. I imagine he had a bit of fun with it. Like it or not, this is why people do things, especially when said things require years of work.

I would posit (but do not know) that if you do want to make a useful contribution, the state of ignorance is exactly the right position to be in. The x-ray, the laser, the computer, antibiotics, physics, Greek geometry, etc. etc. down the line are the result of accident, aimless research motivated only by curiosity, or people having fun with ideas. Some of these might even have been the result of chaps trying to get the girl. That is how it goes. I see almost no evidence at all (with exceptions of specific technologies, the airplane, for example) that the best way to go about making discoveries is to try to make specific discoveries. You get interested in something and, if you find something useful, good for you, but most people do not. Given this, we would expect to find that the most successful scientists are motivated by curiosity, playfulness, and perhaps a little ambition. A survey of the history of science reveals, I think unquestionably, that yes, this is exactly the case!

It is certainly possible, even likely, that if you do spend your time doing theoretical math, that you will do nothing of importance or use to society. The chances, I think, are, at best, only very slightly better if you switch fields to do something else. You should do what gets you excited and interested, because only then, no matter what your pursuit, can you really increase your chances of doing something useful for yourself and society. At the very least, you will be happy, and that is not nothing.

Comment author: Yvain 16 February 2011 12:17:26PM *  28 points [-]

You use the words "solved" and "settled" here, but I think they have very different meanings. In particular I can think of two relevant definitions of "settled": first, that someone, somewhere, has the correct answer; but second, that the correct answer is widely accepted, uncontroversial, and someone ignorant of the field can easily discover it just by reading a textbook.

I think your examples fall into the first category but not the second. According to the PhilPapers survey, only 32% of philosophers "accept" physicalism (a further 20% were "leaning towards" it). Another presentation of the poll said that 73% of philosophers either "lean toward or accept atheism".

When you can't get even three quarters of a field to even "lean toward" a position, I don't think you can call that a "settled question" under the second definition, especially compared to science where hopefully 100% of astronomers would either "lean toward or accept" heliocentrism. And when I complain that philosophers cannot settle questions, I am mostly referring to that second definition.

Comment author: rohern 26 February 2011 07:09:33AM 0 points [-]

I think this objection, though I empathize with your bringing it up, is not really worth our time in considering.

Look, we all know, if we are honest, that there is a kind of skepticism (the result of realizing the problem of solipsism and following through on its logical consequences) that cannot be eliminated from the system. It is universal and infects everything.

For this reason, we really need to know more about why these folks have objections to these conclusions. Why we should give particular credence to the opinions of members of the philosophical professions is not obvious, as certainly this site testifies to the fact that you need not be a professor of philosophy to investigate philosophical questions. I suspect, but let us test, that in a fair number of cases the kind of doubts that are raised can be raised in any and every case of a claim of truth. If this is the case, then what matters it? I do not think our interests, practical as they seem largely to be, require that we be constantly limited by such doubts.

I am sure this reveals me as a scientist, but cannot we agree that in the cases of such doubt we should just move on and get on? We should, I think, care about doubts specific to the problems we are considering rather than doubts general to all problems, or we can be pretty sure that we are not going to get anywhere on any topic ever.

In response to Go Try Things
Comment author: rohern 26 February 2011 06:49:31AM 0 points [-]

You might read Nassim Nicholas Taleb's book The Black Swan for more ideas on this topic, as he agrees with you on your main point. He argues, I think strongly, that the best way to go about discovering new ideas and methods is to obsessively tinker with things, and thus to expose oneself to the lucky accident, which is generally the real reason for insight or original invention.

Comment author: TheOtherDave 24 February 2011 02:23:35PM 1 point [-]

You're right, I'm concerned with morality as it applies to people generally.

If you are exclusively concerned with sufficiently rational people, then we have indeed been talking past each other. Thanks for clarifying that.

As to your question: I submit that for that community, there are only two principles that matter:

  1. Come to agreement with the rest of the community about how to best optimize your shared environment to satisfy your collective preferences.

  2. Abide by that agreement as long as doing so is in the long-term best interests of everyone you care about.

...and the justification for those principles is fairly self-evident. Perhaps that isn't a morality, but if it isn't I'm not sure what use that community would have for a morality in the first place. So I say: either of course there is, or there's no reason to care.

The specifics of that agreement will, of course, depend on the particular interests of the people involved, and will therefore change regularly. There's no way to build that without actually knowing about the specific community at a specific point in time. But that's just implementation. It's like the difference between believing it's right to not let someone die, and actually having the medical knowledge to save them.

That said, if this community is restricted to people who, as you implied earlier, care only for rationality, then the resulting agreement process is pretty simple. (If they invite people who also care for other things, it will get more complex.)

Comment author: rohern 25 February 2011 04:17:06AM 0 points [-]

Very well put.

Comment author: TheOtherDave 23 February 2011 01:58:53PM *  0 points [-]

I am fairly sure that we aren't talking past each other, I just disagree with you on some points. Just to try and clarify those points...

  • You seem to believe that a moral theory must, first and foremost, be compelling... if moral theory X does not convince others, then it can't do much worth doing. I am not convinced of this. For example, working out my own moral theory in detail allows me to recognize situations that present moral choices, and identify the moral choices I endorse, more accurately... which lowers my chances of doing things that, if I understood better, I would reject. This seems worth doing, even if I'm the only person who ever subscribes to that theory.

  • You seem to believe that if moral theory X is not rationally compelling, then we cannot come to agree on the specific claims of X except by chance. I'm unconvinced of that. People come to agree on all kinds of things where there is a payoff to agreement, even where the choices themselves are arbitrary. Heck, people often agree on things that are demonstrably false.

  • Relatedly, you seem to believe that if X logically entails Y, then everyone in the world who endorses X necessarily endorses Y. I'd love to live in that world, but I see no evidence that I do. (That said, it's possible that you are actually making a moral claim that having logically consistent beliefs is good, rather than a claim that people actually do have such beliefs. I'm inclined to agree with the former.)

  • I can have a moral intuition that bears clubbing baby seals is wrong, also. Now, I grant you that I, as a human, am less likely to have moral intuitions about things that don't affect humans in any way... but my moral intuitions might nevertheless be expressible as a general principle which turns out to apply to non-humans as well.

  • You seem to believe that things I'm biologically predisposed to desire, I will necessarily desire. But lots of biological predispositions are influenced by local environment. My desire for pie may be stronger in some settings than others, it may be brought lower than my desire for the absence of pie via a variety of mechanisms, and so on. Sure, maybe I can't "will myself to unlove it," but I have stronger tools available than unaided will, and we're developing still-stronger tools every year.

  • I agree that the desire to be rational is a desire like any other. I intended "much of anything else" to denote an approximate absence of desire, not a complete one.

Comment author: rohern 24 February 2011 05:25:07AM 0 points [-]

I think an important part of our disagreement, at least for me, is that you are interested in people generally and morality as it is now --- at least your examples come from this set --- while I am trying to restrict my inquiry to the most rational type of person, so that I can discover a morality that all rational people can be brought to through reason alone, without need for error or chance. If such a morality does not exist among people generally, then I have no interest in the morality of people generally. To bring it up is a non sequitur in such a case.

I do not see that people coming to agree on things that are demonstrably false is a point against me. This fact is precisely why I am turned off by the current state of ethical thought, as it seems infested with examples of this circumstance. I am not impressed by people who will agree to an intellectual point because it is convenient. I take truth first, at least that is the point of this inquiry.

I am asking a single question: Is there (or can we build) a morality that can be derived with logic from first principles that are obvious to everyone and require no Faith?

Comment author: TheOtherDave 22 February 2011 11:25:12PM 0 points [-]

I agree with your basic point that moral intuitions reflect psychological realities, and that attempts to derive moral truths without explicitly referring to those realities will inevitably turn out to implicitly embed them.

That said, I think you might be introducing unnecessary confusion by talking about "subjective" and "individual." To pick a simple and trivial objection, it might be that two people, by happenstance, share a set of moral intuitions, and those intuitions might include references to other people. For example, they might each believe "it is best to satisfy the needs of others," or "it is best to believe things believed by the majority" or "it is best to believe things confirmed by experiment." Indeed, hundreds of people might share those intuitions, either by happenstance or by mutual influence. In this case, the intuitions would be inter-subjective and non-individual, but still basically the kind of thing we're talking about.

I assume you mean to contrast it with objective, global things like, say, gravity. Which is fine, but it gets tricky to say that precisely.

 "It seems this domain can include only those which involve human beings in some fashion."

Here, again, things get slippery. First, I can have moral intuitions about non-humans... for example, I can believe that it's wrong to club cute widdle baby seals. Second, it's not obvious that non-humans can't have moral intuitions.

 "if I desire to make of myself a creature of reason alone, what care have I for it, but as a curiosity of anthropology?"

If that is in fact your desire, then you haven't a care for it. Or, indeed, for much of anything else.

Speaking personally, though, I would be loath to give up my love of pie, despite acknowledging that it is a consequence of my own biology and history.

Agreed that imprinting an AI with human notions of moral judgments, especially doing so with the same loose binding to actual behavior humans demonstrate, would be relatively foolish. This is, of course, different from building an AI that is constrained to behave consistently with human moral intuitions.

Agreed that such an AI would easily conclude that humans are not bound by the same constraints that it is bound by. Whether this would elicit disgust or not depends on a lot of things. Sharks are not bound by my moral intuitions, but they don't disgust me.

Comment author: rohern 23 February 2011 06:49:24AM 0 points [-]

I think we might still be talking past each other, but here goes:

The reason I posit and emphasize a distinction between subjective judgments and those that are otherwise -- I have a weak reason for not using the term "objective" here -- is to highlight a particular feature that moral claims lack, the lack of which weakens them. That is, I take a claim to be subjective if to hold it myself I must come upon it by chance. I cannot be brought to it through reason alone. It is an opinion or intuition that I cannot trace logically in my own thought, so I cannot communicate it to you by guiding you down the same line. The reason I think this distinction matters is that without this logical structure, it is not possible for someone to bring me to experience the same intuition through reasoned argument or demonstration. Without this feature, morality must be an island state. This is ruinous, because morality inevitably and necessarily touches upon interactions between people. If it cannot do this, it cannot do much.

Perhaps we should come to common agreement, or at least agreed-upon disagreement, on this point before we try other things.

Other Things:

I suspect -- this is an idea I have only recently invented and have not entirely examined -- that any idea that is irrational needs must be essentially incommunicable. How could it be otherwise? If you can lay out the logic behind a thought and give support to its predicates carefully and patiently, and of course your logic is valid and your predicates sound, how can I, if I am open to reason, not accept what you say as true? That is, if you can demonstrate your ideas as the logical consequences of some set of known truths, I must, because that is what logical consequence is, accept your ideas as true.

I have not witnessed this done with moral notions. Hence my doubt about their existence as rational ideas. I do not doubt that people have moral ideas, but I doubt that they can be communicated to people who have not already come upon them by chance, and who even then can only be partially sure that you are of common mind.

Perhaps I can draw a parallel with the distinction between Greek and Babylonian mathematics: the difference between demonstration by proof and attempted demonstration by repeated example. The first (except to mathematicians of the subtle variety), if done properly, seems by its nature able to accomplish the goal of communication in every case. Can this be said of the latter type? I think only in the case when the examples given are logically structured so as to be a form of the first type.

 "I agree with your basic point that moral intuitions reflect psychological realities, and that attempts to derive moral truths without explicitly referring to those realities will inevitably turn out to implicitly embed them."

I have not wanted to make this claim. What I am claiming is that this claim does appear, thus far, to hold water. However, absence of evidence is not evidence of absence, etc. etc. I am asking for someone to show me the light, as it were.

 "First, I can have moral intuitions about non-humans... for example, I can believe that it's wrong to club cute widdle baby seals. Second, it's not obvious that non-humans can't have moral intuitions."

As for your first objection, have not you given precisely the sort of case I was talking about? The moral judgment stated is not about bears clubbing baby seals, it is about humans doing it! Clearly that does involve humans. Come up with a moral judgment about trees overusing carbon dioxide and you'll have me pinned.

 "If that is in fact your desire, then you haven't a care for it. Or, indeed, for much of anything else."

That is just silly, is it not? I must at least care for reason itself. The desire to be rational is a passion indeed. If I must be paradoxical at least that far, I will take it and move on. As for your love of pie, if it is really a consequence of your biology and history, then you CANNOT give it up. You cannot will yourself to unlove it, or it must thus not be the product of the aforesaid forces alone.

Comment author: David_Gerard 21 February 2011 08:44:05AM -1 points [-]

I would predict, based on human nature, that if the 3^^^3 people were asked if they wanted to inflict a dust speck in each one of their eyes, in exchange for not torturing another individual for 50 years, they would probably vote for dust specks.

I think you've nailed my problem with this scenario: anyone who wouldn't go for this, I would be disinclined to listen to.

Comment author: rohern 22 February 2011 07:58:23AM 1 point [-]

Perhaps this is just silliness, but I am curious how you would feel if the question were:

"You have a choice: Either one person gets to experience pure, absolute joy for 50 years, or 3^^^3 people get to experience a moment of pleasure on the level experienced when eating a popsicle."

Do you choose popsicle?
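For readers unfamiliar with the 3^^^3 above: it is Knuth's up-arrow notation for iterated exponentiation. A minimal sketch of the recursion, computable only for small cases (the function name and structure here are illustrative, not from the thread):

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a (n arrows) b; one arrow is ordinary exponentiation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    # Each extra arrow iterates the operation one level below it.
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 27, i.e. 3^3
print(up_arrow(3, 2, 3))  # 7625597484987, i.e. 3^^3 = 3^(3^3) = 3^27
# 3^^^3 = 3^^(3^^3): a power tower of 7,625,597,484,987 threes -- far beyond computation.
```

The point of the thought experiment is precisely that 3^^^3 dwarfs any quantity with physical meaning, so even the faintest per-person pleasure or discomfort multiplies into something enormous.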
