All of rohern's Comments + Replies

rohern

When I lived in China, drinking as a group over dinner was a common social interaction. The one acceptable excuse, on which no one would press you, was to claim that your doctor had forbidden it, which is another form of "health reasons". If people do press you on it, give them a quick cold glance that says "you are being rude" and then get back to the conversation.

rohern

I do not mean this to be flippant, but the wife of Richard Feynman -- who quit drinking when he thought he might be showing early signs of alcoholism and did not want to risk damaging his brain -- would ask you this:

What do you care what other people think?

If you are at a bar or a party and you determine that other people are looking down on you for not drinking, why should you care about such silliness? It's your body and your health and damn people who cannot respect that.

Good on you for not drinking.

rohern

Should we not have at least some good evidence that the world has been measurably changed by charitable actions before positing this? Can we also establish that making as much money as possible does not itself have costs and do damage?

It can be easily, even sleepily argued that many of the popular vehicles for becoming wealthy are quite destructive. We can happily found charities to ameliorate this damage, but what of it?

You may have excellent arguments to support this charity statement, but these are not at all apparent to me. Please do enumerate...

DanielLC
In any case, GiveWell looks into a lot of charities. There are many where the difference is quite obvious. I don't know of a specific charity that does the same thing but better, which would be an ideal counterargument. That said, raising a child can cost hundreds of thousands of dollars. Is it worth hundreds of lives? Thousands of people's sight? Also, it seems to be based on the idea that what you do is more important than what charity you donate to. It seems like it would be better to raise them to donate large amounts of money to charity. Or to try to convince people you know to donate.
rohern

If you think that people working in synthetic biology and bioengineering are doing worthwhile work (and I entirely agree that they are), then go help them. Why the ennui? Set yourself to spend a month investigating these fields and find if you are able to suss out interesting ideas that might (how can you know?) be of use. If your imagination is sparked, then you should find a job in a lab on a trial basis and take your investigations further. I would encourage anyone with a good mind to go into this area of research, as it will doubtless benefit me (I...

rohern

I think this objection, though I empathize with your bringing it up, is not really worth our time to consider.

Look, we all know, if we are honest, that there is a kind of skepticism (the result of realizing the problem of solipsism and following through on its logical consequences) that cannot be eliminated from the system. It is universal and infects everything.

For this reason, we really need to know more about why these folks have objections to these conclusions. Why we should give particular credence to the opinions of members of the philosophical...

rohern

You might read Nassim Nicholas Taleb's book The Black Swan for more ideas on this topic, as he agrees with you on your main point. He argues, I think strongly, that the best way to go about discovering new ideas and methods is to obsessively tinker with things, and thus to expose oneself to the lucky accident, which is generally the real reason for insight or original invention.

rohern

I think an important part of our disagreement, at least for me, is that you are interested in people generally and morality as it is now --- at least your examples come from this set --- while I am trying to restrict my inquiry to the most rational type of person, so that I can discover a morality that all rational people can be brought to through reason alone, without need for error or chance. If such a morality does not exist among people generally, then I have no interest in the morality of people generally. To bring it up is a non sequitur in such a ...

Prolorn
Perhaps you've already encountered this, but your question calls to mind the following piece by Yudkowsky: No Universally Compelling Arguments, which is near the start of his broader metaethics sequence. I think it's one of Yudkowsky's better articles. (On a tangential note, I'm amused to find on re-reading it that I had almost the exact same reaction to The Golden Transcendence, though I had no conscious recollection of the connection when I got around to reading it myself.)
TheOtherDave
You're right, I'm concerned with morality as it applies to people generally. If you are exclusively concerned with sufficiently rational people, then we have indeed been talking past each other. Thanks for clarifying that.

As to your question: I submit that for that community, there are only two principles that matter:

1. Come to agreement with the rest of the community about how to best optimize your shared environment to satisfy your collective preferences.
2. Abide by that agreement as long as doing so is in the long-term best interests of everyone you care about.

...and the justification for those principles is fairly self-evident. Perhaps that isn't a morality, but if it isn't I'm not sure what use that community would have for a morality in the first place. So I say: either of course there is, or there's no reason to care.

The specifics of that agreement will, of course, depend on the particular interests of the people involved, and will therefore change regularly. There's no way to build that without actually knowing about the specific community at a specific point in time. But that's just implementation. It's like the difference between believing it's right to not let someone die, and actually having the medical knowledge to save them.

That said, if this community is restricted to people who, as you implied earlier, care only for rationality, then the resulting agreement process is pretty simple. (If they invite people who also care for other things, it will get more complex.)
rohern

I think we might still be talking past each other, but here goes:

The reason I posit and emphasize a distinction between subjective judgments and those that are otherwise -- I have a weak reason for not using the term "objective" here -- is to highlight a particular feature that moral claims lack, and whose absence weakens them. That is, I take a claim to be subjective if, to hold it myself, I must come upon it by chance. I cannot be brought to it through reason alone. It is an opinion or intuition that I cannot trace logically i...

TheOtherDave
I am fairly sure that we aren't talking past each other, I just disagree with you on some points. Just to try and clarify those points...

* You seem to believe that a moral theory must, first and foremost, be compelling... if moral theory X does not convince others, then it can't do much worth doing. I am not convinced of this. For example, working out my own moral theory in detail allows me to recognize situations that present moral choices, and identify the moral choices I endorse, more accurately... which lowers my chances of doing things that, if I understood better, I would reject. This seems worth doing, even if I'm the only person who ever subscribes to that theory.
* You seem to believe that if moral theory X is not rationally compelling, then we cannot come to agree on the specific claims of X except by chance. I'm unconvinced of that. People come to agree on all kinds of things where there is a payoff to agreement, even where the choices themselves are arbitrary. Heck, people often agree on things that are demonstrably false.
* Relatedly, you seem to believe that if X logically entails Y, then everyone in the world who endorses X necessarily endorses Y. I'd love to live in that world, but I see no evidence that I do. (That said, it's possible that you are actually making a moral claim that having logically consistent beliefs is good, rather than a claim that people actually do have such beliefs. I'm inclined to agree with the former.)
* I can have a moral intuition that bears clubbing baby seals is wrong, also. Now, I grant you that I, as a human, am less likely to have moral intuitions about things that don't affect humans in any way... but my moral intuitions might nevertheless be expressible as a general principle which turns out to apply to non-humans as well.
* You seem to believe that things I'm biologically predisposed to desire, I will necessarily desire. But lots of biological predispositions are influenced by local environment. My desir...
rohern

Perhaps this is just silliness, but I am curious how you would feel if the question were:

"You have a choice: Either one person gets to experience pure, absolute joy for 50 years, or 3^^^3 people get to experience a moment of pleasure on the level experienced when eating a popsicle."

Do you choose popsicle?
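(An aside on scale, in case the notation is unfamiliar: 3^^^3 is Knuth's up-arrow notation; the sketch below is just the standard definition, nothing specific to this thread.)

```latex
\[
  3 \uparrow 3 = 3^{3} = 27, \qquad
  3 \uparrow\uparrow 3 = 3^{3^{3}} = 3^{27} \approx 7.6 \times 10^{12}
\]
\[
  3 \uparrow\uparrow\uparrow 3 = 3 \uparrow\uparrow (3 \uparrow\uparrow 3)
\]
% i.e., a power tower of 3s whose height is itself about 7.6 trillion
```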

TheOtherDave
I don't think it's silly at all. Personally, I experience more or less the same internal struggle with this question as with the other: I endorse the idea that what matters is total utility, but my moral intuitions aren't entirely aligned with that idea, so I keep wanting to choose the individual benefit (joy or non-torture) despite being unable to justify choosing it. Also, as David Gerard says, it's a different function... that is, you can't derive an answer to one question from an answer to the other... but the numbers we're tossing around are so huge that the difference hardly matters.
David_Gerard
I suspect I would. But not only does utility not add linearly, you can't just flip the sign, because positive and negative are calculated by different systems.
rohern

Forgive me for being sloppy with my language. Given what I wrote, your objection is entirely reasonable.

The idea that I meant to express is that, while it seems safe to assume that virtually everyone who has ever lived long enough to become a thinking person has encountered some kind of moral question in his life, we cannot say that an appreciable percentage of these people has sat and carefully analyzed these questions.

Even if we restrict ourselves only to people alive today and living in the United States -- an enormous restriction considering the per...

rohern

If I take you correctly, you are pointing out that thought experiments, now abstract, can become actual with time and changes in circumstance and technology, and are thus useful in understanding morality.

If this is an unfair assessment, correct me!

I agree with you, but I also hold to my original claim, as I do not think that they contradict. I agree that the thought experiment can be a useful tool for talking about morality as a set of ideas and reactions out-of-time. However, I do not agree that the thought experiments I have read have ...

NihilCredo
I... don't think the point of such thought experiments was ever to predict what a human will do. That we do not make the same choices under pressure that we do when given reflection and distance is quite obvious. If you are interested in predicting what people will do, you should look at psychological battery tests, which (should) strive to strike a balance between realism and measurability.

The point of train tracks-type experiments was to force one to demand some coherence from their "moral intuition", and to this end the fact that you're making such choices sitting in a café is a feature, not a bug, because it lets you carefully figure out logical conclusions on which (at least in theory) you will then be able to unthinkingly rely once you're in the heat of the moment (probably not an actual train track scenario, but situations like giving money to beggars or voting during jury duty where you only have seconds or hours to make a choice).

When you're actually in the trenches, as you put it, your brain is going to be overwhelmed by a zillion more cognitive biases than usual, so it's very much in your interest to try and pre-make as many choices as possible while you have the luxury of double-checking every one of your assumptions and implications.
rohern

I think we may indeed be talking past each other, so I will try to state my case more cogently.

I am not denying that people do possess ideas about something named "morality". It would be absurd to claim otherwise, as we are here discussing such ideas.

I am denying that, even if I accept all of their assumptions, individuals who claim these ideas as more-than-subjective --- by that I think I mean that they claim their ideas can be applied to a group rather than only to one man, the holder of the ideas --- can convince me that these ideas are...

TheOtherDave
I agree with your basic point that moral intuitions reflect psychological realities, and that attempts to derive moral truths without explicitly referring to those realities will inevitably turn out to implicitly embed them.

That said, I think you might be introducing unnecessary confusion by talking about "subjective" and "individual." To pick a simple and trivial objection, it might be that two people, by happenstance, share a set of moral intuitions, and those intuitions might include references to other people. For example, they might each believe "it is best to satisfy the needs of others," or "it is best to believe things believed by the majority" or "it is best to believe things confirmed by experiment." Indeed, hundreds of people might share those intuitions, either by happenstance or by mutual influence. In this case, the intuition would now be inter-subjective and non-individual, but still basically the kind of thing we're talking about. I assume you mean to contrast it with objective, global things like, say, gravity. Which is fine, but it gets tricky to say that precisely.

Here, again, things get slippery. First, I can have moral intuitions about non-humans... for example, I can believe that it's wrong to club cute widdle baby seals. Second, it's not obvious that non-humans can't have moral intuitions.

If that is in fact your desire, then you haven't a care for it. Or, indeed, for much of anything else. Speaking personally, though, I would be loath to give up my love of pie, despite acknowledging that it is a consequence of my own biology and history.

Agreed that imprinting an AI with human notions of moral judgments, especially doing so with the same loose binding to actual behavior humans demonstrate, would be relatively foolish. This is, of course, different from building an AI that is constrained to behave consistently with human moral intuitions. Agreed that such an AI would easily conclude that humans are not bound by the same constraints t...
rohern

I find it impossible to engage thoughtfully with philosophical questions about morality because I remain unconvinced of the soundness of the first principles that are applied in moral judgments. I am not interested in a moral claim that does not have a basis in some fundamental idea with demonstrable validity. I will try to confine my critique to those claims that at least attempt what I take to be this basic level of intellectual rigor.

Note 1: I recognize that I introduced many terms in the above statement that are open to challenge as loaded and...

TheOtherDave
Moral intuitions demonstrably exist. That is, many people demonstrably do endorse and reject certain kinds of situations in a way that we are inclined to categorize as a moral (rather than an aesthetic or arbitrary or pragmatic) judgment, and demonstrably do signal those endorsements and rejections to one another.

All of that behavior has demonstrable influences on how people are born and live and die, suffer and thrive, are educated and remain ignorant, discover truths and believe falsehoods, are happy and sad, etc. I believe all of that stuff matters, so I believe how the moral intuitions that influence that stuff are formed, and how they can be influenced, is worth understanding. And, of course, the SIAI folks believe it matters because they want to engineer an artificial system whose decisions have predictable relationships to our moral intuitions even when it doesn't directly consult those intuitions.

Now, maybe none of that stuff is what you're talking about... it's hard to tell precisely what you are rejecting the study of, actually, so I may be talking past you. If what you mean to reject is the study of morality as something that exists in the world outside of our minds and our behavior, for example, I agree with you. I suspect the best way to encourage the rejection of that is to study the actual roots of our moral judgments; as more and more of our judgments can be rigorously explained, there will be less and less room for a "god of the moral gaps" to explain them.

And I agree with NihilCredo that the distinction between "applied morality" and "theoretical morality" is not a stable one -- especially when considering large-scale engineering projects -- so refusing to consider theoretical questions simply ensures that we're unprepared for the future. Also, thought experiments are often useful tools to clarify what our intuitions actually are.
Perplexed
Near zero? The number of people who have given careful and analytic thought to the foundational principles of ethics easily numbers in the thousands. Comparable to the number of people who have given equivalent foundational consideration to logic or thermodynamics or statistics. Given that there are seven billion people in the world, it is understandable that you have not yet had a conversation with one of these people. But it is easy enough to find the things they have written - online or in libraries. Give it a look. I can't promise it will be convincing, or even that it will improve your opinion of the field. But you will find that a lot of serious thought has been given to the subject.
NihilCredo
I support and agree with every paragraph except the last one. I cannot come up with any sensible, useful separation between "actual scenarios" and "thought experiments".

Consider the following question: "You have the ability to instantly make an arbitrarily high number of copies of a book, and distribute them to billions of people, at a negligible cost to everyone. Should you compensate the author before doing so?". To us this is an everyday matter, but to a medieval scholar it is about as much of an abstraction as "torture vs. dust specks". It is likely more abstract to him than "push the fat guy on the train tracks" is to us.

While I don't believe that morality is well-founded, I do believe that the word "morality" has some meaning, even if it is incorrect - in the same way that theism isn't well-founded, but "theism" still means something. And I consider that the word "morality" must indicate a function (or a set of functions) having its domain included in the set of logically possible universes, and its codomain in some sort of algebraic structure (utilitarians should think it's a totally ordered field, but it needn't be such a powerful structure). I do not think that whether we might actually experience a given universe is a relevant criterion to morality as most people intend the word.
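(A minimal formalization of the function NihilCredo describes, for concreteness; the symbols M, U, \mathcal{W}, and V below are illustrative choices, not his.)

```latex
\[
  M : U \to V, \qquad U \subseteq \mathcal{W}
\]
% \mathcal{W}: the set of logically possible universes
% V: some algebraic structure; on the utilitarian reading, a totally
%    ordered field, so $M(u) \le M(u')$ reads "universe $u'$ is at
%    least as morally good as universe $u$"
```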
rohern

Speaking as an undergraduate student in a computer science department, I can confirm your observation. I have also observed that while coding, the philosophical pumps start working and good -- or at least interesting -- ideas about other subjects are often produced. The most interesting off-topic conversations I have had with other students in any class have been in computer science classes.

I have also noticed that my ability to deal mentally with mathematical problems that are generally algorithmic has been improving rapidly. I suspect the regular practice of holding a process in one's mind while encoding it is related to this.