This isn't so much a critique of consequentialism as of the attempt at creating objective moral systems in general. I would love for the world to follow a particular moral order (namely mine). But there are people who, for what I would see as completely sane reasons, disagree with me. At the edges, I have no problem writing mass murderers off as insane. Beyond that, though, in the murky middle (and how is that dividing line drawn? Is it moral to eat anything above a subsistence-level meal if others are starving in the world, for instance?), there are a number of moral issues that I see as leading only to endless argument. This doesn't indicate that one of the sides is being disingenuous, just that they have different values that cannot be simultaneously optimized. The Roman gladiator post by another commenter is an example. I view the Romans as PETA members would view me. I have justifications for my actions, as I'm sure the Romans had for theirs. That's just the nature of the human condition. Academic moral philosophizing always comes across to me as trying to unearth a cosmic grading scale, even when there isn't a cosmic grader.
I request an explanation of why my comment telling Luke he did a good job is more highly upvoted than the post Luke did a good job on. If you agree with me that Luke did a good job strongly enough to upvote the statement, why not upvote Luke?
Couldn't that just be due to a higher number of total votes (both up and down) on the OP? I would assume fewer people read each comment, and downvoters may have decided to weigh in only on the OP. A hypothetical controversial post could have a karma of 8, with 10 downvotes negating 10 of its 18 upvotes, while a supportive comment could have 9 upvotes, with half of the post's upvoters also voting for it. The comment has higher karma, but lower volatility, so to speak.
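The arithmetic above can be made concrete with a minimal sketch. The vote counts are the hypothetical ones from my example (18 up / 10 down for the post, 9 up / 0 down for the comment), and the function names are mine, purely for illustration:

```python
def karma(up, down):
    """Net karma is simply upvotes minus downvotes."""
    return up - down

def volatility(up, down):
    """Total votes cast -- a rough proxy for how contested an item is."""
    return up + down

post_up, post_down = 18, 10        # controversial post: karma 8, 28 total votes
comment_up, comment_down = 9, 0    # supportive comment: karma 9, 9 total votes

print(karma(post_up, post_down), volatility(post_up, post_down))          # 8 28
print(karma(comment_up, comment_down), volatility(comment_up, comment_down))  # 9 9
```

So the comment ends up with the higher karma even though far fewer people voted on it.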
I won't be attending*, but just out of curiosity, what did you have in mind for the social effectiveness curriculum? Any particular authors that you recommend for things like body language, communication, etc.?
*Due to life constraints, but it sounds very interesting!
Fixed your link markup for you. Top-level posts don't use Markdown (I know, it's annoying); they use WYSIWYG or HTML.
Thank you, very much appreciated.
I would think that after years of doing routine checkups like these, even a hungry doctor is unlikely to be affected enough to make a mistake because of it.
Though if there are any doubts about the doctor's competence, or if it becomes a more difficult procedure, that would definitely be something to watch out for.
Prior to reading that one study, I would have been in complete agreement. After, though, I'm not so sure. For any job where routine judgements are being made, I would naturally have assumed that habit would take over. That's why the study was jarring for me; it really does seem to demonstrate that, at different times, supposedly expert decision makers came to different conclusions based on their physiology. Now, it could be that legal issues are more a matter of personal opinion and bias, and don't really rely on decisions made against rational standards. My thinking, though, is that these are two domains (medicine and law) that share the common element of making decisions based on pre-established criteria.
Possible personal implications of the Israeli Hunger-Probation study
I'll let Psychohistorian provide the set-up, in case the reference is unfamiliar. Anyway, my wife's expecting, and that means many, many tests, ultrasounds, etc. that she ends up going through. She had an ultrasound done yesterday that was a follow-up from one two weeks ago. There was a particular measurement that was "top normal" (the doctor's words) that they needed to keep an eye on*. The first ultrasound was done at her OBGYN's, the next at the hospital she'll be delivering at.
It was during the second ultrasound that I noticed the time; it was about 11:35 when the ultrasound tech finished up and called the doctor in. Being the probably-over-worried parent-to-be that I am, the study immediately jumped into my head. What if the doc's blood sugar is too low? Do doctors make better decisions when they're hungry, because they're more alert, or worse ones, because they're distracted? Was it better that I came in earlier, so that the technician was more alert and took better pictures and measurements on the ultrasound?
Both the doctor and the tech seemed very alert and competent, and as the follow-up involved a specific measurement, they were very careful to be thorough in checking and rechecking it. The timing of the visit, though, and of meetings with experts in general, is something I've been thinking a lot about since the appointment. I would really like to see a follow-up study in the medical field. In the meantime, I'm trying to consider when the best times are to schedule appointments.
All of this is based on what seems to me to be very concrete evidence that people's thinking is affected by their hunger, and that organizational structures don't pay much attention to the outcomes related to that. Am I being premature, or overly broad? Are there other factors that could come into play? My main thinking is to stay away from lunchtime and closing time, because those are the two periods when I believe people would be most distracted.
*"Top normal" in this case could also translate to "low abnormal", which shows why numbers provide much better means than words in thinking about these things. The baby's fine as of the latest ultrasound, btw (and thanks for asking!)
As would I.
Thirded, especially because I have a daughter on the way!
- Strict, dependable schedule.
- Everybody has a role, and invests planning and effort into it.
- Stage, lights, music, camera.
- Excellent speeches, excellent feedback.
- The members take improvement very seriously, but have a lot of fun.
- A wealth of mature, active, long-term members with deep knowledge and experience.
- Constant participation in all the regional and sometimes higher-level Toastmasters competitions.
But you have to take a Scientologist class to join? You couldn't just join a Toastmasters somewhere else and then show up, for instance?
I guess the only quibble I would have, and I don't know that it really changes your critique much, is that I wrote that neurons would be some sort of gate equivalent. I wouldn't say that neurons have a simple gate model (that they're simply an AND or an XOR, for instance). But I do see them as being in some sense Boolean. Anyway, I would just try to clarify my fairly short answer to say that I believe that computation can always be broken down into smaller Boolean steps, and these steps could be rendered in many different media.
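The claim that computation can always be broken down into smaller Boolean steps can be sketched minimally: NAND alone is functionally complete, so every other gate (and ultimately any computation) can be expressed in terms of it. This is my own illustration, not anything from the original exchange, and the function names are mine:

```python
# Everything below is built from NAND alone, to illustrate that any
# Boolean step decomposes into a single primitive gate.

def nand(a, b):
    return not (a and b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """Add two bits using only NAND-derived gates: returns (sum, carry)."""
    return xor(a, b), and_(a, b)
```

Whether the medium is silicon or neurons, the point is only that the same Boolean decomposition could in principle be rendered in it; nothing here assumes neurons literally implement simple gates.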
Computationality in any fashion needs to be reified by physics, doesn't it? Otherwise it wouldn't exist. Now, I would say it's an emergent feature: physics doesn't need to provide anything beyond what it provides for everything else in order to explain it. Maybe that's the point of contention?
What would it mean for the PETA member to be right? Does it just mean that the PETA member has sympathy for chickens, whereas you and I do not? Or is there something testable going on here?
It doesn't seem to me that the differences between the PETA members, us, and the Romans, are at all unclear. They are differences in the parties' moral universe, so to speak: the PETA member sees a chicken as morally significant; you and I see a Scythian, Judean, or Gaul as morally significant; and the Roman sees only another Roman as morally significant. (I exaggerate slightly.)
A great deal of moral progress has been made through the expansion of the morally significant; through recognition of other tribes (and kinds of beings) as relevant objects of moral concern. Richard Rorty has argued that it is this sympathy or moral sentiment — and not the knowledge of moral facts — which makes the practical difference in causing a person to act morally; and that this in turn depends on living in a world where you can expect the same from others.
This is an empirical prediction: Rorty claims that expanding people's moral sympathies to include more others, and giving them a world in which they can expect others to do the same in turn, is a more effective way of producing good moral consequences, than moral philosophizing is. I wonder what sort of experiment would provide evidence one way or the other.
That's an interesting link to Rorty; I'll have to read it again in some more detail. I really appreciated this quote:
That really seems to hit it for me. That flexibility, the sense that we can step beyond being warlike, or even calculating, seems to be critical to what morals are all about. I don't want to make it sound like I'm against a generally moral culture, where happiness is optimized (or some other value I like personally). I just don't think moral philosophizing would get us there. I'll have to read up more on the moral sentiments approach. I have read some of Rorty's papers, but not his major works. I would be interested to see these ideas of his paired with meme theory. Describing moral sentiment as a meme that enters a positive feedback loop where groups that have it survive longer than ones that don't seems very plausible to me.
I'll have to think more about your PETA question. I think it goes beyond sympathy, but I don't know how to test the positions. I don't think viewing chickens as equally morally significant would lead to a much better world (for humans; chickens are a different matter). Even with the moral sentiment view, I don't see how the two sides could come to a clear resolution.