As I've been saying for some time, what organizations really need is a CRO (Chief Risk Assessment Officer), who would also be an expert in probabilities and simulation.
Just thought I'd point out, actuaries can also do enterprise risk management. Also, a lot of organizations do have a Chief Risk Officer.
I've seen too many comments like yours posted on Facebook or Reddit without a hint of irony to assume that it's a joke. Or maybe I'm just terribly dull when it comes to differentiating what is and isn't humor.
I suspect it's the latter.
I think it's fair to say that most of us here would prefer not to have most Reddit or Facebook users included on this site, the whole "well-kept garden" thing. I like to think LW continues to maintain a pretty high standard when it comes to keeping the sanity waterline high.
The sophisticated theists I know (Church of England types who have often actually read large chunks of the Bible and don't dispute that something called "evolution" happened) will detail their objections to The God Delusion at length ... without, it turns out, having actually read it. This appears to be the religious meme defending itself. I point them at the bootleg PDF and suggest they actually read it and then complain ... at which point they usually never mention it again.
This is part of why I tend to think that for the most part, these works aren't (or if they are, they shouldn't be) aimed at de-converting the faithful (who have already built up a strong meme-plex to fall back on), but rather for interception and prevention for young potential converts and people who are on the fence. Particularly college kids who have left home and are questioning their belief structure.
The side effect is that something that is marketed well towards this group (imo, this is the case with "The God Delusion") comes across as shocking and abrasive to the older converts (and this also plays into its marketability to a younger audience). So there's definitely a trade-off, but getting the numbers right to determine the actual payoff is difficult.
I think a more effective way to increase secular influence is through lobbying. I think in the U.S. there is a great need for a well-funded secular lobby to keep things in check. I found one such lobby but I haven't had the chance to look into it yet.
Consider it my opinion, and the whole of this article as the substantive argument for why. If you have an issue with the argument, of course, you're free to present it. Alternatively, if you know somebody who was converted from religious belief to atheism by The God Delusion, that would be an evidence-based argument as to why I am wrong on the matter as a whole.
I've met both sorts, people turned off by "The God Delusion" who really would have benefited from something like "Greatest Show on Earth", and people who really seemed to come around because of it (both irl and in a wide range of fora). The unfortunate side-effect of successful conversion, in my experience, has been that people who are successfully converted by rhetoric frequently begin to spam similar rhetoric, ineptly, resulting mostly in increased polarization among their friends and family.
It seems pretty hard to control for enough factors to see what kind of impact popular atheist intellectuals actually have on de-conversion rates and belief polarization (much less the impact of a specific subset of abrasive works), and I can't find any clear numbers on it. It seems like opinion-mining Facebook could potentially be useful here.
Evolutionary biology might be good at telling us what we value. However, as G. E. Moore pointed out, ethics is about what we SHOULD value. What evolutionary ethics will teach us is that our minds/brains are malleable. Our values are not fixed.
And the question of what we SHOULD value makes sense because our brains are malleable. Our desires - just like our beliefs - are not fixed. They are learned. So, the question arises, "Given that we can mold desires into different forms, what SHOULD we mold them into?"
Besides, evolutionary ethics is incoherent. "I have evolved a disposition to harm people like you; therefore, you deserve to be harmed." How does a person deserve punishment just because somebody else evolved a disposition to punish him?
Do we solve the question of gay marriage by determining whether the accusers actually have a genetic disposition to kill homosexuals? And if we discover they do, we leap to the conclusion that homosexuals DESERVE to be killed?
Why evolve a disposition to punish? That makes no sense.
What is this practice of praise and condemnation that is central to morality? Of deserved praise and condemnation? Does it make sense to punish somebody for having the wrong genes?
What, according to evolutionary ethics, is the role of moral argument?
Does genetics actually explain such things as the end of slavery, and a woman's right to vote? Those are very fast genetic changes.
The reason that the Euthyphro argument works against evolutionary ethics is that, regardless of what evolution can teach us about what we do value, it teaches us that our values are not fixed. Because values are not genetically determined, there is a realm in which it is sensible to ask what we should value, which is a question that evolutionary ethics cannot answer. Praise and condemnation are central to our moral life precisely because they are the tools for shaping learned desires, resulting in an institution where the question of the difference between right and wrong is the question of the difference between what we should and should not praise or condemn.
First, I do have a couple of nitpicks:
Why evolve a disposition to punish? That makes no sense.
That depends. See here for instance.
Does it make sense to punish somebody for having the wrong genes?
This depends on what you mean by "punish". If by "punish" you mean socially ostracize and disallow mating privileges, I can think of situations in which it could make evolutionary sense, although as we no longer live in our ancestral environment and have since developed a complex array of cultural norms, it no longer makes moral sense.
In any event, what you've written is pretty much orthogonal to what I've said; I'm not defending what you're calling evolutionary ethics (nor am I aware of indicating that I hold that view, if anything I took it to be a bit of a strawman). Descriptive evolutionary ethics is potentially useful, but normative evolutionary ethics commits the naturalistic fallacy (as you've pointed out), and I think the Euthyphro argument is fairly weak in comparison to that point.
The view you're attacking doesn't seem to take into account the interplay between genetic, epigenetic, and cultural/memetic factors in how moral intuitions are shaped and can be shaped. It sounds like a pretty flimsy position, and I'm a bit surprised that any ethicist actually holds it. I would be interested if you're willing to cite some people who currently hold the viewpoint you're addressing.
The reason that the Euthyphro argument works against evolutionary ethics is that, regardless of what evolution can teach us about what we do value, it teaches us that our values are not fixed.
Well, really it's more neuroscience that tells us that our values aren't fixed (along with how the valuation works). It also has the potential to tell us to what degree our values are fixed at any given stage of development, and how to take advantage of the present degree of malleability.
Because values are not genetically determined, there is a realm in which it is sensible to ask about what we should value, which is a question that evolutionary ethics cannot answer.
Of course; under your usage of evolutionary ethics this is clearly the case. I'm not sure how this relates to my comment, however.
Praise and condemnation are central to our moral life precisely because these are the tools for shaping learned desires
I agree that it's pretty obvious that social reinforcement is important because it shapes moral behavior, but I'm not sure if you're trying to make a central point to me, or just airing your own position regardless of the content of my post.
IMO, what each of us values for ourselves may be relevant to morality. What we intuitively value for others is not.
I have to admit I have not read the metaethics sequences. From your tone, I feel I am making an elementary error. I am interested in hearing your response.
Thanks
I'm not sure if it's elementary, but I do have a couple of questions first. You say:
what each of us values for ourselves may be relevant to morality
This seems to suggest that you're a moral realist. Is that correct? I think that most forms of moral realism tend to stem from some variant of the mind projection fallacy; in this case, because we value something, we treat it as though it has some objective value. Similarly, because we almost universally hold something to be immoral, we hold its immorality to be objective, or mind independent, when in fact it is not. The morality or immorality of an action has less to do with the action itself than with how our brains react to hearing about or seeing the action.
Taking this route, I would say that not only are our values relevant to morality, but the dynamic system comprising all of our individual value systems is an upper bound on what can be in the extensional definition of "morality" if "morality" is to make any sense as a term. That is, if something is outside of what any of us can ascribe value to, then it is not moral subject matter; furthermore, what we can and do ascribe value to is dictated by neurology.
Not only that, but there is a well-known phenomenon that complicates naive (i.e., without input from neuroscience) moral decision making: the distinction between liking and wanting. This distinction crops up in part because the way we evaluate possible alternatives is lossy: we can only use a very finite amount of computational power to try to predict the effects of a decision or of obtaining a goal, and we have to use heuristics to do so. In addition, human valuation is multi-layered: we have at least three valuation mechanisms, and their interaction isn't yet fully understood. See also Glimcher et al., "Neuroeconomics and the Study of Valuation". From that article:
10 years of work (that) established the existence of at least three interrelated subsystems in these brain areas that employ distinct mechanisms for learning and representing value and that interact to produce the valuations that guide choice (Dayan & Balliene, 2002; Balliene, Daw, & O’Doherty, 2008; Niv & Montague, 2008).
The mechanisms for choice valuation are complicated, and so are the constraints for human ability in decision making. In evaluating whether an action was moral, it's imperative to avoid making the criterion "too high for humanity".
One last thing I'd point out has to do with the argument you link to, because you do seem to be inconsistent when you say:
What we intuitively value for others is not.
Relevant to morality, that is. The reason is that the argument cited rests entirely on intuition for what others value. The hypothetical species in the example is not a human species, but a slightly different one.
I can easily imagine an individual from a species described along the lines of the author's hypothetical reading the following:
If it is good because it is loved by our genes, then anything that comes to be loved by the genes can become good. If humans, like lions, had a disposition to not eat their babies, or to behead their mates and eat them, or to attack neighboring tribes and tear their members to bits (all of which occurs in the natural kingdom), then these things would not be good. We could not brag that humans evolved a disposition to be moral because morality would be whatever humans evolved a disposition to do.
And being horrified at the thought of such a bizarre and morally bankrupt group. I strongly recommend you read the sequence I linked to in the quote if you haven't. It's quite an interesting (and relevant) short story.
So, I have a bit more to write but I'm short on time at the moment. I'd be interested to hear if there is anything you find particularly objectionable here though.
I am saying evolutionary morality as a whole is an invalid concept that is irrelevant to the subject of morality.
Actually, I can think of a minutely useful aspect of evolutionary morality: It tells us the evolutionary mechanism by which we got our current intuitions about morality is stupid because it is also the same mechanism that gave lions the intuition to (quoting the article I linked to) 'slaughter their step children, or to behead their mates and eat them, or to attack neighboring tribes and tear their members to bits (all of which occurs in the natural kingdom)'.
If the mechanism by which we got our intuitions about morality is stupid, then we learn that our intuitions are completely irrelevant to the subject of morality. We also learn that we should not waste our time studying such a stupid mechanism.
I initially wrote up a bit of a rant, but I just want to ask a question for clarification:
Do you think that evolutionary ethics is irrelevant because the neuroscience of ethics and neuroeconomics are much better candidates for understanding what humans value (and therefore for guiding our moral decisions)?
I'm worried that you don't because the argument you supplied can be augmented to apply there as well: just replace "genes" with "brains". If your answer is a resounding 'no', I have a lengthy response. :)
Since f(n)>>exp(g(n)), and the theory T proves in n symbols that "T can't prove a falsehood in f(n) symbols", we see that T proves in slightly more than n symbols that the program R won't find any proofs.
Could you elaborate on this? What is the slightly-more-than-n-symbols-long proof?
As I understand it, because T proves in n symbols that "T can't prove a falsehood in f(n) symbols", taking the specification of R (its program length) we could give a formal verification proof that R will not find any proofs, since R only finds a proof if T can prove a falsehood within g(n) < exp(g(n)) << f(n) symbols. So I'm guessing that the slightly-more-than-n-symbols-long proof is on the order of:
n + Length(proof in T that R won't print, taking as a premise the statement "T can't prove a falsehood in f(n) symbols")
This would vary some with the length of R and with the choice of T.
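To make the accounting above slightly more explicit, the bound can be written as a rough inequality. The constants c_R and c_0 below are my own assumptions, standing in for the fixed overhead of formalizing R's specification and the glue steps of the argument:

```latex
% T proves, in n symbols, the statement
% "T proves no falsehood in at most f(n) symbols".
% R only searches proofs of length at most g(n), and
% g(n) < exp(g(n)) << f(n), so any proof R found would
% witness a falsehood well within f(n) symbols.
\[
  \operatorname{len}\bigl(\text{proof in } T \text{ that } R \text{ finds nothing}\bigr)
  \;\le\; n + c_R + c_0 ,
\]
% where c_R depends on the length of R's specification and c_0 on the
% fixed deduction steps; both are independent of n, so the total length
% is n + O(1), i.e. "slightly more than n symbols".
```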
Typically you make a "sink" post with these sorts of polls.
ETA: BTW, I went for the paper. I tend to skim blogs and then skip to the comments. I think the comments make the information content on blogs much more powerful, however.
From Wikipedia:
Unfortunately, this description misses the point. The main existential risks come from the inside: over-optimistic projections, sunk-cost-based decisions, NIH-syndrome behavior, rotting corporate culture, etc.
I see your point here, although I will say that decision science is ideally a major component in the skill set for any person in a management position. That being said, what's being proposed in the article here seems to be distinct from what you're driving at.
Managing cognitive biases within an institution doesn't necessarily overlap with the sort of measures being discussed. A wide array of statistical tools and metrics isn't directly relevant to, e.g., battling the sunk-cost fallacy or NIH. More relevant to that problem set would be a strong knowledge of known biases and good training in decision science and psychology in general.
That isn't to say that these two approaches can't overlap; they likely could. For example, stronger statistical analysis does seem relevant in a very straightforward way to the issue of over-optimistic projections you bring up.
From what I gather, you'd want a CRO who has a complementary knowledge base in relevant areas of psychology alongside more standard risk analysis tools. I definitely agree with that.
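For what it's worth, the "probabilities and simulation" side of such a role can be sketched in miniature as a Monte Carlo loss model. Everything here is hypothetical (the function name, the incident probability, the lognormal loss assumption); it's just a toy illustration of the kind of tool meant, not a real risk model:

```python
import random


def simulate_annual_loss(n_trials=100_000, p_incident=0.05,
                         loss_mu=1.0, loss_sigma=0.8, seed=0):
    """Toy Monte Carlo risk model: expected annual loss and a tail quantile.

    All parameters are hypothetical placeholders: p_incident is the chance
    of an incident in a given year, and incident losses (in $M) are drawn
    from a lognormal distribution.
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(n_trials):
        if rng.random() < p_incident:
            losses.append(rng.lognormvariate(loss_mu, loss_sigma))
        else:
            losses.append(0.0)  # no incident this simulated year
    losses.sort()
    expected = sum(losses) / n_trials
    q99 = losses[int(0.99 * n_trials)]  # 99th-percentile loss (VaR-style)
    return expected, q99


expected, q99 = simulate_annual_loss()
print(f"expected annual loss: {expected:.2f}M; 99% quantile: {q99:.2f}M")
```

The point of the tail quantile is exactly the over-optimism issue above: a decision-maker shown only the expected loss will systematically underestimate how bad a plausible bad year is.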