I just wanted to say thank you for including the links to the TED talk and other actionable info (i.e., which plants to buy and how many per person). I have a tendency to see things like the main post, think "oh, that's interesting," and then never really follow up on them, but knowing that I have a list of which plants to buy was enough additional motivation to make me take the issue more seriously. I intend to do a bit more research and get an air quality monitor in the next few days.
Since you mentioned other plants, I am wondering if t...
I think this is an excellent summary. Having read John L. Mackie's free will argument and Plantinga's transworld depravity free will defense, I think that a theodicy based on free will won't be successful. Trying to define free will such that God can't use his foreknowledge to ensure that everyone will act in a morally good way leads to some very odd definitions of free will that don't seem valuable at all.
You're right about the cost per averted headache, but we aren't trying to minimize the cost per averted headache; otherwise we wouldn't use any drug at all. We're trying to maximize utility. Unless avoiding several hours of a migraine is worth less to you than $5 (and a basic calculation using minimum wage indicates it is not, even excluding the unpleasantness of migraines -- as someone who gets migraines occasionally, I'd gladly pay a great deal more than $5 to avoid them), you should get Drug A.
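The minimum-wage calculation can be sketched in a few lines. The numbers below beyond the $5 figure are my own assumptions for illustration: I value time at the US federal minimum wage of $7.25/hour and assume the migraine would last 4 hours.

```python
# Rough expected-utility comparison, using assumed numbers:
# time valued at the US federal minimum wage, and an assumed
# 4-hour migraine, ignoring its sheer unpleasantness entirely.
hourly_value = 7.25     # $/hour -- a deliberately low valuation of time
migraine_hours = 4      # assumed duration of the averted migraine
drug_a_extra_cost = 5.00  # extra cost of Drug A per averted headache

benefit = hourly_value * migraine_hours   # dollar value of time recovered
net_gain = benefit - drug_a_extra_cost    # positive => Drug A is worth it

print(f"benefit = ${benefit:.2f}, net gain = ${net_gain:.2f}")
```

Even on these conservative assumptions the recovered time is worth $29.00, so paying the extra $5 leaves a $24.00 net gain, before counting the suffering avoided.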
I largely agree with this answer. My view is that reductionist materialism implies that names are just a convenient way of discussing similar things; there isn't anything that inherently makes an object what we label a "car," it's just an object made up of atoms that pattern-matches what we term a "car." I suppose that likely makes me lean toward nominalism, but I find the overall debate generally confused.
I've taken several philosophy courses, and I'm always astonished by the absence of agreement or justification that either side can posit...
Took the survey. It was quite interesting! I'll be curious to see what the results look like . . . .
You could make it an explicit "either . . . or." I.e. "I think that people who are not made happier by having things either have the wrong things or have them incorrectly."
I agree. For those familiar with RationalWiki, I thought it provided a nice contrasting example. Eliezer's definition of rationality is (regrettably, in my opinion) rare in general usage (insofar as I encounter people using the term), and I think the example is worthwhile for illustrative purposes.
But how do you know if someone wanted to upvote your post for cleverness, but didn't want to express the message that they were mugged successfully? Upvoting creates conflicting messages for that specific comment.
How are you defining morality? If we use a shorthand definition that morality is a system that guides proper human action, then any "true moral dilemmas" would be a critique of whatever moral system failed to provide an answer, not proof that "true moral dilemmas" existed.
We have to make some choice. If a moral system stops giving us any useful guidance when faced with sufficiently difficult problems, that simply indicates a problem with the moral system.
ETA: For example, if I have a completely strict sense of ethics based upon deontology...
(Double-post, sorry)
That's certainly a fair point.
I suppose it's primarily important to know what your own inclinations are (and how they differ in different areas) and then try to adjust accordingly.
I think that quote is much too broad with the modifier "might." If you should procrastinate based on a possibility of improved odds, I doubt you would ever do anything. At least a reasonable degree of probability should be required.
Not to mention that the natural inclination of most people toward procrastination means that they should be distrustful of feelings that delaying will be beneficial; it's entirely likely that they are misjudging how likely the improvement really is.
That's not, of course, to say that we should always do everything as soon as possible, but I think that to the extent that we read the plain meaning from this quote, it's significantly over-broad and not particularly helpful.
Systems that don't require people to work are only beneficial if non-human work (or human work not motivated by need) is still producing enough goods that the humans are better off not working and being able to spend their time in other ways. I don't think we're even close to that point. I can imagine societies in a hundred years that are at that point (I have no idea whether they'll happen or not), but it would be foolish for them to condemn our lack of such a system now since we don't have the ability to support it, just as it would be foolish for us to...
Well, if that were the position, then it wouldn't be any more immoral not to help an unconscious person than not to help a broken swing. That seems fairly problematic, so I doubt it's a successful solution.
I have taken the survey.