ChrisBillington


Did the survey!

Minor quibble:

Number of Current Partners (for example, 0 if you are single, 1 if you are in a monogamous relationship ...)

Seems like bad wording - what if you're in exactly one polyamorous relationship? Your partner could be seeing other people, and even if you're not seeing anyone else you wouldn't call it monogamous.

I read most of this post with a furrowed brow, wondering what you were getting at, until I got to the point on free will, which I think makes some sense.

If good choices are relative to states of knowledge and abilities, then how are not all choices good choices, given that these things are beyond our control?

I think, yes, in order to have the concept of 'good' and 'bad' choices in hindsight, one has to assume the person could have acted differently, even though in a very strict free-will sense, they couldn't have.

However, there are fundamental limits to how differently they could have acted; nobody can predict the outcome of a lottery, for example. So I suppose we draw the line at what reasonable expectations for a human being are. But we still make individual exceptions: if you were to find out someone had a cognitive disability, you wouldn't judge them as harshly for making a bad decision. This is different from saying it's not a bad decision (it is); it's just that you won't hold them responsible for it. It still should not be emulated, as Protagoras put it.

I'm also pretty convinced that large-scale random events are more often than not quantum random (that is, quantum randomness, though initially small in classical systems, is amplified by classical chaos such that different Everett branches get different lottery results and coin flips). So if you ask yourself "If I were in that person's position, should I have bought the lottery ticket?", well, the outcome genuinely wasn't predetermined. Not that I think any argument here should rely on the quantum vs classical randomness distinction, but I thought I'd mention it anyway.

But judging based on actual rather than expected results doesn't even seem like a coherent concept, so apart from the free-will angle, and pointing out that some people might have badly calculated expectations, I don't think it's an idea worth putting much thought into, and I suspect that those interpreting consequentialist ethics this way must be very confused people indeed.

I believe CFAR workshops address a lot of these issues, a huge focus of them being the interplay between high-level, logically thought-out cognition (system 2) and lower-level, intuitive thinking (system 1). One of the major points was that system 1 is actually very useful for providing information and making decisions, so long as you ask the question in the right way. Smart people, I think, tend to under-utilise system 1, often ignoring their gut feeling when it's providing useful information.

To use your fashion example, if I consider dressing up nicely, part of me says "This is all pointless, it's just signalling!" and wants me to squash the little voice telling me that actually, I value looking nice and should just give in to that desire, because there are real upsides and no downsides.

Again, I found that the CFARian way is to provisionally accept your terminal goals and gut feelings as legitimate, and to go about satisfying them rather than criticising them too much (criticising them being a job for epistemic rationality, CFAR being more about instrumental rationality).

I'm hesitant to link you to Julia Galef's "The Straw Vulcan" talk, since everything you're writing seems so in line with it that I suspect you've already seen it! But if you haven't, it's incredibly relevant to this topic.

Another thing I'm reminded of is the post Reason as a Memetic Disorder. Basically, sometimes there are good cultural practices whose logic smart people fail to see (because it's subtle, or because it's inconsistent with some false belief they have), and so they drop the practice, to their detriment. Less smart people keep doing it, since they're happy to simply conform without having reasons for everything.

I'm not saying that ordinary usages of the word "should" are statements of morality, rather the opposite: statements of morality can be translated into ordinary usage, and if they can't they probably aren't coherent statements.

"I am morally obliged to treat this person's injury" "Why?" "Because it would stop their suffering"

Perhaps we prefer to call it a moral statement when it's about other people's utility functions, rather than our own. Then again, we usually don't feel morally obliged to cater to others' preferences except to the extent that we have a preference of our own for their preferences to be satisfied, which, thankfully, most people do.

I think the two senses are really the same: if you accept consequentialist ethics, then the moral debt meaning can be translated as "If you want utility for person/group X, you should do B".

Whenever people use the word "should" in a sneaky way in a debate, I always find myself reminding them that it only has meaning with respect to some person's or group's preferences, and by glossing over exactly whose preferences we're talking about, people can get away with making bad arguments.