TheAncientGeek comments on Moral Anti-Epistemology - Less Wrong

Post author: Lukas_Gloor 24 April 2015 03:30AM




Comment author: TheAncientGeek 01 May 2015 08:03:34AM 1 point

> only matter of concern is hedonic tone. Hence my confusion about what you meant.

I don't think that fixes the problem, so I didn't think that the distinction was worth making. We can't objectively measure subjective feelings, so aggregating them across species is guesswork.

> but at worst these questions require the stipulation of a finite number of tradeoff values.

That sounds like guesswork to me.

In addition, I would say it also fails for preference utilitarianism, because I would imagine that these problems arise from trying hard to find decision-criteria that cover all conceivable situations.

Inter-species aggregation comes in when you consider vegetarianism, vivisection, etc., which are uncontrived real-world issues.

I don't think deontology necessarily does a lot better -- I am actually a hybrid theorist -- but I don't think you are giving deontology a fair trial, in that you are not considering its most sophisticated arguments, or allowing it to guess its way out of problems.

Comment author: Lukas_Gloor 01 May 2015 09:19:17AM 0 points

> That sounds like guesswork to me.

If you care about suffering, you don't stop caring just because you learn that there are no objectively right numerical tradeoff-values attached to the neural correlates of consciousness. Things being "arbitrary" or "guesswork" just means that the answer you're looking for depends on your own intuitions and cognitive machinery. This is only problematic if you want to do something else, e.g. find a universally valid solution that all other minds would also agree with. I suspect that this isn't possible.

> I don't think deontology necessarily does a lot better -- I am actually a hybrid theorist -- but I don't think you are giving deontology a fair trial, in that you are not considering its most sophisticated arguments, or allowing it to guess its way out of problems.

I don't see how hybrid theorists would solve the problem of things being "guesswork" either. In fact, there are multiple layers of guesswork involved there: you first need to determine in which cases which theories apply and to what extent, and then you need to solve all the issues within a theory.

I still don't see any convincing objections to all the arguments I gave when I explained why I consider it likely that deontology is the result of moral rationalizing. The objection you gave about aggregation doesn't hold, because it applies to most or all moral views.

To give more support to my position: Joshua Greene has done a lot of interesting work that suggests that deontological judgments rely on system-1 thinking, whereas consequentialist judgments rely on system-2 thinking. In non-ethical contexts, these results would strongly suggest the presence of biases, especially if we consider situations where evolved heuristics are not goal-tracking.

Comment author: TheAncientGeek 02 May 2015 01:03:30PM 0 points

> To give more support to my position: Joshua Greene has done a lot of interesting work that suggests that deontological judgments rely on system-1 thinking, whereas consequentialist judgments rely on system-2 thinking. In non-ethical contexts, these results would strongly suggest the presence of biases, especially if we consider situations where evolved heuristics are not goal-tracking.

Biases are only unconditionally bad in the case of epistemic rationality, and ethics is about action in the world, not passively reflecting truth. To expand:

Rationality is (at least) two different things called by one name. Moreover, while there is only one epistemic rationality, the pursuit of objective truth, there are many instrumental rationalities aiming at different goals.

Biases are regarded as obstructions to rationality ... but which rationality? Any bias is a stumbling block to epistemic rationality ... but in what way would, for instance, egoistic bias be an impediment to the pursuit of selfish aims? The goal, in that case, is the bias, and the bias the goal. But egoistic bias is still a stumbling block to epistemic rationality, and to the pursuit of incompatible values, such as altruism.

That tells us two things: one is that what counts as a bias is relative, or context-dependent. The other -- in conjunction with the reasonable supposition that humans don't follow a single set of values all the time -- is where bias comes from.

If humans are a messy hack with multiple value systems, and a messy, leaky way of switching between them, then we would expect to see something like egoistic bias as a kind of hangover when switching to altruistic mode, and so on.

Comment author: Lukas_Gloor 02 May 2015 01:39:33PM 0 points

I think if you read all my comments here again, you will see enough qualifications in my points to suggest that I'm aware of and agree with the point you just made. My point, on top of that, is simply that people would often consider these things to be biases on reflection, after they learn more.

Comment author: TheAncientGeek 02 May 2015 02:57:41PM 0 points

My argument was that on reflection, not all biases are bad.