knb comments on Dissenting Views - Less Wrong
Overall I think my views are pretty orthodox for LW/OB. But (and this is just my own impression) it seems like the LW/OB community generally considers utilitarian values to be fundamentally rational. My own view is that our goal values are truly subjective, so there isn't a set of objectively rational goal values - although I personally prefer utilitarianism.
There probably is one for each individual, but none that is universal.
True, there are rational goals for each individual, but those depend on their own personal values. My point was that there doesn't seem to be one set of objective goal values that every mind can agree on.
Not all minds can have common goals, but every human - and every mind we choose to give life to - can.
Values aren't objective, but can well be said to be subjectively objective.
Um, the referenced article, The Psychological Unity of Humankind, isn't right. Humans vary considerably - from total vegetables up to Einstein. There are many ways for the human brain to malfunction as a result of developmental problems or pathologies.
Similarly, humans have many different goals - from Catholic priests to suicide bombers. That is partly a result of the influence of memetic brain infections. Humans may share similar genes, but their memes vary considerably - and both contribute a lot to the adult phenotype.
That brings me to a LessWrong problem. Sure, this is Eliezer's blog - but there seems to be much more uncritical parroting of his views among the commentators than is healthy.
And also many ways for human brains to develop differently, says the autistic woman who seems to be doing about as well at handling life as most people do.
Didn't we even have a post about this recently? Really, once you get past "maintain homeostasis", I'm pretty sure there's not a lot that can be said to be universal among all humans, if we each did what we personally most wanted to do. It just looks like there's more agreement than there is because of societal pressure on a large scale, and selection bias on an individual scale.
AdeleneDawner, I'm being off-topic for this thread, but have you posted on the intro thread?
I have now...
You don't take into account that people can be wrong about their own values, with the randomness in their activities failing to reflect the unity of their real values.
Yes, but why expect unity? Clearly there is psychological variation amongst humans, and I should think it a vastly improbable coincidence that none of it has anything to do with real values.
Well, of course I don't mean literal unity, but the examples that immediately jump to mind of different things about which people care (what Tim said) are not representative of their real values.
As for the thesis above, its motivation can be stated thus: if you can't be wrong, you can never get better.
How do you know what their real values are? Even after everyone's professed values get destroyed by the truth, it's not at all clear to me that we end up in roughly the same place. Intellectuals like you or I might aspire to growing up to be a superintelligence, while others seem to care more about pleasure. By what standard are we right and they wrong?

Configuration space is vast: however much humans might agree with each other on questions of value compared to an arbitrary mind (clustered as we are into a tiny dot of the space of all possible minds), we still disagree widely on all sorts of narrower questions (if you zoom in on the tiny dot, it becomes a vast globe, throughout which we are widely dispersed). And this applies on multiple scales: I might agree with you or Eliezer far more than I would with an arbitrary human (clustered as we are into a tiny dot of the space of human beliefs and values), but ask a still narrower question, and you'll see disagreement again. I just don't see how the granting of veridical knowledge is going to wipe away all this difference into triviality.

Some might argue that while we can want all sorts of different things for ourselves, we might be able to agree on some meta-level principles about what we want to do: we could agree to have a diverse society. But this doesn't seem likely to me either; that kind of type distinction doesn't seem to be built into human values. What could possibly force that kind of convergence?
Okay, I'm writing this one down.
Your conclusion may be right, but the HedWeb isn't strong evidence -- as far as I recall, David Pearce holds a philosophically flawed belief called "psychological hedonism", which says that pleasure and pain are all that motivate humans, and therefore nothing else matters, or some such. So I would say that his moral system has not yet had to withstand a razing attempt from all the truth hordes that are out there roaming the Steppes of Fact.
If "the thesis above" is the unity of values, this is not an argument. (I agree with ZM.)
It's an argument for its being possible that behavior isn't representative of actual values. Whether actual values are more unified than behaviors suggest is a separate issue.
Human values are frequently in conflict with each other - which is the main explanation for all the fighting and wars in human history.
The explanation for this is pretty obvious: humans are close relatives of animals whose main role in life has typically been ensuring the survival and reproduction of their genes.
Unfortunately, everyone behaves as though they want to maximise the representation of their own genome - and such values conflict with the values of practically every other human on the planet, except perhaps for a few close relatives - which explains cooperation within families.
This doesn't seem particularly complicated to me. What exactly is the problem?
Are you suggesting that you still think that the cited material is correct?!?
The supporting genetic argument is wrong as well. I explain in more detail here:
http://alife.co.uk/essays/species_unity/
As far as I can tell, it is based on a whole bunch of wishful thinking intended to make the idea of Extrapolated Volition seem more plausible, by minimising claims that there will be goal conflicts between living humans. With a healthy dose of "everyone's equal" political correctness mixed in for the associated warm fuzzy feelings.
All fun stuff - but marketing, not science.
I recommend making this a top-level post, but expand a little more on the implications of your view versus Eliezer's and C&T's. This could be done in a follow-up post.
It would be great if you could expand on this.
Some people tend to value things that people happen to have in common; others are more likely to value things in which people differ.
You may be right. If so, fixing it requires greater specificity. If you have time to write top-level posts that would be great. Regardless, I value the contributions you make in the comments.
I contend otherwise. The utilitarian model comes down to a subjective utility calculation which is currently impossible to perform (I use the word "impossible" realizing the extremity of the word). This can be explicated further elsewhere, but without an unbiased consciousness - one which does not fall prey to random changes of desires, misinterpretations, or miscalculations (in other words, the AI we wish to build) - there cannot be a reasonable calculation of utility that accurately models a basket of preferences. As a result, it is neither a reasonable nor a reliable method for determining outcomes or understanding individual goals.
True, there may be instances in which a crude utilitarian metric can be devised that accurately represents reality at one point in time. However, the consequentialist argument seems to assume that the accumulated outcome of any specific action chosen through consequentialist reasoning will align reasonably, if not perfectly, with the predicted outcome. This is how utilitarianism fails epistemologically: the outcomes are impossible to predict. Exogeny, anyone? (A toy sketch of this fragility follows below.)
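To make the fragility concrete, here is a minimal sketch in Python (all payoffs and probabilities are invented for illustration; this is nobody's actual decision procedure) showing how a small error in estimated outcome probabilities can flip which action an expected-utility calculation recommends:

```python
# Toy example: an expected-utility calculation is only as good as its
# probability estimates. All numbers here are invented for illustration.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Action A: a modest, well-understood payoff.
action_a = [(0.9, 10), (0.1, -5)]

# Action B: a complex outcome whose probabilities we can only estimate.
action_b_estimated = [(0.6, 30), (0.4, -20)]

# The same action with "true" probabilities shifted by just 0.2.
action_b_actual = [(0.4, 30), (0.6, -20)]

print(expected_utility(action_a))            # 8.5
print(expected_utility(action_b_estimated))  # 10.0 -> B looks better...
print(expected_utility(action_b_actual))     # 0.0  -> ...but A actually was
```

The point is not the particular numbers, but that the ranking of actions is hostage to probability estimates we have no reliable way of obtaining.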
In fact, what seems to hold truest to form for setting long-term goals and short-term actions is the virtue ethics which Aristotle so eloquently explicated. This is how, in my view, people come to their correct conclusions while falsely attributing their positive outcomes to other frameworks such as utilitarianism - e.g. someone thinking, "The outcomes of this particular decision will be to my net benefit in the long run, because this will lead to this, etc." To be sure, a utilitarian calculation could agree with the virtue of the decision if the known variables are finite and the exogenous variables are by and large irrelevant. However, it seems to me that when the variables are complicated past currently available calculations, understanding the virtue behind an action or behavior, or the virtues indigenous to the actor, will yield better long-term results.
It is odd, because objective Bayesian probability is rooted in Aristotelian logic, which is predicated on virtue ethics; and since Eliezer seems to be very focused on Bayesian probability, that would seem to conflict with consequentialist utilitarianism.
However, I may be reading the whole thing wrong.
ED: If there is significant disagreement, please explicate, so I can see where my reasoning is unclear or believed to be flawed.
Whether a given process is computationally feasible or not has no bearing on whether it's morally right. If you can't do the right thing (whether due to computational constraints or any other reason), that's no excuse to go pursue a completely different goal instead. Rather, you just have to find the closest approximation of right that you can.
If it turns out that e.g. virtue ethics produce consistently better consequences than direct attempts at expected utility maximization, then that very fact is a consequentialist reason to use virtue ethics for your object-level decisions (a toy simulation of this point follows below). But a consequentialist would do so knowing that it's just an approximation, and be willing to switch if a superior heuristic ever shows up.
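For what it's worth, here is a minimal simulation of that point (the noise model, payoffs, and action names are all invented; a sketch under toy assumptions, not a claim about real decision problems):

```python
# A minimal sketch (Gaussian noise and all payoffs invented) of the
# consequentialist case for rule-following: when per-case utility
# estimates are noisy, a fixed heuristic can beat naive case-by-case
# expected-utility maximization.
import random

random.seed(0)
TRUE_UTILITY = {"safe": 1.0, "risky": 0.0}  # the risky act is actually worse
NOISE = 3.0                                 # how badly each case is misjudged

def noisy_estimate(action):
    """Our fallible per-case estimate of an action's utility."""
    return TRUE_UTILITY[action] + random.gauss(0, NOISE)

trials = 10_000

# Strategy 1: recompute the estimate for every case and trust it.
calculate_each_time = sum(
    TRUE_UTILITY[max(("safe", "risky"), key=noisy_estimate)]
    for _ in range(trials)
) / trials

# Strategy 2: follow the fixed rule "always act safely", ignoring estimates.
follow_the_rule = sum(TRUE_UTILITY["safe"] for _ in range(trials)) / trials

print(calculate_each_time)  # roughly 0.6: noise often picks the wrong action
print(follow_the_rule)      # 1.0: here, the heuristic wins
```

And, per the comment above, a consequentialist who ran this comparison and saw the rule losing would switch back to case-by-case calculation - the heuristic is kept only as long as it wins.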
See Two-Tier Rationalism for more discussion, and Ethical Injunctions for why you might want to do a little of this even if you can directly compute expected utility.
Just because Aristotle founded formal logic doesn't mean he was right about ethics too, any more than about physics.
This assumes that we know which track the right thing to do is on. You cannot approximate if you do not even know what it is you are trying to approximate.
You can infer, or stipulate, that maximizing happiness is what you are trying to approximate; however, that may not in fact be the right thing.
I am familiar with two-tier rationalism and all the other consequentialist philosophies. All must eventually boil down to a utility calculation or an appeal to virtue - as the second tier does. One problem with the Two-Tier solution as it is presented is that its solutions to the consequentialist problems are based on vague terms:
Ok, WHICH moral principles, and based on what? How are we to know the right action in any particular situation?
Or on virtue:
I do take issue with Alicorn's definition of virtue-busting, as it relegates virtue to mere patterns of behavior.
Therefore, in order to be a consequentialist, you must first answer "What consequence is right/correct/just?" The answer to that is the correct philosophy, not simply how you got to it.
Consequentialism, then, may be the best guide to virtue, but it cannot stand on its own without an ideal. That ideal, in my mind, is best represented as virtue. Virtue ethics, then, supplies the values to which there may be many routes - and consequentialism may be the best of them.
ED: Seriously, people, if you are going to downvote my reply, then explain why.