Lately I'd gotten jaded enough that I simply accepted that different rules apply to the elite class. As Hanson would say, most rules exist specifically to curtail those who lack the ability to avoid them, and to be sidestepped by those who do - it's why we evolved such big, manipulative brains. So when this video recently made the rounds, it shocked me to realize how far my values had drifted over the past several years.

(The video is not about politics; it is about status. My politics are far from Penn's.)
http://www.youtube.com/watch?v=wWWOJGYZYpk&feature=share

 

It's good we have people like Penn around to remind us what it was like to be teenagers and still expect the world to be fair, so our brains can be used for more productive things.

By the measure our society currently uses, Obama was winning. Penn was not. Yet Penn’s approach is the winning strategy for society. Brain power is wasted on status games and social manipulation when it could be used for actually making things better. The machinations of the elite class are a huge drain of resources that could be better used in almost any other pursuit. And yet the elites are admired, high-status individuals who are viewed as “winning” at life. They sit atop huge piles of utility. Idealists like Penn are regarded as immature for insisting on things as low-status as “the rules should be fair and apply identically to everyone, from the inner-city crack dealer to the Harvard post-grad.”

The “Rationalists Should Win” meme is a good one, but it risks corrupting our goals. If we focus too much on “Rationalists Should Win” we risk going for near-term gains that benefit us: status, wealth, power, sex. Basically hedonism – things that feel good because we’ve evolved to feel good when we get them. Thus we feel we are winning, and we’re even told we are winning by our peers and by society. But these things aren’t of any use to society. A society of such “rationalists” would make only feeble and halting progress toward grasping the dream of defeating death and colonizing the stars.

It is important to not let one’s concept of “winning” be corrupted by Azathoth.

 

ADDED 5/23:

It seems the majority of comments on this post come from people who disagree on the basis that rationality is a tool for achieving ends, but not for telling you which ends are worth achieving.

 

I disagree. As is written, "The Choice between Good and Bad is not a matter of saying 'Good!' It is about deciding which is which." And rationality can help decide which is which. In fact, without rationality you are much more likely to be partially or fully mistaken when you decide.

 


Azathoth should probably link here. I think using our jargon is fine, but links to the source help keep it discoverable for newcomers.


"You win the game you are playing"

Play the right game.

Which game is that?


I don't know. That's your problem.

It seems the OP thinks that the right game for the group as a whole and the right game for the individuals within that group are different. So if it's up to the individual which game to play, they will play the one that benefits them and the group will lose.


Humans aren't purely selfish. If we all play our individual game, the group will do just fine. As evidenced by the fact that we are even talking about the group as if it matters.

Even with selfish agents, the best strategy is to cooperate under certain conditions - ours included.

In game theory, whether social or evolutionary, a stable outcome usually (I'm tempted to say almost always) includes some level of cheaters/defectors.

That's not really a good answer, so I downvoted.

The right game for you will be dependent on your utility function, no?

Not just that, or else we'd say that defectors in PD are winning.

I don't understand the relevance of your comment; could you explain? (Expected payout for all agents in PD increases if they can find a way to cooperate AFAIK, even if all are completely selfish.)

Expected payout for one agent increases even more if they can convince everyone else to cooperate while they defect. This is the game you want to keep the other agents from playing, and while TDT works when all the other agents use a similar decision strategy, it fails in situations where they don't. Which is exactly the problem Eneasz was getting at.
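For readers who want the payoffs spelled out, here is a minimal sketch of the standard Prisoner's Dilemma with the usual textbook numbers (my illustration, not something from the thread):

```python
# Standard Prisoner's Dilemma payoffs (hypothetical textbook numbers).
# payoffs[(my_move, their_move)] = (my_payoff, their_payoff)
payoffs = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, they defect: I'm exploited
    ("D", "C"): (5, 0),  # I defect, they cooperate: I exploit
    ("D", "D"): (1, 1),  # mutual defection
}

# Defection dominates for an individual: whatever the other player does,
# defecting pays me more than cooperating...
for their_move in ("C", "D"):
    assert payoffs[("D", their_move)][0] > payoffs[("C", their_move)][0]

# ...yet both players do better under mutual cooperation than mutual defection,
# which is why calling the lone defector the "winner" is the whole problem.
assert payoffs[("C", "C")][0] > payoffs[("D", "D")][0]
```

The exploitative game described above is the (D, C) cell: get everyone else into the cooperate column while you play the defect row.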


defect-defect is not a win by anyone's utility function. What are you getting at?


Fair enough. It is the correct-but-nonuseful lazy answer.

Could you please include a summary of what goes on in the video? That would make it easier for those of us who can read but not listen (noisy room and all that).

Penn Jillette comments at length and with great anger that Obama nonchalantly talked about drug use in college and yet continues to enforce federal drug laws. Penn rants that Obama wouldn't be nearly so nonchalant about it if he were treated like those of the lower classes who would have served jail time for the same actions and been left with a permanent record that would make them nearly unemployable and certainly never able to go into politics.

That's an interesting selection effect - only those who have never been targeted by the War on Drugs, or who escaped it without a scratch, will be elected to high office.

Obama's drug use happened in high school, not college. At the time he was living with his grandparents in Hawaii and attending Punahou School, an elite private school, on scholarship.

It's even worse in Russia. Like most things. (Not that I'm unpatriotic.)

One's concept of "winning" comes from Azathoth. There's no avoiding that. However, not everything from Azathoth is a bad thing, so I doubt that your real objection is that we follow an evolved morality. Is this an accurate interpretation of what you're saying?

"Often, the 'rational' choice for agents is to defect in Prisoner's Dilemma-type situations. Those that do are rewarded, but their actions are a net negative for society. Despite this, we say that those who do antisocial things are winning, while those that do prosocial things are not winning. Shouldn't we reward those who sacrifice their own personal goals to help the group?"

This is why I have much lower expectations for individual rationality than for group rationality. Sometimes strong individual rationality in a low-rationality population can't really do better than implementing solutions like defecting in a tragedy of the commons that a more rational group could have avoided in the first place.

I'm confused about your opening statement regarding 'different rules apply to the elite class.' Drug usage is not limited to the upper class, nor is admitting that you've used drugs. Obama could hardly have been called elite when he was using drugs, and barely even when he was writing that book. My friends and acquaintances are equally open about their past drug use.

To put it more succinctly, he was treated the same way most lower class drug users are. They receive no punishment and eventually grow up and do fine in life.

Your paragraph on 'Obama Winning, Penn is not' is similarly confusing. Obama is the President of the USA and presumably sitting on the hugest pile of utility on earth, but Penn Jillette is exceedingly rich sitting atop an estimated $175 million net worth. By my estimation, both are winning.

he was treated the same way most lower class drug users are. They receive no punishment and eventually grow up and do fine in life.

1) I think the OP knows that, and maybe what he's saying is more like: it isn't that people don't care about drug use, it's that they like their tribal leaders to be "effective" rule-breakers. An Obama who never did drugs might be less popular and less cool.

2) I assume you're saying that 'treated the same way' means not caught. Most poor and rich escape being caught, but that is very different than equal treatment once caught.

1 - yes exactly. Thank you.

2 - also in agreement. In the video Penn mentions a couple times that if Obama had been caught he'd be screwed, which is absolutely laughable. He would have been let off with a wink and a nod, due to his elite status. But I didn't want to side-track the post.

but Penn Jillette is exceedingly rich sitting atop an estimated $175 million net worth. By my estimation, both are winning.

In general perhaps, but in this particular case I don't think so. Penn's vision for a fair society is being frustrated by the old boys' club, without them even putting much effort into it. Obama is being rewarded for his place in the game; Penn is being handicapped by his (and fortunately has enough resources that the handicap is more than tolerable).

This seems like an overbroad definition of "winning"...

Let "winning" be "increasing one's own utility function". Or "achieving one's goals".

Utility functions can wildly differ (think psychopath vs saint), giving the appearance of an overly broad definition for "winning". But I think that's a proper one.

Well, okay, I just mean... the biggest factor in fulfilling that particular part of their utility functions is what utility function they have, not any particular ability or choice on their part.


I ask the OP: Have you read the meta-ethics sequence? It answers many of your questions.

Other than that, your title is misleading and your post can be summarized as "society sucks because humans are stupid." If you care, look into raising the sanity waterline.

  1. Yes (altho it's been a long time now, could probably reread to refresh)
  2. Yeah, I'm trying. A lot of the point of the original post was "Let's not get hung up on winning, and rather let's get society to be more sane, cuz winning in an insane society is not something I'd consider "winning" at all"

You seem to be confused as to what 'winning' means. Because it literally only means 'achieving arbitrary goal X' and rational action furthers that goal in the most effective way possible.

I do however agree that society's current goal set is morally wrong, and that, as I see it, winning is doing what is right.

"Winning" means achieving your goals. It doesn't mean optimizing society unless optimizing society fits into your utility function. If you value hedonism above all, then you win by experiencing pleasure. Its generally agreed here that hedonism alone is insufficient for achieving happiness. You say that resources are wasted on status games instead of making things "better". What does "better" mean? "Better" for who? Rationalism is NOT about holding onto moral obligations. It is about eliminating cognitive flaws so as to improve your ability as an agent to achieve your goals. What you are essentially arguing here is that certain values are somehow inferior to others because they do not correspond to your sense of ethics.

I can feel this post triggering a little BlueVsGreen thinking. Instead I'm going to attempt to stay Bruce Banner and simply ask for clarification, but if my comments appear frustrated/insulting, please forgive me.

Can someone, OP or otherwise, explain to me directly the connection being made between Penn's rant and rationalists loving hedonism? Even if I accept each assertion, the materials don't construct a train track my brain can travel:

  • What does Penn's rant have to do with the nature of the goals we choose and should choose?
  • Winning short-term goals can be destructive to long-term goals; I got it. Again, though, I'm not seeing the connection to the prior paragraphs. The seduction of the short term, even wanting seemingly human-length (years) goals instead of generations-long ones. Got it. Important topic. However, again: how does this relate to the previous paragraphs on Obama/Penn/society/elites/etc.?

My inklings-

  • There is a lot of talk about Penn - or is this a hidden discussion of high-utility Obama and HIS hedonistic behavior? Not accusing, but when I supported a color team, I found it difficult to directly associate faults with the team leader.
  • Am I over-analyzing due to repeated pattern-exposure/anchoring to difficult not-how-homo-sapiens-were-evolved-to-think bias Articles of Truth+3? Should this instead be taken as a loose interior monologue explaining how one event (the video) sparked a series of associations to bring up a topic worthy of further discussion (the devilish attraction of short-term/winnable goals)?

Again, the ending topic is very worthy of discussion, but I'm not seeing how it fits together.

Edit: fixed link error with a bigger error, then fixed again.

The high-status elites Eneasz refers to are rewarded by society with praise, respect, worship, etc. for playing the game in near mode, focusing mainly on maintaining their high-status profiles with few ulterior motives (at least, few that have a high probability of creating net world-wide utility). Such would be the same feedback loop for near-mode winning rationalist hedons.

That's how I understood the transition, anyhow.

I agree the danger is certainly worth considering, and think we should remember Machiavelli's position on the role of princes: the Prince's duty is to attain power and maintain it, by whatever means necessary, to ensure the benefit of his people (paraphrased).

Id est, the ends justify the means, but only so long as the ends benefit the people; purely status oriented games only benefit those who play them.

Yes to the "loose interior monologue" bit. The video sparked a question about what society considers to be "winning" and how the meme of winning rationalists relates to that. And in the future you may want to include your inklings right after you mention your suspicions of blue/green thinking, I almost downvoted you because I thought you were just another complainer. Upvoted because I found your first inkling interesting and not obviously wrong.

Rationality doesn't tell you what to care about. It can tell you how to be the best paperclip-maximiser equally well. "Winning" depends on what you care about; and most of us do care about the fate of society, not just maximising our own wealth and status. So being a hedonist wouldn't necessarily be "winning".

What is a problem is forgetting that you care about long-term things, or far away people. So we certainly do need to be on guard against short-termism, and if "winning" connotes focussing on short-term benefit to you, then perhaps that's an argument to stop using the word. But it's not a deep problem.

"Winning" refers to making progress towards whatever goals you set for yourself. Rationality can help you achieve your goals but - unless you'res suffering from akrasia - offers little guidance in figuring out what goals you should have.

It's a rule of epistemic rationality that, all other things being equal, one should adopt simpler theories. Why shouldn't this also extend to practical rationality, and to the determination of our goals in particular? If our ultimate values involve arbitrary and ad hoc distinctions, then they are irrational. Consider, for instance, Parfit's example of Future Tuesday Indifference:

A certain hedonist cares greatly about the quality of his future experiences. With one exception, he cares equally about all the parts of his future. The exception is that he has Future-Tuesday-Indifference. Throughout every Tuesday he cares in the normal way about what is happening to him. But he never cares about possible pains or pleasures on a future Tuesday.

I think that any account of practical rationality that does not rule Future Tuesday Indifference an irrational ultimate goal is incomplete. Consider also Eliezer's argument in Transhumanism as Simplified Humanism.

Of course, this doesn't apply directly to the point raised by Eneasz, since the distinction between values he is talking about can't obviously be cashed out in terms of simplicity. But I think there's good reason to reject the general Humean principle that our ultimate values are not open to rational criticism (except perhaps on grounds of inconsistency), and once that is allowed, positions like the one held by Eneasz are not obviously wrong.

Having a high-quality experience at all times other than Tuesdays seems to be a strange goal, but one that a person could coherently optimize for (given a suitable meaning of "high-quality experience"). The problem with Future Tuesday Indifference is that at different times, the person places different values on the same experience on the same Tuesday.

Yeah, I see that Future Tuesday Indifference is a bad example. Not precisely for the reason you give, though, because that would also entail that any discounting of future goods is irrational, and that doesn't seem right. But Future Tuesday Indifference would involve the sort of preference switching you see with hyperbolic discounting, which is more obviously irrational and might be confounding intuitions in this case.
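For concreteness, a toy illustration of that kind of preference reversal (the discount function and all numbers here are my own, purely hypothetical):

```python
# Hyperbolic discounting: value = amount / (1 + k * delay).
def hyperbolic_value(amount, delay_days, k=1.0):
    return amount / (1.0 + k * delay_days)

small, large = 10.0, 15.0   # smaller-sooner vs. larger-later reward
gap = 5.0                   # the larger reward arrives 5 days after the smaller

# Viewed from far away, the larger-later reward is preferred...
assert hyperbolic_value(large, 10.0 + gap) > hyperbolic_value(small, 10.0)

# ...but once the smaller reward is imminent, the preference flips.
assert hyperbolic_value(small, 1.0) > hyperbolic_value(large, 1.0 + gap)
```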

So here's a better example: a person only assigns value to the lives of people who were born within a five-mile radius of the Leaning Tower of Pisa. This is an ultimate value, not an instrumental one. There's no obvious incoherence involved here. A person could coherently optimize for this goal. But my point is that this does not exhaust our avenues for rational criticism of goals. The fact that this person has an ultimate value that relies on such a highly specific and arbitrary distinction is grounds for criticism, just as it would be if the person adopted a scientific theory which (despite being empirically adequate) postulated such a distinction.

Not precisely for the reason you give, though, because that would also entail that any discounting of future goods is irrational, and that doesn't seem right.

Discounting of future goods does not involve assigning different values to the same goods at the same time.

So here's a better example: a person only assigns value to the lives of people who were born within a five-mile radius of the Leaning Tower of Pisa. This is an ultimate value, not an instrumental one. There's no obvious incoherence involved here. A person could coherently optimize for this goal. But my point is that this does not exhaust our avenues for rational criticism of goals. The fact that this person has an ultimate value that relies on such a highly specific and arbitrary distinction is grounds for criticism, just as it would be if the person adopted a scientific theory which (despite being empirically adequate) postulated such a distinction.

I would not criticize this goal for being "irrational", though I would oppose it because it conflicts with my own goals. My opposition is not because it is arbitrary, I am perfectly happy with arbitrariness in goal systems that aligns with my own goals.

Discounting of future goods does not involve assigning different values to the same goods at the same time.

The qualifier "at the same time" is ambiguous here.

If you mean that different values are assigned at the same time, so that the agent has conflicting utilities for a goal at a single time, then you're right that discounting does not involve this. But neither does Future Tuesday Indifference, so I don't see the relevance.

If "at the same time" is meant to modify "the same goods", so that what you're saying is that discounting does not involve assigning different values to "good-g-at-time-t", then this is false. Depending on the time at which the valuation is made, discounting entails that different values will be assigned to "good-g-at-time-t".

[This comment is no longer endorsed by its author]

If "at the same time" is meant to modify "the same goods", so that what you're saying is that discounting does not involve assigning different values to "good-g-at-time-t", then this is false. Depending on the time at which the valuation is made, discounting entails that different values will be assigned to "good-g-at-time-t".

Suppose an agent with exponential time discounting assigns goods G at a time T a utility of U0(G)*exp(a*(T0-T)). Then that is the utility the agent at any time assigns those goods at that time. You may be thinking that the agent at time TA assigns a utility to the goods G at the same time T of U0(G)*exp(a*(TA-T)), and thus the agent at different times is assigning different utilities, but these utility functions differ only by the factor exp(a*(TA-T0)), which is constant over states of the universe and, being a positive affine transformation, doesn't matter. The discounting agent's equivalence class of utility functions representing its values really is constant over the agent's subjective time.
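A small sketch of that point with made-up numbers (I write the discount factor as exp(-a*(T - T_now)) with a > 0, which is the same thing up to the sign convention on a):

```python
import math

# Under exponential discounting, the valuations an agent makes at two different
# times differ only by one positive constant factor, so they induce the same
# preference ordering over goods.

def valuation(base_utility, time_of_good, time_of_valuation, a=0.1):
    # Utility assigned at time_of_valuation to a good enjoyed at time_of_good.
    return base_utility * math.exp(-a * (time_of_good - time_of_valuation))

goods = {"g1": 10.0, "g2": 7.0, "g3": 12.0}   # hypothetical base utilities U0(G)
time_of_good = 20.0                            # all enjoyed at the same future time

vals_at_0 = {g: valuation(u, time_of_good, 0.0) for g, u in goods.items()}
vals_at_5 = {g: valuation(u, time_of_good, 5.0) for g, u in goods.items()}

# Every ratio equals exp(a * 5): constant across goods, so the ranking never changes.
for g in goods:
    assert abs(vals_at_5[g] / vals_at_0[g] - math.exp(0.1 * 5.0)) < 1e-9
```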

Ah, I see. You're right. Comment retracted.

It's my contention that rationality should offer guidance in figuring out what goals you should have. A rationalist society will have goals closer to "defeat death and grasp the stars" than "gain ALL the status". It's not just rationalists who should win, it's rational societies who should win. If you're in a society that is insane then you may not be able to "win" as a rationalist. In that case your goal should not be "winning" in the social-traditional sense, it should be making society sane.

You're privileging your values when you judge which society - the status-game players versus the immortal starfarers - is "winning".

I don't think that that's a bad thing. The immortal starfarers necessarily go somewhere; the status game players don't necessarily go anywhere. Hence "winning". The point of the post was to warn that not only answering our questions but figuring out which questions we should ask is an issue we have to tackle. We have to figure out what winning should be.

The reason that the immortal starfarers are better is that they're trying to do that, so if all values aren't created equally, they're more likely to find out about it.

The immortal starfarers necessarily go somewhere; the status game players don't necessarily go anywhere. Hence "winning".

Deciding that going somewhere is "winning" comes from your existing utility function. Another person could judge that the civilization with the most rich and complex social hierarchy "wins".

Rationality can help you search the space of actions, policies, and outcomes for those which produce the highest value for you. It cannot help you pass objective judgment on your values, or discover "better" ones.

I think that's almost completely wrong. Being human offers guidance in figuring out what goals and values we should have. If the values of the society would be seen as insane by us, a rationalist will still be more likely to win over more of those societies than average.

If the values of the society would be seen as insane by us, a rationalist will still be more likely to win over more of those societies than average.

I suspect that, if rigorously formulated, this claim will run afoul of something like the No Free Lunch Theorem.

Can you explain this suspicion? I'm not saying that "Rationalists always win": I am saying that they win more often than average.

Say you are in society X, which maximizes potential values [1, 2, 7] through mechanism P and minimizes potential values [4, 9, 13] through mechanism Q.

A rationalist (A) who values [1, 4, 9] will likely not do as well as a random agent (B) that values [1, 2, 7] under X, because the rationalist will only get limited help from P while having to counteract Q, while the other agent (rationalist or not) will receive full benefit from P and no harm from Q. So it's trivially true that a rationalist does not always do better than other agents: sometimes the game is set against them.

A rationalist (A) will do better than a non-rationalist (C) with values [1, 4, 9] if having an accurate perception of P allows you to maximize P for 1 or having an accurate perception of Q allows you to minimize Q for [4, 9]. In the world we live in, at least, this usually proves true.

But A will also do better than B in any society that isn't X, unless B is also a rationalist. They will have a more accurate perception of the reality of the society they are in and thus be better able to maximize the mechanisms that aid their values while minimizing the mechanisms that countermand them.

That's what I meant by "more likely to win over more of those societies than average."
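A toy numeric version of this argument, with every weight and value set invented purely for illustration:

```python
# Society X promotes some values via mechanism P and suppresses others via Q.
# A "rationalist" is modeled as someone whose accurate picture of P and Q lets
# them capture more of the help and dodge more of the harm.

SOCIETY_PROMOTES = {1, 2, 7}     # values boosted by mechanism P
SOCIETY_SUPPRESSES = {4, 9, 13}  # values suppressed by mechanism Q

def payoff(values, rationalist):
    gain = len(values & SOCIETY_PROMOTES) * (1.0 if rationalist else 0.5)
    loss = len(values & SOCIETY_SUPPRESSES) * (0.5 if rationalist else 1.0)
    return gain - loss

A = payoff({1, 4, 9}, rationalist=True)    # rationalist, values misaligned with X
B = payoff({1, 2, 7}, rationalist=False)   # non-rationalist, values aligned with X
C = payoff({1, 4, 9}, rationalist=False)   # non-rationalist, same values as A

assert B > A   # the game can simply be set against the rationalist...
assert A > C   # ...but value-for-value, rationality still helps
```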

I haven't thought about this carefully, so this may be a howler, but here is what I was thinking:

"Winning" is an optimization problem, so you can conceive of the problem of finding the winning strategy in terms of efficiently minimizing some cost function. Different sets of values -- different utility functions -- will correspond to different cost functions. Rationalism is a particular algorithm for searching for the minimum. Here I associate "rationalism" with the set of concrete epistemic tools recommended by LW; you could, of course, define "rationalism" so that whichever strategy most conduces to winning in a particular context is the rational one, but then your claim would be tautological.

The No Free Lunch Theorem for search and optimization says that all algorithms that search for the minimum of a cost function perform equally well when you average over all possible cost functions. So if you're really allowing the possibility of any set of values, then the rationalism algorithm is no more likely to win on average than any other search algorithm.

Again, this is a pretty hasty argument, so I'm sure there are holes.
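For what it's worth, here is a tiny brute-force sketch of the NFL claim (my own toy construction, restricted to fixed, non-adaptive search orders rather than fully general algorithms):

```python
from itertools import product

# Over ALL cost functions from a 3-point domain to {0, 1, 2}, every fixed
# search order finds a minimum in the same expected number of evaluations.

domain_size = 3
all_cost_functions = list(product(range(3), repeat=domain_size))  # 27 functions

def steps_to_minimum(order, costs):
    best = min(costs)
    for step, point in enumerate(order, start=1):
        if costs[point] == best:
            return step

orders = [(0, 1, 2), (2, 1, 0), (1, 0, 2)]   # three different "search algorithms"
averages = [
    sum(steps_to_minimum(order, costs) for costs in all_cost_functions)
    / len(all_cost_functions)
    for order in orders
]

# Averaged over every possible cost function, no order beats any other.
assert all(abs(avg - averages[0]) < 1e-9 for avg in averages)
```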

I suspect you are right if we are talking about epistemic rationality, but not instrumental rationality.

In practice, when attempting to maximize a value, once you know what sort of system you are in, most of your energy has to go into gaming the system: finding the cost of minimizing the costs and looking for exploits. This is more true the more times a game is iterated: if a game literally went on forever, any finite cost becomes justifiable for this sort of gaming of the system - you can spend any bounded amount of bits. (Conversely, if a game is played only once, you are less justified in spending your bits on finding solutions: your budget roughly becomes what you can afford to spare.)

If we apply LW techniques of rationalism (as you've defined it) what we get is general methods, heuristics, and proofs on ways to find these exploits, a summation of this method being something like "know the rules of the world you are in" because your knowledge of a game directly affects your ability to manipulate its rules and scoring.

In other words, I suspect you are right if what we are talking about is simply finding the best situation for your algorithm: choosing the best restaurant in the available solution space. But when we are in a situation where the rules can be manipulated, used, or applied more effectively I believe this dissolves. You could probably convince me pretty quickly with a more formal argument, however.

I only have far goals to get more of my near goals at some point.

It seems the majority of people who disagree with this post do so on the basis of rationality being a tool for achieving ends, but not for telling you what ends are worth achieving.

I disagree. As is written, "The Choice between Good and Bad is not a matter of saying 'Good!' It is about deciding which is which." And rationality can help to decide which is which. In fact without rationality you are much more likely to be partially or fully mistaken when you decide.

What does "better" mean? "Better" for who?

That's part of the question we're trying to answer. As for the "for who" part I would answer with "ideally, all sentient beings."

As often happens, it is to quite an extent a matter of definitions. If by an "end" you mean a terminal value, then no purely internal process can change that value, because otherwise it wouldn't be terminal. This is essentially the same as the choice of reasoning priors, in that anything that can be chosen is, by definition, not a prior, but a posterior of the choice process.

Obviously, if you split the reasoning process into sections, then posteriors of certain sections can become priors of the sections following. Likewise, certain means can be more efficiently thought of as ends, and in this case rationality can help you determine what those ends would be.

The problem with humans is that the evolved brain cannot directly access either core priors or terminal values, and there is no guarantee that they are even coherent enough to be said to properly exist. So every "end" that rises high enough into the conscious mind to be properly reified is necessarily an extrapolation, and hence not a truly terminal end.

If by an "end" you mean a terminal value, then no purely internal process can change that value, because otherwise it wouldn't be terminal.

A notion of "terminal value" should allow possibility of error in following it, including particularly bad errors that cause value drift (change which terminal values an agent follows).

Some of your terminal values can modify other terminal values though. Rational investigation can inform you about optimal trade-offs between them.

Edit: Tradeoffs don't change that you want more of both A and B. Retracted.

[This comment is no longer endorsed by its author]

Winning is achieving your ends, not achieving them better than the other guy achieves his.

Also, I'd suggest that you'd improve your analysis if you stopped anthropomorphizing society.

And you should also distinguish between instrumental and epistemic rationality, which I think a lot of people around here should do more of as well. One sense of "Rationalists Should Win" is "I want to Win, and don't want any part of a Rationality that makes me lose." Another sense is "Epistemic Rationality helps you Win," which is usually true, but I'm against making a fetish of Epistemic Rationality and treating it as synonymous with Winning.