Something I’ve been thinking about for a while is the dual relationship between optimization and indifference, and the relationship between both of them and the idea of freedom.

Optimization: “Of all the possible actions available to me, which one is best (by some criterion)? Ok, I’ll choose the best.”

Indifference: “Multiple possible options are equally good, or incommensurate (by the criterion I’m using). My decision algorithm equally allows me to take any of them.”

Total indifference between all options makes optimization impossible or vacuous. An optimization criterion which assigns a total ordering between all possibilities makes indifference vanishingly rare. So these notions are dual in a sense. Every dimension along which you optimize is in the domain of optimization; every dimension you leave “free” is in the domain of indifference.
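A toy sketch of this duality (hypothetical code, purely illustrative, not from any formal treatment): a fine-grained criterion induces a total order over options and leaves a single best choice, while a coarser criterion leaves a whole set of tied options, which is the space of indifference.

```python
def best_options(options, score):
    """Return all options tied for the maximum score.

    A fine-grained score (a total order) usually yields a single
    winner: pure optimization.  A coarse score yields ties, and
    the choice among them is left to indifference.
    """
    top = max(score(o) for o in options)
    return [o for o in options if score(o) == top]

options = ["read", "walk", "work", "nap"]

# Fine-grained criterion: every option gets a distinct score.
fine = {"read": 3, "walk": 2, "work": 4, "nap": 1}
print(best_options(options, fine.get))    # ['work']  (no freedom left)

# Coarse criterion: only "productive or not" is scored.
coarse = {"read": 1, "walk": 0, "work": 1, "nap": 0}
print(best_options(options, coarse.get))  # ['read', 'work']  (free to pick either)
```

The coarser the criterion, the larger the tied set, and the more of the decision is left "free" in the arbitrariness sense.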

Being “free” in one sense can mean “free to optimize”.  I choose the outcome that is best according to an internal criterion, which is not blocked by external barriers.  A limit on freedom is a constraint that keeps me away from my favorite choice. Either a natural limit (“I would like to do that but the technology doesn’t exist yet”) or a man-made limit (“I would like to do that but it’s illegal.”)

There’s an ambiguity here, of course, when it comes to whether you count “I would like to do that, but it would have a consequence I don’t like” as a limit on freedom.  Is that a barrier blocking you from the optimal choice, or is it simply another way of saying that it’s not an optimal choice after all?

And, in the latter case, isn’t that basically equivalent to saying there is no such thing as a barrier to free choice? After all, “I would like to do that, but it’s illegal” is effectively the same thing as “I would like to do that, but it has a consequence I don’t like, such as going to jail.” You can get around this ambiguity in a political context by distinguishing natural from social barriers, but that’s not a particularly principled distinction.

Another issue with freedom-as-optimization is that it’s compatible with quite tightly constrained behavior, in a way that’s not consistent with our primitive intuitions about freedom.  If you’re only “free” to do the optimal thing, that can mean you are free to do only one thing, all the time, as rigidly as a machine. If, for instance, you are only free to “act in your own best interests”, you don’t have the option to act against your best interests.  People in real life can feel constrained by following a rigid algorithm even when they agree it’s “best”; “but what if I want to do something that’s not best?”  Or, they can acknowledge they’re free to do what they choose, but are dismayed to learn that their choices are “dictated” as rigidly by habit and conditioning as they might have been by some human dictator.

An alternative notion of freedom might be freedom-as-arbitrariness.  Freedom in the sense of “degrees of freedom” or “free group”, derived from the intuition that freedom means breadth of possibility rather than optimization power.  You are only free if you could equally do any of a number of things, which ultimately means something like indifference.

This is the intuition behind claims like Viktor Frankl’s: “Between stimulus and response there is a space. In that space is our power to choose a response. In our response lies our growth and our freedom.”  If you always respond automatically to a given stimulus, you have only one choice, and that makes you unfree in the sense of “degrees of freedom.”

Venkat Rao’s concept of freedom is pretty much this freedom-as-arbitrariness, with some more specific wrinkles. He mentions degrees of freedom (“dimensionality”) as well as “inscrutability”, the inability to predict one’s motion from the outside.

Buddhists also often speak of freedom more literally in terms of indifference, and there’s a very straightforward logic to this; you can only choose equally between A and B if you have been “liberated” from the attractions and aversions that constrain you to choose A over B.  Those who insist that Buddhism is compatible with a fairly normal life say that after Buddhist practice you still will choose systematically most of the time — your utility function cannot fully flatten if you act like a living organism — but that, like Viktor Frankl’s ideal human, you will be able to reflect with equanimity and consider choosing B over A; you will be more “mentally flexible.”  Of course, some Buddhist texts simply say that you become actually indifferent, and that sufficient vipassana meditation will make you indistinguishable from a corpse.

Freedom-as-indifference, I think, is lurking behind our intuitions about things like “rights” or “ownership.” When we say you have a “right” to free speech — even a right bounded with certain limits, as it of course always is in practice — we mean that within those limits, you may speak however you want.  Your rights define a space, within which you may behave arbitrarily.  Not optimally. A right, if it’s not to be vacuous, must mean the right to behave “badly” in some way or other.  To own a piece of property means that, within whatever limits the concept of ownership sets, you may make use of it in any way you like, even in suboptimal ways.

This is very clearly illustrated by Glen Weyl’s notion of radical markets, which neatly disassociates two concepts usually both considered representative of free-market systems: ownership and economic efficiency.  To own something just is to be able to hang onto it even when it is economically inefficient to do so.  As Weyl says, “property is monopoly.”  The owner of a piece of land can sit on it, making no improvements, while holding out for a high price; the owner of intellectual property can sit on it without using it; in exactly the same way that a monopolist can sit on a factory and depress output while charging higher prices than he could get away with in a competitive market.

For better or for worse, rights and ownership define spaces in which you can destroy value.  If your car was subject to a perpetual auction and ownership tax as Weyl proposes, bashing your car to bits with a hammer would cost you even if you didn’t personally need a car, because it would hurt the rental or resale value and you’d still be paying tax.  On some psychological level, I think this means you couldn’t feel fully secure in your possessions, only probabilistically likely to be able to provide for your needs. You only truly own what you have a right to wreck.

Freedom-as-a-space-of-arbitrary-action is also, I think, an intuition behind the fact that society (all societies, but the US more than other rich countries, I think) is shaped by people’s desire for more discretion in decisionmaking as opposed to transparent rubrics.  College admissions, job applications, organizational codes of conduct, laws and tax codes, all are designed deliberately to allow ample discretion on the part of decisionmakers rather than restricting them to following “optimal” or “rational”, simple and legible, rules.  Some discretion is necessary to ensure good outcomes; a wise human decisionmaker can always make the right decision in some hard cases where a mechanical checklist fails, simply because the human has more cognitive processing power than the checklist.  This phenomenon is as old as Plato’s Laws and as current as the debate over algorithms and automation in medicine.

However, what we observe in the world is more discretion than would be necessary, for the aforementioned reasons of cognitive complexity, to generate socially beneficial outcomes.  We have discretion that enables corruption and special privileges in cases that pretty much nobody would claim to be ideal — rich parents buying their not-so-competent children Ivy League admissions, favored corporations voting themselves government subsidies.

Decisionmakers want the “freedom” to make illegible choices, choices which would look “suboptimal” by naively sensible metrics like “performance” or “efficiency”, choices they would prefer not to reveal or explain to the public.  Decisionmakers feel trapped when there’s too much “accountability” or “transparency”, and prefer a wider sphere of discretion.  Or, to put it more unfavorably, they want to be free to destroy value.

And this is true at an individual psychological level too, of course — we want to be free to “waste time” and resist pressure to account for literally everything we do. Proponents of optimization insist that this is simply a failure mode from picking the wrong optimization target — rest, socializing, and entertainment are also needs, the optimal amount of time to devote to them isn’t zero, and you don’t have to consider personal time to be “stolen” or “wasted” or “bad”, you can, in principle, legibilize your entire life including your pleasures. Anything you wish you could do “in the dark”, off the record, you could also do “in the light,” explicitly and fully accounted for.  If your boss uses “optimization” to mean overworking you, the problem is with your boss, not with optimization per se.

The freedom-as-arbitrariness impulse in us is skeptical.

I see optimization and arbitrariness everywhere now; I see intelligent people who more or less take one or the other as an ideology, and see it as obviously correct.

Venkat Rao and Eric Weinstein are partisans of arbitrariness; they speak out in favor of “mediocrity” and against “excellence” respectively.  The rationale is that being highly optimized at some widely appreciated metric — being very intelligent, or very efficient, or something like that — is often less valuable than being creative, generating something in a part of the world that is “dark” to the rest of us, that is not even on our map as something to value and thus appears as lack of value.  Ordinary people being “mediocre”, or talented people being “undisciplined” or “disreputable”, may be more creative than highly-optimized “top performers”.

Robin Hanson, by contrast, is a partisan of optimization; he speaks out against bias and unprincipled favoritism and in favor of systems like prediction markets which would force the “best ideas to win” in a fair competition.  Proponents of ideas like radical markets, universal basic income, open borders, income-sharing agreements, or smart contracts (I’d here include, for instance, Vitalik Buterin) are also optimization partisans.  These are legibilizing policies that, if optimally implemented, can always be Pareto improvements over the status quo; “whatever degree of wealth redistribution you prefer”, proponents claim, “surely it is better to achieve it in whatever way results in the least deadweight loss.”  This is the very reason that they are not the policies that public choice theory would predict would emerge naturally in governments. Legibilizing policies allow little scope for discretion, so they don’t let policymakers give illegible rewards to allies and punishments to enemies.  They reduce the scope of the “political”, i.e. that which is negotiated at the personal or group level, and replace it with an impersonal set of rules within which individuals are “free to choose” but not very “free to behave arbitrarily” since their actions are transparent and they must bear the costs of being in full view.

Optimization partisans are against weakly enforced rules — they say “if a rule is good, enforce it consistently; if a rule is bad, remove it; but selective enforcement is just another word for favoritism and corruption.”  Illegibility partisans say that weakly enforced rules are the only way to incorporate valuable information — precisely that information which enforcers do not feel they can make explicit, either because it’s controversial or because it’s too complex to verbalize. “If you make everything explicit, you’ll dumb everything in the world down to what the stupidest and most truculent members of the public will accept.  Say goodbye to any creative or challenging innovations!”

I see the value of arguments on both sides. However, I have positive (as opposed to normative) opinions that I don’t think everybody shares.  I think that the world I see around me is moving in the direction of greater arbitrariness and has been since WWII or so (when much of US society, including scientific and technological research, was organized along military lines).  I see arbitrariness as a thing that arises in “mature” or “late” organizations.  Bigger, older companies are more “political” and more monopolistic.  Bigger, older states and empires are more “corrupt” or “decadent.”

Arbitrariness has a tendency to protect those in power rather than those out of power, though the correlation isn’t perfect.  Zones that protect your ability to do “whatever” you want without incurring costs (which include zones of privacy or property) are protective, conservative forces — they allow people security.  This often means protection for those who already have a lot; arbitrariness is often “elitist”; but it can also protect “underdogs” on the grounds of tradition, or protect them by shrouding them in secrecy.  (Scott thought “illegibility” was a valuable defense of marginalized peoples like the Roma; illegibility is not always the province of the powerful and privileged.)  Rather, the people such zones of arbitrary, illegible freedom systematically harm are those who benefit from increased accountability and the revealing of information: whistleblowers and accusers; those who expect that their merit or performance is good enough that displaying it will work to their advantage; those who call for change and want to display information to justify it; and those who are newcomers or young and want a chance to demonstrate their value.

If your intuition is “you don’t know me, but you’ll like me if you give me a chance” or “you don’t know him, but you’ll be horrified when you find out what he did”, or “if you gave me a chance to explain, you’d agree”, or “if you just let me compete, I bet I could win”, then you want more optimization.

If your intuition is “I can’t explain, you wouldn’t understand” or “if you knew what I was really like, you’d see what an impostor I am”, or “malicious people will just use this information to take advantage of me and interpret everything in the worst possible light” or “I’m not for public consumption, I am my own sovereign person, I don’t owe everyone an explanation or justification for actions I have a right to do”, then you’ll want less optimization.

Of course, these aren’t so much static “personality traits” of a person as one’s assessment of the situation around oneself.  The latter cluster is an assumption that you’re living in a social environment where there’s very little concordance of interests — people knowing more about you will allow them to more effectively harm you.  The former cluster is an assumption that you’re living in an environment where there’s a great deal of concordance of interests — people knowing more about you will allow them to more effectively help you.

For instance, being “predictable” is, in Venkat’s writing, usually a bad thing, because it means you can be exploited by adversaries. Free people are “inscrutable.”  In other contexts, such as parenting, being predictable is a good thing, because you want your kids to have an easier time learning how to “work” the house rules.  You and your kid are not, most of the time, wily adversaries outwitting each other; conflicts are more likely to come from too much confusion or inconsistently enforced boundaries.  Relationship advice and management advice usually recommends making yourself easier for your partners and employees to understand, never more inscrutable.  (Sales advice, however, and occasionally advice for keeping romance alive in a marriage, sometimes recommends cultivating an aura of mystery, perhaps because it’s more adversarial.)

A related notion: wanting to join discussions is a sign of expecting a more cooperative world, while trying to keep people from joining your (private or illegible) communications is a sign of expecting a more adversarial world.

As social organizations “mature” and become larger, it becomes harder to enforce universal and impartial rules, harder to keep the larger population aligned on similar goals, and harder to comprehend the more complex phenomena in this larger group.  This means that there’s both motivation and opportunity to carve out “hidden” and “special” zones where arbitrary behavior can persist even when it would otherwise come with negative consequences.

New or small organizations, by contrast, must gain/create resources or die, so they have more motivation to “optimize” for resource production; and they’re simple, small, and/or homogeneous enough that legible optimization rules and goals and transparent communication are practical and widely embraced.  “Security” is not available to begin with, so people mostly seek opportunity instead.

This theory explains, for instance, why US public policy is more fragmented, discretionary, and special-case-y, and less efficient and technocratic, than it is in other developed countries: the US is more racially diverse, which means, in a world where racism exists, that US civil institutions have evolved to allow ample opportunities to “play favorites” (giving special legal privileges to those with clout) in full generality, because a large population has historically been highly motivated to “play favorites” on the basis of race.  Homogeneity makes a polity behave more like a “smaller” one, while diversity makes a polity behave more like a “larger” one.

Aesthetically, I think of optimization as corresponding to an “early” style, like Doric columns, or like Masaccio; simple, martial, all form and principle.  Arbitrariness corresponds to a “late” style, like Corinthian columns or like Rubens: elaborate, sensual, full of details and personality.

The basic argument for optimization over arbitrariness is that it creates growth and value while arbitrariness creates stagnation.

Arbitrariness can’t really argue for itself as well, because communication itself is on the other side.  Arbitrariness always looks illogical and inconsistent.  It kind of is illogical and inconsistent. All it can say is “I’m going to defend my right to be wrong, because I don’t trust the world to understand me when I have a counterintuitive or hard-to-express or controversial reason for my choice.  I don’t think I can get what I want by asking for it or explaining my reasons or playing ‘fair’.”  And from the outside, you can’t always tell the difference between someone who thinks (perhaps correctly!) that the game is really rigged against them at a profound level, and somebody who just wants to cheat or who isn’t thinking coherently.  Sufficiently advanced cynicism is indistinguishable from malice and stupidity.

For a fairly sympathetic example, you see something like Darkness at Noon, where the protagonist thinks, “Logic inexorably points to Stalinism; but Stalinism is awful! Therefore, let me insist on some space free from the depredations of logic, some space where justice can be tempered by mercy and reason by emotion.” From the distance of many years, it’s easy to say that’s silly, that of course there are reasons not to support Stalin’s purges, that it’s totally unnecessary to reject logic and justice in order to object to killing innocents.  But from inside the system, if all the arguments you know how to formulate are Stalinist, if all the “shoulds” and “oughts” around you are Stalinist, perhaps all you can articulate at first is “I know all this is right, of course, but I don’t like it.”

Not everything people call reason, logic, justice, or optimization, is in fact reasonable, logical, just, or optimal; so, a person needs some defenses against those claims of superiority.  In particular, defenses that can shelter them even when they don’t know what’s wrong with the claims.  And that’s the closest thing we get to an argument in favor of arbitrariness. It’s actually not a bad point, in many contexts.  The counterargument usually has to boil down to hope — to a sense of “I bet we can do better.”

 

31 comments

Reminded me of this tweet from Julia Galef:

https://twitter.com/juliagalef/status/878788641400553472

"There's no way to tell which choice is higher value!!" feels paralyzing; "The expected value across choices is similar" feels liberating.

Which makes me think that sometimes people can find freedom in both the arbitrary and the optimal. If there's no obvious choice, we can find freedom in the choosing between. If there is an obvious choice, we can find freedom in the choosing to take.

It's helpful to keep in mind the human hubris in thinking anyone knows what's optimal for themselves, let alone others. Add in actual individual divergence in goals and beliefs and it's kind of ludicrous to try to make many decisions for others, or to accept others' decisions about your behaviors. Note that policy and rulemaking is always about enforcement/influence on others.

I don't believe it's possible for normal humans to fully distinguish "what's good for my personal indexical experiences" and "what's good for the average or median human". It's _always_ a mix of cooperative and adversarial. I do believe it's possible to acknowledge both motives and to be humble about what limits I'll impose on others. When I talk about "freedom" in that context, this is what it means to me: very minimal human imposition of additional consequences for actions which don't have obvious, immediate harm.

Choosing "optimal for my current beliefs and preferences" vs "what others will judge as optimal for what they think my beliefs and preferences should be" is very different, and I lean toward the former as my definition of "freedom".

cf https://wiki.lesswrong.com/wiki/Other-optimizing

Yes. Even if what I actually want is "freedom to do the optimal thing", it is strategically better to fight for "freedom to do the arbitrary thing". The latter allows me to do the former. But if we only have the freedom to do the optimal thing, and the people with power disagree with me about what is optimal, I get neither.

But how do the two things in the last paragraph mix if I have (1) a preference for others to judge me well, (2) a belief that others will judge me well if they believe I am doing what they believe is optimal for what they think my beliefs and preferences should be, and (3) a belief that the extrapolated cost of convincing them that I am doing such a thing without actually doing the thing is so incredibly high as to make plans involving that almost never show up in decision-making processes?

Put another way, it seems like the two definitions can collapse in a sufficiently low-privacy conformist environment—which can be unified with the emotion of “freedom”, but at least in most Western contexts, that seems infrequent. The impression I get is that most people obvious-patch around this by trying to extrapolate “what a version of me completely removed from peer pressures would prefer” and using that as the preference baseline, but I both think and feel that that's incoherent. (Further meta, I also get the impression that many people don't feel that it's incoherent even if they would agree cognitively that it is, and that that leads to a lot of worldmodel divergence down the line.)

(I realize this might be a bit off-track from its parent comment, but I think it's relevant to the broader discussion.)

This is the intuition behind claims like Viktor Frankl’s: “Between stimulus and response there is a space. In that space is our power to choose a response. In our response lies our growth and our freedom.”  If you always respond automatically to a given stimulus, you have only one choice, and that makes you unfree in the sense of “degrees of freedom.” [...]
Buddhists also often speak of freedom more literally in terms of indifference, and there’s a very straightforward logic to this; you can only choose equally between A and B if you have been “liberated” from the attractions and aversions that constrain you to choose A over B.  Those who insist that Buddhism is compatible with a fairly normal life say that after Buddhist practice you still will choose systematically most of the time — your utility function cannot fully flatten if you act like a living organism — but that, like Viktor Frankl’s ideal human, you will be able to reflect with equanimity and consider choosing B over A; you will be more “mentally flexible.” 

I would frame the Frankl/Buddhist thing somewhat differently. While I agree with your characterization, I think that the F/B thing is also compatible with freedom as optimization.

Looking at your descriptions through the frame of subagents, the automatic response / constraint thing is describing something like the thing that I discussed on my post on coherence in humans: a situation where a single subagent or a coalition of them seizes control in order to force a particular reaction, without the rest of the mind-system having a chance to properly evaluate the need to do so. The extreme case is a series of forced moves by protector subagents which see those as the only possible actions, in a way which is poorly adapted to the person's current environment and only keeps getting them into a worse and worse mess.

Many Buddhist techniques, as well as contemporary therapy methods and various CFAR techniques, are IMO aimed at disassembling these kinds of forced moves, by enabling more subagents in the mind-system to be brought in to evaluate the action before taking it, rather than going by the judgment of just one subagent.

But... subjectively and theoretically, this feels like it's moving things towards optimization and indifference. When all subagents get to participate in the decision and evaluate the relevant information, they will agree on the optimal decision in light of their current state of knowledge, and then there is nothing to do except to execute that optimal decision. They are unconstrained and indifferent in the sense of being able to consider any action during their decision-making process, but optimizing in the sense of eventually converging on a single clearly optimal decision.

To make this more concrete, here's an example from the book Rethinking Positive Thinking, discussing this kind of an integration process:

Try this exercise for yourself. Think about a fear you have about the future that is vexing you quite a bit and that you know is unjustified. Summarize your fear in three to four words. For instance, suppose you’re a father who has gotten divorced and you share custody with your ex-wife, who has gotten remarried. For the sake of your daughter’s happiness, you want to become friendly with her stepfather, but you find yourself stymied by your own emotions. Your fear might be “My daughter will become less attached to me and more attached to her stepfather.” Now go on to imagine the worst possible outcome. In this case, it might be “I feel distanced from my daughter. When I see her she ignores me, but she eagerly spends time with her stepfather.” Okay, now think of the positive reality that stands in the way of this fear coming true. What in your actual life suggests that your fear won’t really come to pass? What’s the single key element? In this case, it might be “The fact that my daughter is extremely attached to me and loves me, and it’s obvious to anyone around us.” Close your eyes and elaborate on this reality.

Now take a step back. Did the exercise help? I think you’ll find that by being reminded of the positive reality standing in the way, you will be less transfixed by the anxious fantasy. When I conducted this kind of mental contrasting with people in Germany, they reported that the experience was soothing, akin to taking a warm bath or getting a massage. “It just made me feel so much calmer and more secure,” one woman told me. “I sense that I am more grounded and focused.”

Mental contrasting can produce results with both unjustified fears as well as overblown fears rooted in a kernel of truth. If as a child you suffered through a couple of painful visits to the dentist, you might today fear going to get a filling replaced, and this fear might become so terrorizing that you put off taking care of your dental needs until you just cannot avoid it. Mental contrasting will help you in this case to approach the task of going to the dentist. But if your fear is justified, then mental contrasting will confirm this, since there is nothing preventing your fear from coming true. The exercise will then help you to take preventive measures or avoid the impending danger altogether.

Before doing mental contrasting, your actions might feel forced in a way which is constraining: you have a fear of something, and that fear is forcing you to act in ways which feel like they are against your better judgment. That is, some subagents feel like the fear is correct, while others feel that it's unjustified. When you do mental contrasting by finding a reassuring mental image, you are taking the point of view of some subagents (the ones that think "this fear is unjustified") and translating it into a language which the other subagents (that think that the fear is justified) can understand. By integrating information across them, they may come into agreement that there is nothing that needs to be done. Alternatively, failing to find any counter-evidence to the fear may convince the subagents that were trying to dismiss it that they were mistaken, and then you find yourself being compelled to take some kind of action.

You might notice that while this is optimizing, it feels very different from some of your descriptions of optimizing, such as this paragraph:

Another issue with freedom-as-optimization is that it’s compatible with quite tightly constrained behavior, in a way that’s not consistent with our primitive intuitions about freedom.  If you’re only “free” to do the optimal thing, that can mean you are free to do only one thing, all the time, as rigidly as a machine. If, for instance, you are only free to “act in your own best interests”, you don’t have the option to act against your best interests.  People in real life can feel constrained by following a rigid algorithm even when they agree it’s “best”; “but what if I want to do something that’s not best?”  Or, they can acknowledge they’re free to do what they choose, but are dismayed to learn that their choices are “dictated” as rigidly by habit and conditioning as they might have been by some human dictator.

Optimized-behavior-as-produced-by-mental-contrasting does not feel like this. If you were afraid that your daughter might abandon you, and then became convinced that your daughter is always going to deeply love you and that it would be silly to waste time worrying about it, then your mind-system has decided that the optimal thing to do is to stop worrying. That does not feel like you are following a rigid algorithm which your choices are dictated by: it just feels like doing anything else would be pointless and silly. You are in a sense constrained by the underlying optimization process - focusing your time and effort on this issue would feel pointless, so you don't do it - but it also feels like you are just following your own judgment of what makes sense. The algorithm feels like it could take actions intended to win the daughter's love back, but it doesn't want to - which, looking from the outside, means that the algorithm can't.

And while you were doing the mental contrasting you were indifferent in a sense: had the contrasting produced the opposite result, you would now feel that it's important to do something to ensure that your daughter does continue to love you, and not doing anything about it would feel horrifying and impossible. So that too would have locked you into one optimal approach.

Under this interpretation, the Frankl/Buddhist notion of indifference is pointing to the fact that your evaluation process was indifferent. It was free to settle on either end result, rather than being dominated by a single subagent with such a strong fear of abandonment that it was unwilling to fairly evaluate the relevant evidence. But once the evaluation has concluded, you are optimizing; and in a sense you were actually never free to decide, since your mind-system as a whole is optimizing some implicit utility function. The eventual end result of your evaluation was already dictated by the information that was present in your mind-system, but had not yet been integrated.

I believe that the thing which you are describing, where people feel constrained by following a rigid algorithm even when they agree that it's best, is a situation where such subagent agreement is lacking. Some subagents have become convinced that following the algorithm is the best course of action; other subagents feel like it is not taking into account all information, or that it is walking over some of your needs, and they are communicating their disagreement as a feeling of being constrained or externally dictated. (In particular, your description sounds like one is optimizing by trying to execute a fully legible algorithm, as opposed to optimizing by also listening to their intuitive feelings.)

This feels like it's moving things toward a reconciliation of optimization and indifference.

I came here to say something like "I feel like this post sets up a false dichotomy", and I think you've done a better job than I would have of explicating why it feels to me that optimization and indifference go together and are not really in opposition - except from within the prisons of our own minds, which insist that they are.

If your car was subject to a perpetual auction and ownership tax as Weyl proposes, bashing your car to bits with a hammer would cost you even if you didn’t personally need a car, because it would hurt the rental or resale value and you’d still be paying tax.

I don't think this is right. COST stands for "Common Ownership Self-Assessed Tax". The self-assessed part refers to the idea that you personally state the value you'd be willing to sell the item for (and pay tax on that value). Once you've destroyed the item, presumably you'd be willing to part with the remains for a lower price, so you should just re-state the value and pay a lower tax.

It's true that damaging the car hurts the resale value and thus costs you (in terms of your material wealth), but this would be true whether or not you were living under a COST regime.

Yeah. I think the only connection here (and it's very tenuous) is that under a COST car market, every car is up for sale at all times: other people are threatening to buy your car if you don't value it highly enough, and you can buy a new one yourself with very low transaction costs (because of the size of the market), so you're a bit less likely to want to own one at any given time. (Though I've never seen Glen talk about applying COST to markets like that; usually it's for markets with a lot of scarcity and interdependence.)

For freedom-as-arbitrariness, see also: Slack

Curated.

This post seems to do a fairly complicated thing, that some of the old Eliezer posts did for free will – take a concept that is often deeply confusing, and examine it closely enough to actually see the moving parts beneath it.

After reading this post I still feel a bit confused about freedom as a concept... but I have an additional lens to look at that concept through that might (eventually) be clear enough for me to actually manipulate the underlying gears on purpose.

I think there may be additional lenses to look at freedom through, which'd probably have more to do with cognitive science and evo-psych (why do people care about freedom, either as optimization or as arbitrariness, in the first place?)

This post was fairly long and I bounced off it the first time I read it. I feel like there is some room to improve its structure and comprehensibility (possibly by giving the different sections headings, so that you can skim it after the fact to more easily recall the overall structure).

But I could imagine a legitimate case that "no, this is actually a complex topic, if you try to simplify it for comprehensibility you're more likely to oversimplify than learn the right thing. And it's better to force the reader to carefully read through the whole thing."

That's not how I read Venkat's post. To me he seems to be talking about freedom as a certain style or manner which comes across to others as "free". It's not about your circumstances - famously, even in concentration camps there have been people behaving visibly free and inspiring others. I find this view fruitful: it encourages you to do something free, not just be happy with how many options you have.

On the first, more philosophical part of your post: I think your notion of "freedom-as-arbitrariness" is actually also what allows for "freedom-as-optimization", in the following way.

Suppose I have an abstract set of choices. These can be instantiated in a concrete situation, which then carries its own set of considerations. When I go to do my optimizing in a given concrete situation, the more constrained or partisan my choice is in the abstract, the more difficult my total optimization becomes. Conversely, the freer and more arbitrary the choice is in the abstract, the less constrained my optimization is in any concrete situation, and the better I can do.

For example, if I were hiring a programmer for a project, then (all else equal) I'd rather have someone who knew a variety of technologies and wasn't too strongly attached to any, so that they could simply use whatever the situation called for.

You could state this as a system design principle: if you're designing a subsystem that's going to be doing something, but you don't yet know what, optimize the subsystem for being able to potentially do anything (arbitrariness).
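A minimal sketch of this principle in Python (the scenario and names are my own illustration, not from the comment): a subsystem that hard-codes one behavior, versus one that keeps the choice abstract until a concrete situation fixes it.

```python
import bz2
import gzip
from typing import Callable

# Constrained design: the compression scheme is fixed at build time.
def save_gzip(data: bytes) -> bytes:
    return gzip.compress(data)

# "Arbitrary" design: the subsystem accepts any codec, deferring the
# choice to the concrete situation where the real considerations are known.
def save(data: bytes, codec: Callable[[bytes], bytes]) -> bytes:
    return codec(data)

# Either concrete choice is now available; the abstract design was indifferent.
payload = b"x" * 1000
compressed_a = save(payload, gzip.compress)
compressed_b = save(payload, bz2.compress)
```

The second design is less optimized in the abstract (it commits to nothing), which is exactly what lets it be fully optimized in each concrete situation.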

I feel there's much more to say along these lines about systems being well-factored (the pattern of concrete-abstract, as above, is a kind of factorization (as in lambda abstraction)), but I'm having trouble putting it into words at the moment.

It's taken me a while, and this is an old post, but I think I've found what I've wanted to say:

Where does risk figure in all this? It almost sounds like you equate "optimal" with the path of zero risk, or at least the path of zero unknown risk.

Such an attitude towards risk would not be "optimal" if you wanted to say, play Elon Musk's game of experimental rocket technology.

And if we want to talk about "value", then a person who is willing to take on risk is very often viewed as more valuable.

Musk says:

I think I feel fear quite strongly. So it’s not as though I just have the absence of fear, I feel it quite strongly. But there are just times when something is important enough, you believe in it enough, that you do it in spite of fear. In starting SpaceX, I thought the odds of success were less than 10%, and I just accepted that actually, probably, I would just lose everything.
But that maybe would make some progress, if we could just move the ball forward, even if we died, maybe some other company could pick up the baton and keep moving it forward; so that would still do some good. Yeah, same with Tesla. I thought the odds of a car company succeeding were extremely low. If you just accept the probabilities, then that diminishes fear.
Like people shouldn’t think ‘Well I feel fear about this, and therefore I shouldn’t do it.’ It’s normal to feel fear. Like you’d have to … there’d have to be something mentally wrong if you didn’t feel fear.

In many areas, optimization is a process of failing forwards. A process of consequentialism, rather than utilitarianism.

Sometimes, nobody has the information that we need in order to optimize. We simply need to risk experimenting and finding out that info for ourselves.

Note, when speaking of risk, I don't only mean big things. I more generally mean - proceeding even when there is a space of uncertainty.

In other questions, I've noticed you've covered what is called "ask vs. guess culture". When we have to ask someone before we can do something, often what bothers us is not the asking but the prospect of hearing a no.

But is hearing a no such a bad thing? Is having to negotiate a boundary such a bad thing? I think one often too strongly assumes that people will answer no.

If one looks into "rejection therapy", there are many videos of people experimenting with asking for things. In one, a man goes to a store that grooms dogs and cats and asks the people at the counter if they can cut his hair.

They take it in great humor and laugh, and then say no.

In another, the person asks a stranger if they can play soccer in the stranger's backyard. This complete stranger says yes!

The person continues to ask many weird and wonderful things, and gets a variety of different rejections and approvals. And I'd say that this process of rejection therapy was the person "optimizing" their own comfort with asking for what they want, and with the potential for rejection.

A recent book, "The Courage To Be Disliked" covers a very similar thing.

To have the courage to take risks, and become comfortable with rejection and boundaries is to have the courage to be one's own separate self, with one's own separate boundaries.

Freedom is not only being free from the boundaries of others.

To be free also means to be able to deploy and retract one's own boundaries. Even when there is a risk that others do not like them.

Proponents of ideas like radical markets, universal basic income, open borders, income-sharing agreements, or smart contracts (I’d here include, for instance, Vitalik Buterin) are also optimization partisans.


In the case of UBI, what is optimization from the viewpoint of the decision-makers is freedom from the viewpoint of those affected by the decision.

After all, money is needed to fulfill the basic needs of life in society. Without UBI, ordinary people are forced to look for money on the job market, where they are perpetually reminded that they must prove their usefulness by joining a group of sufficient efficiency on the global market (a company).
On the other hand, UBI frees these people to pursue their own, possibly wildly creative goals, however inefficient others deem them.

So I'm thinking: maybe freedom is a limited (and highly valued) good. If some have the leeway to make arbitrary decisions, then necessarily others don't. I need airplanes to be very reliable so that I can travel at my fancy. Freedom is built on top of reliability (which is equivalent to optimization in this context). Even at the individual level: I'm free to do what I want today because my body is highly optimized to obey my mental commands.

This idea seems to pervade your article (e.g. when you mention corruption as a typical sign of freedom), but it wasn't really made explicit anywhere.

...

[This comment is no longer endorsed by its author]

...

[This comment is no longer endorsed by its author]

There are three main issues I have here with optimising.

1. It's not simple except in abstract situations, and even then only in some. Optimising the estimate of a gradient of a curve is one thing, but even something 'simple' in the real world like optimising the profits of a company is difficult beyond computability. Optimising something like useful life expectancy is even more absurdly difficult, especially if time spent on optimisation is deducted from the total.

2. What you optimise is, ultimately, arbitrary. Any 6 year old can prove this, with the 'why' game, and eventually the only answer is 'because I say so'. So while optimisation of a given thing may be possible, that optimisation is itself nested inside arbitrariness.

3. What is optimal varies across environments/perspectives. Now you can call that a lack of concordance of interests, but it doesn't require deliberate conflict, as implied in that paragraph. The best diet for my health is probably not the same as the best diet for a diabetic, nor is it necessarily the best diet for the environment, etc.

Then there is the issue of optimisation being nested in a social context, and in time. People who have already made choices will want to see the world in such a way that those choices were optimal. For example, doctors who perform circumcisions don't want to believe circumcision is harmful, because they don't want to see themselves as baby-mutilators. The optimal beliefs for them to hold in order to continue living their lives are not necessarily the same as the optimal beliefs for them to hold in order to be factually correct. This means that when people talk about what is optimal, they are often actually optimising for a whole lot of past and context that isn't visible to their audience, and which, to a listener, is for all intents and purposes arbitrary.

That was more like four points, but oh well. In all, I think it's less of a dichotomy than it seems at first glance, and the people who favour optimisation are either unaware or in denial of its infirmity.

Attraction, humor, joy and love are very often irrational and arbitrary. They are also some of humanity's favorite things. Depending on who you ask, these things also often feel the best when they operate outside of materialistic or coldly utilitarian ends.

As to how this applies elsewhere - interviewers conducting a job interview are often potential future co-workers, right? People liking each other is a good predictor of co-operation. Thus, candidates are often measured against "fitting in with company culture" in this way.

So, if increased co-operation is "optimal", yet "likability" stems more from subjective arbitrary feeling than rational criteria, then our process of determining what is "optimal" is not, and perhaps should not be simply derived from what is most rational.

Attraction, humor, joy and love are very often irrational and arbitrary.

This is a category error. These things are not “irrational”; they’re things we value, and as such, are orthogonal to epistemic rationality (loving your child or spouse or best friend is neither “true” nor “false”) and prior to instrumental rationality (which is about how to best achieve your goals and satisfy your preferences, not about what your goals and preferences should be).

Don’t make the mistake of equating rationality with some sort of Hollywood Spock stereotype where you’re supposed to go around saying things like “this ‘love’ you speak of is most illogical, Captain”.[1]

[1] Actually, even Spock never said anything like this, but the stereotype persists nonetheless…

Edit:

then our process of determining what is “optimal” is not, and perhaps should not be simply derived from what is most rational

Given that instrumental rationality is defined as the business of determining what actions are optimal (in expectation), given your goals, this quoted part is manifestly nonsensical.

For an example of what I meant: it seems that it would be far easier to write a guide on how to bake a cake or complete an equation than it would be to write a guide on performing 5 minutes of stand-up comedy that an audience would appreciate.

If this isn't related to how much of a rational process such a thing is, how else would you explain it?

I… think you are using the word “rational” in a radically different way than how it’s used on Less Wrong. As far as I’m concerned, the term does not even apply in this context; “cake-baking is a rational process” seems like gibberish, akin to “cake-baking is a blue process” or “cake-baking is a triangular process”.

Are you, perhaps, referring to the degree to which an activity or skill draws upon conscious, as opposed to unconscious, knowledge? (Or, relatedly but distinctly, to declarative vs. procedural knowledge?) But if so, then that really has very little to do with rationality (as the term is used on Less Wrong and in related spaces).

I'd like to see a reply that focuses on "arbitrary" instead of "irrational" (from the phrase, "Attraction, humor, joy and love are very often irrational and arbitrary"), or maybe there is a better word still, considering the standup example.

Comedy seems like a fruitful domain to explore this frame, since there is no apparent criterion to optimize for that isn't fundamentally "arbitrary": jokes age poorly, and don't translate well to other cultures or contexts. They rely on surprise to be funny, but also on predictability to be legible as jokes. Jokes are both constrained by and escape their formal properties, necessarily.

"I bet we can do better" can't be the domain of optimization alone, it has to come equally through indifference/the arbitrary; it's a dialectic.

Yes, rationality is (mostly) value/preference-agnostic, as I said earlier. Optimization is always optimization with respect to a goal. This is quite a basic idea, and has been discussed many, many, many times on Less Wrong.

I think (but am not sure) that treesurgency is replying about a somewhat different point, wherein jokes exist in an interesting space where properly optimizing them involves understanding them through a lens of arbitrariness. (noting that optimization != rationality)

Where, yeah, you can describe the formula of how to optimize a joke (which includes accounting for both predictability and unpredictability in an anti-inductive fashion). But... there's something like, to tell a good joke, I can imagine it turning out to be the case that you need to at least have access to modalities of thinking that are (as-implemented-in-humans) rooted in arbitrariness.

(Perhaps more generally – yes, arbitrariness is technically still optimization (in the schema sarah articulates in the OP), but sometimes to get certain kinds of creativity, human psychology demands indulging in arbitrariness. Which is not the same thing as irrationality)

(I'm not sure this is true, nor that it's what treesurgency meant, but it seemed an idea worth considering, and while I'm pretty sure it's been discussed on LessWrong, I don't think it's been addressed through the lens saraconstantin has put forth here)

Are you suggesting that optimization and arbitrariness are somehow at odds? That seems wrong. There can exist multiple optima, such that the choice of them is arbitary (and if the domain is an anti-inductive one, as humor is, then optimization can be a continuous or iterative process of arbitrarily choosing from among multiple available options, such that any one of them is a “correct”, i.e. optimal, choice).
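The "multiple optima, arbitrary pick" point can be made concrete with a small Python sketch (the joke categories and scores here are my own illustrative invention):

```python
import random

def optimal_choices(options, utility):
    """Return every option tied for the maximum utility."""
    best = max(utility(o) for o in options)
    return [o for o in options if utility(o) == best]

# Optimization narrows the field to the tied optima; the remaining pick
# among them is genuinely arbitrary - any of them is a "correct" choice.
scores = {"pun": 7, "callback": 9, "anti-joke": 9}
ties = optimal_choices(list(scores), scores.get)  # ["callback", "anti-joke"]
pick = random.choice(ties)
```

On this view, optimization and arbitrariness compose rather than conflict: the criterion does all the narrowing it can, and arbitrariness fills whatever slack the criterion leaves.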

I'm (attempting to) respond through the framework that Sarah put forth, not necessarily because I think it makes most sense ultimately but because in this thread I'm entering a state where I consider the lens as fully as I can.

In that framework, as I understand it, you can have freedom to optimize, or freedom to be arbitrary, and (potentially? not specified in the post?) the freedom to have both, but having both is indeed somewhat contradictory. It's not impossible but it's harder.

In particular, freedom for arbitrariness is not just "there are multiple optimal things, and you can arbitrarily pick between them." It's the freedom to actively make bad choices according to all your criteria.

And while technically you can argue that this is secretly just another form of optimization, in some humans the psychological motions being made are very different.

Ah, I see, thanks. Yes, I do find the framework given in the OP somewhat odd, and I hadn’t realized you were answering from that perspective. My comments do not really apply in that case, I guess.

I guess what I was reacting to were the inklings of a B.F. Skinner-ish behaviourist attack on individual autonomy and free will.

Anybody who adheres to that needs to read Karl Popper, and then throw their gnosticism in the trash where it belongs. Terrified of uncertainty? Too bad. It isn't going away, no matter how much you or "we" "optimize".

The more that mistake theorists proclaim themselves as wiser and "criticize democracy...because it gives too much power to the average person", the more conflict theorists (extremists, Trump voters, etc...) they will nurture.

They are also some of humanity's favorite things

Then rationality pursues/preserves them, and people's intuition about what rationality does is wrong.

A utility function can value anything.

Total indifference between all options makes optimization impossible or vacuous. An optimization criterion which assigns a total ordering between all possibilities makes indifference vanishingly rare. So these notions are dual in a sense.

This gap may be bridged by measuring the difference in value/expected value of two actions given a utility function. In order to find the exact utility of an action, we must invest the time necessary to calculate it, and acquire the information necessary to do so. In order to make the best decision (given a set of decisions), we need only work out the ordering on actions (their relative utility). However, the value of determining the optimal action is based on the difference between the utilities of the actions. If there are two routes between A and B, and both of them take about 5 minutes, then investing the time to work out exactly how long each takes may not be worth it - ever*, or until we have optimized all the parts of our lives where more utility is at stake. (*The cost of optimizing may exceed the gain. This is an issue we don't expect to run into if we haven't optimized anything.)
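The route example can be sketched as a crude value-of-information check in Python (the heuristic, numbers, and names are my own illustration, not a real decision-theoretic formula): measuring exact travel times is only worth it if the expected improvement in the choice exceeds the cost of measuring.

```python
def expected_gain_from_measuring(est_a, est_b, uncertainty):
    """Crude heuristic: if the estimated gap between the two options
    dwarfs our uncertainty, measuring is unlikely to flip the choice;
    otherwise, the possible improvement is bounded by the uncertainty."""
    gap = abs(est_a - est_b)
    return max(0.0, uncertainty - gap)

def should_measure(est_a, est_b, uncertainty, cost_of_measuring):
    return expected_gain_from_measuring(est_a, est_b, uncertainty) > cost_of_measuring

# Two routes both estimated at ~5 minutes, +/- 30 seconds; timing them
# precisely would cost 2 minutes of effort - not worth it:
print(should_measure(5.0, 5.2, uncertainty=0.5, cost_of_measuring=2.0))  # False
```

When the ordering is already clear (or the stakes are below the cost of finding out), indifference between the options is the optimal stance - which is the bridge the comment is gesturing at.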

You can get around this ambiguity in a political context by distinguishing natural from social barriers, but that’s not a particularly principled distinction.

Suppose that, where you live, not smoking were made illegal, and all who do not smoke had to pay a tax of $10 per day. Intuitively, this seems very different from being unable to go to the moon, or to do 1000 push-ups.

Another issue with freedom-as-optimization is that it’s compatible with quite tightly constrained behavior, in a way that’s not consistent with our primitive intuitions about freedom. 

Perhaps freedom is the freedom to optimize (as you see fit).

We have discretion that enables corruption and special privileges in cases that pretty much nobody would claim to be ideal — rich parents buying their not-so-competent children Ivy League admissions, favored corporations voting themselves government subsidies. 

Woah, the first of those might be considered really efficient - colleges don't usually get that much money from a student. If a student "isn't competent" but the college gets ample compensation, it's unclear who is being harmed. (Also, people were surprised by this? I was surprised by the amount of money involved.) The second one redirects taxpayers' dollars - as opposed to rich people spending their own money how they please.

The rationale being, that being highly optimized at some widely appreciated metric — being very intelligent, or very efficient, or something like that — is often less valuable than being creative,

I think creativity is a widely appreciated metric, as evidenced by it being used as part of an argument here. It's not clear how optimizing creativity would be bad. (To argue against optimization on the grounds that these things A are good to optimize, but not as good as these things B, is not an argument against optimization, but an argument for optimizing B rather than A.)

universal basic income, open borders, income-sharing agreements, or smart contracts

These are legibilizing policies that allow little scope for discretion, so they don’t let policymakers give illegible rewards to allies and punishments to enemies. They reduce the scope of the “political”, i.e. that which is negotiated at the personal or group level, and replace it with an impersonal set of rules within which individuals are “free to choose” but not very “free to behave arbitrarily” since their actions are transparent and they must bear the costs of being in full view.

The last sentence frames things differently than the sentences before it - costs being added, instead of a loss of power. It's also not clear how, say, universal basic income makes people less free to choose.

“If you make everything explicit, you’ll dumb everything in the world down to what the stupidest and most truculent members of the public will accept.”

Our legal system begs to differ - who has read the whole of that edifice? And who would claim that all citizens have read it? (And that it could not be improved through simplification?)

A related notion: wanting to join discussions is a sign of expecting a more cooperative world, while trying to keep people from joining your (private or illegible) communications is a sign of expecting a more adversarial world.

This made sense to me in a way that the dichotomy before it did not (the mapping between "you would/wouldn't understand" and "optimization"/"illegibility").

The basic argument for optimization over arbitrariness is that it creates growth and value while arbitrariness creates stagnation.

This makes it sound like the disagreement is over who gets to optimize/where optimization happens. In leaders/implementation or in rule makers/rules.

Sufficiently advanced cynicism is indistinguishable from malice and stupidity.

Stupidity is indistinguishable from lack of information?

it’s totally unnecessary to reject logic and justice in order to object to killing innocents. 

One need only say "life should be preserved." (See below.)

Not everything people call reason, logic, justice, or optimization, is in fact reasonable, logical, just, or optimal; so, a person needs some defenses against those claims of superiority. 

Logic does not provide the direction; it only tells you where a direction goes.

[This comment is no longer endorsed by its author]