
Comment author: Lumifer 16 March 2015 05:57:12PM 0 points

in the real world it's probably very hard to find (non-contrived) instances of pure satisficing or pure maximizing.

That's not true -- for example, in cases where the search costs for the full space are trivial, pure maximizing is very common.

In reality, people fall on a continuum from pure satisficers to pure maximizers

My objection is stronger. The behavior of optimizing for (gain - cost) does NOT lie on the continuum between satisficing and maximizing as defined in your post, primarily because those definitions have no concept of the cost of search.

Anna could be meaningfully described as a "cookie-maximizer"

Then define "maximizing" in a way that will let you call Anna a maximizer.

Comment author: wallowinmaya 17 March 2015 10:55:09AM *  4 points

That's not true -- for example, in cases where the search costs for the full space are trivial, pure maximizing is very common.

Ok, sure. I probably should have written that pure maximizing or satisficing is hard to find in important, complex and non-contrived instances. I had in mind such domains as career, ethics, romance, and so on. I think it's hard to find a pure maximizer or satisficer here.

My objection is stronger. The behavior of optimizing for (gain - cost) does NOT lie on the continuum between satisficing and maximizing as defined in your post, primarily because those definitions have no concept of the cost of search.

Sorry, I fear that I don't completely understand your point. Do you agree that there are individual differences in people, such that some people tend to search longer for a better solution and other people are more easily satisfied with their circumstances – be it their career, their love life or the world in general?

Maybe I should have tried an operationalized definition: Maximizers are people who get high scores on this maximization scale (page 1182) and satisficers are people who get low scores.

Comment author: thakil 16 March 2015 01:57:35PM 1 point

You seem to have made a convincing argument that most people are epistemic satisficers. I certainly am. But you don't seem to have made a compelling argument that such people are worse off than epistemic maximisers. I don't really see what benefits I would get from making an additional effort to truly identify my "terminal values". If I found myself dissatisfied with my current situation, that would be one thing, but if I were I would try to improve it under my satisficer behaviour anyway. What you are proposing is that someone with 40 utility should put in some effort, presumably incurring some disutility and perhaps dropping to 35 utility, to see if they might be able to achieve 60 utility.

I actually think this is a fundamentally bad approach to how humans think. Take obtaining a romantic life partner, something a lot of people value: if I took this approach, it wouldn't be incredibly difficult to identify flaws with my current romantic situation, and perhaps to think about whether I could achieve something better. At the end of this reasoning chain, I might determine that there is indeed someone better out there and take the plunge for the true romantic bliss I want. However, I might instead come to the conclusion that while my current partner and situation are not perfect, they are probably the best I can achieve given my circumstances. But this is terrible! I can hardly wipe my memory of the last week or so of thought in which I carefully examined the flaws in my relationship and situation; now all those flaws are going to fly into my mind, and may end up causing the end of a relationship which was actually the best I could achieve! This might sound like a very artificial reasoning pattern, but it's essentially the plot line of many a male protagonist in sitcoms and films who overthinks his relationship into unhappiness. Obviously if I have such behavioural patterns anyway then I may need to respond to them, but it doesn't seem like a good idea to encourage them where they don't currently exist!

I actually have similar thoughts about many who hold religious beliefs. While I am aware that I am far more likely to be correct about the universe than they are, those beliefs do most of their holders fairly little harm and actually a lot of good: they provide a ready-made supportive community. Examining those beliefs could well be very destructive to them, and provided the beliefs are not currently leading them towards destructive behaviours, I see no reason to encourage such examination.

Comment author: wallowinmaya 16 March 2015 06:23:38PM *  3 points

But you don't seem to have made a compelling argument that such people are worse off than epistemic maximisers.

If we just consider personal happiness, then I agree with you – it's probably even the case that epistemic satisficers are happier than epistemic maximizers. But many of us don't live for the sake of happiness alone. Furthermore, it's probably the case that epistemic maximizers are good for society as a whole. If every human had been an epistemic satisficer, we never would have discovered the scientific method or eradicated smallpox, for example.

Also, discovering and following your terminal values is good for you almost by definition, I would say, so either we are using terms differently or I'm misunderstanding you. Let's say one of your terminal values is to increase happiness and to reduce suffering. Because you are a Catholic, you think the best way to do this is to convert as many people to Catholicism as possible (because then they won't go to hell and will go to heaven). However, if Catholicism is false, then your method is wholly suboptimal, and it is in your interest to discover the truth – and being an epistemic maximizer (i.e. rational) would certainly help with this.

With regards to your romantic example, I also agree. Romantic satisficers are probably happier than romantic maximizers. Therefore I wrote in the introduction:

For example, Schwartz et al. (2002) found "negative correlations between maximization and happiness, optimism, self-esteem, and life satisfaction, and positive correlations between maximization and depression, perfectionism, and regret."

Again: in all those examples, we are only talking about your personal happiness. Satisficers are probably happier than maximizers, but they are less likely to reach their terminal values – if they value other things besides their own happiness, which many people do: many people wouldn't enter the experience machine, for example. But sure, if your only terminal value is your own happiness, then you should definitely try hard to become a satisficer in every domain.

Comment author: Lumifer 16 March 2015 02:59:47PM 5 points

Satisficing means selecting the first option that is good enough, i.e. that meets or exceeds a certain threshold of acceptability. In contrast, maximizing means the tendency to keep searching until the best possible option is found.

I see no mention of costs in these definitions.

Let's try a basic and, dare I say it, rational way of trying to achieve some outcome: you look for a better alternative until your estimate of costs for further search exceeds your estimate of the gains you would get from finding a superior option.

That's not satisficing because I don't take the first alternative that is good enough. That's also not maximizing, as I am not committed to searching for the global optimum.
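In code, such a stopping rule might look like the following minimal sketch (the names, and the idea of summarizing the expected gains from one more look as a function of the best utility found so far, are illustrative assumptions):

    # Neither satisficing nor maximizing: keep looking while the estimated
    # gain from examining one more option exceeds the estimated cost of doing so.
    def search_with_costs(options, utility, estimated_gain, search_cost):
        best_option, best_utility = None, float("-inf")
        for option in options:
            u = utility(option)
            if u > best_utility:
                best_option, best_utility = option, u
            if estimated_gain(best_utility) <= search_cost:
                break  # further search no longer pays for itself
        return best_option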

Comment author: wallowinmaya 16 March 2015 05:43:16PM *  3 points

Continuing my previous comment

That's not satisficing because I don't take the first alternative that is good enough. That's also not maximizing, as I am not committed to searching for the global optimum.

I agree: It's neither pure satisficing nor pure maximizing. Generally speaking, in the real world it's probably very hard to find (non-contrived) instances of pure satisficing or pure maximizing. In reality, people fall on a continuum from pure satisficers to pure maximizers (I did acknowledge this in footnotes 1 and 2, but I probably should have been clearer).

But I think it makes sense to assert that certain people exhibit more satisficer-characteristics and others exhibit more maximizer-characteristics. For example, imagine that Anna travels to 127 different countries and goes to over 2500 different cafes to find the best chocolate cookie. Anna could be meaningfully described as a "cookie-maximizer", even if she gave up after 10 years of cookie-searching without ever finding the best chocolate cookie on planet Earth. :)

Somewhat relatedly, someone might be a maximizer in a certain domain, but a satisficer in another. For example, I'm a satisficer when it comes to food and interior decoration, but (more of) a maximizer in other domains.


Comment author: wallowinmaya 16 March 2015 03:20:44PM *  3 points

I see no mention of costs in these definitions.

Let's try a basic and, dare I say it, rational way of trying to achieve some outcome: you look for a better alternative until your estimate of costs for further search exceeds your estimate of the gains you would get from finding a superior option.

Agree. Thus in footnote 3 I wrote:

[3] Rational maximizers take the value of information and opportunity costs into account.

Continuation of this comment

In Praise of Maximizing – With Some Caveats

20 wallowinmaya 15 March 2015 07:40PM

Most of you are probably familiar with the two contrasting decision-making strategies "maximizing" and "satisficing", but a short recap won't hurt (you can skip the first two paragraphs if you get bored): Satisficing means selecting the first option that is good enough, i.e. that meets or exceeds a certain threshold of acceptability. In contrast, maximizing means the tendency to keep searching until the best possible option is found.
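To make the contrast concrete, here is a minimal sketch of the two selection rules in Python (the names are my own illustrative choices; real-life decisions rarely arrive as a tidy list of options):

    # Satisficing: take the first option that meets the acceptability threshold.
    def satisfice(options, utility, threshold):
        for option in options:
            if utility(option) >= threshold:
                return option  # good enough -- stop searching
        return None  # no option met the threshold

    # Maximizing: examine every option and keep the best one found.
    def maximize(options, utility):
        best_option, best_utility = None, float("-inf")
        for option in options:
            u = utility(option)
            if u > best_utility:
                best_option, best_utility = option, u
        return best_option

Note that satisfice can stop after the first acceptable option, while maximize must inspect the whole space – which is exactly where search costs enter the picture (see footnote 3).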

Research indicates (Schwartz et al., 2002) that there are individual differences with regard to these two decision-making strategies. That is, some individuals – so-called ‘maximizers’ – tend to extensively search for the optimal solution. Other people – ‘satisficers’ – settle for good enough [1]. Satisficers, in contrast to maximizers, tend to accept the status quo and see no need to change their circumstances [2].

When the subject is raised, maximizing usually gets a bad rap. For example, Schwartz et al. (2002) found "negative correlations between maximization and happiness, optimism, self-esteem, and life satisfaction, and positive correlations between maximization and depression, perfectionism, and regret."

So should we all try to become satisficers? At least some scientists and the popular press seem to draw this conclusion:

Maximisers miss out on the psychological benefits of commitment, leaving them less satisfied than their more contented counterparts, the satisficers. ...Current research is trying to understand whether they can change. High-level maximisers certainly cause themselves a lot of grief.

I beg to differ. Satisficers may be more content with their lives, but most of us don't live for the sake of happiness alone. Of course, satisficing makes sense when not much is at stake [3]. However, maximizing can also prove beneficial, for the maximizers themselves and for the people around them, especially in the realm of knowledge, ethics and relationships, and when it comes to more existential issues – as I will argue below [4].

Belief systems and Epistemology

Ideal rationalists could be thought of as epistemic maximizers: They try to notice slight inconsistencies in their worldview, take ideas seriously, and guard against wishful thinking, compartmentalization, rationalizations, motivated reasoning, cognitive biases and other epistemic sins. Driven by curiosity, they don't try to confirm their prior beliefs, but wish to update them until they are maximally consistent and maximally correspondent with reality. To put it poetically, ideal rationalists as well as great scientists don't content themselves with wallowing in the mire of ignorance but are imbued with the Faustian yearning to ultimately understand whatever holds the world together in its inmost folds.

In contrast, consider the epistemic habits of the average Joe Christian: He will certainly profess that having true beliefs is important to him. But he doesn't go to great lengths to actually make this happen. For example, he probably believes in an omnipotent and benevolent being that created our universe. Did he impartially weigh all available evidence to reach this conclusion? Probably not. More likely, he merely shares the beliefs of his parents and his peers. However, isn't he bothered by the problem of evil or by Occam's razor? And what about all those other religions whose adherents believe in different doctrines with the same certainty?

Many people don’t have good answers to these questions. Their model of how the world works is neither very coherent nor accurate but it's comforting and good enough. They see little need to fill the epistemic gaps and inconsistencies in their worldview or to search for a better alternative. Thus, one could view them as epistemic satisficers. Of course, all of us exhibit this sort of epistemic laziness from time to time. In the words of Jonathan Haidt (2013):

We take a position, look for evidence that supports it, and if we find some evidence—enough so that our position “makes sense”—we stop thinking.

Usually, I try to avoid taking cheap shots at religion and therefore I want to note that similar points apply to many non-theistic belief systems.

Ethics

Let's go back to average Joe: he presumably obeys the dictates of the law and his religion and occasionally donates to (ineffective) charities. Joe probably thinks that he is a “good” person and many people would likely agree. This leads us to an interesting question: how do we typically judge the morality of our own actions?

Let's delve into the academic literature and see what it has to offer: In one exemplary study, Sachdeva et al. (2009) asked participants to write a story about themselves using either morally positive words (e.g. fair, nice) or morally negative words (e.g. selfish, mean). Afterwards, the participants were asked if and how much they would like to donate to a charity of their choice. The result: Participants who wrote a story containing the positive words donated only one fifth as much as those who wrote a story with negative words.

This effect is commonly referred to as moral licensing: People with a recently boosted moral self-concept feel like they have done enough and see no need to improve the world even further. Or, as McGonigal (2011) puts it (emphasis mine):

When it comes to right and wrong, most of us are not striving for moral perfection. We just want to feel good enough – which then gives us permission to do whatever we want.

Another well-known phenomenon is scope neglect. One explanation for scope neglect is the "purchase of moral satisfaction" proposed by Kahneman and Knetsch (1992): Most people don't try to do as much good as possible with their money; they spend just enough cash to create a "warm-fuzzy feeling" in themselves.

Phenomena like "moral licensing" and "purchase of moral satisfaction" indicate that it is all too human to act only as altruistically as is necessary to feel or seem good enough. This could be described as "ethical satisficing", because people just follow a course of action that meets or exceeds a certain threshold of moral goodness. They don't try to carry out the morally optimal action or an approximation thereof (as measured by their own axiology).

I think I cited enough academic papers in the last paragraphs, so let's get more speculative: Many, if not most people [5] tend to be intuitive deontologists [6]. Deontology basically posits that some actions are morally required and some actions are morally forbidden. As long as you perform the morally required ones and don't engage in morally wrong actions, you are off the hook. There is no need to do more, no need to perform supererogatory acts. Not neglecting your duties is good enough. In short, deontology could also be viewed as ethical satisficing (see footnote 7 for further elaboration).

In contrast, consider deontology's arch-enemy: utilitarianism. Almost all branches of utilitarianism share the same principal idea: that one should maximize something for as many entities as possible. Thus, utilitarianism could be thought of as ethical maximizing [8].

Effective altruists are an even better example of ethical maximizers because they actually try to identify and implement (or at least pretend to try) the most effective approaches to improving the world. Some conduct in-depth research and compare the effectiveness of hundreds of different charities to find the ones that save the most lives with as little money as possible. And rumor has it there are people who have even weirder ideas about how to ethically optimize literally everything. But more on this later.

Friendships and conversations

Humans intuitively assume that the desires and needs of other people are similar to their own. Consequently, I thought that everyone secretly yearns to find like-minded companions with whom one can talk about one’s biggest hopes as well as one’s greatest fears, and form deep, lasting friendships.

But experience tells me that I was probably wrong, at least to some degree: I have found it quite difficult to have these sorts of conversations with a certain kind of person, especially in groups (luckily, I’ve also found enough exceptions). It seems that some people are satisfied as long as their conversations meet a certain, not very high threshold of acceptability. Similar observations could be made about their friendships in general. One could call them social or conversational satisficers. By the way, this time research actually suggests that conversational maximizing is probably better for your happiness than small talk (Mehl et al., 2010).

Interestingly, what could be called "pluralistic superficiality" may account for many instances of small talk and superficial friendships: everyone experiences this atmosphere of boring triviality but thinks that the others seem to enjoy the conversations. So everyone is careful not to voice their yearning for a more profound conversation, not realizing that the others are suppressing similar desires.

Crucial Considerations and the Big Picture

On to the last section of this essay. It’s even more speculative and half-baked than the previous ones, but it may be the most interesting, so bear with me.

Research suggests that many people don’t even bother to search for answers to the big questions of existence. For example, in a representative sample of 603 Germans, 35% of the participants could be classified as existentially indifferent; that is, they neither think their lives are meaningful nor suffer from this lack of meaning (Schnell, 2010).

The existential thirst of the remaining 65% is presumably harder to satisfy, but how much harder? Many people don't invest much time or cognitive resources in order to ascertain their actual terminal values and how to optimally reach them – which is arguably of the utmost importance. Instead they appear to follow a mental checklist containing common life goals (one could call them "cached goals") such as a nice job, a romantic partner, a house and probably kids. I’m not saying that such goals are “bad” – I also prefer having a job to sleeping under a bridge and having a partner to being alone. But people usually acquire and pursue their (life) goals unsystematically and without much reflection, which makes it unlikely that such goals exhaustively reflect their idealized preferences. Unfortunately, many humans are so occupied by the pursuit of such goals that they are forced to abandon further contemplation of the big picture.

Furthermore, many of them lack the financial, intellectual or psychological capacities to ponder complex existential questions. I'm not blaming subsistence farmers in Bangladesh for not reading more about philosophy, rationality or the far future. But there are more than enough affluent, highly intelligent and inquisitive people who certainly would be able to reflect on crucial considerations. Instead, they spend most of their waking hours maximizing nothing but the money in their bank accounts or interpreting the poems of some Arab guy from the 7th century [9].

Generally, many people seem to take the current rules of our existence for granted and content themselves with the fundamental evils of the human condition such as aging, needless suffering or death. Whatever the reason may be, they don't try to radically change the rules of life, and their everyday behavior seems to indicate that they’ve (gladly?) accepted their current existence and the human condition in general. One could call them existential satisficers.

Contrast this with the mindset of transhumanism. Generally, transhumanists are not willing to accept the horrors of nature; they realize that human nature itself is deeply flawed. Thus, transhumanists want to fundamentally alter the human condition and aim to eradicate, for example, aging, unnecessary suffering and, ultimately, death. Through various technologies transhumanists desire to create a utopia for everyone. In this sense, transhumanism could be thought of as existential maximizing [10].

However, existential maximizing and transhumanism are not very popular. Quite the opposite: existential satisficing – accepting the seemingly unalterable human condition – has a long philosophical tradition. To give some examples: The otherwise admirable Stoics believed that the whole universe is pervaded and animated by divine reason; consequently, one should cultivate apatheia and calmly accept one's fate. Leibniz even argued that we already live in the best of all possible worlds. The mindset of existential satisficing can also be found in Epicureanism and arguably in Buddhism. Lastly, religions like Christianity and Islam are generally against transhumanism, partly because it amounts to “playing God” – which is understandable from their point of view, because why bother fundamentally transforming the human condition if everything will be perfect in heaven anyway?

One has to grant ancient philosophers that they couldn't even imagine that one day humanity would acquire the technological means to fundamentally alter the human condition. Thus it is no wonder that Epicurus argued that death is not to be feared or that the Stoics believed that disease or poverty are not really bad: It is all too human to invent rationalizations for the desirability of actually undesirable, but (seemingly) inevitable things – be it death or the human condition itself.

But many contemporary intellectuals can't be given the benefit of the doubt. They argue explicitly against trying to change the human condition. To name a few: Bernard Williams believed that death gives life meaning. Francis Fukuyama called transhumanism the world's most dangerous idea. And even Richard Dawkins thinks that the fear of death is "whining" and that the desire for immortality is "presumptuous" [11]:

Be thankful that you have a life, and forsake your vain and presumptuous desire for a second one.

With all that said, "run-of-the-mill" transhumanism arguably still doesn't go far enough. There are at least two problems I can see: 1) Without a benevolent superintelligent singleton, "Moloch" (to use Scott Alexander's excellent wording) will never be defeated. 2) We are still uncertain about ontology, decision theory, epistemology and our own terminal values. Consequently, we need some kind of process that can help us understand those things, or we will probably fail to rearrange reality until it conforms with our idealized preferences.

Therefore, it could be argued that the ultimate goal is the creation of a benevolent superintelligence or Friendly AI (FAI) whose values are aligned with ours. There are of course numerous objections to the whole superintelligence strategy in general and to FAI in particular, but I won’t go into detail here because this essay is already too long.

Nevertheless – however unlikely – it seems possible that with the help of a benevolent superintelligence we could abolish all gratuitous suffering and achieve an optimal mode of existence. We could become posthuman beings with god-like intellects, our ecstasy outshining the surrounding stars, transforming the universe until one happy day all wounds are healed, all despair dispelled and every (idealized) desire fulfilled. To many this seems like sentimental and wishful eschatological speculation, but for me it amounts to ultimate existential maximizing [12, 13].

Conclusion

The previous paragraphs shouldn’t fool one into believing that maximizing has no serious disadvantages. The desire to aim higher, become stronger and always behave in an optimally goal-tracking way can easily result in psychological overload and subsequent surrender. Furthermore, it seems that adopting the mindset of a maximizer increases the tendency to engage in upward social comparisons and counterfactual thinking, which research has shown contribute to depression.

Moreover, there is much to be learnt from Stoicism and satisficing in general: Life isn't always perfect and there are things one cannot change; one should accept one's shortcomings – if they are indeed unalterable – and make the best of one's circumstances. In conclusion, it is better to be a happy satisficer whose moderate productivity is sustainable than a stressed maximizer who burns out after one year. See also these two essays, which make similar points.

All that being said, I still favor maximizing over satisficing. If our ancestors had all been satisficers, we would still be picking lice off each other’s backs [14]. And only by means of existential maximizing can we hope to abolish the aforementioned existential evils and all needless suffering – even if the chances seem slim.

[Originally posted a longer, more personal version of this essay on my own blog]

Footnotes

[1] Obviously this is not a categorical classification, but a dimensional one.

[2] To put it more formally: The utility function of the ultimate satisficer would assign the same (positive) number to each possible world, i.e. the ultimate satisficer would be satisfied with every possible world. The fewer possible worlds you are satisfied with (i.e. the higher your threshold of acceptability), the fewer possible worlds exist between which you are indifferent, and the less of a satisficer and the more of a maximizer you are. Also note: Satisficing is not irrational in itself. Furthermore, I’m talking about the somewhat messy psychological characteristics and (revealed) preferences of human satisficers/maximizers. Read these posts if you want to know more about satisficing vs. maximizing with regard to AIs.
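One way to make this precise (a sketch in my own notation, not from the post): let $v(w)$ score each possible world $w$, and let $t$ be the threshold of acceptability. A threshold-$t$ satisficer then has the utility function

    $u_t(w) = 1$ if $v(w) \ge t$, and $u_t(w) = 0$ otherwise.

As $t$ rises, the indifference class $\{w : u_t(w) = 1\}$ shrinks: set $t$ low enough that every world clears it and you get the ultimate satisficer described above; in the limit where only the $v$-best world clears it, you get a pure maximizer.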

[3] Rational maximizers take the value of information and opportunity costs into account.

[4] Instead of "maximizer" I could also have used the term "optimizer".

[5] E.g. in the "Fat Man" version of the famous trolley dilemma, something like 90% of subjects don't push a fat man onto the track in order to save five other people. Also, utilitarians like Peter Singer don't exactly get rave reviews from most folks, although there is some conflicting research (Johansson-Stenman, 2012). Furthermore, the deontology vs. utilitarianism distinction itself is limited. See e.g. "The Righteous Mind" by Jonathan Haidt.

[6] Of course, most people are not strict deontologists. They are also intuitive virtue ethicists and care about the consequences of their actions.

[7] Admittedly, one could argue that certain versions of deontology are about maximally not violating certain rules and thus could be viewed as ethical maximizing. However, in the space of all possible moral actions there exist many actions between which a deontologist is indifferent, namely all those actions that exceed the threshold of moral acceptability (i.e. those actions that are not violating any deontological rule). To illustrate this with an example: Visiting a friend and comforting him for 4 hours, or using the same time to work and subsequently donating the earned money to a charity, are morally equivalent from the perspective of (many) deontological theories – as long as one doesn’t violate any deontological rule in the process. We can see that this parallels satisficing.

Contrast this with (classical) utilitarianism: In the space of all possible moral actions there is only one optimal moral action for a utilitarian, and all other actions are morally worse. An (ideal) utilitarian searches for and implements the optimal moral action (or tries to approximate it, because in real life one is basically never able to identify, let alone carry out, the optimal moral action). This amounts to maximizing. Interestingly, this inherent demandingness has often been put forward as a critique of utilitarianism (and other sorts of consequentialism), and satisficing consequentialism has been proposed as a solution (Slote, 1984) – further evidence for the claim that maximizing is generally viewed with suspicion.

[8] The obligatory word of caution here: following utilitarianism to the letter can be self-defeating if done in a naive way.

[9] Nick Bostrom (2014) expresses this point somewhat harshly:

A colleague of mine likes to point out that a Fields Medal (the highest honor in mathematics) indicates two things about the recipient: that he was capable of accomplishing something important, and that he didn't.

As a general point: Too many people end up as money-, academia-, career- or status-maximizers although those things often don’t reflect their (idealized) preferences.

[10] Of course there are lots of utopian movements like socialism, communism or the Zeitgeist movement. But all those movements make the fundamental mistake of ignoring, or at least heavily underestimating, the importance of human nature. Creating utopia merely through social means is impossible because most of us are, by our very nature, too selfish, status-obsessed and hypocritical, and cultural indoctrination can hardly change this. To deny this is to misunderstand the process of natural selection and evolutionary psychology. Secondly, even if a socialist utopia were to come true, there would still exist unrequited love, disease, depression and of course death. To abolish those things one has to radically transform the human condition itself.

[11] Here is another quote:

We are going to die, and that makes us the lucky ones. Most people are never going to die because they are never going to be born. […] We privileged few, who won the lottery of birth against all odds, how dare we whine at our inevitable return to that prior state from which the vast majority have never stirred?

― Richard Dawkins in "Unweaving the Rainbow"

[12] It’s probably no coincidence that Yudkowsky named his blog "Optimize Literally Everything" which adequately encapsulates the sentiment I tried to express here.

[13] To those interested in, or skeptical of, the prospect of superintelligent AI, I recommend "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom.

[14] I stole this line from Bostrom’s “In Defense of Posthuman Dignity”.

References

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.

Haidt, J. (2013). The righteous mind: Why good people are divided by politics and religion. Random House LLC.

Johansson-Stenman, O. (2012). Are most people consequentialists? Economics Letters, 115 (2), 225-228.

Kahneman, D., & Knetsch, J. L. (1992). Valuing public goods: The purchase of moral satisfaction. Journal of Environmental Economics and Management, 22(1), 57-70.

McGonigal, K. (2011). The Willpower Instinct: How Self-Control Works, Why It Matters, and What You Can Do to Get More of It. Penguin.

Mehl, M. R., Vazire, S., Holleran, S. E., & Clark, C. S. (2010). Eavesdropping on happiness: Well-being is related to having less small talk and more substantive conversations. Psychological Science, 21(4), 539-541.

Sachdeva, S., Iliev, R., & Medin, D. L. (2009). Sinning saints and saintly sinners: The paradox of moral self-regulation. Psychological Science, 20(4), 523-528.

Schnell, T. (2010). Existential indifference: Another quality of meaning in life. Journal of Humanistic Psychology, 50(3), 351-373.

Schwartz, B. (2000). Self determination: The tyranny of freedom. American Psychologist, 55, 79–88.

Schwartz, B., Ward, A., Monterosso, J., Lyubomirsky, S., White, K., & Lehman, D. R. (2002). Maximizing versus satisficing: Happiness is a matter of choice. Journal of Personality and Social Psychology, 83(5), 1178.

Slote, M. (1984). Satisficing consequentialism. Proceedings of the Aristotelian Society, 58, 139–163.

Comment author: robertzk 15 March 2015 06:54:30AM 2 points

Did you remove the vilification of proving arcane theorems in algebraic number theory because the LessWrong audience is more likely to fall within this demographic? (I used to be very excited about proving arcane theorems in algebraic number theory, and fully agree with you.)

Comment author: wallowinmaya 15 March 2015 08:33:47AM *  2 points

You've got me there :)

Comment author: imuli 14 March 2015 08:53:59PM 3 points

But what does one maximize?

We cannot maximize more than one thing (except in trivial cases). It's not too hard to call the thing that we want to maximize our utility, and the balance of priorities and desires our utility function. I imagine that most of the components of that function are subject to diminishing returns, and such components I would satisfice. So I understand this whole thing as saying that these things have the potential for unbounded linear or superlinear utility?

  • epistemic rationality
  • ethics
  • social interaction
  • existence

I'm not sure if I'm confused.

Comment author: wallowinmaya 14 March 2015 10:03:14PM *  3 points

But what does one maximize?

Expected utility :)

We cannot maximize more than one thing (except in trivial cases).

I guess I have to disagree. Sure, at any given moment you can maximize only one thing, but this is simply not true for larger time horizons. Let's illustrate this with a typical day of Imaginary John: He wakes up and goes to work at an investment bank to earn money (money-maximizing), in order to donate it later to GiveWell (ethical maximizing). Later at night he goes on OkCupid or to a party to find his true soulmate (romantic maximizing). He maximized three different things in just one day. But I agree that there are always trade-offs: John could have worked all day instead of going to the party.

I imagine that most of the components of that function are subject to diminishing returns, and such components I would satisfice. So I understand this whole thing as saying that these things have the potential for unbounded linear or superlinear utility?

I think that many components of my utility function are not subject to diminishing returns. Let's use your first example, "epistemic rationality". Epistemic rationality is basically about acquiring true beliefs or new (true) information. But sometimes learning new information can radically change your whole life and thus is not subject to diminishing marginal returns. To use an example: Let's imagine you are a consequentialist and donate to charities that help blind people in the USA. Then you learn about effective altruism and cost-effectiveness and decide to donate to the most effective charities. Reading such arguments has just increased your positive impact on the world a hundredfold! (Btw, Bostrom uses the term "crucial consideration" for exactly such things.) One could make the same argument for, say, AGI programmers reading "Superintelligence" for the first time. Given that they understand this new information, it will probably be more useful to them than most of the stuff they've learnt before.

On to the next issue – ethics: Let's say one value of mine is to reduce suffering (what could be called non-suffering maximizing). This value is also not subject to diminishing marginal returns. For example, imagine 10,000 people getting tortured (sorry). Saving the first 100 people from getting tortured is as valuable to me as saving the last 100 people.
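To put the two cases in symbols (a sketch; the functional forms are illustrative assumptions, not claims from your comment): a component with diminishing returns has concave utility, e.g.

    $u(x) = \log x$, with marginal utility $u'(x) = 1/x \to 0$,

so beyond some point one more unit of $x$ is no longer worth a fixed search cost, and satisficing that component is reasonable. The torture example is linear instead: $u(n) = c \cdot n$ for $n$ people saved, so the marginal value $u'(n) = c$ is constant and the first 100 rescues are worth exactly as much as the last 100.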

Admittedly, with regards to social interactions there is probably an upper bound somewhere. But this upper bound is probably much higher than most seem to assume. Also, it occurred to me that one has to distinguish between the quality and the quantity of one's social interactions. The quality of one's social interactions is unlikely to be subject to diminishing marginal returns any time soon. However, the quantity of social interactions definitely is subject to diminishing marginal returns (see e.g. Dunbar's number).

Btw, "attention" is another resource that actually has increasing marginal returns (I've stolen this example from Valentine Smith who used it in a CFAR workshop).

But I agree that unbounded utility functions can be problematic (but bounded ones can be, too). However, satisficing might not help you with this.

Comment author: Evan_Gaensbauer 11 March 2015 12:38:04AM 1 point

Here are my thoughts having just read the summary above, not the whole essay yet.

They take the fundamental rules of existence and the human condition (the “existential status quo”) as a given and don’t try to change it.

This sentence confused me. I think it could be fixed with some examples of what would constitute an instance of challenging the "existential status quo" in action. The first example I was thinking of would be ending death or aging, except you've already got transhumanists in there.

Other examples might include:

  • mitigating existential risks
  • suggesting and working on civilization as a whole reaching a new level, such as colonizing other planets and solar systems
  • trying to implement better design for the fundamental functions of ubiquitous institutions, such as medicine, science, or law

Again, I'm just giving quick feedback. Hopefully you've already given more detail in the essay. Other than that, your summary seems fine to me.

Comment author: wallowinmaya 12 March 2015 05:35:33PM 0 points

Again, I'm just giving quick feedback. Hopefully you've already given more detail in the essay. Other than that, your summary seems fine to me.

Thanks! And yeah, ending aging and death are some of the examples I gave in the complete essay.

Comment author: wallowinmaya 09 March 2015 09:57:22AM *  3 points

I wrote an essay about the advantages (and disadvantages) of maximizing over satisficing but I’m a bit unsure about its quality, that’s why I would like to ask for feedback here before I post it on LessWrong.

Here’s a short summary:

According to research, there are so-called “maximizers” who tend to extensively search for the optimal solution. Other people — “satisficers” — settle for good enough and tend to accept the status quo. One can apply this distinction to many areas:

Epistemology/Belief systems: Some people – one could describe them as epistemic maximizers – try to update their beliefs until they are maximally coherent and maximally consistent with the available data. Other people, epistemic satisficers, are not as curious and are content with their belief system, even if it has serious flaws and is not particularly coherent or accurate. They don’t go to great lengths to search for a better alternative because their current belief system is good enough for them.

Ethics: Many people are only as altruistic as is necessary to feel good enough; phenomena like “moral licensing” and “purchasing of moral satisfaction” are evidence in favor of this. One could describe this as ethical satisficing. But there are also people who extensively search for the best moral action, i.e. for the action that does the most good (with regards to their axiology). Effective altruists are a good example of this type of ethical maximizing.

Social realm/relationships: This point is pretty obvious.

Existential/big-picture questions: I’m less sure about this point, but it seems like one could apply the distinction here as well. Some people wonder a lot about the big picture and spend a lot of time reflecting on their terminal values and how to reach them in an optimal way. Nick Bostrom would be a good example of the type of person I have in mind here, and of what could be called “existential maximizing”. In contrast, other people, not necessarily less intelligent or curious, don’t spend much time thinking about such crucial considerations. They take the fundamental rules of existence and the human condition (the “existential status quo”) as a given and don’t try to change it. Relatedly, transhumanists could also be thought of as existential maximizers in the sense that they are not satisfied with the human condition and try to change it – and maybe ultimately reach an “optimal mode of existence”.

What is “better”? Well, research shows that satisficers are happier and more easygoing. Maximizers tend to be more depressed and “picky”. They can also be quite arrogant and annoying. On the other hand, maximizers are more curious and always try hard to improve their lives – and the lives of other people, which is nice.

I would really love to get some feedback on it.

Comment author: wallowinmaya 25 February 2015 11:04:38AM *  0 points

Great post. Some cases of "attempted telekinesis" seem to be similar to "shoulding at the universe".

To stay with your example: I can easily imagine that if I had been in your place and experienced this stressful situation with CFAR, my system 1 would have become emotionally upset and "shoulded" at the universe: "I shouldn't have to do this alone. Someone should help me. It is so unfair that I have so much responsibility."

This is similar to attempted telekinesis in the sense that my system 1 somehow thinks that just by becoming emotionally upset it will magic someone (or the universe itself) into helping me and improving my situation.

Shoulding at the universe is also a paradigmatic example of a wasted motion. Realizing this helped me a lot because I used to should at the universe all the time ("I shouldn't have to learn useless stuff for university because I don't have enough time to do important work."; "This guy shouldn't be so irrational and strawman my arguments"; etc. etc.)
