
[Link] Decision Theory and the Irrelevance of Impossible Outcomes

2 wallowinmaya 28 January 2017 10:16AM

[Link] Why Altruists Should Focus on Artificial Intelligence

1 wallowinmaya 16 December 2016 11:48AM
In response to Seven Apocalypses
Comment author: wallowinmaya 29 September 2016 04:30:29PM *  3 points [-]

I don't understand why you exclude risks of astronomical suffering ("hell apocalypses").

Below you claim that those risks are "Pascalian", but this seems wrong.

[Link] How the Simulation Argument Dampens Future Fanaticism

6 wallowinmaya 09 September 2016 01:17PM

Very comprehensive analysis by Brian Tomasik on whether (and to what extent) the simulation argument should change our altruistic priorities. He concludes that the possibility of ancestor simulations somewhat increases the comparative importance of short-term helping relative to focusing on shaping the "far future".

Another important takeaway: 

[...] rather than answering the question “Do I live in a simulation or not?,” a perhaps better way to think about it (in line with Stuart Armstrong's anthropic decision theory) is “Given that I’m deciding for all subjectively indistinguishable copies of myself, what fraction of my copies lives in a simulation and how many total copies are there?”

 

[Link] Suffering-focused AI safety: Why “fail-safe” measures might be particularly promising

9 wallowinmaya 21 July 2016 08:22PM

The Foundational Research Institute just published a new paper: "Suffering-focused AI safety: Why “fail-safe” measures might be our top intervention". 

It is important to consider that [AI outcomes] can go wrong to very different degrees. For value systems that place primary importance on the prevention of suffering, this aspect is crucial: the best way to avoid bad-case scenarios specifically may not be to try and get everything right. Instead, it makes sense to focus on the worst outcomes (in terms of the suffering they would contain) and on tractable methods to avert them. As others are trying to shoot for a best-case outcome (and hopefully they will succeed!), it is important that some people also take care of addressing the biggest risks. This perspective to AI safety is especially promising both because it is currently neglected and because it is easier to avoid a subset of outcomes rather than to shoot for one highly specific outcome. Finally, it is something that people with many different value systems could get behind.

Comment author: wallowinmaya 24 October 2015 09:39:46AM *  0 points [-]

Cool that you are doing this!

Is there also a facebook event?

Comment author: Lumifer 16 March 2015 05:57:12PM 0 points [-]

in the real world it's probably very hard to find (non-contrived) instances of pure satisficing or pure maximizing.

That's not true -- for example, in cases where the search costs for the full space are trivial, pure maximizing is very common.

In reality, people fall on a continuum from pure satisficers to pure maximizers

My objection is stronger. The behavior of optimizing for (gain - cost) does NOT lie on the continuum between satisficing and maximizing as defined in your post, primarily because they have no concept of the cost of search.

Anna could be meaningfully described as a "cookie-maximizer"

Then define "maximizing" in a way that will let you call Anna a maximizer.

Comment author: wallowinmaya 17 March 2015 10:55:09AM *  4 points [-]

That's not true -- for example, in cases where the search costs for the full space are trivial, pure maximizing is very common.

Ok, sure. I probably should have written that pure maximizing or satisficing is hard to find in important, complex and non-contrived instances. I had in mind such domains as career, ethics, romance, and so on. I think it's hard to find a pure maximizer or satisficer here.

My objection is stronger. The behavior of optimizing for (gain - cost) does NOT lie on the continuum between satisficing and maximizing as defined in your post, primarily because they have no concept of the cost of search.

Sorry, I fear that I don't completely understand your point. Do you agree that there are individual differences in people, such that some people tend to search longer for a better solution and other people are more easily satisfied with their circumstances – be it their career, their love life or the world in general?

Maybe I should have used an operationalized definition: maximizers are people who get high scores on this maximization scale (page 1182), and satisficers are people who get low scores.
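For illustration, a minimal sketch of such an operationalization; the items, the 7-point scale, and the cutoffs below are invented assumptions, not the actual Schwartz et al. instrument:

    def classify(ratings, scale_max=7):
        """Average a respondent's 1-to-scale_max agreement ratings on
        maximization items and label the extremes of the continuum."""
        score = sum(ratings) / len(ratings)
        if score >= 0.75 * scale_max:
            return score, "maximizer"
        if score <= 0.25 * scale_max:
            return score, "satisficer"
        return score, "somewhere in between"

    print(classify([7, 6, 7, 5]))  # (6.25, 'maximizer')
    print(classify([1, 2, 1, 2]))  # (1.5, 'satisficer')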

Comment author: thakil 16 March 2015 01:57:35PM 1 point [-]

You seem to have made a convincing argument that most people are epistemic satisficers. I certainly am. But you don't seem to have made a compelling argument that such people are worse off than epistemic maximisers. I don't really see what benefits I would get from making an additional effort to truly identify my "terminal values". If I found myself dissatisfied with my current situation, that would be one thing, but in that case I would try to improve it under my satisficer behaviour anyway. What you are proposing is that someone with 40 utility should put in some effort, presumably incurring some disutility from doing so, perhaps dropping to 35 utility, to see whether they might be able to achieve 60 utility.
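To make that trade-off concrete, here is a rough back-of-the-envelope sketch using the numbers above; the success probability is a made-up assumption, not something from the post:

    current = 40      # utility of the status quo
    search_cost = 5   # disutility of putting in the extra effort
    upside = 60       # utility if a better option is actually found

    # Expected utility of searching, given a probability p of success
    def expected_if_search(p):
        return p * upside + (1 - p) * current - search_cost

    # Break-even: p * 60 + (1 - p) * 40 - 5 = 40  =>  p = 5 / 20 = 0.25
    break_even = search_cost / (upside - current)
    print(break_even)               # 0.25
    print(expected_if_search(0.5))  # 45.0 -- worth it if success is this likely

Note that this simple model assumes the 40 survives a failed search; the next paragraph argues that the search itself can damage the status quo, which pushes the break-even probability even higher.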

I actually think this is a fundamentally bad approach to how humans think. Take obtaining a romantic life partner, something a lot of people value: if I took this approach, it wouldn't be incredibly difficult to identify flaws in my current romantic situation and to think about whether I could achieve something better. At the end of this reasoning chain, I might determine that there is indeed someone better out there and take the plunge for the true romantic bliss I want. However, I might instead come to the conclusion that while my current partner and situation are not perfect, they're probably the best I can achieve given my circumstances. But this is terrible! I can hardly wipe my memory of the last week or so of thought in which I carefully examined the flaws in my relationship and situation; now all those flaws are going to keep flying into my mind, and may end up causing the end of a relationship which was actually the best I could achieve! This might sound like a very artificial reasoning pattern, but it's essentially the plot line of many a male protagonist in sitcoms and films who overthinks his relationship into unhappiness. Obviously, if I have such behavioural patterns anyway then I may need to respond to them, but it doesn't seem like a good idea to encourage them where they don't currently exist!

I actually have similar thoughts about many who hold religious beliefs. While I am aware that I am far more likely to be correct about the universe than they are, those beliefs do those who hold them fairly little harm and actually a lot of good: they provide a ready-made supportive community. Examining those beliefs could well be very destructive, and provided the beliefs are not currently leading to destructive behaviours, I see no reason to encourage people to do otherwise.

Comment author: wallowinmaya 16 March 2015 06:23:38PM *  3 points [-]

But you don't seem to have made a compelling argument that such people are worse off than epistemic maximisers.

If we just consider personal happiness, then I agree with you – it's probably even the case that epistemic satisficers are happier than epistemic maximizers. But many of us don't live for the sake of happiness alone. Furthermore, it's probably the case that epistemic maximizers are good for society as a whole: if every human had been an epistemic satisficer, we never would have discovered the scientific method or eradicated smallpox, for example.

Also, discovering and following your terminal values is good for you almost by definition, I would say, so either we are using terms differently or I'm misunderstanding you. Let's say one of your terminal values is to increase happiness and to reduce suffering. Because you are a Catholic, you think the best way to do this is to convert as many people to Catholicism as possible (because then they won't go to hell and will go to heaven). However, if Catholicism is false, then your method is wholly suboptimal and it is in your interest to discover the truth; being an epistemic maximizer (i.e. being rational) would certainly help with this.

With regard to your romantic example, I also agree: romantic satisficers are probably happier than romantic maximizers. That's why I wrote in the introduction:

For example, Schwartz et al. (2002) found "negative correlations between maximization and happiness, optimism, self-esteem, and life satisfaction, and positive correlations between maximization and depression, perfectionism, and regret."

Again: in all those examples, we are only talking about personal happiness. Satisficers are probably happier than maximizers, but they are less likely to reach their terminal values – if they value other things besides their own happiness, which many people do: many people wouldn't enter the experience machine, for example. But sure, if your only terminal value is your own happiness, then you should definitely try hard to become a satisficer in every domain.

Comment author: Lumifer 16 March 2015 02:59:47PM 5 points [-]

Satisficing means selecting the first option that is good enough, i.e. that meets or exceeds a certain threshold of acceptability. In contrast, maximizing means the tendency to keep searching until the best possible option is found.

I see no mention of costs in these definitions.

Let's try a basic and, dare I say it, rational way of trying to achieve some outcome: you look for a better alternative until your estimate of costs for further search exceeds your estimate of the gains you would get from finding a superior option.

That's not satisficing because I don't take the first option that is good enough. That's also not maximizing, as I am not committed to searching for the global optimum.
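A minimal sketch of this stopping rule; the sampling process, the gain estimate, and the numbers are illustrative assumptions only:

    import random

    def search_with_costs(sample_option, estimated_gain, search_cost, max_steps=1000):
        """Keep drawing options while the estimated gain from one more
        draw exceeds its cost; then return the best option found so far."""
        best = sample_option()
        for _ in range(max_steps):
            if estimated_gain(best) <= search_cost:
                break  # further search no longer pays for itself
            best = max(best, sample_option())
        return best

    # Example: options are uniform on [0, 1]; one more draw improves on the
    # current best by (1 - best)^2 / 2 in expectation.
    best_found = search_with_costs(
        sample_option=random.random,
        estimated_gain=lambda best: (1 - best) ** 2 / 2,
        search_cost=0.01,
    )
    print(round(best_found, 3))  # typically around 0.86 or higher

As the comment says, such an agent is neither a pure satisficer (there is no fixed acceptability threshold) nor a pure maximizer (it stops well before exhausting the search space).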

Comment author: wallowinmaya 16 March 2015 05:43:16PM *  3 points [-]

Continuing my previous comment

That's not satisficing because I don't take the first option that is good enough. That's also not maximizing, as I am not committed to searching for the global optimum.

I agree: It's neither pure satisficing nor pure maximizing. Generally speaking, in the real world it's probably very hard to find (non-contrived) instances of pure satisficing or pure maximizing. In reality, people fall on a continuum from pure satisficers to pure maximizers (I did acknowledge this in footnotes 1 and 2, but I probably should have been clearer).

But I think it makes sense to assert that certain people exhibit more satisficer-characteristics and others exhibit more maximizer-characteristics. For example, imagine that Anna travels to 127 different countries and goes to over 2500 different cafes to find the best chocolate cookie. Anna could be meaningfully described as a "cookie-maximizer", even if she gave up after 10 years of cookie-searching without ever finding the best chocolate cookie on planet Earth. :)

Somewhat relatedly, someone might be a maximizer in a certain domain, but a satisficer in another. I, for example, am a satisficer when it comes to food and interior decoration, but (more of) a maximizer in other domains.

Comment author: wallowinmaya 16 March 2015 03:20:44PM *  3 points [-]

I see no mention of costs in these definitions.

Let's try a basic and, dare I say it, rational way of trying to achieve some outcome: you look for a better alternative until your estimate of costs for further search exceeds your estimate of the gains you would get from finding a superior option.

Agree. Thus in footnote 3 I wrote:

[3] Rational maximizers take the value of information and opportunity costs into account.

Continuation of this comment
