Comment author: imuli 14 March 2015 08:53:59PM 3 points [-]

But what does one maximize?

We cannot maximize more than one thing (except in trivial cases). It's not too hard to call the thing that we want to maximize our utility, and the balance of priorities and desires our utility function. I imagine that most of the components of that function are subject to diminishing returns, and such components I would satisfice. So I understand this whole thing as saying that these things have the potential for unbounded linear or superlinear utility?

  • epistemic rationality
  • ethics
  • social interaction
  • existence

I'm not sure if I'm confused.

Comment author: wallowinmaya 14 March 2015 10:03:14PM *  3 points [-]

But what does one maximize?

Expected utility :)

We cannot maximize more than one thing (except in trivial cases).

I guess I have to disagree. Sure, in any given moment you can maximize only one thing, but this is simply not true for larger time horizons. Let's illustrate this with a typical day of Imaginary John: He wakes up and goes to work at an investment bank to earn money (money maximizing), which he later donates to GiveWell (ethical maximizing). At night he goes on OKCupid or to a party to find his true soulmate (romantic maximizing). He maximized three different things in just one day. But I agree that there are always trade-offs: John could have worked all day instead of going to the party.
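One way to square the two views: John's day can be modeled as maximizing a single aggregate quantity (expected utility) whose components trade off against each other. A minimal sketch, where the component functions and all numbers are invented for illustration:

```python
import math

# Toy utility components for one day (hours -> utils).
# The concave log shape models diminishing returns within each pursuit;
# both functions and their weights are assumptions, not measured values.
def u_money(hours):
    return 10 * math.log1p(hours)   # earning to give

def u_social(hours):
    return 8 * math.log1p(hours)    # party / finding a soulmate

# Brute-force the best split of 16 waking hours between the two pursuits.
best = max(
    ((w, 16 - w) for w in range(17)),
    key=lambda split: u_money(split[0]) + u_social(split[1]),
)
print(best)  # -> (9, 7): the split that maximizes the single aggregate
```

Under this framing, the "three different things" John maximized are just components of one objective, and the work-versus-party trade-off falls out of the optimization.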

I imagine that most of the components of that function are subject to diminishing returns, and such components I would satisfice. So I understand this whole thing as saying that these things have the potential for unbounded linear or superlinear utility?

I think that many components of my utility function are not subject to diminishing returns. Let's use your first example, "epistemic rationality". Epistemic rationality is basically about acquiring true beliefs or new (true) information. But sometimes learning new information can radically change your whole life and thus is not subject to diminishing marginal returns. To use an example: Let's imagine you are a consequentialist and donate to charities to help blind people in the USA. Then you learn about effective altruism and cost-effectiveness and decide to donate to the most effective charities. Reading such arguments has just increased your positive impact on the world by a hundredfold! (Btw, Bostrom uses the term "crucial consideration" exactly for such things.) One could make the same argument for, say, AGI programmers reading "Superintelligence" for the first time. Given that they understand this new information, it will probably be more useful to them than most of the stuff they've learnt before.

On to the next issue – Ethics: Let's say one value of mine is to reduce suffering (what could be called non-suffering maximizing). This value is also not subject to diminishing marginal returns. For example, imagine 10,000 people getting tortured (sorry). Saving the first 100 people from getting tortured is as valuable to me as saving the last 100 people.

Admittedly, with regards to social interactions there is probably an upper bound somewhere. But this upper bound is probably much higher than most seem to assume. Also, it occurred to me that one has to distinguish between the quality and the quantity of one's social interactions. The quality of one's social interactions is unlikely to be subject to diminishing marginal returns any time soon. However, the quantity of social interactions definitely is subject to diminishing marginal returns (see e.g. Dunbar's number).

Btw, "attention" is another resource that actually has increasing marginal returns (I've stolen this example from Valentine Smith who used it in a CFAR workshop).
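The three shapes of marginal returns that come up in this thread (diminishing for the quantity of social contacts, roughly linear for suffering reduction, increasing for attention) can be sketched as toy curves; the functional forms below are assumptions for illustration, not claims about anyone's actual utility function:

```python
import math

# Purely illustrative utility curves for the three shapes of marginal
# returns discussed above (all functional forms are assumptions):
def diminishing(x):   # e.g. quantity of social contacts
    return math.log1p(x)

def linear(x):        # e.g. number of people saved from torture
    return x

def increasing(x):    # e.g. "attention", with increasing returns
    return x ** 2

# Marginal value of one extra unit, early (10 -> 11) vs late (100 -> 101):
for f in (diminishing, linear, increasing):
    print(f.__name__, f(11) - f(10), f(101) - f(100))
# The diminishing curve's marginal value shrinks, the linear one stays
# constant, and the increasing one grows.
```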

But I agree that unbounded utility functions can be problematic (though bounded ones can be, too). However, satisficing might not help you with this.

Comment author: Evan_Gaensbauer 11 March 2015 12:38:04AM 1 point [-]

Here are my thoughts having just read the summary above, not the whole essay yet.

They take the fundamental rules of existence and the human condition (the “existential status quo”) as a given and don’t try to change it.

This sentence confused me. I think it could be fixed with some examples of what would constitute an instance of challenging the "existential status quo" in action. The first example I was thinking of would be ending death or aging, except you've already got transhumanists in there.

Other examples might include:

  • mitigating existential risks
  • suggesting and working on civilization as a whole reaching a new level, such as colonizing other planets and solar systems
  • trying to implement better designs for the fundamental functions of ubiquitous institutions, such as medicine, science, or law

Again, I'm just giving quick feedback. Hopefully you've already given more detail in the essay. Other than that, your summary seems fine to me.

Comment author: wallowinmaya 12 March 2015 05:35:33PM 0 points [-]

Again, I'm just giving quick feedback. Hopefully you've already given more detail in the essay. Other than that, your summary seems fine to me.

Thanks! And yeah, ending aging and death are some of the examples I gave in the complete essay.

Comment author: wallowinmaya 09 March 2015 09:57:22AM *  3 points [-]

I wrote an essay about the advantages (and disadvantages) of maximizing over satisficing but I’m a bit unsure about its quality, that’s why I would like to ask for feedback here before I post it on LessWrong.

Here’s a short summary:

According to research there are so-called “maximizers” who tend to search extensively for the optimal solution. Other people — “satisficers” — settle for good enough and tend to accept the status quo. One can apply this distinction to many areas:

Epistemology/Belief systems: Some people, one could describe them as epistemic maximizers, try to update their beliefs until they are maximally coherent and maximally consistent with the available data. Other people, epistemic satisficers, are not as curious and are content with their belief system, even if it has serious flaws and is not particularly coherent or accurate. But they don’t go to great lengths to search for a better alternative because their current belief system is good enough for them.

Ethics: Many people are as altruistic as is necessary to feel good enough; phenomena like “moral licensing” and “purchasing of moral satisfaction” are evidence in favor of this. One could describe this as ethical satisficing. But there are also people who search extensively for the best moral action, i.e. for the action that does the most good (with regard to their axiology). Effective altruists are a good example of this type of ethical maximizing.

Social realm/relationships: This point is pretty obvious.

Existential/big picture questions: I’m less sure about this point, but it seems like the distinction also applies here. Some people wonder a lot about the big picture and spend a lot of time reflecting on their terminal values and how to reach them in an optimal way. Nick Bostrom would be a good example of the type of person I have in mind here and of what could be called “existential maximizing”. In contrast, other people, not necessarily less intelligent or curious, don’t spend much time thinking about such crucial considerations. They take the fundamental rules of existence and the human condition (the “existential status quo”) as a given and don’t try to change them. Relatedly, transhumanists could also be thought of as existential maximizers in the sense that they are not satisfied with the human condition and try to change it – and maybe ultimately reach an “optimal mode of existence”.

What is “better”? Well, research shows that satisficers are happier and more easygoing. Maximizers tend to be more depressed and “picky”. They can also be quite arrogant and annoying. On the other hand, maximizers are more curious and always try hard to improve their life – and the lives of other people, which is nice.

I would really love to get some feedback on it.

Comment author: wallowinmaya 25 February 2015 11:04:38AM *  0 points [-]

Great post. Some cases of "attempted telekinesis" seem to be similar to "shoulding at the universe".

To stay with your example: I can easily imagine that if I were in your place and experienced this stressful situation with CFAR, my system 1 would have become emotionally upset and "shoulded" at the universe: "I shouldn't have to do this alone. Someone should help me. It is so unfair that I have so much responsibility."

This is similar to attempted telekinesis in the sense that my system 1 somehow thinks that just by becoming emotionally upset it will magic someone (or the universe itself) into helping me and improving my situation.

Shoulding at the universe is also a paradigmatic example of a wasted motion. Realizing this helped me a lot because I used to should at the universe all the time ("I shouldn't have to learn useless stuff for university because I don't have enough time to do important work."; "This guy shouldn't be so irrational and strawman my arguments"; etc. etc.)

Comment author: wallowinmaya 24 October 2014 12:54:33PM 5 points [-]

Two words: Interindividual differences.

They also recommend 8–9 hours of sleep. Some people need more, some people need less. The same point applies to many different phenomena.

Comment author: wallowinmaya 13 August 2014 01:26:24PM 0 points [-]

I think Bostrom puts it nicely in his new book "Superintelligence":

A colleague of mine likes to point out that a Fields Medal (the highest honor in mathematics) indicates two things about the recipient: that he was capable of accomplishing something important, and that he didn't.

Comment author: wallowinmaya 05 August 2014 06:27:46PM 4 points [-]

I translated the essay Superintelligence and the paper In Defense of Posthuman Dignity by Nick Bostrom into German in order to publish them on the blog of GBS Schweiz.

He thanked me by sending me a signed copy of his new book "Superintelligence". Which made me pretty happy.

Comment author: stared 23 March 2014 11:24:42AM 0 points [-]

This link does not work for me (it redirects to my event list). I am not sure if it is because of privacy settings or anything else? In any case: what is its full name as it appears on FB?

Comment author: wallowinmaya 23 March 2014 07:45:01PM 1 point [-]

I changed the privacy settings. Link should work now.

Comment author: stared 22 March 2014 07:43:46PM *  1 point [-]

I am interested, but not sure if I will be available on that date (40%?). Do you want to create a FB event? From my experience as an organizer of various events, it works well for seeing who is interested; of course it will attract more newbies and casual readers (I am in this category: to date, my most serious interaction with LW is this post: http://stats.stackexchange.com/questions/28067/entropy-based-refutation-of-shalizis-bayesian-backward-arrow-of-time-paradox/28634#28634).

Comment author: wallowinmaya 22 March 2014 10:36:26PM 1 point [-]
Comment author: jkadlubo 22 March 2014 06:15:19PM *  2 points [-]

On a general level, tkadlubo and I are interested, but not this time. AFAWK there are just over a dozen Polish LWers, scattered around the country, and that makes any meetup difficult.

We are going to the Berlin Meetup in 3 weeks. Maybe see you there?

Comment author: wallowinmaya 22 March 2014 10:28:22PM 0 points [-]

Cool, yeah, I'm going to the Berlin Meetup. See you there!
