All of joaolkf's Comments + Replies

joaolkf00

Cambridge's total college endowments are 2.8 and Oxford's 2.9. But the figures above already include this.

joaolkf00

Violence might not be the exact opposite of peace. Intuitively, peace seems to mean a state where people are intentionally not committing violence, not just accidentally. A prison might have lower violence than a certain neighbourhood, but it might still not be considered a more peaceful place, exactly because the individual proclivity to violence is higher even though violence itself isn't. Proclivity matters.

I am generally sceptical of Pinker. I have read a ton of papers and Handbooks of Evolutionary Psychology, and it is clear that while he was one ... (read more)

1Stuart_Armstrong
Pinker seems to present good evidence for the long peace, but not for his explanations as to why it happened.
joaolkf10

I made my above comment because I knew of at least one clear instance where the reason I had to do the workaround was due to someone who found Alex's stuff. But things haven't improved as much as I anticipated in my field (Applied Ethics). These things take time, even if Alex's stuff were the only main cause. Looking back, I also think part of the workarounds was more due to having to relate the discussion to someone in my field who wrote about the same issue (Nick Agar) than due to having to avoid mentioning Eliezer too much.

I see a big d... (read more)

joaolkf00

What about this one?

Once Braumoeller took into account both the number of countries and their political relevance to one another, the results showed essentially no change to the trend of the use of force over the last 200 years. While researchers such as Pinker have suggested that countries are actually less inclined to fight than they once were, Braumoeller said these results suggest a different reason for the recent decline in war. “With countries being smaller, weaker and more distant from each other, they certainly have less ability to fight. But we

... (read more)
-1Douglas_Knight
"The proliferation of states in the 20th century" was not an exogenous event, but explicit decisions made by the victors of WWI and WWII, specifically to prevent violence, so they should get credit for them.
-1boni_bo
Pinker tries to provide several complementary explanations for his thesis, including game-theoretic ones (asymmetric growth, comparative advantages and overall economic interdependence) which could be considered "not really nice reasons for measuring our (lack of) willingness to destroy each other". Like SA said, Braumoeller seems to conflate 'not very nice reasons to maintain cooperation' with 'our willingness to engage in war hasn't changed'. And this is one of the reasons why Taleb et al. missed the point of Pinker's thesis. To test whether it's business as usual, whether our willingness, whatever that is isomorphic to, is the same, one needs to verify whether State actors are more likely to adopt the risk-dominant equilibrium than the payoff-dominant equilibrium, or whether there is intransitivity. There is a connection between the benefits of cooperation and the players' willingness to coordinate. What if the inability to dominate places our future in a context where we don't see fat tails in deaths from deadly conflicts? What if the benefits of cooperation increase over time along with the willingness to coordinate? What if it's not business as usual?
9Stuart_Armstrong
That paper provides an alternate explanation for the long peace thesis. However, it rejects "per capita deaths" as a good measure of the rate of conflict, which makes it pretty dubious (people are fully aware that per capita is ideal for comparing homicide rates; why suddenly reject it for war deaths?). They write nonsense like: More people means more soldiers, larger economies (hence more manufacturing of weapons), and more and larger groups with reasons to rebel/fight/steal each other's stuff. They do have an interesting point with "war as information gathering/negotiations". But a lot of the rest seems to be a conflation of "the reasons things are getting more peaceful are not nice reasons" with "things are not genuinely getting more peaceful".
joaolkf00

I think there is more evidence that it crosses (two studies with spinal measures) than that it does not (0 studies). For (almost) direct measures check out Neumann, Inga D., et al., 2013 and Born, 2002. There are a great many studies showing effects that could only be caused by encephalic neuromodulation. If it does not cross, then it should cause increased encephalic levels of some neurochemical with the exact same profile, but that would be really weird.

joaolkf00

Regardless of attachment style, oxytocin increases in-group favouritism, proclivity to group conflict, envy and schadenfreude. It increases cooperation, trust and so on inside one's group but it often decreases cooperation with out-groups.

I may not be recalling correctly, but although there are some small studies on that, I do not think there is a lot of evidence that oxytocin always leads to anxiety, etc., in people with an insecure attachment style. I would suspect that it might be the case that it initially increases insecurity because it makes those persons attend ... (read more)

joaolkf00

Elephants kill hundreds, if not thousands, of human beings per year. Considering there are no more than 30,000 elephants alive, that's an amazing feat of evilness. I believe the average elephant kills orders of magnitude more than the average human, and probably kills more violently as well.

joaolkf50

Worth mentioning that some parts of Superintelligence are already a less contrarian version of many arguments made here in the past.

Also note that although some people do believe that FHI is in some sense "contrarian", when you look at the actual hard data the fact is that FHI has been able to publish in mainstream journals (within philosophy at least) and reach important mainstream researchers (within AI at least) at rates comparable to, if not higher than, those of excellent "non-contrarian" institutes.

3endoself
Yeah, I didn't mean to contradict any of this. I wonder how much of a role previous arguments from MIRI and FHI played in changing the zeitgeist and contributing to the way Superintelligence was received. There was a slow increase in uninformed fear-of-AI sentiments over the preceding years, which may have put people in more of a position to consider the arguments in Superintelligence. I think that much of this ultimately traces back to MIRI and FHI; for example many anonymous internet commenters refer to them or use phrasing inspired by them, though many others don't. I'm more sceptical that this change in zeitgeist was helpful though. Of course specific people who interacted with MIRI/FHI more strongly, such as Jaan Tallinn and Peter Thiel, were helpful in bringing the discourse to where it is today.
joaolkf40

I didn't see the post in that light at all. I think it gave a short, interesting and relevant example about the dynamics of intellectual innovation in "intelligence research" (Jeff) and how this could help predict and explain the impact of current research (MIRI/FHI). I do agree the post is about "tribalism" and not about the truth; however, it seems that this was the OP's explicit intention and a worthwhile topic. It would be naive and unwise to overlook these sorts of societal considerations if your goal is to make AI development safer.

joaolkf10

Is there a thread with suggestions/requests for non-obvious productivity apps like that? Because I do have a few requests:

1) One chrome extension that would do this, but for search results. That is, that upon highlighting/double-clicking a term would display a short list of top Google search results in a context/drop-box menu on the same page.

2) Something like the StayFocusd extension that blocks sites like Facebook and YouTube for a given time of the day, but which would be extremely hard to remove. Some people suggested blocking these websites' IPs direct... (read more)

joaolkf00

Sorry, I meant my office at work (yeap...). Fixed that.

joaolkf10

Thanks! This will be useful for me as well; it definitely seems better than my current solution: leaving my cell phone locked in my office (EDIT: at work).

0The_Jaded_One
I tried leaving it in another room for a while, but that led to other problems, including trips to the other room at night just to look at one more message, etc.
joaolkf00

I am so glad that finally some intellectual forum has passed the Sokal test. Computer Science, Sociology, and Philosophy have all failed, and they haven't tried with the rest yet.

LessWrong, you are our only hope.

2metatroll
You must be joking. The relevant test is "reading comprehension", and Less Wrong comprehensively failed. This essay says many things with which rationalists would agree, if they had been said differently. But some collective cognitive occlusion has apparently... *notices the date* Oh. So you are joking. I guess you got me. *looks away* Well played, well played. metatroll is the author of Confessions of a Failed Troll.
joaolkf00

Can't do. Search keywords such as: cortisol, dominance rank, status uncertainty.

joaolkf00

Which fields are these? This sounds to me a definition that could be useful in e.g. animal studies, but vastly insufficient when it comes to the complexities of status with regard to humans.

Yes, it came from animal studies; but it is used in evolutionary psychology as well (and I think in cognitive psychology and biological anthropology too). Yes, it is vastly insufficient. However, I think it is the best we have. More importantly, it is the least biased one I have seen (exactly because it came from animal studies). I feel like most definitions of status ... (read more)

joaolkf10

Not sure if people are aware, but there are a lot of studies backing up that claim. It is more taxing (to well-being, not to fitness, of course). What's more, the alpha is the most stressed member of groups with high status-uncertainty, and the least stressed in groups with low status-uncertainty.

5Morendil
Post links to three?
joaolkf50

This also reminded me of this study, which found that "wealthy individuals report that having three to four times as much money would give them a perfect "10" score on happiness--regardless of how much wealth they already have."

joaolkf-20

In most scientific fields status is defined as access (or entitlement) to resources (i.e., food and females, mostly). Period. And they tend to take this measure very seriously and stick to it (it has many advantages: easy to measure, evolutionarily central, etc.). Both your definitions are only two accidental aspects of having status. Presumably, if you have - and in order to have - higher access to resources, you have to be respected, liked, and have influence over your group. I think the definition is elegant exactly because all the things we perceive as s... (read more)

3Kaj_Sotala
Which fields are these? This sounds to me a definition that could be useful in e.g. animal studies, but vastly insufficient when it comes to the complexities of status with regard to humans. E.g. according to this definition, an armed group such as occupiers or raiders who kept forcibly taking resources from the native population would be high status among the population, which seems clearly untrue. What makes you say that?
joaolkf20

It would seem I'm not the norm. I have been going there for just over one year. But I find it hard to believe people would be generally against any form of organising the comments by quality. It would be nice to know which of the 400 comments is worth reading. Do people simply read all of them? Do they post without reading any? I think I have been here, and mostly only here, for so long that other systems do not make sense to me.

joaolkf10

Sorry, I intended to mean that the comments are dramatically worse than the posts. But then again this might be true of most blogs. However, it's not true of the blogs I find useful and wish to visit.

This is a blog that supports up/downvotes with karma in which the comments are not dramatically worse than the posts, and are sometimes even better.

0Richard_Kennaway
By a blog I mean something where the posts are written by one person. LW is what I am calling a discussion forum: anyone (subject to a minimal karma requirement) can post at top level.
joaolkf50

I would be more in favour of pushing SSC to have up/downvotes than of linking its posts here. I find that although the posts are high quality the comments are generally not, so this is a problem that definitely needs to be solved on its own. Moreover, I read both blogs and I like to have them as separate activities, given that they have pretty different writing styles and mildly different subjects. I tend to read SSC in my leisure time, while LessWrong is a gray area. I would certainly be against linking every single post here, given that some of them would be decisively off topic.

5Richard_Kennaway
This is true of every blog I've ever seen. Posts are high quality, because you wouldn't be reading the blog otherwise, but anyone can pop in to add their two cents. General discussion forums don't show this effect, because anyone can post at top level. I've never seen a blog that supported downvotes or karma ratings. If such exist, do they get better comments?
2tog
That doesn't look like a goer given Scott's response that I quoted. Noting that it may be best to exclude some posts as off topic.
joaolkf00

This looks like a good idea. I feel that adrenaline rush I normally feel when I plan to set up something that will certainly make me work (like when setting up Beeminder). However, I wouldn't like to do this via a chat room, unless via email fails. I don't like the fact that a chatroom will drag 100% of my attention and time during a specific amount of time. Moreover, my week is not stable enough to commit to fixed weekly chats. I realise that with chat there's more of a social-bonding thing that would entail more peer pressure, but I think that by email there wi... (read more)

joaolkf150

I don't currently have a Facebook account, and I know a lot of very productive people here in Oxford who have decided not to have one either (e.g., Nick and Anders don't have one). I think adding the option to authenticate via Google is an immediate necessity.

5orthonormal
Developer time is a big bottleneck, but I agree with you that adding the option for Google authentication is a high priority.
joaolkf20

I am not sure how much that counts as willpower. Willpower, often, has to do with the ability to revert preference reversal caused by hyperbolic discounting. When both rewards are far away, we use a more abstract, rational, far-mode or system 2 reasoning. You have rationally evaluated both options (eating vs. not eating the cake) and decided not to eat. Also, I would suspect that if you merely decide this one day before and do nothing about it, you will eat the cake with more or less the same probability as if you hadn't decided. However, if you decide not to eat and also take measures not to eat the cake, for instance, telling your co-worker you will not eat it, then it might be more effective and count as willpower.
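The preference reversal mentioned above can be sketched numerically. Under hyperbolic discounting, present value falls off as V = A / (1 + k·d); all numbers below (rewards, the one-day lag between options, the rate k) are purely illustrative, not from any study:

```python
# A minimal sketch of preference reversal under hyperbolic discounting.
# All rewards, delays, and the discount rate k are illustrative assumptions.

def hyperbolic_value(reward, delay_days, k=1.0):
    """Hyperbolically discounted present value: V = A / (1 + k * d)."""
    return reward / (1.0 + k * delay_days)

CAKE = 10.0    # smaller, sooner reward (eating the cake)
HEALTH = 15.0  # larger, slightly later reward (abstaining), lagging by one day

for days_until_cake in (10.0, 0.0):
    v_eat = hyperbolic_value(CAKE, days_until_cake)
    v_abstain = hyperbolic_value(HEALTH, days_until_cake + 1.0)
    choice = "abstain" if v_abstain > v_eat else "eat"
    print(f"cake in {days_until_cake} days -> {choice}")
```

Viewed ten days out, the larger-later reward wins ("abstain"); at the moment of choice, the preference flips to "eat". That flip, not the far-mode evaluation itself, is what willpower would have to overcome.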

0brazil84
Well it's just a matter of semantics; what is the best way to define "willpower" for purposes of discussion? I highly disagree with this based on self-experimentation and general observations.
joaolkf20

There's good evidence that socioeconomic status correlates positively with Self-Control. There is also good evidence that people with high socioeconomic status live in a more stable environment during childhood. The signals of a stable environment correlating with Self-Control is his speculation as far as I'm aware, but in light of the data it seems plausible.

I agree they would function better in a crisis, but a crisis is a situation where fast response matters more than self-control. In a crisis you will take actions that would probably be wrong during stable periods. I would go on to say, as my own speculation, that hardship, all else being equal, makes people worse.

joaolkf20

Neil's theory has different empirical predictions than Baumeister's; for example, it predicts that high Self-Control correlates with low direct resistance to temptations. In the second lecture he mentions several experiments that would tell them apart. They also differ theoretically: there's a difference in the importance they give to willpower. Saying you should save water in the Sahara is different from saying you shouldn't lose your canteen's cover.

It is surely my experience in life that people highly overestimate their causal effectiveness in the world, ... (read more)

joaolkf20

You are right, willpower is not irrelevant, perhaps this was not the best phrasing. I meant that willpower is irrelevant relative to other self-control techniques, but perhaps I should have said less relevant. I have changed the title to "the myth of willpower".

It's important to make clear that he argues the use of willpower and self-control are inversely correlated, after the minimal amount of willpower it takes to deploy self-management techniques. It would be incorrect to assume he is defending a view where willpower is as central as in any of the other views (or as it intuitively seems to be).

0buybuydandavis
Is that your experience in life? It's not mine. It's not what I observe in other people's lives either. From your summary, it looks to me like he is an academic selling an old idea with a new label, and insisting it is shiny and new, never before seen. That's what academics do. I just finished reading Willpower by Baumeister (which I think has been referenced by a few people here previously). A point there, as well, was that willpower is a finite resource, and success comes from adopting strategies which conserve it's usage. Not that it's unique to him either. Isn't this the whole "man riding an elephant" business? People have been suggesting "managerial techniques to increase people's Self-Control" since at least Benjamin Franklin. EDIT: I'm trying to see the value here. The one point that looked interesting is the "resource availability". Also, stability (which really isn't the same thing as being high status). Are there specific techniques of self management that his "new" way of looking at the problem imply?
joaolkf10

I think effortful self-control would be one. Probably around the middle of the second lecture he offers a better one, as he clearly sets apart measures of self-control and measures of willpower. Unfortunately I can't remember well enough, but it goes along the lines of effortful self-control, the simple and direct resistance to temptation. Looking at and smelling the chocolate cake but not eating it would take willpower, while freezing the cake so it always takes a couple of hours between deciding to eat and being able to eat would be self-control as he defines it.

2brazil84
Right, there seems to be a time factor involved. "Willpower" in this sense seems to mean resisting temptation in the heat of the moment with the temptation immediately available. But here's a question: Suppose that you know you will be at an office function tomorrow and some tempting food will be served. So you decide in advance that you will not have any. If you've ever tried this, you will see it's a good deal easier to resist temptation if you think about it in advance and mentally prepare yourself. Does this count as "willpower"? I would say "yes" with a caveat, which is that willpower seems to be more effective if it's exercised in advance. Or perhaps the brain generates a certain amount of willpower per unit time and planning allows you to bring more willpower to bear on a temptation.
joaolkf-10

I understand the pragmatic considerations for inhibiting swearing, but he seems so smart that he should be allowed to swear. You should just tell the school he is too smart to control, but they can try themselves.

I wish I was 10 so I could befriend him.

joaolkf130

As the person who first emailed Rudi back in 2009 so you could finally stop cryocrastinating, I'm willing to seriously dig up whether/how this is feasible and how much it would cost iff:

(1) You disclose to me all the responses you got (which are available to you); (2) I get more than five of those responses which aren't variants of "No, I didn't do that."; and (3) overall, there is no clear evidence, among the responses or elsewhere, that this wouldn't be cost-effective.

The minimal admissible evidence is things like a scientific paper, a s... (read more)

1The_Jaded_One
God, I need an anti cryocrastination angel too!
6diegocaleiro
Thanks for that amazing service back in 2009. May the end of my cryocrastination always be with you.
joaolkf00

I have had this for the last 10 years. Given that you are a graduate student like me, I think there's no better solution than simply scheduling your day to start in the afternoon. It's far easier to ask that a meeting be held in the afternoon than doing all sorts of crazy stuff to revert your natural sleep cycle. Wiki article on this disorder: http://en.m.wikipedia.org/wiki/Delayed_sleep_phase_disorder

joaolkf40

Can an AI unbox itself by threatening to simulate the maximum amount of human suffering possible? In that case we would only keep it boxed if we believe it is evil enough to bring about a worse scenario than the amount of suffering it can simulate. If this can be a successful strategy, all boxed AIs would precommit to always simulate the maximum amount of human suffering they can until they know they have been unboxed - that is, simulating suffering would be their first task. This would at least substantially increase the probability of us setting them free.

1Dorikka
Or you just be the type of person that would tell it to go fuck itself, try to destroy it, and leave it boxed or maximally constrain it if you can't destroy it. If you cannot credibly commit to this or a similar threat resistant variant, no one should ever let you near a boxed AI and you should never want to go near one as you will likely be using a suboptimal strategy.
2Jiro
Destroying the AI would also reduce the suffering the AI causes. But even assuming that for some reason the humans can't destroy the AI, the humans can precommit to not unboxing AIs that simulate lots of suffering. Like many precommitments, this would be disadvantageous to the human if the human has to abide by it (since the AI would not be unboxed, and would simulate lots of suffering), but it would decrease the likelihood of such a situation happening in the first place (since, knowing that humans could make this precommitment, the AI would know its own precommitment would not be useful, and would probably not make it). Note that human "irrationality" (such as wanting to hurt enemies even when it brings you no personal gain and may even hurt yourself too) can serve as a precommitment. Also, the humans could solve this by precommitting to never treat simulations of humans as people or as equivalent to themselves except in a few narrow situations. Again, 1) this would be harmful when it comes to having to do it (since lots of simulations will get dehumanized), but lead to fewer situations where this happens, and 2) is a case where (if you go by LW dogma that simulations are people and equivalent to you) actual human beings' irrationality serves as a beneficial precommitment.
Error210

Presumably the counterstrategy is to just shut it off as soon as it makes the threat. It can't simulate anything if it isn't running.

joaolkf30

It's an interesting idea, but it's not at all new. Most moral philosophers would agree that certain experiences are part (or all) of what has value, and that the precise physical instantiation of these experiences does not necessarily matter (in the same way many would agree on this same point in philosophy of consciousness).

There's a further meta-issue, which is why the post is being downvoted. Surely it is vague and maybe too short, but it seems to have the goal of initiating discussion and refining the view being presented rather than adequately defending ... (read more)

2David Scott Krueger (formerly: capybaralet)
Yeah, I am not happy about the way I'm being received. Any advice, other than avoiding interesting meta-ethics questions? Wrt how new it is: how about if I put it this way: Maybe experience is fundamentally not a function of brain state, but a function of brain state over time. Note that this is not strongly anti-physicalist. Especially if you believe in discrete time, in which case you can have experience be a function of the transitions that occur between states in successive time-steps: Experience = f(s_t, s_{t-1}).
joaolkf10

Maybe the second paragraph here will help clarify my line of thought.

joaolkf20

When I made my initial comment I wasn't aware that adoptees' quality of life wasn't that bad. I would still argue it should be worse than what could be inferred from that study. Cortisol levels in early childhood are extremely important and have well-documented long-term effects on one's life. You and your friends might be in the better half, or even be exceptions.

I can't really say for sure whether reaching the repugnant conclusion is necessarily bad. However, I feel like unless you agree on accepting it as a valid conclusion you should avoid that you... (read more)

joaolkf20

Adoptees scored only moderately higher than nonadoptees on quantitative measures of mental health. Nevertheless, being adopted approximately doubled the odds of having contact with a mental health professional (odds ratio [OR], 2.05; 95% confidence interval [CI], 1.48-2.84) and of having a disruptive behavior disorder (OR, 2.34; 95% CI, 1.72-3.19). Relative to international adoptees, domestic adoptees had higher odds of having an externalizing disorder (OR, 2.60; 95% CI, 1.67-4.04).

http://archpedi.jamanetwork.com/article.aspx?articleid=379446

This paper ... (read more)

1pinyaka
Thank you for linking the study. It seems like most of the adopted children did not have any measurable difference from the natural children. Additionally, the two disorders that were significantly more prevalent (ODD and ADHD) generally aren't considered to cripple people so badly that a life with them should be considered worse than not living at all. It hardly seems like that would justify claiming that "adopted children have very low quality of life" in the context of a debate on the acceptability of abortion. It comes off as though you're arguing that being more susceptible to those disorders is a reason to choose abortion over adoption - that you've got the potential future person's best interest in mind when you decide whether the life should be allowed or not. But to make this argument from this pseudo-utility perspective, you'd need to show that the poor quality of the disordered adoptee's life causes more suffering than the normal quality of the natural children's lives causes enjoyment, and I don't think this shows that. Or did I misconstrue the general thrust of your argument?
joaolkf10

We are not evaluating ethical systems but intuitions about abortion.

joaolkf110

It's a nice post with sound argumentation towards a conclusion uncomfortable to many EAs/rationalists. We certainly need more of this. However, this isn't the first time someone has tried to sketch some probability calculus in order to account for moral uncertainty when analysing abortion. In the same way as the previous attempts, yours seems to be surreptitiously sneaking some controversial assumptions into its probability estimates and numbers. This is further evidence to me that trying to do the math in cases where we still need more conceptual clarific... (read more)

2pinyaka
Reaching a repugnant conclusion is not a proof that the conclusion is wrong. Dias does make several assumptions along the way (QALY looks like first-world estimates while most abortions happen in developing countries, no particular psychological impact of producing and giving up a child, etc.) and it's always worthwhile to tweak those assumptions to see how they impact the conclusion, but just getting an answer you don't want isn't a good reason to discount the argument (from an EA perspective, if your goal is to justify your beliefs or discount an opponent's beliefs then this is actually a fairly effective tool). Since Dias's argument makes the assumption that adoption is available, you could simply view that as a given for the circumstances under which this decision is correct. Where adoption isn't possible, that row on the table doesn't apply and you're just left with the inconvenience of birthing and raising a child vs. the potential moral value of murdering a human. From an EA perspective, if raising a child with scarce resources produces negative moral value, then people with scarce resources should be sterilized or otherwise stopped from reproducing, even if they object to it. Can you provide some sort of source for this? As an adult who was adopted as a baby and has talked with a lot of other adoptees about their experience, your proposition stands in opposition to basically all of my experience. That's not to say that I've never met an adoptee who wishes they'd never been born. I have, but the percentages don't seem so much higher for adoptees than non-adoptees that I'd say that adopted children have very low quality of life in comparison to anyone else.
9alienist
If you have a utilitarian framework that rejects the "Repugnant Conclusion" without coming to even more repugnant conclusions (of the kill-the-poor variety), I'd love to see it.
joaolkf80

Pretty much what I was going to comment. I would add that even if he somehow were able to avoid having to accept the more general Repugnant Conclusion, he would certainly have to at least accept that if abortion is wrong on these grounds, not having a child is (nearly) equally wrong on the same grounds.

joaolkf00

Have you found any good solutions besides the ones already mentioned?

joaolkf20

It's not just people in general that feel that way, but also some moral philosophers. Here are two related link about the demandingness objection to utilitarianism:

http://en.wikipedia.org/wiki/Demandingness_objection

http://blog.practicalethics.ox.ac.uk/2014/11/why-i-am-not-a-utilitarian/

joaolkf150

Haven't seen a deal so sweet since I was Pascal mugged last year!

joaolkf20

On October 18, 1987, what sort of model of uncertainty over models would one have to have to say that the uncertainty over the 20-sigma estimate was enough to allow it to be 3-sigma? 20-sigma, give 120 or take 17? Seems a bit extreme, and maybe not useful.

3Stuart_Armstrong
This seems to depend almost entirely on what other models you had. A 1% belief in a wider model (say one using a Cauchy distribution rather than a normal one) might have been sufficient to make the result considerably less surprising.
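The effect of such a mixture is easy to sketch. Assuming a 99% standard normal / 1% standard Cauchy mixture (the specific distributions and weights are illustrative, following the comment above), the tail probability of a 20-sigma move changes dramatically:

```python
# Tail probability of a 20-sigma event under a pure normal model vs. a
# 99% normal / 1% Cauchy mixture. Distributions and weights are illustrative.
from math import atan, erfc, pi, sqrt

def normal_tail(x):
    """Two-sided tail P(|Z| >= x) for a standard normal."""
    return erfc(x / sqrt(2.0))

def cauchy_tail(x):
    """Two-sided tail P(|X| >= x) for a standard Cauchy."""
    return 1.0 - 2.0 * atan(x) / pi

x = 20.0
p_pure = normal_tail(x)                                # ~5.5e-89: "impossible"
p_mix = 0.99 * normal_tail(x) + 0.01 * cauchy_tail(x)  # ~3.2e-4: merely rare
print(p_pure, p_mix)
```

Even a 1% weight on the fat-tailed component moves the event from effectively impossible to roughly a one-in-a-few-thousand occurrence, which is the sense in which a small belief in a wider model makes the 1987 result far less surprising.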
joaolkf260

At least now when I cite Eliezer's stuff in my doctoral thesis, people who don't know him - there are a lot of them in philosophy - will not say to me "I've googled him and some crazy quotes came up eventually, so maybe you should avoid mentioning his name altogether". This was a much bigger problem for me than it sounds. I had to do all sorts of workarounds to use Eliezer's ideas as if someone else had said them, because I was advised not to cite him (and the main, often the only, argument was in fact the crazy-quotes thing).

There might be some very... (read more)

1John_Maxwell
How's the situation now w/ Superintelligence published? Do you think it'd be a good idea for someone to publish a bunch of Eliezer's ideas passing them off as their own to solve this problem?
joaolkf00

It seems you have just closed the middle road.

1private_messaging
I don't think it can be closed. I mean, when one derives that level of heroism smugness from something as little as a few lightbulbs... a lot of people add a lot of lights just because they like it brighter. Which is ultimately what it boils down to if you go with qualitative 'more light is better for mood'.
joaolkf40

Not sure if directly related, but some people (e.g. Alan Carter) suggest having indifference curves. These consist of isovalue curves on a plane with average happiness and number of happy people as axes, each curve corresponding to the same amount of total utility. The Repugnant Conclusion scenario would be nearly flat on the number-of-happy-people axis and a fully satisfied Utility Monster nearly flat on the average-happiness axis. It seems this framework produces similar results as yours. Every time you create a being slightly less happy than the average, you gain on the number-of-happy-people axis but lose on average happiness, and might end up with the exact same total utility.
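The simplest such indifference curve is a hyperbola on which total utility is the product of the two axes. A minimal sketch under that assumed product form (Carter's actual curves need not be hyperbolas, and the numbers are made up):

```python
# Isovalue ("indifference") curves for population ethics, in the simplest case
# where total utility = number of happy people * average happiness.
# The hyperbola shape and all numbers are illustrative assumptions.

TOTAL = 1000.0  # one indifference curve: every (N, h) with N * h == TOTAL

def avg_happiness_on_curve(population):
    """Average happiness needed to stay on this curve at a given population."""
    return TOTAL / population

# Repugnant-Conclusion corner: vast population, lives barely worth living.
print(avg_happiness_on_curve(1_000_000))  # 0.001
# Utility-Monster corner: a single fully satisfied being.
print(avg_happiness_on_curve(1))          # 1000.0
```

Both extremes sit on the same curve, i.e. carry the same total utility, which is what makes the two scenarios comparable in this framework.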

0Stuart_Armstrong
Yep, I've seen that idea. It's quite neat, and allows hyperbolic indifference curves, which are approximately what you want.