There's something major that you're missing here: if you have enough freedom to create personalized environments, then the bigger issue is what some fraction of people will choose to do with that kind of power, since this is far more than you can get from an experience machine.
The best version of this I can think of entails everyone getting a personalized AGI able to act as a perfect DM for simulated adventures, since you need someone to act out the roles of the NPCs, particularly the villains. This way people can play out adventures where they get ...
That post doesn't exist anymore.
There's another potential position here you didn't mention: that AI only seems superficially moral to us, and that an AGI with more intelligence and power but the same morals would take actions we view as obviously abhorrent. On that view we ought to regard it as essentially evil, just too dumb to realize it or act on it (though some research makes even certain current models look pretty bad).
Thus if you view suffering as having moral significance that depends on the potential moral behavior of the agent, then you may not care. For the sa...
I think this view starts with a faulty concept of consciousness which then necessarily leads to disregarding continuity of self as important.
Namely, you assume that things like personality and memory are part of consciousness, and that those things therefore have any bearing on your anticipated future experience. This is problematic, particularly once you've deconstructed the idea that you have a unified self, since it presumes some coherent unified self defined by whatever bundle of cognitive faculties, personality and m...
...Preferential gaps, by contrast, are insensitive to some sweetenings and sourings. Consider another example. A is a lottery that gives the agent a Fabergé egg for sure. B is a lottery that returns to the agent their long-lost wedding album. The agent does not strictly prefer A to B and does not strictly prefer B to A. How do we determine whether the agent is indifferent or whether they have a preferential gap? Again, we sweeten one of the lotteries. A+ is a lottery that gives the agent a Fabergé egg plus a dollar-bill
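A minimal formal restatement of the distinction the excerpt is drawing, using shorthand of my own rather than the quoted text's notation: write A ≻ B for strict preference, A ∼ B for indifference, and A⁺ for a slightly sweetened version of A.

```latex
% Indifference is sensitive to every sweetening:
%   if the agent is indifferent between A and B, then A+ is strictly preferred to B.
% A preferential gap is insensitive to some sweetenings:
%   neither lottery is strictly preferred, and sweetening A need not break the tie.
\[
\text{indifference:}\quad A \sim B \;\Rightarrow\; A^{+} \succ B
\qquad
\text{preferential gap:}\quad \neg(A \succ B),\ \neg(B \succ A),\ \text{and possibly still}\ \neg(A^{+} \succ B)
\]
```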
The issue with running people at different speeds as a solution is that the eudaimonic rate at which people increase their intelligence will vary; this creates an incentive for people to self-modify in a sort of race to the bottom. It's also problematic because people are liable to care quite a lot about how fast they're running, so this forces society to splinter in terms of who can interact with whom.
Also, at a fundamental level it seems like what you're reaching for is still going to end up being a Federation-style caste system no matter what: What ...
This is why the only viable solution to giving existing people a sense of purpose/meaning is to create a bunch of new people who aren't as driven by status. That way every existing person can be as impressive within their particular community as they want, since most of the other people living with them have little drive for status and don't care about fame or needing to feel exceptional/special in some way.
Then combine that with simulations DM'd by superintelligences and you really should be able to give every person the feeling of ...
I think the solution proposed here is suboptimal and would lead to a race to the bottom, or alternatively to most people being excluded from ever getting to do anything they feel matters (and I think a much better solution exists):
If people can enhance themselves, then it becomes impossible to earn any real status except via luck. Essentially it's a modified version of that Syndrome quote: "When everyone is exceptional and talented, then no one will be."
Alternatively, if you restrict people's ability to self-modify ...
This is one of those areas where I think the AI alignment frame can do a lot to clear up underlying confusion, which I suspect stems from you not taking the thought experiment far enough to no longer be willing to bite the bullet, since it encourages AI aligned this way to either:
I think the whole point of a guardian angel AI only really makes sense if it isn't an offshoot of the central AGI. After all, if you distrust the singleton enough to want a guardian angel AI, then you will want it to be as independent from the singleton as is allowed. Whereas if you do trust the singleton AI (because, say, you grew up after the singularity), then I don't really see the point of a guardian angel AI.
>I think there would be levels, and most people would want to stay at a pretty normal level and would move to more extreme levels slow...
I've had similar ideas but my conception of such a utopia would differ slightly in that:
This kind of issue (among many, many others) is why I don't think the kind of utilitarianism that this applies to is viable.
My moral position only necessitates extending consideration to beings who might in principle extend similar consideration to oneself. So one has no moral obligations to any but the smartest animals, while one's moral obligations to other humans scale in a way which I think matches most people's moral intuitions. So one genuinely does have a greater moral obligation to loved ones, and this isn't just some nepotistic personal fa...
I actually think this is plausibly among the most important questions on LessWrong, thus my strong upvote, as I think the moral utility from having kids pre-singularity may be higher than from almost anything else (see my comment).
To argue the pro-natalist position here, I think the facts being considered should actually give having kids (if you're not a terrible parent) potentially a much higher expected moral utility than almost anything else.
The strongest argument for having kids is that the influence they may have on the world (most obviously by voting on hypothetical future AI policy), even if marginal (which it may not be if you have extremely successful children), becomes unfathomably large when multiplied by the potential outcomes.
From your hypothetical children's per...
An Irish elk/peacock-type scenario is pretty implausible here for a few reasons.
Ultimately the polygenic nature of traits...
>The AI comes up with a compromise. Once a month, you're given the opportunity to video call someone you have a deep disagreement with. At the end of the call, each of you gets to make a choice regarding whether the other should be allowed in Eudaimonia. But there's a twist: Whatever choice you made for the other person is the choice the AI makes for you.
This whole plan relies on an utterly implausible conspiracy. By its nature, there's no way to avoid people knowing how this test actually works. And if people know how the test works, then there's zero reason to base your response on what you actually want for the person you disagree with.
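To make the incentive problem concrete, here's a minimal sketch under one reading of the quoted mechanism (your choice decides the other person's admission and is mirrored back onto you); the payoff values are purely hypothetical and only illustrate why "admit" dominates once the twist is known.

```python
# Minimal sketch of the incentive structure in the quoted test, under the
# reading that my choice decides the other person's admission and is also
# mirrored onto me. Payoff values are hypothetical illustrations only.

VALUE_OF_MY_ADMISSION = 100   # assumed: I care a lot about getting in myself
VALUE_OF_THEIR_EXCLUSION = 1  # assumed: small "spite" value of keeping them out

def my_payoff(choice: str) -> int:
    i_get_in = (choice == "admit")     # the AI mirrors my choice onto me
    they_get_in = (choice == "admit")  # my choice also decides their fate
    return (VALUE_OF_MY_ADMISSION if i_get_in else 0) + \
           (0 if they_get_in else VALUE_OF_THEIR_EXCLUSION)

for choice in ("admit", "exclude"):
    print(choice, my_payoff(choice))
# -> admit 100, exclude 1: as long as I value my own admission at all, "admit"
#    wins regardless of what I actually want for the person I disagree with,
#    so a publicly known test of this form stops measuring genuine tolerance.
```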
>Of course there are probably even bigger risks if we simply allow unlimited engineering of these sorts of zero sum traits by parents thinking only of their own children's success. Everyone would end up losing.
The negative consequences of a world where everybody engineers their children to be tall, charismatic, well-endowed geniuses are almost certainly far less than the consequences of giving the government the kind of power that would allow it to ban doing this (without banning human GM outright, which is clearly an even worse outcome).
>I left this example for last because I do not yet have a specific example of this phenomenon in humans, though I suspect that some exist.
**There are plenty of traits that fit the bill here; they're just not things people would ever think of as negative.**
Most such traits exist because of sexual selection pressures, the same reason traits as negative-sum as peacock feathers can persist. Human traits which fall under this category (or at least would have in the ancestral environment):
Traits like incredibly oversized penises for a great ap...
I suspect there's some underlying factor which affects how much psychedelics impact your identity/cognition, since even on doses of LSD so high that the visuals make me legally blind, I don't experience any ego dissolution and can function fairly well on many tasks.
It seems like you're conflating forcing something on somebody with making somebody aware of an option, since it seems rather implausible that, if people were all aware they could simply choose not to get ennui after 500 years, they would choose not to alter themselves this way when there's no real downside.
As the original commenter here pointed out, given how one-sided this seems to be, it seems strange that humans would have converged on this bizarre deathist culture unless it was engineered that way by the minds on purpose, for reasons it's difficult to imagine not being bad.