jacob_cannell comments on Theists are wrong; is theism? - Less Wrong

Post author: Will_Newsome 20 January 2011 12:18AM




Comment author: jacob_cannell 27 January 2011 06:15:59PM *  1 point [-]

I was talking about a variety of reasons for simulation and arguing that simulating a single entity seems as reasonable as simulating many - but you seem only to be concerned with historical recreation.

Historical recreation currently seems to be the best rationale for a superintelligence to simulate this timeslice, although there are probably other motivations as well.

Power efficiency, in terms of ops/joule, increases directly with transistor density.

To my knowledge, this is incorrect. Increases in transistor density have dramatically increased circuit leakage (because of bumping into quantum tunneling), requiring more power per transistor in order to accurately distinguish one path from another.

If that was actually the case, then there would be no point to moving to a new technology node!

Yes, leakage is a problem at the new tech nodes, but of course power per transistor cannot possibly be increasing. I think you mean power per surface area has increased.

Shrinking a circuit by half in each dimension makes the wires thinner, shorter and less resistant, decreasing power use per transistor just as you'd think. Leakage makes this decrease somewhat less than the shrinkage rate, but it doesn't reverse the entire trend.

There are also other design trends that can compensate for and overpower this to an extent, which is why we have a plethora of power-efficient circuits in the modern handheld market.
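For concreteness, here is a toy model of that scaling argument - classical Dennard scaling plus a leakage term that worsens each node. Every constant is illustrative rather than taken from any real process node:

```python
# Toy model of Dennard-style scaling with a leakage term.
# All constants are illustrative, not taken from any real process node.

SHRINK = 0.7          # linear dimension scale factor per node (~2x density)

def next_node(c, v, f, leak):
    """One node shrink under classical Dennard scaling rules."""
    c *= SHRINK        # capacitance falls with feature size
    v *= SHRINK        # supply voltage scales down (ideal Dennard case)
    f /= SHRINK        # frequency rises as gates get faster
    leak *= 1.4        # assumed: leakage per transistor worsens (tunneling)
    return c, v, f, leak

c, v, f, leak = 1.0, 1.0, 1.0, 0.01   # normalized starting values
for node in range(6):
    dynamic = c * v**2 * f            # dynamic power per transistor ~ C*V^2*f
    total = dynamic + leak            # static (leakage) power added on top
    print(f"node {node}: power/transistor = {total:.4f}")
    c, v, f, leak = next_node(c, v, f, leak)
```

Under these made-up numbers, power per transistor keeps falling every node, but the decline slows as leakage grows - which is exactly the trend I'm describing: the decrease is somewhat less than the shrinkage rate, not reversed.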

"which mentioned that the increased waste heat from modern circuits was rising at a faster exponential than circuit density"

Do you remember when this was from or have a link? I could see that being true when speeds were also increasing, but that trend has stopped or reversed.

I recall seeing some slides from NVidia where they claim their next GPU architecture will cut power use per transistor dramatically, at several times the rate of shrinkage.

You propose that not existing would be a terrible evil. But how much better, for all the trillions upon trillions you're proposing must suffer for the creator's whims, would it be to have that computational substrate be used to host entities that have amazingly positive, productive, maximally Fun lives?

Even if the goal is maximizing fun, creating some historical sims for the purpose of resurrecting the dead may serve that goal. But I really doubt that current-human-fun-maximization is an evolutionarily stable goal system.

I imagine that future posthuman morality and goals will evolve into something quite different.

Knowledge-seeking is a universal feature of intelligence. Even the purely mathematical hypothetical superintelligence AIXI would end up creating tons of historical simulations. AIXI may be hopelessly brute-force, but superintelligences with a wide variety of goal systems would nonetheless find utility in various types of simulation.

Comment author: Desrtopa 27 January 2011 07:21:47PM *  2 points [-]

Historical recreation currently seems to be the best rationale for a superintelligence to simulate this timeslice, although there are probably other motivations as well.

Much of the information from the past is probably irretrievably lost to us. If the information input into the simulation were not precisely the same as the actual information from that point in history, the differences would quickly propagate, and the simulation would soon bear little resemblance to the actual history. And supposing the individuals in question did have access to all the information they'd need to simulate the past, they'd have no need for the simulation, because they'd already have complete informational access to the past. It suffers from problems similar to those of your sandboxed anthropomorphic AI proposal: provided you have all the resources necessary to actually do it, it ceases to be a good idea.
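The propagation point is just sensitive dependence on initial conditions. A toy demonstration with the logistic map (the map and its parameters are arbitrary stand-ins, not a claim about what the simulators' physics looks like):

```python
# Tiny input differences blow up in a chaotic system. The logistic map is
# an arbitrary stand-in for "the simulation"; parameters are illustrative.

x, y = 0.500000, 0.500001   # two histories differing by one part in a million
for step in range(50):
    x = 3.9 * x * (1 - x)   # r = 3.9 puts the map in its chaotic regime
    y = 3.9 * y * (1 - y)
print(abs(x - y))           # after 50 steps the two trajectories are unrelated
```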

There are other possible motivations, but it's not clear that there are any others that are as good or better, so we have little reason to suppose it will ever happen.

Comment author: datadataeverywhere 27 January 2011 07:07:23PM *  1 point [-]

Historical recreation currently seems to be the best rationale for a superintelligence to simulate this timeslice, although there are probably other motivations as well.

This seems to be overly restrictive, but I don't mind confining the discussion to this hypothesis.

I think you mean power per surface area has increased.

Yes, you are correct.

Do you remember when this was from or have a link? I could see that being true when speeds were also increasing, but that trend has stopped or reversed.

The roundtable was at SC'08, a while after speeds had stabilized, and since it is a supercomputing conference, the focus was on massively parallel systems. It was part of this.

I really doubt that current-human-fun-maximization is an evolutionarily stable goal system. I imagine that future posthuman morality and goals will evolve into something quite different.

Without needing to dispute this, I can remain exceptionally upset that whatever their future morality is, it is blind to suffering and willing to create innumerable beings that will suffer in order to gain historical knowledge. Does this really not bother you in the slightest?

ETA: still 404

Comment author: jacob_cannell 28 January 2011 12:30:36AM *  0 points [-]

The roundtable was at SC'08, a while after speeds had stabilized, and since it is a supercomputing conference, the focus was on massively parallel systems. It was part of this.

While the leakage issue is important and I want to read a little more about this reference, I don't think that any single such current technical issue is nearly sufficient to change the general analysis. There have always been major issues on the horizon; the question is more one of how the increase in engineering difficulty as we progress compares with the increase in our effective intelligence and simulation capacity.

In the specific case of leakage, even if it is a problem that persists far into the future, it just slightly lowers the growth exponent, since we can compensate by somewhat lowering clock speeds. And even if leakage can never be fully prevented, eventually it itself can probably be exploited for computation.

I really doubt that current-human-fun-maximization is an evolutionarily stable goal system. I imagine that future posthuman morality and goals will evolve into something quite different.

Without needing to dispute this, I can remain exceptionally upset that whatever their future morality is, it is blind to suffering and willing to create innumerable beings that will suffer in order to gain historical knowledge.

As a child I liked McDonald's, bread, plain pizza and nothing more - all other foods were poisonous. I was convinced that my parents' denial of my right to eat these wonderful foods, condemning me to terrible suffering, was a sure sign of their utter lack of goodness.

Imagine if I could go back and fulfill that child's wish to reduce its suffering. It would never then evolve into anything like my current self, and in fact might evolve into something that would suffer more, or at the very least wish that it could be me.

Imagine if we could go back in time and alter our primate ancestors to reduce their suffering. The vast majority of such naive interventions would cripple their fitness and wipe out the lineage. There is probably a tiny set of sophisticated interventions that could simultaneously eliminate suffering and improve fitness, but these altered creatures would not develop into humans.

Our current existence is completely contingent on a great evolutionary epic of suffering on an astronomical scale. But suffering itself is just one little component of that vast mechanism, and forms no basis from which to judge the totality.

You made the general point earlier, which I very much agree with, about opportunity cost. Simulating humanity's current time-line has an opportunity cost in the form of some paradise that could exist in its place. You seem to think that the paradise is clearly better, and I agree: from our current moral perspective.

At the end of the day, morality is governed by evolution. There is an entire landscape of paradises that could exist; the question is what fitness advantage they provide their creator. The more they diverge from reality, the less utility they have in advancing knowledge of reality towards closure.

It looks like earth will evolve into a vast planetary hierarchical superintelligence, but ultimately it will probably be just one of many, and still subject to evolutionary pressure.

Comment author: datadataeverywhere 28 January 2011 01:13:17AM 2 points [-]

In the specific case of leakage, even if it is a problem that persists far into the future, it just slightly lowers the growth exponent, since we can compensate by somewhat lowering clock speeds.

I disagree; I think that problems like this, unresolved, may or may not decrease the base of our exponent, but will cap its growth earlier.

I don't think that any single such current technical issue is nearly sufficient to change the general analysis. There have always been major issues on the horizon; the question is more one of how the increase in engineering difficulty as we progress compares with the increase in our effective intelligence and simulation capacity.

On this point, we disagree, and I may be on the unpopular side of this disagreement. I don't see how past increases that have required technological revolutions can be considered more than weak evidence for future technological revolutions. I actually think it quite likely that the increase in computational power per joule will bottom out in ten to twenty years. I wouldn't be too surprised if exponential increase lasts thirty years, but forty seems unlikely, and fifty even less likely.

Imagine if we could go back in time and alter our primate ancestors to reduce their suffering. The vast majority of such naive interventions would cripple their fitness and wipe out the lineage. There is probably a tiny set of sophisticated interventions that could simultaneously eliminate suffering and improve fitness, but these altered creatures would not develop into humans.

I don't care. We aren't talking about destroying the future of intelligence by going back in time. We're talking about repeating history umpteen times, creating suffering anew each time. It sounds to me like you are insisting that this suffering is worthwhile, even if the result of all of it will never be more than a data point in a historian's database.

We live in a heartbreaking world. Under the assumption that we are not in a simulation, we can recognize facts like 'suffering is decreasing over time' and realize that it is our job to work to aid this progress. Under the assumption that we are in a simulation, we know that the capacity for this progress is already fully complete, and the agents who control it simply don't care. If we are being simulated, it means that one or more entities have chosen to create unimaginable quantities of suffering for their own purposes---to your stated belief, for historical knowledge.

Your McDonald's example doesn't address this in the slightest. You were already a living, thinking being, and your parents took care of you in the right way in an attempt to make your future life better. They couldn't have chosen before you were born to instead create someone who would be happier, smarter, wiser, and better in every way. If they could have, wouldn't it be upsetting that they chose not to?

Given the choice between creating agents that have to endure suffering for generations upon generations, and creating agents that will have much more positive, productive lives, why are you arguing for the side that chooses the former? Of course the former and latter are entirely different entities, but that serves as no argument whatsoever for choosing the former!

Comment author: Dreaded_Anomaly 28 January 2011 03:17:59AM 2 points [-]

A person running such a simulation could create a simulated afterlife, without suffering, where each simulated intelligence would go after dying in the simulated universe. It's like a nice version of Pascal's Wager, since there's no wagering involved. Such an afterlife wouldn't last infinitely long, but it could easily be made long enough to outweigh any suffering in the simulated universe.

Comment author: Desrtopa 28 January 2011 03:23:03AM 2 points [-]

Or you could skip the part with all the suffering. That would be a lot easier.

Comment author: Dreaded_Anomaly 28 January 2011 03:37:31AM 1 point [-]

In general, I agree. I just wanted to offer a more creative alternative for someone truly dedicated to operating such a simulation.

Comment author: Desrtopa 28 January 2011 03:48:52AM 1 point [-]

So far the only person who seems dedicated to making such a simulation is jacob cannell, and he already seems to be having enough trouble separating the idea from cached theistic assumptions.

Comment author: Alicorn 28 January 2011 03:21:22AM 1 point [-]

outweigh

I don't think that's how it works.

Comment author: Dreaded_Anomaly 28 January 2011 03:42:45AM 2 points [-]

How much future happiness would you need in order to choose to endure 50 years of torture?

Comment author: nshepperd 28 January 2011 03:56:51AM 0 points [-]

That depends on whether happiness without torture is an option. The options are better/worse, not good/bad.

Comment author: jimrandomh 28 January 2011 03:51:18AM 0 points [-]

The simulated afterlife wouldn't need to outweigh the suffering in the first universe according to our value system, only according to the value system of the aliens who set up the simulation.

Comment author: jacob_cannell 28 January 2011 02:38:52AM *  -1 points [-]

I don't see how past increases that have required technological revolutions can be considered more than weak evidence for future technological revolutions.

Technology doesn't really advance through 'revolutions'; it evolves. Some aspects of that evolution appear to be rather remarkably predictable.

That aside, the current predictions do posit a slow-down around 2020 for the general lithography process, but there are plenty of labs researching alternatives. As the slow-down approaches, their funding and progress will accelerate.

But there is a much more fundamental and important point to consider, which is that circuit shrinkage is just one dimension of improvement amongst several. As that route of improvement slows down, other routes will become more profitable.

For example, for AGI algorithms, current general-purpose CPUs are inefficient by a factor of perhaps around 10^4. That is a decade of exponential gain right there just from architectural optimization. This route - neuromorphic hardware and its ilk - currently receives a tiny slice of the research budget, but this will accelerate as AGI advances, and would accelerate even more if the primary route of improvement slowed.
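That 10^4 figure is a rough guess, not a measurement. The kind of back-of-envelope estimate behind claims like it looks something like this, where every number is an assumption chosen for illustration:

```python
# Back-of-envelope for the CPU-vs-specialized-hardware gap on neural-style
# workloads. Every figure here is an illustrative assumption, not a benchmark.

cpu_watts = 100.0           # assumed desktop CPU power budget
cpu_ops_per_sec = 1e11      # assumed useful synapse-like ops/sec, after the
                            # overhead of fetch/decode, caches, exact arithmetic

asic_watts = 100.0          # same power budget for a specialized circuit
asic_ops_per_sec = 1e15     # assumed: most transistors doing useful
                            # low-precision work in parallel

cpu_eff = cpu_ops_per_sec / cpu_watts
asic_eff = asic_ops_per_sec / asic_watts
print(f"efficiency gap: {asic_eff / cpu_eff:.0e}")   # ~1e+04 under these guesses
```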

Another route of improvement is the exponentially falling cost of manufacturing. The bulk of the price of high-end processors pays for the vast amortized R&D cost of developing the manufacturing node within the timeframe that the node is economical. Refined silicon is cheap and getting cheaper; research is expensive. The per-transistor cost of new high-end circuitry on the latest nodes for a CPU or GPU is around 100 times the per-transistor cost of bulk circuitry produced on slightly older nodes.

So if Moore's law stopped today, the cost of circuitry would still decay down to the bulk cost. This is particularly relevant to neuromorphic AGI designs, as they can use a mass of cheap repetitive circuitry, just like the brain. So we have many other factors that will kick in even as Moore's law slows.
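A toy amortization model of that cost structure (the R&D and silicon figures are assumptions, not industry data):

```python
# Per-transistor cost = amortized node R&D + marginal silicon cost. The
# premium over bulk cost decays as shipped volume accumulates on a node.
# Both constants below are illustrative assumptions.

rnd_cost = 5e9                    # assumed R&D + fab cost for a node, dollars
bulk_cost_per_transistor = 1e-9   # assumed marginal silicon cost, dollars

for transistors_shipped in (1e16, 1e17, 1e18, 1e19):
    amortized = rnd_cost / transistors_shipped
    total = amortized + bulk_cost_per_transistor
    print(f"shipped {transistors_shipped:.0e}: "
          f"{total / bulk_cost_per_transistor:.0f}x bulk cost")
```

Under these made-up numbers the premium decays from hundreds of times the bulk cost toward parity, which is the decay I mean.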

I suspect that we will hit a slowly ramping wall around or by 2020, but these other factors will kick in and human-level AGI will ramp up. This new population and speed explosion will then drive the next S-curve using a largely new and vastly more complex process (such as molecular nanotech) that is well beyond our current capability or understanding.

I don't care. We aren't talking about destroying the future of intelligence by going back in time.

It's more or less equivalent from the perspective of a historical sim. A historical sim is a recreation of some branch of the multiverse near your own incomplete history that you then run forward to meet your present.

It sounds to me like you are insisting that this suffering is worthwhile

My existence is fully contingent on the existence of my ancestors in all of their suffering glory. So from my perspective, yes their suffering was absolutely worthwhile, even if it wasn't from their perspective.

Likewise, I think that it is our noble duty to solve AI, morality, and control a Singularity in order to eliminate suffering and live in paradise.

I also understand that after doing that we will over time evolve into beings quite unlike what we are now, and eventually look back at our prior suffering and view it from an unimaginably different perspective, just as my earlier McDonald's-loving child-self evolved into a being with a completely different view of its prior suffering.

your parents took care of you in the right way in an attempt to make your future life better.

It was right from both their perspective and my current one; it was absolutely wrong from my perspective at the time.

They couldn't have chosen before you were born to instead create someone who would be happier, smarter, wiser, and better in every way. If they could have, wouldn't it be upsetting that they chose not to?

Of course! Just as we should create something better than ourselves. But 'better' is relative to a particular subjective utility function.

I understand that my current utility function works well now, but that it is poorly tuned to evaluate the well-being of bacteria, just as poorly tuned to evaluate the well-being of future posthuman godlings, and - most importantly - that my utility function or morality will improve over time.

Given the choice between creating agents that have to endure suffering for generations upon generations, and creating agents that will have much more positive, productive lives, why are you arguing for the side that chooses the former?

Imagine you are the creator. How do you define 'positive' or 'productive'? From your perspective, or theirs?

There is an infinite variety of uninteresting paradises. In some, virtual humans do nothing but experience continuous rapturous bliss well outside the range of current drug-induced euphoria. In others, complex agents just set their reward functions to infinity and loop.

There are also a spectrum of very interesting paradises, all having the key differentiator that they evolve. I suspect that future godlings will devote most of their resources to creating these paradises.

I also suspect that evolution may operate again at an intergalactic or higher level, ensuring that paradises and all simulations somehow must pay for themselves.

At some point our descendants will either discover for certain that they are in a sim and integrate up a level, or they will approach local closure and perhaps discover an intergalactic community. At that point we may have to compete with other singularity-civilizations, and we may have the opportunity to intervene historically on pre-singularity planets we encounter. We'd probably want to simulate any interventions before proceeding, don't you think?

A historical recreation can develop into a new worldline with its own set of branching paradises that increase overall variation in a blossoming metaverse.

If you could create a new big bang, an entire new singularity and new universe, would you?

You seem to be arguing that you would not because it would include humans who suffer. I think this ends up being equivalent to arguing the universe should not exist.

Comment author: Desrtopa 28 January 2011 02:59:23AM *  3 points [-]

At some point our descendants will either discover for certain that they are in a sim, or they will approach local closure and perhaps discover an intergalactic community. At that point we may have to compete with other singularity-civilizations, and we may have the opportunity to intervene historically on pre-singularity planets we encounter. We'd probably want to simulate any interventions before proceeding, don't you think?

If we had enough information to create an entire constructed reality of them in simulation, we'd have much more than we needed to just go ahead and intervene.

If you could create a new big bang, an entire new singularity and new universe, would you? You seem to be arguing that you would not because it would include humans who suffer. I think this ends up being equivalent to arguing the universe should not exist.

Some people would argue that it shouldn't (this is an extreme of negative utilitarianism). However, since we're in no position to decide whether the universe gets to exist or not, the dispute is fairly irrelevant. If we're in a position to decide between creating a universe like ours, creating one that's much better, with more happiness and productivity and less suffering, and not creating one at all, though, I would have an extremely poor regard for the morality of someone who chose the first.

My existence is fully contingent on the existence of my ancestors in all of their suffering glory. So from my perspective, yes their suffering was absolutely worthwhile, even if it wasn't from their perspective.

If my descendants think that all my suffering was worthwhile so that they could be born instead of someone else, then you know what? Fuck them. I certainly have a higher regard for my own ancestors. If they could have been happier, and given rise to a world as good as or better than this one, then who am I to argue that they should have been unhappy so I could be born instead? If, as you point out,

A historical recreation can develop into a new worldline with its own set of branching paradises that increase overall variation in a blossoming metaverse.

then why not skip the historical recreation and go straight to simulating the paradises?

Comment author: JoshuaZ 28 January 2011 02:56:11AM 3 points [-]

For example, for AGI algorithms, current general-purpose CPUs are inefficient by a factor of perhaps around 10^4. That is a decade of exponential gain right there just from architectural optimization.

I'm curious how you've reached this conclusion given how little we know about what AGI algorithms would look like.

Comment author: jacob_cannell 28 January 2011 03:44:44AM *  0 points [-]

For example, for AGI algorithms, current general-purpose CPUs are inefficient by a factor of perhaps around 10^4. That is a decade of exponential gain right there just from architectural optimization.

I'm curious how you've reached this conclusion given how little we know about what AGI algorithms would look like.

The particular type of algorithm is actually not that important. There is a general speedup in moving from a general CPU-like architecture to a specialized ASIC - once you are willing to settle on the algorithms involved.

There is another significant speedup moving into analog computation.

Also, we know enough about the entire space of AI sub-problems to get a general idea of what AGI algorithms look like and the types of computations they need. Naturally, the ideal hardware ends up looking much more like the brain than current von Neumann machines - because the brain evolved to solve AI problems in an energy-efficient manner.

If you know you are working in the space of probabilistic/Bayesian-like networks, exact digital computations are extremely wasteful. Using tens or hundreds of thousands of transistors to do an exact digital multiply is useful for scientific or financial calculations, but it's a pointless waste when the algorithm just needs to do a vast number of probabilistic weighted summations, for example.
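A sketch of that contrast in software terms. The noisy version stands in for what an analog or stochastic circuit would compute; the 5% per-product noise figure is an arbitrary assumption:

```python
import random

# An exact weighted sum - what a full digital multiplier array computes.
def exact_weighted_sum(weights, inputs):
    return sum(w * x for w, x in zip(weights, inputs))

# A noisy, low-precision version: each product is only accurate to within
# some tolerance, standing in for an analog or stochastic circuit.
def noisy_weighted_sum(weights, inputs, noise=0.05):
    return sum(w * x * (1 + random.gauss(0, noise))
               for w, x in zip(weights, inputs))

weights = [random.uniform(0, 1) for _ in range(1000)]
inputs = [random.uniform(0, 1) for _ in range(1000)]
print(exact_weighted_sum(weights, inputs))
print(noisy_weighted_sum(weights, inputs))   # agrees to well under 1%
```

With independent per-product noise the errors largely cancel, so the noisy sum lands within a fraction of a percent of the exact one - plenty for inference-style workloads, on hardware that can be radically cheaper per operation.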

Comment author: gwern 28 January 2011 04:30:33AM 2 points [-]

Cite for last paragraph about analog probability: http://phm.cba.mit.edu/theses/03.07.vigoda.pdf

Comment author: jacob_cannell 28 January 2011 04:44:16AM 1 point [-]

Thanks. Hefty read, but this one paragraph is worth quoting:

Statistical inference algorithms involve parsing large quantities of noisy (often analog) data to extract digital meaning. Statistical inference algorithms are ubiquitous and of great importance. Most of the neurons in your brain and a growing number of CPU cycles on desk-tops are spent running statistical inference algorithms to perform compression, categorization, control, optimization, prediction, planning, and learning.

I had forgotten that term, statistical inference algorithms; I need to remember that.
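For a minimal instance of the pattern that quote describes - extracting a digital answer from noisy analog data - consider inferring a single bit from noisy samples. This is the standard textbook setup, not anything taken from the thesis:

```python
import math, random

# Infer a binary signal (0 or 1) from noisy analog samples: the simplest
# case of parsing noisy analog data to extract digital meaning.

true_bit = 1
samples = [true_bit + random.gauss(0, 1.0) for _ in range(20)]

def likelihood(bit, xs, sigma=1.0):
    # Gaussian likelihood of the samples given the hypothesized bit value.
    return math.prod(math.exp(-(x - bit) ** 2 / (2 * sigma ** 2)) for x in xs)

# With a flat prior over the two hypotheses, the posterior is just the
# normalized likelihood.
p1, p0 = likelihood(1, samples), likelihood(0, samples)
print("P(bit=1 | data) =", p1 / (p0 + p1))
```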

Comment author: gwern 28 January 2011 04:56:26AM *  2 points [-]

Well, there's also another quote worth quoting, and in fact the quote that is in my Mnemosyne database and which enabled me to look that thesis up so fast...

"In practice replacing digital computers with an alternative computing paradigm is a risky proposition. Alternative computing architectures, such as parallel digital computers have not tended to be commercially viable, because Moore's Law has consistently enabled conventional von Neumann architectures to render alternatives unnecessary.

Besides Moore's Law, digital computing also benefits from mature tools and expertise for optimizing performance at all levels of the system: process technology, fundamental circuits, layout and algorithms. Many engineers are simultaneously working to improve every aspect of digital technology, while alternative technologies like analog computing do not have the same kind of industry juggernaut pushing them forward."

Comment author: JoshuaZ 28 January 2011 03:55:09AM 1 point [-]

The particular type of algorithm is actually not that important. There is a general speedup in moving from a general CPU-like architecture to a specialized ASIC - once you are willing to settle on the algorithms involved.

Ok. But this prevents you from directly improving your algorithms. And if the learning mechanisms are to be highly flexible (like, say, those of a human brain) then the underlying algorithms may need to be modified a great deal even to approximate an intelligent entity. I do agree that given a fixed algorithm this would plausibly lead to some speed-up.

There is another significant speedup moving into analog computation.

A lot of things can't be put into analog. For example, what if you need to factor large numbers? And making analog and digital components interact is difficult.

Also, we know enough about the entire space of AI sub-problems to get a general idea of what AGI algorithms look like and the types of computations they need. Naturally, the ideal hardware ends up looking much more like the brain than current von Neumann machines - because the brain evolved to solve AI problems in an energy-efficient manner.

This doesn't follow. The brain evolved through a long path of natural selection. It isn't at all obvious that the brain is even highly efficient at solving AI-type problems, especially given that humans have only needed to solve much of what we consider standard problems for a very short span of evolutionary history (and note that general mammal brain architecture looks very similar to ours).

Comment author: jacob_cannell 28 January 2011 04:35:26AM *  -1 points [-]

EDIT: why the downvotes?

Ok. But this prevents you from directly improving your algorithms.

Yes - which is part of the reason there is a big market for CPUs.

And if the learning mechanisms are to be highly flexible (like, say, those of a human brain) then the underlying algorithms may need to be modified a great deal even to approximate an intelligent entity.

Not necessarily. For example, the cortical circuit in the brain can be reduced to an algorithm which would include the learning mechanism built in. The learning can modify the network structure to a degree but largely adjusts synaptic weights. That can be described as (is equivalent to) a single fixed algorithm. That algorithm in turn can be encoded into an efficient circuit. The circuit would learn just as the brain does, no algorithmic changes ever needed past that point, as the self-modification is built into the algorithm.
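The simplest possible illustration of that point is a perceptron: the weight-update rule is itself part of one fixed algorithm, yet the circuit's behavior changes with experience. This is a toy stand-in for the claim, obviously not a model of cortical circuits:

```python
# "Learning as a fixed algorithm": the update rule below never changes,
# but the network's behavior does. A toy stand-in, not a cortical model.

def train_step(weights, inputs, target, lr=0.1):
    out = 1 if sum(w * x for w, x in zip(weights, inputs)) > 0 else 0
    # All the "self-modification" is this one fixed rule adjusting weights:
    return [w + lr * (target - out) * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
data = [([0, 1], 0), ([1, 0], 1), ([1, 1], 1)]   # a linearly separable toy task
for _ in range(20):
    for inputs, target in data:
        weights = train_step(weights, inputs, target)
print(weights)   # the behavior has changed; the algorithm itself never did
```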

A modern CPU is a jack-of-all-trades that is designed to do many things, most of which have little or nothing to do with the computational needs of AGI.

A lot of things can't be put into analog. For example, what if you need to factor large numbers? And making analog and digital components interact is difficult.

If the AGI needs to factor large numbers, it can just use an attached CPU. Factoring large numbers is easy compared to reading this sentence about factoring large numbers and understanding what that actually means.

It isn't at all obvious that the brain is even highly efficient at solving AI-type problems,

The brain has roughly 10^15 noisy synapses that can switch around 10^3 times per second and store perhaps a bit each as well (computation and memory integrated).

My computer has about 10^9 exact digital transistors in its CPU & GPU that can switch around 10^9 times per second. It has around the same amount of separate memory and around 10^13 bits of much slower disk storage.

These systems have similar peak throughputs of about 10^18 bits/second, but they are specialized for very different types of computational problems. The brain is very slow but massively wide; the computer is very narrow but massively fast.
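Making the arithmetic behind those round numbers explicit (these are the figures from the paragraphs above taken at face value, not measurements):

```python
# Peak-throughput comparison using the round numbers from this comment.

brain_synapses, brain_hz = 1e15, 1e3           # slow but massively wide
computer_transistors, computer_hz = 1e9, 1e9   # narrow but massively fast

print(f"brain:    {brain_synapses * brain_hz:.0e} bits/sec")
print(f"computer: {computer_transistors * computer_hz:.0e} bits/sec")
# Both come out around 1e+18 bits/sec - similar peaks, very different shapes.
```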

The brain is highly specialized and extremely adept at doing typical AGI stuff - vision, pattern recognition, inference, and so on - problems that are suited to massively wide but slow processing with huge memory demands.

Our computers are specialized and extremely adept at doing the whole spectrum of computational problems brains suck at - problems that involve long complex chains of exact computations, problems that require massive speed and precision but less bulk processing and memory.

So to me, yes, it's obvious that the brain is highly efficient at doing AGI-type stuff - almost because that's how we define AGI-type stuff: it's all the stuff that brains are currently much better than computers at!

Comment author: JoshuaZ 28 January 2011 04:47:58AM 3 points [-]

Not necessarily. For example, the cortical circuit in the brain can be reduced to an algorithm which would include the learning mechanism built in. The learning can modify the network structure to a degree but largely adjusts synaptic weights. That can be described as (is equivalent to) a single fixed algorithm. That algorithm in turn can be encoded into an efficient circuit. The circuit would learn just as the brain does, no algorithmic changes ever needed past that point, as the self-modification is built into the algorithm.

This limits the amount of modification one can do. Moreover, the more flexible your algorithm, the less you gain from hard-wiring it.

The brain is highly specialized and extremely adept at doing typical AGI stuff - vision, pattern recognition, inference, and so on - problems that are suited to massively wide but slow processing with huge memory demands.

No, we don't know that the brain is "extremely adept" at these things. We just know that it is better than anything else that we know of. That's not at all the same thing. The brain's architecture is formed by a succession of modifications to much simpler entities. The successive, blind modification has been stuck with all sorts of holdovers from our early chordate ancestors and a lot from our more recent ancestors.

If the AGI needs to factor large numbers, it can just use an attached CPU. Factoring large numbers is easy compared to reading this sentence about factoring large numbers and understanding what that actually means.

Easy is a misleading term in this context. I certainly can't factor a forty-digit number, but for a computer that's trivial. Moreover, some operations are only difficult because we don't know an efficient algorithm. In any event, if your speedup only occurs for the narrow set of tasks which humans can do decently, such as vision, then you aren't going to get a very impressive AGI. The ability to do face recognition in a tiny fraction of the time it would take a person is not, by itself, an impressive ability.

Comment author: Desrtopa 28 January 2011 03:11:18AM *  1 point [-]

You made the general point earlier, which I very much agree with, about opportunity cost. Simulating humanity's current time-line has an opportunity cost in the form of some paradise that could exist in its place. You seem to think that the paradise is clearly better, and I agree: from our current moral perspective.

It seems you're arguing that our successors will develop a preference for simulating universes like ours over paradises. If that's what you're arguing, then what reason do we have to believe that this is probable?

If their preferences do not change significantly from ours, it seems highly unlikely that they will create simulations identical to our current existence. And out of the vast space of possible ways their preferences could change, selecting that direction in the absence of evidence is a serious case of privileging the hypothesis.