The experience machine objection is often leveled against utilitarian theories that depend on utility being a function of observations or brain states. The version of the argument I'm considering here is a more cruxified one that strips out a bunch of confounding factors and goes something like this: imagine you had a machine you could step into that would perfectly simulate your experience in the real world. The objection goes that since most people would feel at least slightly more willing to stay in reality than go into the machine, there must be at least some value to being in the "real" world, and therefore we can't accept any branch of utilitarianism that assumes utility is solely a function of observations or brain states.
I think if you accept the premise that the machine somehow magically simulates reality perfectly and indistinguishably, in such a way that there is absolutely no way of knowing the difference between the simulation and the outside universe, then the simulated universe is essentially isomorphic to reality, and we should be fully indifferent. I'm not sure it even makes sense to say either universe is more "real", since they're literally identical in every way that matters (for the differences we can't observe even in theory, I appeal to Newton's flaming laser sword). Our intuitions here should be closer to stepping into an identical parallel universe, rather than entering a simulation.
However, I don't think such a perfect experience machine is actually possible, which would explain our intuition against stepping inside. First, if the machine simulates reality using our knowledge of physics at the time, there could well be huge parts of physics you would never be able to discover inside it, since you can never be 100% sure whether you really know the Theory of Everything. Second, the machine would have to be smaller than the universe in some sense, since it's part of the universe. As a result, the simulation would probably have to cut corners or substantially shrink the simulated universe to compensate.
Both of these things constrain the possible observations you can have inside the machine, which lets you distinguish between simulation and reality, which means it's totally valid to penalize the utility of living inside a simulation by some amount, depending on how strongly you feel about the limitations (and how good the machine is). Just because there's a penalty doesn't mean that other factors can't overcome it, though. Lots of versions of the objection try to sweeten the deal for the world inside the machine further ("you can experience anything you want"/"you get maximum serotonin"/etc.); this doesn't really change the core of the argument, which is about whether our utility function should depend on anything other than observations. If the perks are really good and you care less about the limitations than the perks, then it makes perfect sense to go inside the machine; if you care more about the limitations than the perks, it makes perfect sense not to.
The crux of the experience machine thought experiment is that even when all else is held constant, we should assign epsilon more utility to whatever is "real", and therefore utility does not depend solely on your observations/brain states. I argue that this epsilon penalty makes sense given the practical limitations of any real experience machine, which is probably what informs our intuitions, and that if you somehow handwaved those limitations away then we really, truly should be indifferent.
From self-reflection, I don't think I actually care about real experiences mattering more than simulated ones. I'm not sure it even matters to me whether the people I interact with are conscious or not. I can accept that even if the laws of reality emulated inside aren't identical to those outside, at least some of them would be as fundamental, discoverable, and interesting as those of reality for the purposes of my lifespan.
One thing that does matter to me is the idea that (at least in principle) in the machine I am deliberately blind to things that affect my future well-being and survival, as well as that of any other simulated things and people I may care about. If there's a hurricane that threatens its power supply, I want to know about it and be able to respond in some manner. So to be a true test, it can't just be a machine subject to external influences as all machines are. It should be as robust as whatever reality underlies it, and that seems extremely unlikely outside a thought experiment.
The other thing that matters to me is that to get in the machine, I also need to trust the person/being/deity offering it with everything that I am and can ever be. Once inside, I have no way whatsoever to know whether I ever leave again.
The other side of this question is that any/all of us may be in a simulation right now, one that even at its worst is much more pleasant than the underlying reality. The true world outside could be unimaginably bad by comparison. Suppose that it is, and that your memory of its horrors and of the utter non-existence of any possible hope for improvement in reality is returned to you. You have a choice to exit the simulation forever, or continue with your simulated life on Earth. You will be offered more chances to leave when you next "die" in here. Exit: [Y]/n?