This is an interesting reductio, even if not intended that way. I think the key trick is that it's easy to say "I" or "myself" when talking about both you and the hypothetical you being simulated in a dream, but you aren't actually providing a full description, only an IOU for such a description. But now that I put it that way, this issue doesn't apply the same way to the regular ol' simulation hypothesis.
I wonder if you could expand more on this observation. So you are saying that a dream is operating on a very limited dataset on a person, not an exact copy of information ("full description"). Do I understand right?
I sort of do intend it as a kind of reductio, unless people find reason for this "Dream Hypothesis" to be taken seriously.
>So you are saying that a dream is operating on a very limited dataset on a person, not an exact copy of information ("full description"). Do I understand right?
Slightly different - I mean that when I talk about someone appearing in my dream, I am being somewhat loose with the definition of that "someone." Like, I would agree that my dream of the person is much smaller and is a poor copy of the real person. But the thing I was trying to point at is the broad definition by which I might equivocate between them, e.g. calling them by the same name.
This argument just occurred to me today and I'm glad to see I'm not alone!
With low epistemic confidence (and even if we assume reality isn't itself a dream), I wonder if the odds of being in a dream at any one time might be higher than 50% as well.
If dream time is likely slowed down and we're in dreaming sleep for a few hours each night (perhaps forgetting most of our dreaming experience), it might only take a few dreams per night to dwarf, say, 14-16 hours of waking experience.
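To put rough numbers on that, here is a back-of-the-envelope sketch in Python; every figure in it (dreams per night, dream length, the time-dilation factor) is an assumed, illustrative value rather than anything measured.

```python
# Back-of-the-envelope sketch; every number here is an assumed, illustrative value.
waking_hours = 15         # roughly the 14-16 hours of waking experience mentioned above
dreams_per_night = 3      # assumed number of distinct dreams per night
dream_length_hours = 1.0  # assumed real-time length of each dream
dilation = 10             # assumed factor by which dream time feels stretched

subjective_dream_hours = dreams_per_night * dream_length_hours * dilation
print(subjective_dream_hours)                 # 30.0 subjective hours under these assumptions
print(subjective_dream_hours > waking_hours)  # True: dream time would dwarf waking time
```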
Equally, the original Simulation Hypothesis is based on counting simulations against base reality, rather than on the subjective time spent in different 'realities' (naturally, since we have no regular experience of a base reality behind this one). By that same counting logic, perhaps you could argue that, since we have experienced thousands of dream worlds but only one waking reality, any new life situation is more likely to be a dream than continued reality?
Of course, I'm not at all confident in this logic, as I wouldn't post this if I really believed this experience was more likely to be a dream, but perhaps it's interesting speculation.
Reading summaries of Bostrom's Simulation Argument, it speculates on future technology that may or may not ever exist: computer technology capable of making simulations that are not only vividly accurate to the behavior of people from the distant past, but that also contain consciousnesses convinced they are the very people they are imitating. As long as there are many such self-conscious simulations and only one reality (i.e. a "high ratio scenario"), the odds of a self-conscious agent being in reality rather than in a simulation are pretty low.
But until such technology exists, we can do some empirical investigation on simulation technology we already have, using the most advanced computer ever developed in the natural world: the human brain.
Dreams as Simulations
When we dream, our brain is effectively creating a fully detailed simulation on our organic hardware. These can be simulations of our normal lives, variations of our lives, or possible future scenarios, as in stress-induced dreams. Alternatively, our dreams can simulate other people's lives, real or fictional, based on our personal experiences, or replicate how we imagine those people would interact with us or with each other.
I've never been one for lucid dreaming myself, but in my personal experience my dreams can be quite vivid. My dream-self can hold conversations with people, go outside and see large crowds going about their day, or even look out from a balcony onto an entire landscape of city life. The details and experiences of an entire world can be simulated by the subconscious imagination of one's own mind.
So something like the simulation scenario is something we experience throughout our lives. We go to bed at night, live in a simulation for 8-12 hours, then wake up and live in reality for an equal amount of time. Since we have two realities (dreaming and waking), both equally vivid, and we spend an equal amount of time in both, then following Bostrom's logic the probability that you are currently dreaming and not actually awake should be 50:50.
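As a minimal sketch of that time-share reasoning (assuming that weighting purely by subjective time spent is the right move, which is itself debatable, and using assumed hours rather than measurements):

```python
# Minimal sketch of the time-share reasoning; the hours are assumed, not measured.
def p_dreaming(hours_dreaming: float, hours_awake: float) -> float:
    """Probability of currently dreaming if we weight purely by time spent in each state."""
    return hours_dreaming / (hours_dreaming + hours_awake)

print(p_dreaming(10, 10))  # 0.5 -> the 50:50 figure above
print(p_dreaming(4, 16))   # 0.2 -> if dreams actually take up much less subjective time
```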
Now, one could argue that dreams have a tendency to deviate from our experiences in reality, both in logic and consistency, and that this should make it more obvious which is the dream and which is real life. But outside cases of lucid dreaming, one normally takes on a kind of "dream consciousness" which accepts the oddities around us as if they were logically consistent, and we only realize they are illogical in hindsight after waking up.
Dreams as a "High Ratio" Simulation Argument
As an even stranger idea, there is also the possibility that you aren't a real person at all, but are actually a figment of someone else's dream. After all, other people you interact with in dreams are essentially autonomous agents spawned by the computer of the mind. Around the world, there are billions of people who are dreaming at any given moment, so the number of simulated worlds out there is already enormous. Even if we limit the scope to minds who dream about people very similar to you (including your friends and family, who may be dreaming about you specifically), that is still a fairly high ratio. Even if there were only nine people dreaming about you, the odds of you being real and not in someone else's dream would be only 10%.
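To spell out that counting argument: if there are N dreamed copies of you and one real you, and each is weighted equally, the chance of being the real one is 1/(N+1). A quick sketch (the count of nine dreamers is just the number used above, not an estimate of anything):

```python
# Counting sketch: N dreamed copies of you plus one real you, weighted equally.
def p_real(n_dreamers: int) -> float:
    """Probability of being the one real person among n_dreamers dreamed copies."""
    return 1 / (n_dreamers + 1)

print(p_real(9))    # 0.1 -> the 10% figure above
print(p_real(999))  # 0.001 -> with many more dreamers, being the real one gets very unlikely
```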
Now, the idea of being the figment of someone else's dream seems more like absurdist rhetoric than a real possibility. But the point I am trying to make is not so much to conjecture that reality is a dream, but more so to conjecture that it is no less absurd than believing that reality is a simulation, given the similarity between the two scenarios.
One obvious objection is that there is a distinct difference between people imagined in dreams versus people in a simulation. People in a dream are only simulating real-life behavior on a superficial level; they aren't "alive" in the sense of having self-awareness or consciousness. So, since you have self-awareness and/or consciousness, you can't be in someone else's dream.
This objection can be looked at in multiple ways. How can one objectively prove that people in a dream don't have consciousness? As long as these agents act and react in believable, human-like ways, it is virtually impossible to distinguish them from a real person, and thus speculating that they are self-aware is no more absurd than speculating self-awareness for a computer simulation. Maybe every morning we wake up and commit casual genocide against hundreds of self-aware agents living inside our dreams.
Conversely, one could ask why our future descendants would want to create simulations of their ancestors that have self-awareness in the first place, which is essentially the second proposition of Bostrom's simulation argument. If our evolutionary needs are satisfied by a merely superficial imitation of real-life people (in our dreams), why should a computer simulation need anything more than that?
The Brain's Capability of Creating Self-Aware Agents
This is also related to things I've read about in emerging studies of Dissociative Identity Disorder (DID), formerly called Multiple Personality Disorder. It is speculated that when the brain finds itself burdened with harmful or traumatic memories, particularly in early childhood, it partitions itself as a way to keep those memories isolated. This effectively acts like partitioning hardware resources on a computer to create a Virtual Machine: the disconnected parts of the brain proceed to grow and develop separate memories and experiences, thus becoming two separate minds in the same body. I bring this up simply as an example of how the human brain is technically capable of creating a fully functional consciousness inside itself, in much the same way that a computer creates a virtual agent in its simulation.