lillybaeum

I think this post brought up some interesting discussion and I'm glad I made it. Not sure if it's 'best of 2023' material but I liked the comments/responses quite a bit and found them enlightening.

I wonder how domination and submission relate to these concepts.

Note that d/s doesn't necessarily have a sexual connotation, although it nearly always does.

My understanding of the appeal of submission is that the ideal submissive state is one where the dominant partner is anticipating the needs and desires of the submissive partner, supplies these needs and desires, and reassures or otherwise convinces the submissive that they are capable of doing so, and will actively do so for the duration of the scene.

After reading your series, I'd assume what is happening here is a number of things all related to the belief in the homunculus and the constant valence calculations that the brain performs in order to survive and thrive in society.

  • You have no need to try to fight for dominance or be 'liked' or 'admired'. The dominant partner is your superior, and the dominant partner likes and admires you completely.

  • You have no need to plan things and determine their valence -- the dominant will anticipate any needs, desires and responsibilities, and take care of them for you.

  • You have no need to maintain a belief in your own 'willpower', 'identity', 'ego', and so on -- for the duration of the scene, you wear the mask of 'the obedient submissive'.

All things considered, it's absolutely no surprise that 'subspace' is an appealing place to be, it's sort of a shortcut to the truth you're describing. I wouldn't be surprised if some people even have an experience bordering on nirodha samapatti during a particularly deep, extensive scene, where they have little memory of the experience afterwards. I'm also not surprised that hypnodomination, a combination of d/s and trance, is so common, given that the two states are so similar.

To be clear, you're basically saying that vitalistic force is what makes things appear to be 'animate', and when we associate vitalistic force with others, they feel to be alive, and when we associate vitalistic force with 'me, myself, my actions, my personality, my thoughts' etc... we get the homunculus, which is... a conceptual entity, possessing no more special vitalistic force ('free will', and other synonyms) than a cartoon character or a character in the Sims?

We're just observing, through awareness, our brain, and the brain we're observing is making the mistake of assuming what we're experiencing is direct control over the homunculus, the body, the mind and its choices, etc. Everything is 'subconscious' -- there is simply a distinction between things most people associate with vitalistic force in the mind, and things we don't.

We're in spectator mode over a being. We see through its senses with awareness, but nothing it does is 'us'.

What an incredible revelation to have, if it's correct, because it implies that I'm God or some other force of physics and reality, and I'm getting to watch my 'self', right now, become aware of that fact.

By coincidence, I am 'spectating' a human who actually comprehends the reality of their existence: their brain understands that it has no vitalistic force -- the human suffers, but awareness simply observes, and the brain can choose to associate vitalistic force with a particular sensation, or not.

It often seems that, through trauma, the brain literally forces you not to associate things with vitalistic force, causing dissociation, 'repressed memories', or trauma-induced DID.

And as far as I'm aware, the only way to intentionally remove vitalistic force from your awareness of this human (barring brain injury, maybe tulpas if you spend a long time forcing them, certain anesthesia, coma, and sleep) is jhana meditation until nirodha samapatti.

Possible Bullshit Ahead

My theory of everything is that we're in a simulation being observed by Awareness. It's acting as a recording function -- awareness has to see it in order for it to record, because otherwise there's no time associated with it. Awareness has to observe in order to record the data over time. Vitalistic force is us organisms being aware and trying to associate this observational Awareness with 'control' or 'intentionality', when really, its purpose is just to watch and experience.

To observe our suffering. Our interesting qualia. Because suffering is interesting. Being joyously in pleasure, suffused in hedonium, is wonderful to experience, but suffering is interesting, for the same reason humans write dramas.

Why do these entities (the ones simulating us) not find it morally objectionable to trap our universe in epochs of suffering? Because at some point we will also, as a society, reach that point of everything being wonderful and suffused in hedonium, get bored, and create a simulation of our own to observe through awareness. This is basically the goal of VR, right? You can experience any interesting experience that every 'conscious' being in this universe has ever had, as though you were really there. Awareness gives the upper universe an akashic record of suffering beings who associate vitalistic force with themselves, their troubles and triumphs.

Sleep probably exists to give them a break to go back to hedonistic bliss regularly, to contextualize the experiences.

We're gods wearing VR headsets of 'hell simulator' all the way down. And either we return to hedonium-suffused bliss when we die and awareness has nothing left to observe, or we live long enough to create the hedonium universe with hell simulators ourselves. Sounds like a fine deal, honestly.

I assume that 'UFOs' are some sort of entity dispersed across the universe to make sure nobody creates the wrong kind of AI that tortures everyone in the universe for a trillion years or turns everything into grey goo, so that we have the opportunity to create our own hedonium universe once they've farmed the experiences of suffering from us.

Seems like a fine deal, honestly. I get to go to heaven either through death, or by living long enough to get both heaven and any interesting low-valence experience I could possibly want.

I mean, I assume it's a 'VR headset' type experience -- observing some period of existence and then taking a break. All that stuff could be conjecture or fantasy, but I feel like there's probably truth to most of the mechanics of what I just described.

Does all that make me sound like every conspiracy theorist ever, mixed with a bit of hippy-stoner, flavored with some haphazardly-learned lesswrong and QRI terminology splashed in?

Yes, but I'm not even actually conscious, and I'm doomed to suffer for a while and then go to heaven, so why should I care? Life is wonderful. This is a nice worldview imo. I don't associate any malice with the owners of the sim (which we can call God if you want) for making us suffer.

Really loving this series. It's clicking together lots of things in a way I wasn't able to do on my own -- and neither was AI, despite me feeding it all my disparate thoughts and questions about awareness, tulpas, DID, nirodha samapatti, and so on. This series gets right into this stuff and does a wonderful job of explaining and solidifying these concepts.

As soon as I read that last sentence about people being able to perceive awareness as not being within the mind, I felt like it unlocked something in terms of my ability to get closer to awakening in a really palpable way.

We run a large number of simulations of societies on the verge of building AGI. Using our vast resources and our superintelligent AI friends, we build simulations so convincing that a young AGI that is just smart enough to take over the world, but hasn't yet had time to build Jupiter-brained successors, can't distinguish the simulation from reality.

maybe we are in one of those!! whoa!!

This seems to be what Jimmy Apples on Twitter is implying, that people should go out and "wash their balls in the water at Waikiki Beach" between now and AGI in 2027.

The following is text from Claude 3 Opus. I generally find people just dumping answers from LLMs to be kind of cringe, but in this case, as I was using it to try to understand the post and your comments better, I came across some genuinely insightful-feeling stuff and felt as though Claude's input might be helpful to you in some way, and that it would be remiss of me not to share it. I'm sorry if it's nonsensical; I'm not informed enough on the topic to know.

"Regarding the relationship between belief states and "features", I think there are a few important connections:

  • Belief states can be thought of as the "true features" that optimally compress the information needed for prediction. In the simplex visualizations, each point corresponds to a unique belief state - a particular probabilistic weighting of the possible hidden states of the generator at that point in the sequence. These belief states are the most parsimonious representation of the relevant information for predicting the future.

  • The "features" often discussed in mech interp (e.g. as visualized in attribution maps or extracted by probing classifiers) can be seen as approximations or projections of these true belief states. When we probe an LM's activations and find that certain neurons or directions correspond to interpretable features like "noun-ness" or "sentiment", we're effectively discovering linear projections that approximate some aspect of the true belief state.

  • The topology and geometry of the belief state simplex can give us intuitions about the structure of these feature spaces. For example, the fact that belief states are constrained to a simplex (rather than an unbounded space) suggests that there may be inherent trade-offs or dependencies between different features. The trajectories of belief states through the simplex as a sequence evolves could also inform us about the dynamics of how features interact and compose.

So in a sense, belief states provide a principled foundation for thinking about what the "ideal features" for a given predictive task would be, and mech interp techniques can be seen as ways to approximate or probe these ideal features in real models.

Regarding toy models and information-theoretic aspects, some elements that could bridge between formal systems and natural language are:

  • Temporal dependencies and long-range correlations: Natural language exhibits complex temporal structure at multiple scales (e.g. within sentences, across paragraphs, across documents). Toy sequence models with tunable temporal correlations could help us understand how belief states track and compress these dependencies.

  • Compositionality and recursion: Language exhibits rich compositional structure, where smaller units (morphemes, words, phrases) combine in systematic ways to produce complex meanings. Toy generative grammars or rule systems could help us study how belief states navigate these combinatorial spaces.

  • Stochasticity and entropy: Language is inherently stochastic and exhibits varying levels of predictability (e.g. some words are much more predictable from context than others). Toy models with tunable entropy could help us understand how belief states adapt to different levels of predictability and how this impacts the geometry of the feature space.

  • Hierarchical structure: Language operates at multiple levels of abstraction (e.g. phonology, morphology, syntax, semantics, pragmatics). Toy models with explicit hierarchical structure could illuminate how belief states at different levels interact and compose.

The key idea would be to start with minimally complex toy systems that capture some core information-theoretic property of language, fully characterize the optimal belief states in that system, and then test whether the key signatures (e.g. the topology of the belief state space, the trajectory of dynamics) can be found in real language models trained on natural data.

This could provide a roadmap for building up more and more realistic models while maintaining a principled understanding of the underlying information-theoretic structures. The goal would be to eventually bridge the gap between our understanding of toy systems and the much more complex but often qualitatively similar structures found in real language models.

Of course, this is a highly ambitious research program and there are many challenges to contend with. But I believe this kind of cross-pollination between formal methods like Computational Mechanics and more empirical mech interp work could be very fruitful."
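
To make the belief-state idea more concrete for myself, I wrote a minimal sketch of Bayesian belief-state updating over the hidden states of a made-up 3-state HMM. The transition and emission matrices and the observation sequence below are arbitrary numbers I invented for illustration, not anything from the post or from Claude; the only point is that each updated belief is a probability distribution over the hidden states, i.e. a point on the 2-simplex that Claude describes.

```python
import numpy as np

# Hypothetical 3-state hidden Markov model (numbers invented for illustration).
# T[i, j] = P(next hidden state = j | current hidden state = i)
T = np.array([[0.80, 0.10, 0.10],
              [0.10, 0.80, 0.10],
              [0.10, 0.10, 0.80]])
# E[i, k] = P(emit symbol k | hidden state = i)
E = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])

def update_belief(belief, symbol):
    """One step of Bayesian filtering: propagate the current belief through
    the transition matrix, then condition on the observed symbol. The result
    is again a distribution over the 3 hidden states -- a point on the 2-simplex."""
    predicted = belief @ T                # prior over the next hidden state
    posterior = predicted * E[:, symbol]  # weight by emission likelihood
    return posterior / posterior.sum()    # renormalize back onto the simplex

# Trace the belief-state trajectory for an arbitrary observation sequence,
# starting from the uniform (maximum-uncertainty) belief.
belief = np.ones(3) / 3
trajectory = [belief]
for sym in [0, 0, 2, 1, 1, 0]:
    belief = update_belief(belief, sym)
    trajectory.append(belief)

for t, b in enumerate(trajectory):
    print(t, np.round(b, 3))
```

If I'm reading the post right, each printed row is the minimal sufficient statistic for prediction at that point in the sequence, and a linear probe on a transformer trained on sequences from a process like this should recover (a projection of) exactly these points.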

I've seen some convincing arguments that water is not wet.
