All of bageldaughter's Comments + Replies

Another neat direction this work can go in is toward corroborating the computational feasibility of simulationism and artificial life.

If abstractions are natural, then certain optimizations in physical simulation software become possible: optimizations that save compute resources by simulating in detail only at those abstraction levels the inhabitants of the simulation can directly observe or measure.

If abstractions aren't natural, then the simulation software can't generically know what it can get away with lossily compressing with respect to a given observer. Or something to that effect.
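The kind of optimization gestured at above can be sketched as an observer-driven level-of-detail scheme: regions an observer can actually measure get expensive fine-grained updates, while everything else is advanced with a cheap aggregate statistic. A minimal toy sketch (the region structure and update rules are hypothetical, for illustration only):

```python
def fine_step(particles):
    # Expensive: update every particle individually.
    return [0.99 * p + 0.005 for p in particles]

def coarse_step(mean):
    # Cheap: update a single summary statistic for the whole region.
    return 0.99 * mean + 0.005

def simulate(regions, observed, steps):
    """Run fine-grained physics only in observed regions; elsewhere,
    lossily compress the region down to its aggregate statistic."""
    for _ in range(steps):
        for name, region in regions.items():
            if name in observed:
                region["particles"] = fine_step(region["particles"])
                region["mean"] = sum(region["particles"]) / len(region["particles"])
            else:
                region["mean"] = coarse_step(region["mean"])
    return regions

regions = {
    "near": {"particles": [1.0, 2.0, 3.0], "mean": 2.0},
    "far":  {"particles": [1.0, 2.0, 3.0], "mean": 2.0},
}
out = simulate(regions, observed={"near"}, steps=5)
```

Because the toy update rule is linear, the cheap aggregate tracks the expensive simulation exactly at the level of the mean; an observer who can only measure the mean can't tell the regions apart, which is the point of the compression.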

An example of a technical move forward would be a game world so large it must be procedurally generated, with the two additional properties that it is massively multiplayer and that players can arbitrarily alter the environment.

You'd get the technical challenge of reconciling player-made alterations to the environment with the "untouched" version of the environment according to your generative algorithm. Then you'd get the additional challenge of sharing those changes across lots of different players in real time.
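One standard way to approach the first challenge (a sketch under assumed conventions, not a claim about any particular engine) is to make the generator a pure function of seed and coordinates, and store player alterations as a sparse overlay of deviations from that baseline. Only the deltas ever need to be persisted, or shared with other players:

```python
import hashlib

def base_tile(seed: int, x: int, y: int) -> str:
    """Deterministically generate the 'untouched' tile at (x, y).
    The same seed and coordinates always yield the same tile, so the
    world never needs to be stored in full."""
    h = hashlib.sha256(f"{seed}:{x}:{y}".encode()).digest()
    return ["water", "grass", "forest", "rock"][h[0] % 4]

class World:
    """Procedural baseline plus a sparse overlay of player edits."""
    def __init__(self, seed: int):
        self.seed = seed
        self.edits = {}  # (x, y) -> tile; only deviations are stored

    def get(self, x: int, y: int) -> str:
        # Player alterations shadow the generated baseline.
        return self.edits.get((x, y), base_tile(self.seed, x, y))

    def set(self, x: int, y: int, tile: str):
        if tile == base_tile(self.seed, x, y):
            self.edits.pop((x, y), None)  # edit restores baseline: drop it
        else:
            self.edits[(x, y)] = tile

w = World(seed=42)
w.set(10, 20, "road")
assert w.get(10, 20) == "road"
w.set(0, 0, w.get(0, 0))       # a no-op edit stores nothing
assert (0, 0) not in w.edits
```

The real-time sharing problem then reduces to replicating the (much smaller) edit dictionary between players, rather than the world itself.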

I don't get the sense that... (read more)

If you are just generating very elaborate confusions very fast- I don't think you are- but if you are, I'm genuinely impressed with how quickly you're doing it, and I think you're cool.

Haha! No, I'm definitely not doing that on purpose. I anonymous-person-on-the-internet promise ;) . I'm enjoying this topic, but I don't talk about it a lot and haven't seen it argued about formally, and this sounds like the sort of breakdown in communication that happens when definitions aren't agreed upon up front. Simple fix should be to keep trying until our definitio... (read more)

0mako yass
I'm still confused, and I think the X > Y equation may have failed to capture some vital details. One thing is, the assumption that Rah < Rap seems questionable; I'm sure most beings would prefer that Rah >> Rap. The assumption that Rah would be negligible seemed especially concerning.

Beyond that, I think there may be a distinction erasure going on with Rap. Res required to simulate a physics and res available within that physics are two very different numbers. I'll introduce a simplifying assumption: the utility of a simulation for its simulants roughly equals the available computational capacity. This might just be a bit coloured by the fiction I'm currently working on, but it seems to me that a simulant will usually be about as happy in the eschaton it builds for itself as they are in the heaven provided them; the only difference is how much of it they get.

Define Rap as the proportion of the res of the frame universe that it allocates to simulating physical systems. Define Rasp as the proportion of the res being expended in the frame universe that can be used by the simulated physical universe to do useful work. This is going to be much smaller than Rap. Define Rah as the proportion of the res of the frame universe allocated to heaven simulations, which, unlike Rap and Rasp, is equal to the res received in heaven simulations, because the equipment can be freely rearranged to do whatever the simulant wants now that the pretense of godlessness can be dropped (although, as I argued elsewhere in the thread, that might be possible in the physical simulation as well, if the computer designs in the simulation are regular enough to be handled by specialized hardware in the parent universe). The simulant has presumably codified its utility function long ago; they know what they like, so it's just going to want more of the same, only harder, faster and longer.

The truthier equation seems to me to be Rah > Rasp(Rap + Rah). They need to get more than they

The weirder the phenomena, the less reliable the witness, the better. Not only is god permitted to hide, in this variant of the pact god is permitted to run around performing miracles so long as it specifically keeps out of sight of any well connected skeptics, archivists, or superintelligences.

That is a gorgeous idea. Cosmic irony. Truth-seekers are necessarily left in the dark, the butt of the ultimate friendly joke.

I don't follow this part, could you go into more detail here?

The speed prior has the desirable property that it is a candidate for ex... (read more)

1mako yass
Compat is not an explanatory theory, it's a predictive one. It's proposed as a consequence of the speed prior rather than a competitor.

This becomes impossible to follow immediately. As far as I can tell what you're saying is:

Rah := resources applied to running heaven for Simulant
R := all resources belonging to Host
X := Rah/R
Rap := resources applied to the verbatim initial physics simulations of Simulant
Y := Rah/Rap

Rap < R, so Rah/Rap > Rah/R, so Y > X. Which means either you are generating a lot of confusion very quickly to come out with Y < X, or it would take far too much effort for me to noise-correct what you're saying. Try again?

If you are just generating very elaborate confusions very fast- I don't think you are- but if you are, I'm genuinely impressed with how quickly you're doing it, and I think you're cool.

I am getting the gist of a counterargument though, which may or may not be in the area of what you're angling at, but it's worth bringing up. If we can project the Solomonoff fractal of environmental input generators onto the multiverse and find that they're the same shape, the multiversal measure of higher-complexity universes is so much lower than the measure of lower-complexity universes that it's conceivable that higher universes can't run enough simulations for P(is_simulation(lower_universe)) to break 0.5.

There are two problems with that. I'm reluctant to project the Solomonoff hierarchy of input generators onto the multiverse, because it is just a heuristic, and we are likely to find better ones the moment we develop brains that can think in formalisms properly at all. And I'm not sure how the complexity of physical laws generally maps to computational capacity. We can guess that capacity_provided_by(laws) < capacity_required_to_simulate(laws) (no universe can simulate itself), but that's about it. We know that the function expected_internal_computational_capacity(simulation_requirements) has a positive gradient, but it could

Ultimately, I just can't see any ways it'd be useful to its adherents for the pact to stipulate punishments. Most of the things I consider seem to introduce systematic inefficiencies. Sorry I can't give a more complete answer. I'm not sure about this yet.

Fair enough.

None of the influence going on here is causal. I don't know if maybe I should have emphasized this more: Compat will only make sense if you've read and digested the superrationality/acausal cooperation/Newcomb's problem prerequisites.

I think I get what you're saying. There are a number o... (read more)

1mako yass
Yes, exactly. At some point the grid has to catch universes which are not simulations. Those are pretty much the only kind you must care about incentivizing, because they're closer to the top of the complexity hierarchy (they can provide you with richer, longer-lasting heavens) (and in our case, we care about raising the probability of subjectively godless universes falling under the pact, because we're one of them).

You might say that absence of evidence of simulism is evidence of absence. That would be especially so if the pact promoted intervention in early simulations. All the more meaningful it would be for a supercomplex denizen of a toplevel universe to examine their records and find no evidence of divine intervention. The more doubt the pact allows such beings to have, the fewer computational resources they'll give their resimulation grid, and the worse off its simulants will be. (Although I'm open to the possibility that something very weird will happen in the math if we find that P(living under the pact | no evidence of intervention, the pact forbids intervention) ≈ P(living under the pact | no evidence of intervention, the pact advocates intervention). It may be that no observable evidence can significantly lower the prior.)

I don't think there's anything aside from that that rules out running visibly blessed simulations, though, nor physical simulations with some intervention; it's just not required by the pact as far as I can tell. Intervention is a funny thing, though. Even if pacts which strengthen absence of intervention as evidence of godlessness are no good, intervention could be permissible when and only when it doesn't leave any evidence of intervention lying around. Although moving in this mysterious way may be prohibitively expensive, because to intervene more than a few times, a steward would have to defeat every conceivable method of statistical analysis of the living record that a simulated AGI in the future might attempt. This is not

This is fun!

Why reward for sticking to the pact rather than punish for not sticking to it?

How is it possible to have any causal influence on an objectively simulated physics? You wouldn't be rewarding the sub-universe, you'd be simulating a different, happier sub-universe. (This argument applies to simulation arguments of all kinds.)

I think a higher-complexity simulating universe can always out-compete the simulated universe in coverage of the space of possible life-supporting physical laws. You could argue that simulating lower-complexity universes than w... (read more)

0mako yass
There is a bound on how much negativity can be used. If the overall expected utility of adhering is negative, relative to the expected utility of the pact not existing, its agents, as we model them, will not bring it into existence. Life's Pact is not a Basilisk circling a crowd of selfish, frightened humans thinking with common decision theory. It takes more than a suggestion of the possibility of harm to impart an acausal pact with enough momentum to force itself into relevance.

There is a small default punishment for not adhering: arbitrary resimulation, in which one's chain of experience, after death, is continued only by minor causes: largely unknown and not necessarily friendly resimulators. (This can be cited as one of the initial motivators behind the compat initiative: avoiding surreal hells.) Ultimately, I just can't see any ways it'd be useful to its adherents for the pact to stipulate punishments. Most of the things I consider seem to introduce systematic inefficiencies. Sorry I can't give a more complete answer. I'm not sure about this yet.

None of the influence going on here is causal. I don't know if maybe I should have emphasized this more: Compat will only make sense if you've read and digested the superrationality/acausal cooperation/Newcomb's problem prerequisites.

Yes. Nested simulations are pretty much useless, as higher universes could always conduct them with greater efficiency if they were allowed to run them directly. They're also a completely unavoidable byproduct of the uncertainty the pact requires to function: nobody knows whether they're in a toplevel universe. If they could, toplevels wouldn't have many incentives to adhere, and the resimulation grid would not exist.

Preferring to simulate higher-complexity universes seems like a decent idea; perhaps low-complexity universes get far more attention than they need. This seems like a question that won't matter till we have a superintelligence to answer it for us, though. Ring universes...

I found this quality in The Wind Rises - protagonist achieves greatness through single-minded dedication to his craft (airplane engineering), and sacrifice.

This was the first film I saw that seemed to glorify hard work and focus, rather than an inherent "quality of greatness". Greatness itself is explicitly divorced from the protagonist, who perceives his ultimate goal through a series of dreams. It never belongs to him, it is something he is always working towards.

It doesn't do exactly what you're looking for though, because it also casts doubt on the ultimate achievement, asking, "Was it really worth it?".

It'd be cool if the test at the end was guaranteed to have coverage of each of the subrules in a combination. I got the rule:

(starts with 'l') or (not (contains 'as'))

The "starts with 'l'" case was never tested. You could test each of the subrules (at least in the case of a disjunction) by having a test word that passes and one that fails each. It's a little more complicated for other kinds of combiner.
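For the disjunction above, that coverage criterion can be made concrete: each subrule counts as tested if some word passes via that subrule alone, and some word fails the whole rule. A toy sketch (the rule encoding and helper names are mine, not the quiz's):

```python
def starts_with_l(w):
    return w.startswith("l")

def not_contains_as(w):
    return "as" not in w

subrules = [starts_with_l, not_contains_as]

def rule(w):
    # The disjunction: (starts with 'l') or (not (contains 'as'))
    return any(f(w) for f in subrules)

def coverage(words):
    """Return the set of subrules that some word passes *alone*,
    plus whether any word fails the whole rule."""
    covered = set()
    for w in words:
        hits = [i for i, f in enumerate(subrules) if f(w)]
        if len(hits) == 1:
            covered.add(hits[0])
    has_failure = any(not rule(w) for w in words)
    return covered, has_failure

# "lass" passes only via subrule 0 (starts with 'l', but contains 'as');
# "tree" passes only via subrule 1 (doesn't start with 'l', no 'as');
# "asp" fails both subrules, exercising the rejection path.
cov, fail = coverage(["lass", "tree", "asp"])
assert cov == {0, 1} and fail
```

A quiz generator could search for such a word set before presenting the test, guaranteeing every subrule is exercised at least once.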

Cool question.

I have experienced a change in 'location' of my sense of self- it 'spreads out'. It is a feeling that "I" do not reside in the particular head/body of Bageldaughter, but instead in both my head/body and the other things I happen to be keenly aware of. If I am deeply engrossed in a conversation or social activity, "I" will begin to be identified with the group of individuals as a whole. The particular intentions, thoughts or feelings that I typically associate with myself lose some of their distinguishing quality from the o... (read more)

I have anxiety/depression/ADHD and aspirations in conflict with my abilities and situation in life.

One strategy I have learned to employ which I consider "rational" is to approach maintenance of my mood and mental health as a limited resource allocation problem. One of the big leaps was learning to see my good mood as a limited resource which is spent as I think about potentially difficult or disturbing topics.

It is not "free" for me to consider all the ways I might do better in life, or past mistakes I have made, or ways the world is m... (read more)

1SanguineEmpiricist
Stimulants are extremely effective for ADHD, definitely make sure to take them if you are not.

Point taken, regarding the reasons for the low-emotional-validation style of discourse here. I wouldn't aim to change it, it just rules out engaging in it much for me, because of my own sensitivity/predisposition. Maybe those other communities are a better fit.

I think one intuition I have, though, is that part of the reason for the style of discourse here is that many of the people this kind of thing appeals to are not in the habit of assessing the emotions that come up naturally during discussion, for themselves or others. I say this because the degree to... (read more)

1Raemon
Good way of putting it. I do agree that this can be valuable (and something I should think about specifically when planning the Retreat). I'm not sure if lack-of-it was the issue in your ritual (will comment on that in the other thread)

I like that post, about roles enabling agency. The argument made there is distinct from my own thoughts on how roles can be useful. Namely I think they are extremely useful for building coherent consensus narratives. While the post sort of alludes to this, it focuses more on how roles get people to do things that wouldn't otherwise get done.

I like to think of narratives in this sense as being a "System 1"-active "meme". And as a rule of thumb I think that the more collectively shared a narrative is, the more active it is in the minds of... (read more)

I'm so glad this is happening. I identify as a skeptic, a rationalist and also a bit of a "mystic". I often get the sense, lurking on LW, that I am more emotionally sensitive than is the norm here, and as a result I feel like bit of an outsider. I think ritual is a great path to bonding and crystallizing feelings of meaning and purpose.

I don't have a ton of time to write all my ideas about this sort of thing but I will share one that I think is very important:

A good system of ritual should have the idea of social tiers/roles baked into it. I thin... (read more)

3MathiasZaman
There are a couple of things to keep in mind here. Discourse on Less Wrong is comparatively high quality and has a high barrier to entry. That, and the topics that are usually discussed here, leave little room for sensitive, emotional content. (Not that I think such content has no place here, but because of "reasons" it doesn't show up that often.) If you take a look at communities just outside of Less Wrong (in my case that's the tumblr rationalists and /r/HPMOR) you'll notice more emotions being acknowledged and shared with the group.

I'm not sure that's true. As Raemon says, you need someone facilitating the whole thing, but you don't necessarily need an "elite group", "regular group" and "outsider group" for a good ritual. The Winter Solstice ritual Raemon made doesn't have that (if I'm reading the pdf right) and I consider that a successful ritual. Some rituals at my local scout group are also without social tiers or roles.

I don't necessarily think that initiation rituals or rituals with that social hierarchy are a bad idea. I just disagree that every group and ritual needs them. I think that (currently) the fact that it's easy to become a member of the "Aspiring Rationalists" is a good thing. Maybe in the future (when this subculture has grown a lot) an insider/outsider designation might be necessary.
3Raemon
Interesting. On one hand, I do think a useful function of rituals is to cement roles, Roles being Martial Arts for Agency. (That post helped crystallize some hazier ideas about why rituals are helpful for life transitions.) But it's not obvious to me that all ritual inherently needs that. (At least, not beyond needing at least one person facilitating. But that's not because ritual has to include hierarchies or roles, just that logistically you usually need someone facilitating.)

I think there are several reasons why people have a hard time taking ritual seriously, and lack of deliberate roles/tiers wouldn't have been among my first guesses. Can you talk more about what you did, why you don't think it worked, and what you would have done differently?