Actually: Maybe this is a decent chance to have Less Wrong folk help refine common LW concepts. Or at least a few of the ones that are featured in this article. Certainly most folk won't share my confusions, misgivings, or ignorance about most of Less Wrong's recurring concepts, but surely a few share a few of them, and could benefit from such a list! Consider this comment a test of the idea. I'll list some concepts from this post that I'm dissatisfied with and at least one reason why, and maybe others can point out a better way of thinking about the concept or a way to make it more precise or less confused. Here we go:
If this is useful---if some answers to the above questions or confusions are useful---then I'll consider writing a much longer post in the same vein. Being annoyed or disturbed by imperfect concepts and conceptual breakdowns is something I'm automatically motivated to do, and if this works I can see myself becoming more valuable to the LW community, especially the part of the community that is afraid to ask stupid questions with a loud voice.
Mathematical structure: I don't know what this means, and I don't think Wikipedia's definition is the relevant one. Can we give minimal examples of mathematical structures? What's simpler than the empty set? Is a single axiom a mathematical structure? (What if that axiom is ridiculously (infinitely?) long---how short does something have to be to be an axiom? Where are we getting the language we use to write out the axiom, and where did it get its axioms? ("Memetic evolution?! That's not even a... what is this i dont even"))
Research on numerical cognition seems relevant here. Interesting links here, here and here.
Anyway. Typically, a mathematical structure is something with some sort of attached rule set, whereas a mathematical object is a primitive, something that can be manipulated according to that rule set. So the empty set or a single axiom might be a mathematical object, but not a structure.
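A minimal concrete example of that distinction (the group example below is my own illustration, not something from the thread):

```latex
% A structure: a carrier set together with operations constrained by a rule set.
% (Z, +) is a structure because the group axioms attach rules to the set:
\[
(\mathbb{Z}, +): \quad (a+b)+c = a+(b+c), \qquad a+0 = a, \qquad a+(-a) = 0
\quad \text{for all } a, b, c \in \mathbb{Z}.
\]
% An object: a primitive the rules act on, e.g. the number 3, or the bare set
% \mathbb{Z} stripped of its operations.
```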
Infinitely long axioms are the objects of study in infinitary logics, and there is a whole branch of model theory devoted to them (I think that most of the groundwork for that area was developed by Jon Barwise). You can learn about this and several other areas of model theory here.
There is a pervasive sense that mathematics is not an anthropocentric activity, and that it is in some way universal, but this is not very well specified. I tend to think that in order to tackle this issue it might be necessary to understand how to implement a program that can 'understand' and generate new mathematics at least as generally as a human with peak mathematical ability, but that is just my intuition.
(Warning: I be hittin' the comment button without reviewing this carefully, views expressed may be inaccurately expressed and shit, ya dig? Aight yo.)
Thanks for the pointers. I wish there were a place for me to just bring up things I've been thinking about, and quickly get pointers or even conversation. Is Less Wrong IRC the best place for that? I've never used it.
I tend to think that in order to tackle this issue it might be necessary to understand how to implement a program that can 'understand' and generate new mathematics at least as generally as a human with peak mathematical ability, but that is just my intuition.
One FAI-relevant question I'm very interested in is: What, if anything, happens when a Goedel machine becomes intelligent enough to "understand" the semantics of its self-description, especially its utility function and proof search axioms? Many smart people emphasize the important difference between syntax and semantics, but this is not nearly as common in Less Wrong's standard philosophy.[1] If we could show that there's no way a Goedel machine can "re-interpret" the semantics of its axioms or utility function to mean something intuitively rather different than how humans were interpreting them, then we would have two interesting arguments: that it is indeed theoretically possible to build a general intelligence that is "stable" if the axioms are sound[2], and also that superintelligences with non-Friendly initial utility functions probably won't converge on whatever a Friendly AI would also have converged on. Though this still probably wouldn't convince the vast majority of AGI researchers who weren't already convinced, it would convince smart technically-minded objectors like me or perhaps Goertzel (though I'm not sure what his position is).
One interesting way to look at Goedel machines for all kinds of investigations is to imagine that they create new agents to do things for them. (A variation on this theme is to trap the Goedel machine in a box and make it choose which of two totally different agents to let out of their boxes---it's a situation where it outputs the best strategy according to its goals, but that strategy has a huge number of side effects besides just optimizing its goals.) For ideas related to those in the previous paragraph it might be useful to imagine that the Goedel machine's proof search tells it that a very good idea would be to create an agent to monitor the Goedel machine and to intervene if the Goedel machine stops optimizing according to its utility function. (After all, what if its hardware gets corrupted, or it gets coerced to modify its utility function and to delete its memory of the coercion?) How does this second agent determine the "actual" or "intended" semantics of the original machine's utility function, assuming it's not too worried about its own utility function that references the original machine's utility function? These are just tools one can use to look at such things; the details I'm adding could be better optimized. Though it's not my reason for bringing up these ideas, you can see how such considerations indicate that having a thorough understanding of the Goedel machine's architecture and utility function doesn't obviously tell us everything we need to know, because superintelligences are liable to get creative. No pun intended.
To further show why this might be interesting for LW folk: Many of SingInst's standard arguments about the probable unFriendliness of not-explicitly-coded-to-be-Friendly AIs are either contradictory or technically weak, and it'd be nice to technically demonstrate that they are or aren't compelling. To substantiate that claim a little bit: Despite SingInst's standard arguments---which I've thoroughly understood for two years now and I was a Visiting Fellow for over a freakin' year so please Less Wrong for once don't parrot them back to me; ahem, anyway...---despite SingInst's standard arguments it's difficult to imagine an agent that doesn't automatically instantly fail, for example by simple wireheading or just general self-destruction, but instead even becomes superintelligent, and yet somehow manages to land in the sweet spot where it (mis-)interprets its utility function to be referring to something completely alien but again not because it's wire-heading. Most AI designs simply don't go anywhere; thus formal ones like Goedel machines are by far the best to inspect closely. If we look at less-technical ones then it becomes a game where anyone can assert their intuition or play reference class tennis. For example, some AI with a hacky implicit goal system becomes smart enough to FOOM: As it's reflecting on its goal system in order to piece it together, how much reflection does it do? What kind of reflection does it do? The hard truth is that it's hard to argue for any particular amount less than "a whole bunch of reflection", and if you think about it for awhile it's easy to see how in theory such reflection could lead to it becoming Friendly. Thus Goedel machines with very precise axioms and utility functions are by far the best to look at.
(BTW, combining two ideas above: it's important to remember that wireheading agents can technically create non-wireheading agents that humans would have to worry about. It's just hard to imagine an AI that stayed non-wireheading long enough and became competent enough to write a non-wireheading seed AI, and then suddenly started wireheading.)
[1] Maybe because it leads to people like Searle saying questionable things? Though ironically Searle is generally incredibly misunderstood and caricatured. At any rate, I am afraid that some kind of reversed stupidity might be occurring, whether or not that stupidity was ever there in the first place or was just incorrect pattern-matching from computationalist-skepticism to substance dualism, or something.
[2] Does anyone talk about how it can be shown that two axioms aren't contradictory, or that an axiom isn't self-contradictory? (Obviously you need to at least implicitly use axioms to show this, so it's an infinite regress, but just as obviously at some point we're going to have to trust in induction, even if we're coding an FAI.)
trap the Goedel machine
Ash Ketchum is strolling around Kanto when he happens upon a MissingNo. "GOEDEL MACHINE, I choose you!" GOEDEL MACHINE used RECURSION. Game Boy instantly explodes.
MISTY: "Ash, we have to do something! Kooky Psychic Gym Leader Sabrina is leveling up her Abra and she's not even trying to develop a formal theory of Friendly Artificial Alakazam!"
ASH: "Don't panic! She doesn't realize that in order to get her Kadabra to evolve she'll have to trade with carbon copies of us in other Game Boys, then trade back. Nintendo's clever ploy to sell link cables ensures we have a..." Ash dons sunglasses. "...game theoretic advantage."
BROCK: "Dah dah dah dah, Po-ké-MON!"
Game Boy explodes.
I think this would be useful. On some of these topics we may not even realize how confused we are. I thought I knew where I was at with "computation", for example. I realized I cannot answer your question without begging more questions, though.
My reasoning is this:
Consider the domain of bit streams - to avoid having to deal with infinity, let's take some large but finite length, say a trillion bits. Then there are 2^trillion possible bit streams. Now restrict our attention to just those that begin with a particular ordered pattern, say the text of Hamlet, and choose one of those at random. (We can run this experiment on a real computer by taking a copy of said text and appending enough random noise to bring the file size up to a trillion bits.) What can we say about the result?
Well, almost all bit streams that begin with the text of Hamlet consist of just random noise thereafter, so almost certainly the one we choose will lapse into random noise as soon as the chosen text ends.
Suppose we go about things in a different way and instead of choosing a bit stream directly, we take our domain as that of programs a trillion bits long, and then take the first trillion bits of the program's output as our bit stream. Now restrict our attention to just those programs whose output begins with the text of Hamlet, and choose one of those at random. What can we say about the result this time?
It's possible that the program we chose consists of print "...", i.e. the output bit stream is just embedded in the program code. Then our conclusion from choosing a bit stream directly still applies.
But this is highly unlikely. The entropy of English text has been estimated at about one bit per character. In other words, there exist subroutines that will output the text of Hamlet and are about eight times shorter than print "...". That in turn means that in our domain of programs a trillion bits long, exponentially more programs contain the compact subroutine than the literal print statement. Therefore our randomly selected program is almost certainly printing the text of Hamlet by using a compact subroutine, not a literal print statement.
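To make the counting explicit (my arithmetic, using the rough figures above: ASCII at 8 bits per character versus an entropy of about 1 bit per character; the exact constants don't matter):

```latex
% Let c be the number of characters in Hamlet and N = 10^{12} the program length.
% A literal print statement costs about 8c bits; a compact generator about c bits.
% Counting length-N programs that contain each:
\[
\#\{\text{literal}\} \approx 2^{\,N - 8c}, \qquad
\#\{\text{compact}\} \approx 2^{\,N - c}, \qquad
\frac{\#\{\text{compact}\}}{\#\{\text{literal}\}} \approx 2^{\,7c}.
\]
% With c on the order of 10^5, the compact-subroutine programs outnumber the
% literal ones by a factor of roughly 2^{700000}.
```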
But the compact subroutine will probably not lapse into random noise as soon as Hamlet is done. It will continue to generate... probably not a meaningful sequel (Hamlet doesn't constrain the universe enough for that), but at least something that bears some syntactic resemblance to English text.
For the constraint "the output bit stream must begin with the text of Hamlet", substitute "at least one observer must exist". We are then left with the conclusion that if the universe were randomly chosen from all bit streams, we should be Boltzmann brains, observing just enough order for an instance of observation, with an immediate lapse back into noise. As we observe continuing order, we may conclude that the universe was randomly chosen from programs (or mathematical descriptions, if the universe is not restricted to be computable) so that it was selected to contain a compact generator of the patterns that produced us, therefore a generator that will continue producing order.
Note that this is an independent derivation of Solomonoff's result that future data should be expected to be that which would be generated by the shortest program that would produce past data. (I actually figured out the above argument as a way to refute Hume's claim that induction is not logical, before I heard of Solomonoff induction.)
It also works equally well if output data is allowed to be infinite, or even uncountably infinite in size (e.g. continuum physics), because uncountably infinite data can still be generated by a formal definition of finite size.
That all makes sense and you put it more clearly than I've seen before, but I dispute the implication that finding that our local universe is the result of a compact generator implies very much about the large-scale structure of an ensemble universe. For example imagine pockets of local universes that look all nice and neat from the inside yet are completely alien to aliens in a far-off universe pocket---"far off" being determined by the Turing languages for their respective universal priors, say. For a slightly more elegant variation on the idea I made the same argument here. Such an ensemble might be "uniform" and even devoid of any information content---see Standish's Theory of Nothing---yet could look very rich from the inside. Does your reasoning eliminate this possibility in a way that I'm not seeing?
Edit: I was assuming you mean "ensemble" when you say "universe" but you might not have actually been implying this seemingly much stronger claim?
I don't understand your objection. I would take "ensemble" to roughly map to what I meant by "domain". Certainly the whole ensemble has little or no information content. You can't really look at an ensemble from the inside, only your own universe. Does any of that clarify anything?
"far off" being determined by the Turing languages for their respective universal priors, say.
You are raising the objection that the Solomonoff prior takes the language as a parameter? True, but I'm not sure how that can be helped; in practice it amounts to only a small additive constant on program complexity, and in any case it's not like there's any competing theory that does the job without taking the language as a parameter. Besides, it doesn't affect the point I was making.
You can't really look at an ensemble from the inside, only your own universe. Does any of that clarify anything?
I think so, in the sense that I think we basically understand each other; but I'm not sure why you agree but seem uninterested in the idea that "certainly the whole ensemble has little or no information content". Do you think that's all there really is to say on the matter? (That sounds reasonable, I guess I just still feel like there's more to the answer, or something.)
Well, I am interested in it in the sense that one of the things that attracted me to the multiverse theory in the first place was its marvelous economy of assumption. I'm not sure there is anything much else specifically to be said about that, though.
That in turn means that in our domain of programs a trillion bits long, exponentially more programs contain the compact subroutine than the literal print statement.
Are you sure this is right? There are exponentially many different print statements. Do you have an argument why they should have low combined weight?
The number of N-bit print statements whose output starts with the n-bit Hamlet is exactly 2^(N-n). If K(Hamlet)=n/6, then there are at least 2^(N-n/6) programs whose output starts with Hamlet. Probably more.
EDIT: Oops, I didn't prove anything about the structuredness of the remaining N-n bits, so this is not too useful in itself.
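Spelling out the ratio those two counts give (same N and n as above; still subject to the caveat in the EDIT):

```latex
\[
\frac{\#\{\text{programs whose output starts with Hamlet}\}}
     {\#\{\text{print statements whose output starts with Hamlet}\}}
\;\ge\; \frac{2^{\,N - n/6}}{2^{\,N - n}} \;=\; 2^{\,5n/6},
\]
% so the literal print statements are an exponentially small minority, though
% this alone says nothing about what the other programs do once Hamlet ends.
```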
What interests me about the Boltzmann brain (this is a bit of a tangent) is that it sharply poses the question of where the boundary of a subjective state lies. It doesn't seem that there's any part X of your mental state that couldn't be replaced by a mere "impression of X". E.g. an impression of having been to a party yesterday rather than a memory of the party. Or an impression that one is aware of two differently-coloured patches rather than the patches themselves together with their colours. Or an impression of 'difference' rather than an impression of differently coloured patches.
If we imagine "you" to be a circle drawn with magic marker around a bunch of miscellaneous odds and ends (ideas, memories etc. but perhaps also bits of the 'outside world', like the tattoos on the guy in Memento) then there seems to be no limit to how small we can draw the circle - how much of your mental state can be regarded as 'external'. But if only the 'interior' of the circle needs to be instantiated in order to have a copy of 'you', it seems like anything, no matter how random, can be regarded as a "Boltzmann brain".
there seems to be no limit to how small we can draw the circle
If you haven't decided what the circle should represent.
Besides intuitions, has there been any progress on better understanding the whole agent-environment thing? (There's some abstract machine conference meeting in Germany soon that's having a workshop on computation in context. They meet every 2 years. It might be good for DT folk to have something to inspire them with next time it comes around.)
Besides intuitions, has there been any progress on better understanding the whole agent-environment thing?
What agent-environment thing? There doesn't appear to be a mystery here.
I'm afraid I'll butcher the explanation and hurt the meme, so I'll say it's mostly about other programs, not your own. E.g., how do we differentiate between the preferences of an agent and mere things in its environment after it has gone and interacted with its local environment a lot (e.g. Odysseus and his ship)? (These are just cached ideas without any context in my mind; I didn't generate these concerns myself.)
a uniform weighting of mathematical structures in a Tegmark-like 'verse
I don't know what this is supposed to mean. There isn't any uniform distribution over a countably infinite set. We can have a continuous uniform distribution over certain special kinds of uncountable sets (for example, the real unit interval [0,1]), but somehow I doubt that this was the intended reading.
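For reference, the standard one-line reason there is no uniform distribution over a countably infinite set:

```latex
% Suppose P assigned the same probability c to every element of a countably
% infinite set S. Countable additivity would then give
\[
1 = P(S) = \sum_{x \in S} P(\{x\}) = \sum_{x \in S} c,
\]
% which is 0 if c = 0 and diverges if c > 0, so no such P exists.
```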
(I'm curious, why is it countably infinite rather than uncountably infinite?) I had assumed, perhaps quite ridiculously, that there was some silly improper or uniform-like distribution, then for shorthand just called that prior a uniform distribution. The reason I'd assumed that is that I remembered Tegmark not seeming to be worried about the problem in one of his longer, more detailed multiverse papers, despite saying he wasn't fond of Schmidhuber's preference for the universal or speed priors; or something like that? I'm pretty sure he explicitly considered it though. I feel silly for not having actually looked it up myself. Anyway, is there some halfway sane equivalent to talking about a uniform prior here? (Are improper priors not too meaningless/ugly for this purpose?) (Warning, I might edit this comment as I read or re-read the relevant Wikipedia articles.)
Anyway, is there some halfway sane equivalent to talking about a uniform prior here?
I don't think so. At least I've never seen anyone mention any prior over mathematical structures that might be even roughly described as a "uniform prior".
I think the intuition here is basically that of the everything-list's "white rabbit" problem. If you consider e.g. all programs at most 10^100 bits in length, there will be many more long than short programs that output a given mind. But I think the standard answer is that most of those long programs will just be short programs with irrelevant junk bits tacked on?
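A sketch of that standard answer in counting terms (my paraphrase; it assumes a model where the machine stops reading input once a halting program has been consumed, so trailing bits are inert):

```latex
% A short program p of length \ell, padded with arbitrary junk to total length N,
% accounts for
\[
\#\{\text{length-}N\text{ programs extending } p\} \;=\; 2^{\,N - \ell}
\]
% of the 2^N length-N programs. So a program 100 bits shorter claims 2^{100}
% times as many slots, and the "many more long programs" that output a given
% mind are overwhelmingly short programs plus irrelevant junk, inheriting the
% short program's behavior.
```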
I basically don't understand such arguments as applied to real-world cosmology, i.e. computing programs and not discovering them. 'Cuz if we're talking about cosmology aren't we assuming that at some point some computation is going to occur? If so, there's a very short program that outputs a universal dovetailer that computes all programs of arbitrary length, that repeatedly outputs a universal dovetailer for all programs at most 10^5 bits in length, that.... and it's just not clear to me what generators win out in the end, whether short short-biased or long long-biased, how that depends on choice of language, or generally what the heck is going on.
Warning! Almost assuredly blithering nonsense: (Actually, in that scenario aren't there logical attractors for programs to output 0, 1, 10, 11, ... which results in a universal distribution/generator constructed from the uniform generator, which then goes on to compute whatever universe we would have seen from an original universal distribution anyway? This self-organization looks suspiciously like getting information from nowhere, but those computations must cost negentropy if they're not reversible. If they are reversible then how? Reversible by what? Anyway, that is information as seen from outside the system, which might not be meaningful---information from any point inside the system seems like it might be lost with each irreversible computation? Bleh, speculations.)
(ETA: Actually couldn't we just run some simulations of this argument or translate it into terms of Hashlife and see what we get? My hypothesis is that as we compute all programs of length x=0, x++ till infinity, the binary outputs of all computations when sorted into identical groups converge on a universal prior distribution, though for small values of x the convergence is swamped by language choice. I have no real reason to suspect this hypothesis is accurate or even meaningful.)
(ETA 2: Bleh, forgot about the need to renormalize outputs by K complexity (i.e. maximum lossless compression) 'cuz so many programs will output "111111111111...". Don't even know if that's meaningful or doesn't undermine the entire point. Brain doesn't like math.)
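For what it's worth, here's a toy version of the simulation proposed in the first ETA. Everything in it is made up for illustration: a tiny straight-line instruction set stands in for "programs" (no loops, so nothing universal and no halting problem), and the length cutoff is small, so it can't say anything about real choice-of-language effects or the renormalization worry in ETA 2; it only shows the mechanics of enumerating all programs up to some length and tallying which outputs dominate.

```python
import itertools
from collections import Counter

# Hypothetical miniature language invented purely for illustration:
# each pair of bits is one instruction, executed left to right.
OPS = {'00': '+',   # increment current cell
       '01': '>',   # move right (wrapping around a small tape)
       '10': '.',   # output current cell modulo 2 as one bit
       '11': '-'}   # decrement current cell

def run(bits):
    """Interpret a bitstring as a straight-line program; return its output."""
    code = [OPS[bits[i:i + 2]] for i in range(0, len(bits) - len(bits) % 2, 2)]
    tape, ptr, out = [0] * 16, 0, []
    for op in code:
        if op == '+':
            tape[ptr] += 1
        elif op == '-':
            tape[ptr] -= 1
        elif op == '>':
            ptr = (ptr + 1) % len(tape)
        else:  # '.'
            out.append(str(tape[ptr] % 2))
    return ''.join(out)

# Enumerate every program of length 2, 4, ..., 12 bits and tally the outputs.
counts = Counter()
for n in range(2, 13, 2):
    for prog in itertools.product('01', repeat=n):
        counts[run(''.join(prog))] += 1

# Short outputs ('' above all) dominate the tally, which is at least the
# flavor of the convergence the ETA speculates about.
for output, k in counts.most_common(10):
    print(repr(output), k)
```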
Warning! Almost assuredly blithering nonsense: Hm, is this more informative if instead we consider programs between 10^100 and 10^120 bits in length? Should it matter all that much how long they are? If we can show convergence upon characteristic output distributions by various reasonably large sets of all programs of bit lengths a to b, a < b, between 0 and infinity, then we can perhaps make some weak claims about "attractive" outputs for programs of arbitrary length. I speculated in my other comment reply to your comment that after maximally compressing all of the outputs we might get some neat distribution (whatever the equivalent of the normal distribution is for enough arbitrary program outputs in a given language after compression), though it's probably something useless that doesn't explain anything, like, I'm not sure that compressing the results doesn't just destroy the entire point of getting the outputs. (Instead maybe we'd run all the outputs as programs repeatedly; side question: if you keep doing this how quickly does the algorithm weed out non-halting programs?) Chaitin would smile upon such methods, I think, even if he'd be horrified at my complete bastardization of pretend math, let alone math?
Anyway, is there some halfway sane equivalent to talking about a uniform prior here?
Yes (depending on what you mean by halfway sane). You can have a prior that all your sense inputs are uniformly distributed. This prior is stable, since no matter what sense inputs you experience they're just as likely as any other. For reasons that should be obvious, I certainly don't recommend adopting this prior.
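To spell out the "stable" property (my formalization of the remark, nothing deeper):

```latex
% Under the uniform prior, every n-bit sense history h has P(h) = 2^{-n}, so
\[
P(\text{next bit} = 1 \mid h) \;=\; \frac{P(h1)}{P(h)}
\;=\; \frac{2^{-(n+1)}}{2^{-n}} \;=\; \tfrac{1}{2}
\qquad \text{for every } h,
\]
% i.e. no possible experience ever moves the predictions, which is exactly
% why one shouldn't adopt it.
```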
(I'm curious, why is it countably infinite rather than uncountably infinite?)
(In retrospect, my comment could have been worded better; I only meant to consider both cases. The point is that we can't talk about a uniform distribution until we've specified more precisely just what the Tegmark multiverse is supposed to include; "all mathematical structures" is not well-defined.)
We can have a continuous uniform distribution over certain special kinds of uncountable sets (for example, the real unit interval [0,1])
And even then, that distribution is only uniform with respect to the usual measure on the unit interval.
If it turns out that the moon is made out of cheese, should you focus on optimizing your world, or the world of the philosopher considering the thought experiment?
It seems that from the philosopher's point of view, your world has low measure (moral relevance), but it's unclear whether it's tempting to decide so just because it's not possible to affect the cheese world. From the cheese world's point of view, it's even less clear, and the difficulty of control is similar.
I wonder how much of a role the ability to control plays in these estimates, since, decision-theoretically, the moral value (and so probability) of a fact one can't control is meaningless, and so one is tempted to assign arbitrary or contradictory values. (A related question is whether it makes sense to have control over the probability (or moral value) of a fact without having control over the fact itself (I would guess so), in which case the probability (moral value) isn't meaningless if it itself can be controlled, even if the fact it's the probability (moral value) of can't be.)
If it turns out that the moon is made out of cheese, should you focus on optimizing your world, or the world of the philosopher considering the thought experiment?
I first heard this as a half-joking argument by Steve Rayhawk as to why we might not want to push fat people in front of trains.[1] (Of course, second-order consequentialists have to do this kind of reasoning constantly in real life: what precedent am I setting, what rule am I choosing to follow, in making this decision, and do I endorse agents like me following that rule generally?)
Even for me personally, I think about this a fair bit in the context of simple moral dilemmas like vegetarianism. Another way to think about it: if some agent could only simulate me with a coarse-grained simulation, such that it only observed my decisions as evidence about morality and not the intricacies and subtleties of the context of my decisions, would I still think of myself as providing reflectively endorsed evidence about morality? If not, how strongly does that indicate that I am using my "subtle" contexts as rationalizations for immoral decisions?
[1] Incidentally, this is what meta-contrarianism or steel men are supposed to look like in practice: take a cached conclusion, look at its opposite, construct your best argument for it, then see if that got you anywhere. Even if you're just manipulating language, like translating some combination of bland computationalism, distributed/parallel computing and memetics into a half-baked description of Plato's Forms and Instances across human minds, it still gives you a new perspective on your original ideas from which you can find things that are unnatural or poorly defined, and see new connections that previously hadn't leapt out. The fact that it's often fun to do so acts as incentive. (I note that this may be typical mind fallacy: it seems that others are somewhat-to-significantly more attached to their preferred languages than I am, and for them that may be a correct choice in the face of many languages that are optimized for anti-epistemology in subtle or not-so-subtle ways, e.g. academic Christianity. But it is important to explicitly note that doing so may reinforce the algorithm that classifies new information by its literary genre rather than its individual merits: you're trusting your priors more than your likelihood ratios without training your likelihood estimator.)
"Because trying to philosophize about weightings over ensemble universes is utterly meaningless generally you fool." is one possible correct answer, and I'd upvote it if it was written eloquently (and didn't appeal overmuch to the disclaimed fundamental problems that are only solvable by updateless-like decision theories).
Every now and then I see a claim that if there were a uniform weighting of mathematical structures in a Tegmark-like 'verse---whatever that would mean even if we ignore the decision theoretic aspects which really can't be ignored but whatever---that would imply we should expect to find ourselves as Boltzmann mind-computations
The idea is this: Just as most N-bit binary strings have Kolmogorov complexity close to N, so most N-bit binary strings containing s as a substring have Kolmogorov complexity at least N - length(s) + K(s) - somethingsmall.
And now applying the analogy:
N-bit binary string <---> Possible universe
N-bit binary string containing substring s <---> Possible universe containing a being with 'your' subjective state. (Whatever the hell a 'subjective state' is.)
we get:
N-bit binary string containing substring s with Kolmogorov complexity >= N - length(s) + K(s) - O(1) <---> A Boltzmann brain universe.
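For whatever it's worth, the simple counting behind a slightly weaker version of that bound (my sketch; it drops the K(s) term, which takes a more careful argument, and fixes s at the start of the string rather than letting it float):

```latex
% There are fewer than 2^m programs shorter than m bits, hence fewer than 2^m
% strings of any length with Kolmogorov complexity below m. Since exactly
% 2^{N - \mathrm{length}(s)} strings of length N begin with s,
\[
\frac{\#\{x : x \text{ begins with } s,\ K(x) < N - \mathrm{length}(s) - c\}}
     {\#\{x : x \text{ begins with } s\}}
\;<\; \frac{2^{\,N - \mathrm{length}(s) - c}}{2^{\,N - \mathrm{length}(s)}}
\;=\; 2^{-c},
\]
% so all but an exponentially small fraction of the "universes containing you"
% are essentially incompressible outside the part that encodes you.
```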
We don't seem to be experiencing nonsensical chaos, therefore the argument concludes that a uniform weighting is inadequate and an Occamian weighting over structures is necessary
I've never seen 'the argument' finish with that conclusion. The whole point of the Boltzmann brain idea is that even though we're not experiencing nonsensical chaos, it still seems worryingly plausible that everything outside of one's instantaneous mental state is just nonsensical chaos.
What an 'Occamian' weighting buys us is not consistency with our experience of a structured universe (because a Boltzmann brain hypothesis already gives us that) but the ability to use science to decide what to believe - and thus what to do - rather than descend into a pit of nihilism and despair.
Without having looked closely at the rest of your comment yet:
What an 'Occamian' weighting buys us is not consistency with our experience of a structured universe (because a Boltzmann brain hypothesis already gives us that) but the ability to use science to decide what to believe - and thus what to do - rather than descend into a pit of nihilism and despair.
Here I risk a meaningless map/territory distinction, and yet it seems straightforwardly possible that the local universe---the thing we care about most---is perfectly well modeled by a universal prior, whereas the ensemble---say, a stack of universal prior pancakes infinitely high with each pancake having a unique Turing language along the real number line---is more accurately described with something vaguely like a uniform prior. (I have no idea if this is useful, but maybe this is clearer if it wasn't already painfully sickeningly clear: non-technically, you gotsa cylinder Ensemble made up of infinitely many infinitely thin mini-cylinder Universes (universal priors), where each mini-cylinder (circle!) is tagged with a "language" that is arbitrarily close to the one above or below it ('close' in the sense that the languages of Scheme and Haskell are closer together than The Way Will Newsome Describes The World and Haskell). (As an extremely gratuitous detail I'm imagining the most commonly used strings in each language scribbled along the circumference of each mini-cylinder in exponentially decreasing font size and branching that goes exactly all the way around the circumference. If you zoom out a little bit to examine continuous sets of mini-cylinders, that slightly-less-mini-cylinder too has its own unique language: it's all overlapping. If you zoom out to just see the whole cylinder you get... nothing! Or, well, everything. If your theory can explain everything you have zero knowledge.))
(In decision theory such a scenario really messes with our notions of timeless control---what does it mean, if anything, to be an equivalent or analogous algorithm of a decision algorithm that is located inside a pancake that is in some far-off part of the pancake stack, and thus written in an entirely different language? It's a reframing of the "controlling copies of you in rocks" question but where it feels more like you should be able to timelessly control the algorithm.)
I don't immediately see how your comment argues against this idea, but again I haven't looked at it closely. (Honestly I immediately very much pattern-matched it to "things that really didn't convince me in the past", but I'll try to see if perhaps I've just been missing something obvious.)
My response is that simpler universes would exist within more complicated ones, and thus you'd be more likely to be in a simpler universe. For example, the universe as we know it is much simpler than a Boltzmann brain. As such, you'd be more likely to find our universe somewhere within another universe than you would be to find a Boltzmann brain.
I'm generally against this weighting because of all the infinity paradoxes. For example, no matter how complex the universe is, we'd logically figure that it's almost definitely much more complex.
At the risk of speaking of nonsense:
I'm generally against this weighting because of all the infinity paradoxes. For example, no matter how complex the universe is, we'd logically figure that it's almost definitely much more complex.
In some ways I think this is kind of expected, no? I know a few smart people who think that the idea of "fundamental" laws of physics is meaningless 'cuz it could very well just go deeper forever---in that case the math will surely just get more and more complicated (and superintelligences will have to resolve more and more logical uncertainty to reach the increasing theoretical limits of computation per [insert new equivalent of Planck length]). Interestingly the jump from classical to quantum made the universe "bigger" in some sense on both the smallest and biggest scales---if you buy MWI.
Every now and then I see a claim that if there were a uniform weighting of mathematical structures in a Tegmark-like 'verse---whatever that would mean even if we ignore the decision theoretic aspects which really can't be ignored but whatever---that would imply we should expect to find ourselves as Boltzmann mind-computations, or in other words thingies with just enough consciousness to be conscious of nonsensical chaos for a brief instant before dissolving back into nothingness. We don't seem to be experiencing nonsensical chaos, therefore the argument concludes that a uniform weighting is inadequate and an Occamian weighting over structures is necessary, leading to something like UDASSA or eventually giving up and sweeping the remaining confusion into a decision theoretic framework like UDT. (Bringing the dreaded "anthropics" into it is probably a red herring like always; we can just talk directly about patterns and groups of structures or correlated structures given some weighting, and presume human minds are structures or groups of structures much like other structures or groups of structures given that weighting.)
I've seen people who seem very certain of the Boltzmann-inducing properties of uniform weightings for various reasons that I am skeptical of, and others who seemed uncertain of this for reasons that sound at least superficially reasonable. Has anyone thought about this enough to give slightly more than just an intuitive appeal? I wouldn't be surprised if everyone has left such 'probabilistic' cosmological reasoning for the richer soils of decision-theoretically inspired speculation, and if everyone else never ventured into the realms of such madness in the first place.
(Bringing in something, anything, from the foundations of set theory, e.g. the set theoretic multiverse, might be one way to start, but e.g. "most natural numbers look pretty random and we can use something like Goedel numbering for arbitrary mathematical structures" doesn't seem to say much to me by itself, considering that all of those numbers have rich local context that in their region is very predictable and non-random, if you get my metaphor. Or to stretch the metaphor even further, even if 62534772 doesn't "causally" follow 31256 they might still be correlated in the style of Dust Theory, and what meta-level tools are we going to use to talk about the randomness or "size" of those correlations, especially given that 294682462125 could refer to a mathematical structure of some underspecified "size" (e.g. a mathematically "simple" entire multiverse and not a "complex" human brain computation)? In general I don't see how such metaphors can't just be twisted into meaninglessness or assumptions that I don't follow, and I've never seen clear arguments that don't rely on either such metaphors or just flat out intuition.)