tailcalled

Answer by tailcalled

I like vaccines and suspect they (or antibiotics) account for the majority of the value provided by the medical system. I don't usually see discussion of what can be done to promote or improve vaccines, so I don't know much about it, but the important part is that they remain available and get improved and promoted in whatever ways are reasonable.

Beyond that, a major health problem is obesity, and here semaglutide seems like it would help a lot.

I think there's something to this. Also since making the OP, I've been thinking that human control of fire seems important. If trees have the majority of the biomass, but humans can burn the trees for energy or just to make space, then that also makes humans special (and overlaps a lot with what you say about energy controlled).

This also neatly connects human society to the evolutionary ecology since human dominance hierarchies determine who is able to control what energy (or set fire to what trees).

The OP is more of a statement that you get different results depending on whether you focus on organism count or biomass or energy flow. I motivate this line of inquiry by a question about what evolution selects for, but that's secondary to the main point.

In the case of gradient flow, we expect almost all starting conditions to end up in a similar functional relationship when restricting attention to their on-distribution behavior. This allows us to pick a canonical winner.
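A minimal sketch of the kind of convergence I have in mind, using overparameterized linear regression as a stand-in for gradient flow on a network (the dimensions, seeds, and numbers here are purely illustrative):

```python
import numpy as np

# Overparameterized linear regression: more parameters (50) than data points (20),
# so many parameter vectors fit the data exactly, loosely like an overparameterized network.
rng = np.random.default_rng(0)
n, d = 20, 50
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

def run_gradient_descent(w0, lr=0.01, steps=20000):
    """Plain gradient descent on mean squared error, as a discrete proxy for gradient flow."""
    w = w0.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n
        w = w - lr * grad
    return w

# Two very different starting conditions.
w_a = run_gradient_descent(rng.normal(size=d))
w_b = run_gradient_descent(10 * rng.normal(size=d))

# On-distribution (the training inputs): the two learned functions agree almost exactly.
print(np.max(np.abs(X @ w_a - X @ w_b)))  # ~0

# Off-distribution (fresh inputs): they can still disagree noticeably, because the
# gradient lies in the row space of X, so the null-space part of each init is never corrected.
X_new = rng.normal(size=(5, d))
print(np.max(np.abs(X_new @ w_a - X_new @ w_b)))  # clearly nonzero
```

The point being that "which function did the optimization pick" has a canonical answer on-distribution, even though the parameters (and off-distribution behavior) still depend on where you started.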

Evolution is somewhat different from this in that we're not working with a random distribution but instead a historical distribution, but that should just increase the convergence even more.

The noteworthy part is that despite this convergence, there are still multiple winners, because it depends on your weighting (and I guess because the species aren't independent, too).

But the problem I mention seems to still apply even if you hold the environment fixed.

E.g. suppose there's some game where you can reproduce by getting resources, and you get resources by playing certain strategies, and it turns out there's an equilibrium with 90% strategy A in the ecosystem (by some arbitrary accounting) and 10% strategy B. It's kind of silly to ask whether it's A or B that's winning based on this.
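To make that toy game concrete, here's a minimal sketch (the payoffs are a hawk-dove-style matrix I made up purely so the mix settles at roughly 90/10; nothing here is from the OP):

```python
import numpy as np

# Payoff matrix for a hawk-dove-style game: payoff[i, j] is the payoff to strategy i
# when it meets strategy j. With resource value V = 0.9 and conflict cost C = 1.0,
# the mixed equilibrium has a fraction V/C = 0.9 of strategy A and 0.1 of strategy B.
V, C = 0.9, 1.0
payoff = np.array([
    [(V - C) / 2, V],      # A vs A, A vs B
    [0.0,         V / 2],  # B vs A, B vs B
])

p = 0.5  # initial share of strategy A in the ecosystem
for _ in range(50_000):
    fitness_a = p * payoff[0, 0] + (1 - p) * payoff[0, 1]
    fitness_b = p * payoff[1, 0] + (1 - p) * payoff[1, 1]
    mean_fitness = p * fitness_a + (1 - p) * fitness_b
    # Discrete-time replicator update: a strategy's share grows in proportion
    # to how much its fitness exceeds the population average.
    p += 0.01 * p * (fitness_a - mean_fitness)

print(f"share of A: {p:.3f}, share of B: {1 - p:.3f}")  # roughly 0.900 / 0.100
```

At the equilibrium both strategies have equal fitness and both persist, which is why asking which one is "winning" based on the shares alone doesn't have a satisfying answer.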

But this is an abstraction that would never occur in reality. The real systems that inspire this sort of thing have lots of Pelagibacter communis, and strategies A and B are constantly diverging off into various experimental organisms that fit neither strategy and then die out.

When you choose to model this as a mixture of A and B, you're already implicitly picking out both A and B as especially worth paying attention to - that is, as "winners" in some sense.

Actually I guess I endorse this response in the real world too, where if a species is materially changing to exploit a new niche, it seems wrong to say "oh, that old species that's totally dead now sure was a winner." If the old species had particular genes with a satisfying story for making it more adaptable than its competitors, it's perhaps better to take a gene's-eye view and say those genes won. If not, just call it all a wash.

But in this case you could just say A' is winning over A. Like if you were training a neural network, you wouldn't say that your random initialization won the loss function; you'd say the optimized network achieves a lower loss than the initial random initialization.

I like to treat the environment/ecology as the cause. So that e.g. trees are caused by the sun.

I kind of feel like Pelagibacter communis could maybe be seen as "evolutionary heat" or "ecological heat", in the sense that the ecology has "space for" some microscopic activity, so whatever major ecological causes pop up, some minimal species will evolve to fill the minimal-species niche.

I wouldn't go this far yet. E.g. I've been playing with the idea that the weighting by which humans "win" evolution is something like adversarial robustness. This just wasn't really a convincing enough weighting to be included in the OP. But if something like that turns out to be correct, then one could imagine that e.g. humans get outcompeted by something that's even more adversarially robust. Which is basically the standard alignment problem.

Like I did not in fact interject in response to Nate or Eliezer. Someone asked me what triggered my line of thought, and I explained that it came from their argument, but I also said that my point was currently too incomplete.

Right, I think there are variants of it that might work out, but there's also the aspect where some people argue that AGI will turn out to essentially be a bag-of-heuristics or similar, where inner alignment becomes less necessary because the heuristics achieve the outer goal even if they don't do it as flexibly as they could.

Richard Kennaway asked why I would think along those lines, but the point of the OP isn't to make an argument about AI alignment; it's merely to think along those lines. Conclusions can come later once I'm finished exploring it.

Some people say that e.g. inner alignment failed for evolution in creating humans. In order for that claim of historical alignment difficulty to cash out, it feels like humans need to be "winners" of evolution in some sense, as otherwise species that don't achieve agency as fully as humans do seem like a plausibly more relevant comparison to look at. This is kind of a partial post, playing with the idea but not really deciding anything definitive.
