I've also realized that it might explain the anomalous (i.e. after adjusting for confounders) effects of living at higher altitude. The lower the atmospheric pressure, the less oxygen is available to oxidize the PUFAs. Of course some foods will be imported already full of oxidized fatty acids and that will be too late, but presumably a McDonalds deep fryer in Colorado Springs is producing fewer oxidized PUFAs per hour than a correspondingly-hot one in San Francisco.
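As a rough sanity check on the "less oxygen at altitude" step (the elevation figure and standard-atmosphere constants here are my own assumptions, not part of the original claim):

```latex
% Back-of-envelope, assuming Colorado Springs sits near 1,840 m and using the
% simple isothermal barometric formula P(h) \approx P_0 \exp(-M g h / (R T)),
% with M = 0.029 kg/mol, g = 9.81 m/s^2, R = 8.314 J/(mol K), T \approx 288 K:
\[
  \frac{P(1840\,\mathrm{m})}{P_0}
  \;\approx\; \exp\!\left(-\frac{0.029 \times 9.81 \times 1840}{8.314 \times 288}\right)
  \;\approx\; 0.80
\]
% So the partial pressure of O2 reaching the oil surface is roughly 20% lower
% than at sea level, which is the size of effect the argument leans on.
```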

This feels too crazy to put in the original post but it's certainly interesting.

That post is part of what spurred this one

I uhh, didn't see that. Odd coincidence! I've added a link and will consider what added value I can bring from my perspective.

Thanks for the feedback. There's a condition which I assumed when writing this which I have realized is much stronger than I originally thought, and I think I should've devoted more time to thinking about its implications.

When I mentioned "no information being lost", what I meant is that in the interaction between two variables $X$ and $Y$, each value $x \in D_X$ (where $D_X$ is the domain of $X$) corresponds to only one value of $Y$. In terms of FFS, this means that each variable must be the maximally fine partition of the base set which is possible with that variable's set of factors.
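A toy illustration of that condition (the base set and factors below are my own example, not from the post):

```latex
% Base set S = \{1,2,3,4\} with factorization B = \{b_1, b_2\}, where
%   b_1 = \{\{1,2\},\{3,4\}\}, \qquad b_2 = \{\{1,3\},\{2,4\}\}.
% A variable generated by the factor set \{b_1, b_2\} satisfies the
% "no information lost" condition only if it is their common refinement,
\[
  \{\{1\},\{2\},\{3\},\{4\}\},
\]
% i.e. the maximally fine partition obtainable from those factors; any
% coarser partition merges base-set elements and loses the information
% distinguishing them.
```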

Under these conditions, I am pretty sure that 

I was thinking about causality in terms of forced directional arrows in Bayes nets, rather than in terms of d-separation. I don't think your example as written is helpful, because Bayes nets rely on the independence of variables to do causal inference: the two-node net $X \rightarrow Y$ is equivalent to $Y \rightarrow X$.
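Spelling out the equivalence (the variable names here are placeholders of mine):

```latex
% Any joint distribution over two variables factorizes both ways:
\[
  P(x, y) \;=\; P(x)\,P(y \mid x) \;=\; P(y)\,P(x \mid y),
\]
% so the nets X \to Y and Y \to X encode exactly the same set of
% distributions, and no independence test can tell them apart.
```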

It's more important to think about cases like the collider $X \rightarrow Z \leftarrow Y$, where causality can be inferred. If we change this to $Z = f(X, Y, E)$ by adding independent noise $E$, then we still get a distribution satisfying $X \perp Y$ (as $X$ and $Y$ are still independent).
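For the collider case, the relevant independences look like this (again, the names are my reconstruction):

```latex
% In X \to Z \leftarrow Y, the orientation is detectable because
\[
  X \perp\!\!\!\perp Y
  \qquad\text{but generically}\qquad
  X \not\perp\!\!\!\perp Y \mid Z .
\]
% Replacing Z = f(X, Y) with a noisy Z = f(X, Y, E), for independent noise E,
% leaves the marginal independence of X and Y untouched, which is why the
% collider orientation survives the addition of noise.
```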

Even if we did have other nodes forcing the direction of the arrows (such as a node which is parent to one of the variables, and another node which is parent to the other), I still don't think adding noise lets us swap the order round.

On the other hand, there are certainly issues in Bayes nets with more elements, particularly the "diamond-shaped" net with arrows $X \rightarrow Y$, $X \rightarrow Z$, $Y \rightarrow W$, $Z \rightarrow W$. Here adding noise does prevent effective temporal inference, since, if $Y$ and $Z$ are no longer d-separated by $X$, we cannot prove from correlations alone that no information goes between them through $W$.

I had forgotten about OEIS! Anyway, I think the actual number might be 1577 rather than 1617 (this also gives no answers). I was only assuming agnosticism over factors in the overlap region $X \cap Y \cap Z$ if all pairwise overlaps $X \cap Y$, $Y \cap Z$, $X \cap Z$ had factors, but I think that is missing some examples. My current guess is that any overlap region like $X \cap Y \cap Z$ should be agnostic iff all of the overlap regions "surrounding" it in the Venn diagram ($X \cap Y$, $Y \cap Z$, $X \cap Z$ in this situation) either have a factor present or are agnostic. This gives the series 1, 2, 15, 1577, 3397521 (my computer has not spat out the next element). This also gives nothing on the OEIS.

My reasoning for this condition is that we should be able to "remove" an observable from the system without trouble. If we are agnostic about the intersection $X \cap Y \cap Z$, then we can only remove observable $Z$ if this doesn't cause trouble for the new intersection $X \cap Y$, which is only true if we already have a factor in $X \cap Y$ (or are agnostic about it).

I know very, very little about category theory, but some of this work regarding natural latents seems to absolutely smack of it. There seems to be a fairly important three-way relationship between causal models, finite factored sets, and Bayes nets.

To be precise, any causal model consisting of root sets $B_1, \dots, B_n$, downstream sets $D_1, \dots, D_m$, and functions mapping sets to downstream sets (like $D_1 = f_1(B_1, B_2)$) must, when equipped with a set of independent probability distributions over the $B_i$, create a joint probability distribution compatible with the Bayes net that's isomorphic to the causal model in the obvious way. (So in the previous example, there would be arrows from only $B_1$ and $B_2$ to $D_1$.) The proof of this seems almost trivial, but I don't trust myself not to balls it up somehow when working with probability theory notation.
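Here's roughly how I'd expect the near-trivial proof to go; the notation is my guess rather than anything fixed upstream:

```latex
% Write each downstream node as a deterministic function of its parents,
% D_j = f_j(\mathrm{pa}(D_j)), and give the roots independent distributions.
% The induced joint is then
\[
  P(b_1,\dots,b_n, d_1,\dots,d_m)
  \;=\; \prod_{i=1}^{n} P(b_i)
        \;\prod_{j=1}^{m} \mathbf{1}\!\left[\, d_j = f_j\!\big(\mathrm{pa}(d_j)\big) \right],
\]
% which is a product of one (deterministic) conditional per node given its
% parents -- exactly the factorization required for compatibility with the
% Bayes net whose arrows run from each argument of f_j into D_j.
```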

In the resulting Bayes net, one "minimal" natural latent $\Lambda$ which conditionally separates $D_i$ and $D_j$ is just the probabilities over the root elements from $B$ which both $D_i$ and $D_j$ depend on. It might be possible to show that this "minimal" construction of $\Lambda$ satisfies a universal property, and so any other $\Lambda'$ which is also "minimal" in this way must be isomorphic to $\Lambda$.
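A hedged formalization of the separation claim, writing each downstream node as a function of the roots it ultimately depends on (the symbols are mine):

```latex
% Suppose D_i = g_i(B_{S_i}) and D_j = g_j(B_{S_j}) with the roots jointly
% independent, and define the candidate latent as the shared roots:
\[
  \Lambda \;:=\; B_{S_i \cap S_j}.
\]
% Conditioned on \Lambda, D_i is a function of B_{S_i \setminus S_j} alone and
% D_j of B_{S_j \setminus S_i} alone; these are disjoint sets of independent
% roots, so
\[
  D_i \perp\!\!\!\perp D_j \mid \Lambda .
\]
% The open question is whether this construction is "minimal" in a sense that
% pins it down up to isomorphism via a universal property.
```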

I think the position of the ball is in V, since the players are responding to the position of the ball by forcing it towards the goal. It's difficult to predict the long-term position of the ball based on where it is now. The position of the opponent's goal would be an example of something in U for both teams. In this case both teams' utility functions contain a robust pointer to the goal's position.

I'd go for:

Reinforcement learning agents do two sorts of planning. One is the application of the dynamics (world-modelling) network, using a Monte Carlo tree search (or something like it) over explicitly-represented world states. The other is implicit in the future-reward-estimate function. You want as much planning as possible to be of the first type (a rough sketch of depth-limited planning follows the list below):

  1. It's much more supervisable. An explicitly-represented world state is more interrogable than the inner workings of a future-reward-estimate.
  2. It's less susceptible to value-leaking. By this I mean issues in alignment which arise from instrumentally-valuable (i.e. not directly part of the reward function) goals leaking into the future-reward-estimate.
  3. You can also turn down the depth on the tree search. If the agent literally can't plan beyond a dozen steps ahead it can't be deceptively aligned.
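A minimal sketch of what "turning down the depth" could look like mechanically; the function names and the toy environment are illustrative assumptions, not any particular agent's API:

```python
# Depth-limited planning: explicit model rollouts for `max_depth` steps,
# with the opaque future-reward estimate only consulted at the leaves.
# Shrinking `max_depth` caps how far ahead the *explicit* planner can look.
from typing import Callable, List, Tuple

State = int
Action = int
Model = Callable[[State, Action], Tuple[State, float]]  # (next_state, reward)


def plan(state: State,
         actions: List[Action],
         model: Model,
         value_estimate: Callable[[State], float],
         max_depth: int) -> Tuple[float, Action]:
    """Return (estimated return, best action) from a depth-limited search."""

    def q_value(s: State, a: Action, depth: int) -> float:
        next_s, reward = model(s, a)
        return reward + search(next_s, depth - 1)

    def search(s: State, depth: int) -> float:
        if depth == 0:
            # Beyond the horizon we fall back to the learned value head,
            # which is exactly the part that is hard to supervise.
            return value_estimate(s)
        return max(q_value(s, a, depth) for a in actions)

    best_action = max(actions, key=lambda a: q_value(state, a, max_depth))
    return q_value(state, best_action, max_depth), best_action


if __name__ == "__main__":
    # Toy world: states are integers, moving right (+1) pays 1 reward per step.
    toy_model: Model = lambda s, a: (s + a, 1.0 if a == 1 else 0.0)
    flat_value = lambda s: 0.0  # trust only the explicit rollouts
    print(plan(0, actions=[-1, 1], model=toy_model,
               value_estimate=flat_value, max_depth=3))
    # -> (3.0, 1): with a 3-step horizon the planner can only "see" 3 rewards.
```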

I would question the framing of mental subagents as "mesa optimizers" here. This sneaks in an important assumption: namely that they are optimizing anything. I think the general view of "humans are made of a bunch of different subsystems which use common symbols to talk to one another" has some merit, but I think this post ascribes a lot more agency to these subsystems than I would. I view most of the subagents of human minds as mechanistically relatively simple.

For example, I might reframe a lot of the elements of talking about the unattainable "object of desire" in the following way:

1. Human minds have a reward system which rewards thinking about "good" things we don't have (or else we couldn't ever do things)
2. Human thoughts ping from one concept to adjacent concepts
3. Thoughts of good things associate to assessment of our current state
4. Thoughts of our current state being lacking cause a negative emotional response
5. The reward signal fails to propagate back strongly enough to the reward system in step 1, so the thoughts of "good" things we don't have keep being reinforced
6. The cycle continues

I don't think this is literally the reason, but framings on this level seem more mechanistic to me. 

I also think that any framings along the lines of "you are lying to yourself all the way down and cannot help it" and "literally everyone is messed up in some fundamental way, and there are no humans who can function in a satisfying way" are just kind of bad. Seems like a Kafka trap to me.

I've spoken elsewhere about the human perception of ourselves as a coherent entity being a misfiring of systems which model others as coherent entities (for evolutionary reasons). I don't particularly think some sort of societal pressure is the primary reason for our thinking of ourselves as coherent, although societal pressure is certainly to blame for the instinct to repress certain desires.
