All of paulom's Comments + Replies

A while back I was looking for toy examples of environments with different amounts of 'naturalness' to their abstractions, and along the way noticed a connection between this version of Gooder Regulator and the Blackwell order.

Inspired by this, I expanded on this perspective of preferences-over-models / abstraction a fair bit here.

It includes among other things:

  • the full preorder of preferences-shared-by-all-agents over maps (vs. just the maximum)
  • an argument that actually we want to generalize to a different diagram instead[1] (the diagram itself is in the linked post)
  • an extension to preferences
... (read more)

FWIW - here (finally) is the related post I mentioned, which motivated this observation: Natural Abstraction: Convergent Preferences Over Information Structures. The context is a power-seeking-style analysis of the naturality of abstractions, where I was determined to have transitive preferences.

It had quite a bit of scope creep already, so I ended up not including a general treatment of the (transitive) 'sum over orbits' version of retargetability (and in some parts I considered only optimality - sorry! I still think it makes sense to start there first and then... (read more)

Not sure this is exactly what you meant by the full preference ordering, but might be of interest: I give the preorder of universally-shared-preferences between "models" here (in section 4).

Basically, it is the Blackwell order, if you extend the Blackwell setting to include a system.
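
In case it helps make that concrete, here is a minimal sketch (my own, in the plain Blackwell setting without the extra system; the function and the toy channels are illustrative, not from the linked post) of deciding Blackwell dominance between two finite channels. P dominates Q exactly when Q = P·M for some row-stochastic garbling M, which is a linear feasibility problem:

```python
# Minimal sketch (illustrative, not from the linked post): P Blackwell-dominates Q
# iff Q = P @ M for some row-stochastic "garbling" matrix M.
import numpy as np
from scipy.optimize import linprog

def blackwell_dominates(P, Q):
    """True if channel P (|Theta| x |X|) Blackwell-dominates Q (|Theta| x |Y|)."""
    n_theta, n_x = P.shape
    n_theta_q, n_y = Q.shape
    assert n_theta == n_theta_q, "channels must share a state space"

    n_vars = n_x * n_y  # entries of M, flattened as m[x, y] -> x * n_y + y
    A_eq, b_eq = [], []

    # (P @ M)[theta, y] must equal Q[theta, y] for every theta, y.
    for theta in range(n_theta):
        for y in range(n_y):
            row = np.zeros(n_vars)
            for x in range(n_x):
                row[x * n_y + y] = P[theta, x]
            A_eq.append(row)
            b_eq.append(Q[theta, y])

    # Each row of M must be a probability distribution over Y.
    for x in range(n_x):
        row = np.zeros(n_vars)
        row[x * n_y:(x + 1) * n_y] = 1.0
        A_eq.append(row)
        b_eq.append(1.0)

    # Pure feasibility check: zero objective, entries of M constrained to [0, 1].
    res = linprog(np.zeros(n_vars), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, 1)] * n_vars, method="highs")
    return res.success

# The fully informative (identity) channel dominates a noisy view of the same
# two states, but not the other way around.
P = np.eye(2)
Q = np.array([[0.8, 0.2],
              [0.3, 0.7]])
print(blackwell_dominates(P, Q))  # True
print(blackwell_dominates(Q, P))  # False
```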

Thanks for the reply. I'll clean this up into a standalone post and/or cover this in a related larger post I'm working on, depending on how some details turn out.

What are A, B, C here?

Variables I forgot to rename when I changed how I was labelling the arguments in my example; they should be the arguments with respect to which the function is retargetable.

paulom

I appreciate this generalization of the results - I think it's a good step towards showing the underlying structure involved here.

One point I want to comment on is the transitivity of the relation, viewed as a relation on the induced functions. Namely, it isn't transitive, and can even contain cycles of non-equivalent elements. (This came up when I was trying to apply a version of these results, and hoping that it would be the preference relation I was looking for out of the box.) Quite possibly you noticed this since you give 'limited transitivity' in Lemma B.1 rather than fu... (read more)
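
For a concrete picture of the kind of cycle I mean, here is a toy stand-in (my own example; it uses a simplified "better for a strict majority of parameters θ" comparison rather than the orbit-based relation from the post):

```python
# Toy example (mine): a majority-of-parameters comparison between functions of
# theta yields a Condorcet-style cycle, so the relation cannot be transitive.
thetas = [0, 1, 2]
f_A = {0: 3, 1: 1, 2: 2}
f_B = {0: 2, 1: 3, 2: 1}
f_C = {0: 1, 1: 2, 2: 3}

def beats_for_most(f, g):
    """True if f(theta) > g(theta) for a strict majority of the thetas."""
    wins = sum(f[t] > g[t] for t in thetas)
    return wins > len(thetas) / 2

print(beats_for_most(f_A, f_B))  # True  (wins at theta = 0 and 2)
print(beats_for_most(f_B, f_C))  # True  (wins at theta = 0 and 1)
print(beats_for_most(f_C, f_A))  # True  (wins at theta = 1 and 2)
# f_A beats f_B beats f_C beats f_A: a cycle of pairwise non-equivalent elements.
```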

TurnTrout
This is a nice contribution, thank you!  I agree with the parts I could verify within about 10 minutes of staring (it's been a while). The scalar-retargetability is nice, and I like the delineation of what definitions yield what properties. Seems like an additional hour of work would yield a good AF post, where I'd expect most of the useful additional work to come from fleshing out the example more and justifying the claims in a bit more detail.  To clarify: What are A,B,C here?

I think this line of research is interesting. I really like the core concept of abstraction as summarizing the information that's relevant 'far away'.


A few thoughts:

- For a common human abstraction to be mostly recoverable as a 'natural' abstraction, it must depend mostly on the thing it is trying to abstract, and not e.g. evolutionary or cultural history, or biological implementation. This seems more plausible for 'trees' than it does for 'justice'. There may be natural game-theoretic abstractions related to justice, but I'd expect human concepts and beha... (read more)

johnswentworth
Great comment, you're hitting a bunch of interesting points. A few notes on this.

First, what natural abstractions we use will clearly depend at least somewhat on the specific needs of humans. A prehistoric tribe of humans living on an island near the equator will probably never encounter snow, and never use that natural abstraction.

My claim, for these cases, is that the space of natural abstractions is (approximately) discrete. Discreteness says that there is no natural abstraction "arbitrarily close" to another natural abstraction - so, if we can "point to" a particular natural abstraction in a close-enough way, then there's no ambiguity about which abstraction we're pointing to. This does not mean that all minds use all abstractions. But it means that if a mind does use a natural abstraction, then there's no ambiguity about which abstraction they're using.

One concrete consequence of this: one human can figure out what another human means by a particular word without an exponentially massive number of examples. The only way that's possible is if the space of potential-word-meanings is much smaller than e.g. the space of configurations of a mole of atoms. Natural abstractions give a natural way for that to work. Of course, in order for that to work, both humans must already be using the relevant abstraction - e.g. if one of them has no concept of snow, then it won't work for the word "snow". But the claim is that we won't have a situation where two people have intuitive notions of snow which are arbitrarily close, yet different. (People could still give arbitrarily-close-but-different verbal definitions of snow, but definitions are not how our brain actually represents word-meanings at the intuitive level. People could also use more-or-less fine-grained abstractions, like eskimos having 17 notions of snow, but those finer-grained abstractions will still be unambiguous.)

Yes! This can also happen even without agents: if the earth were destroyed and all that