Mateusz Bagiński

Agent foundations, AI macrostrategy, human enhancement.

I endorse and operate by Crocker's rules.

I have not signed any agreements whose existence I cannot mention.


Comments

Link to the source of the quote?

Seeing some training data more than once would make the incentive to [have concepts that generalize OOD] weaker than if [the model saw every possible training datapoint at most once], but this doesn't mean that the latter is an incentive towards concepts that generalize OOD.

Though admittedly, we are getting into the discussion of where to place the zero point of "null OOD generalization incentive".

Also, I haven't looked into it, but it's plausible to me that models actually do see some data more than once because there are a lot of duplicates on the internet. If your training data contains the entire English Wikipedia, nLab, and some math textbooks, then surely there are a lot of duplicated theorems and exercises (not necessarily word-for-word, but it doesn't have to be word-for-word).

But I realized there might be another flaw in my comment, so I'm going to add an ETA.

(If I'm misunderstanding you, feel free to elaborate, ofc.)

DeepMind says boo SAEs, now Anthropic says yay SAEs!

The most straightforward synthesis[1] of these two reports is that SAEs find some sensible decomposition of the model's internals into computational elements (concepts, features, etc.), which circuits then operate on. It's just that these computational elements don't align with human thinking as nicely as humans would like. E.g. SAE-based concept probes don't work well OOD because the models were not optimized to have concepts that generalize OOD. This is perfectly consistent with linear probes being able to detect the concept from model activations (the model retains enough information about a concept such as "harmful intent" for the probe to latch onto it, even if the concept itself (or rather, its OOD-generalizing version) is not privileged in the model's ontology).

ETA: I think this would (weakly?) predict that SAE generalization failures should align with model performance dropping on some tasks. Or at least that the model would need to have some other features that get engaged OOD so that the performance doesn't drop? Investigating this is not my priority, but I'd be curious to know if something like this is the case.

  1. ^

    not to say that I believe it strongly; it's just a tentative/provisional synthesis/conclusion
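For concreteness, here's a minimal sketch of the kind of linear probe I have in mind: a single logistic-regression direction fit on residual-stream activations to detect a concept like "harmful intent", then evaluated on an OOD split. All the names (`acts_train`, `acts_ood`, etc.) are hypothetical, and the arrays are random placeholders so the snippet runs standalone; in practice they'd be activations extracted from the model on labeled prompts. The only point is that a concept being linearly decodable like this doesn't require it to show up as a privileged feature in an SAE-style decomposition.

```python
# Minimal sketch of a linear concept probe on model activations.
# Assumption (not from either report): acts_* are arrays of shape
# (n_examples, d_model) of residual-stream activations, and labels_* are
# 0/1 labels for the concept (e.g. "harmful intent").

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder data so the sketch runs on its own; in practice these would be
# activations collected from the model on labeled prompts.
d_model = 512
acts_train = rng.normal(size=(1000, d_model))
labels_train = rng.integers(0, 2, size=1000)
acts_ood = rng.normal(size=(200, d_model))
labels_ood = rng.integers(0, 2, size=200)

# A linear probe: one weight vector over activation space.
probe = LogisticRegression(max_iter=1000)
probe.fit(acts_train, labels_train)

# If probe accuracy holds up OOD while an SAE feature for the same concept
# degrades, that's consistent with the information being linearly present
# without the concept being privileged in the SAE's decomposition.
print("in-distribution acc:", probe.score(acts_train, labels_train))
print("OOD acc:", probe.score(acts_ood, labels_ood))
```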

So... there surely are things like (overlapping, likely non-exhaustive):

  • Memetic Darwinian anarchy - concepts proliferating without control, trying to carve out for themselves new niches in the noosphere or grab parts of real estate belonging to incumbent concepts.
  • Memetic warfare - individuals, groups, egregores, trying to control the narrative by describing the same thing in the language of their own ideology, yadda yadda.
  • Independent invention of the same idea - in which case it's usually given different names (but also, plausibly, since some people may grow attached to their concepts of choice, they might latch onto trivial/superficial differences and amplify them, so that one or more instances of this multiply independently invented concept is now morphed into something other than what it "should be").
  • Memetic rent seeking - because introducing a new catchy concept might marginally bump up your h-index.

So, as usual, the law of equal and opposite advice applies.

Still, the thing Jan describes is real and often a big problem.

I also think I somewhat disagree with this:

An idea should either be precisely defined enough that it's clear why it can't be rounded off (once the precise definition is known), or it's a vague idea and it either needs to become more precise to avoid being rounded or it is inherently vague and being vague there can't be much harm from rounding because it already wasn't clear where its boundaries were in concept space.

Meanings are often subtle, intuited but not fully grasped, in which case a (premature) attempt to explicitize them risks collapsing their reference to the important thing they are pointing at. Many important concepts are not precisely defined. Many are best sorta-defined ostensively: "examples of X include A, B, C, D, and E; I'm not sure what makes all of them instances of X, maybe it's that they share the properties Y and Z ... or at least my best guess is that Y and Z are important parts of X and I'm pretty sure that X is a Thing™".

Eliezer has a post (I can't find it at the moment) where he notices that the probabilities he gives are inconsistent. He asks something like, "Would I really not behave as if God existed if I believed that P(Christianity)=1e-5?", and then, "Oh well, too bad, but I don't know which way to fix it, and fixing it either way risks losing important information, so I'm deciding to live with this inconsistency for now."

This Google search is empty (and it's also empty on the original Arbital page, so it's not a porting issue).

LUCA lived around 4 billion years ago with some chirality chosen at random.

Not necessarily: https://en.wikipedia.org/wiki/Homochirality#Deterministic_theories

E.g.

Deterministic mechanisms for the production of non-racemic mixtures from racemic starting materials include: asymmetric physical laws, such as the electroweak interaction (via cosmic rays) or asymmetric environments, such as those caused by circularly polarized light, quartz crystals, or the Earth's rotation, β-Radiolysis or the magnetochiral effect. The most accepted universal deterministic theory is the electroweak interaction. Once established, chirality would be selected for.

Especially given how concentrated-sparse it is.

It would be much better to have it as a Google Sheet.

How long do you[1] expect it to take to engineer scaffolding that will make reasoning models useful for the kind of stuff described in the OP?

  1. ^

    You=Ryan firstmost but anybody reading this secondmost.
