Vivek Hebbar

Slightly more spelled-out thoughts about bounded minds:

  1. We can't actually run the hypotheses of Solomonoff induction.  We can only make arguments about what they will output.
  2. In fact, almost all of the relevant uncertainty is logical uncertainty.  The "hypotheses" (programs) of Solomonoff induction are not the same as the "hypotheses" entertained by bounded Bayesian minds.  I don't know of any published formal account of what these bounded hypotheses even are and how they relate to Solomonoff induction.  But informally, all I'm talking about are ordinary hypotheses like "the Ponzi guy only gets money from new investors".
  3. In addition to "bounded hypotheses" (of unknown type), we also have "arguments".  An argument is a thing whose existence provides fallible evidence for a claim.
  4. Arguments are made of pieces which can be combined "conjunctively" or "disjunctively".  The conjunction of two subarguments is weaker evidence for its claim than each subargument was for its subclaim (see the toy calculation after this list).  This is the sense in which "big arguments" are worse.
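A toy calculation with made-up numbers (mine, not part of the original claim): if each of two subarguments establishes its subclaim with probability 0.9, then the conjunctive argument supports the joint claim at most this much:

$$P(C_1 \wedge C_2) \;\le\; \min\{P(C_1),\, P(C_2)\} = 0.9, \qquad P(C_1 \wedge C_2) \approx 0.9 \times 0.9 = 0.81 \ \text{if the subclaims are roughly independent.}$$

Each extra conjunct shaves off more, so a long chain of individually plausible steps can end up only weakly supporting its conclusion.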

I suspect there is some merit to the Scientist's intuition (and the idea that constant returns are more "empirical") which nobody has managed to explain well.  I'll try to explain it here.[1]

The Epistemologist's notion of simplicity is about short programs with unbounded runtime which perfectly explain all evidence.  The [non-straw] empiricist notion of simplicity is about short programs with heavily-bounded runtime which approximately explain a subset of the evidence.  The Epistemologist is right that there is nothing of value in the empiricist's notion if you are an unbounded Solomonoff inductor.  But for a bounded mind, two important facts come into play:

  1. The implications of hypotheses can only be guessed using "arguments".  These "arguments" become less reliable the more conjunctive they are.
  2. Induction over runtime-bounded programs turns out for some reason to agree with Solomonoff induction way more than the maximum entropy prior does, despite not containing any "correct" hypotheses.  This is a super important fact about reality.

Therefore a bounded mind will sometimes get more evidence from "fast-program induction on local data" (i.e. just extrapolating without a gears-level model) than from highly conjunctive arguments about gears-level models.
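A toy numerical sketch of that tradeoff (the 90% and 80% figures are made up for illustration, not from the original): compare a k-step conjunctive argument, each step 90% reliable and treated as independent, against plain extrapolation from local data with an 80% hit rate on questions of this type.

```python
# Toy model: when does a conjunctive gears-level argument beat blind extrapolation?
STEP_RELIABILITY = 0.9        # assumed reliability of each argument step
EXTRAPOLATION_HIT_RATE = 0.8  # assumed track record of "just extrapolate the trend"

for k in range(1, 8):
    argument_reliability = STEP_RELIABILITY ** k   # independence assumption
    winner = "argument" if argument_reliability > EXTRAPOLATION_HIT_RATE else "extrapolation"
    print(f"{k}-step argument: {argument_reliability:.2f} -> prefer {winner}")

# With these numbers, any argument longer than two steps loses to extrapolation,
# even though the argument is "gears-level" and the extrapolation is not.
```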

  1. ^

    FWIW, I agree with the leading bit of Eliezer's position -- that we should think about the object-level and not be dismissive of arguments and concretely imagined gears-level models.

> I'd be capable of helping aliens optimize their world, sure. I wouldn't be motivated to, but I'd be capable.

@So8res How many bits of complexity is the simplest modification to your brain that would make you in fact help them?  (asking for an order-of-magnitude wild guess)
(This could be by actually changing your values-upon-reflection, or by locally confusing you about what's in your interest, or by any other means.)

Sigmoid is usually what "straight line" should mean for a quantity bounded between 0 and 1.  It's a straight line in logit-space, the most natural space that respects that range restriction.
(Just as exponentials are often the correct form of "straight line" for things that are required to be positive but have no ceiling in sight.)
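A minimal numerical check (the slope and intercept below are arbitrary, chosen only for illustration): a curve that is literally a straight line in logit space is a sigmoid in the original space, and vice versa.

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

x = np.linspace(-5, 5, 101)
a, b = 1.3, -0.4              # arbitrary slope and intercept
p = sigmoid(a * x + b)        # "straight line" for a quantity confined to (0, 1)

# Mapping back to logit space recovers the straight line a*x + b exactly.
assert np.allclose(logit(p), a * x + b)
```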

Do you want to try playing this game together sometime?

> We're then going to use a small amount of RL (like, 10 training episodes) to try to point it in this direction. We're going to try to use the RL to train: "Act exactly like [a given alignment researcher] would act."

Why are we doing RL if we just want imitation?  Why not SFT on expert demonstrations?
Also, if 10 episodes suffices, why is so much post-training currently done on base models?
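For concreteness, the SFT alternative is just next-token cross-entropy on the demonstration transcripts, with no reward signal or episode rollouts involved.  A minimal PyTorch-style sketch, where the model interface and batch format are placeholders of my own (not from the quoted plan):

```python
import torch.nn.functional as F

def sft_step(model, optimizer, batch):
    """One supervised fine-tuning step on an expert demonstration batch.

    Assumes `model(input_ids)` returns logits of shape (batch, seq_len, vocab)
    and `batch["input_ids"]` holds tokenized demonstrations of shape (batch, seq_len).
    """
    input_ids = batch["input_ids"]
    logits = model(input_ids[:, :-1])                  # predict each next token
    loss = F.cross_entropy(                            # plain imitation loss
        logits.reshape(-1, logits.size(-1)),
        input_ids[:, 1:].reshape(-1),
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```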

If the agent follows EDT, it seems like you are giving it epistemically unsound credences. In particular, the premise is that it's very confident it will go left, and the consequence is that it in fact goes right. This was the world model's fault, not EDT's fault. (It is notable though that EDT introduces this loopiness into the world model's job.)
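A toy formalization of where the blame falls (my own sketch, not from the post): EDT's choice rule only consumes the world model's conditional credences P(outcome | action).  If the same world model also insists P(go left) = 0.99 while the rule below outputs "right", the incoherence lives in the credences it supplied, not in the rule.

```python
OUTCOMES = ["good", "bad"]

def edt_choice(actions, cond_prob, utility):
    """Pick argmax_a E[U | A = a], using the world model's conditional credences."""
    def expected_utility(a):
        return sum(cond_prob(o, a) * utility(o) for o in OUTCOMES)
    return max(actions, key=expected_utility)

# Illustrative world model whose conditionals favor going right:
cond_prob = lambda o, a: {("good", "left"): 0.2, ("bad", "left"): 0.8,
                          ("good", "right"): 0.9, ("bad", "right"): 0.1}[(o, a)]
utility = lambda o: 1.0 if o == "good" else 0.0

print(edt_choice(["left", "right"], cond_prob, utility))  # -> "right"
```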

Superadditivity seems rare in practice.  For instance, workers should have subadditive contributions after some point.  This is certainly true in the unemployment example in the post.
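One standard way to get the subadditive case (my illustration, not from the post): with fixed capital, a concave production function in the number of workers, e.g.

$$f(n) = \sqrt{n} \;\Rightarrow\; f(m + n) \le f(m) + f(n), \qquad \text{e.g. } f(1+1) = \sqrt{2} \approx 1.41 < 2 = f(1) + f(1),$$

so two groups of workers together contribute less than the sum of their separate contributions.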

The idea of dividing failure stories into "failures involving rogue deployments" and "other failures" seems most useful if the following argument goes through:
1. Catastrophes require a very large (superhuman?) quantity and/or quality of intellectual labor
2. Either this labor is done by AIs in approved scaffolds, or it is done in "rogue deployments"
3. Hence the only easy-by-default disaster route is through a rogue deployment
4. Hence if we rule out rogue deployments and very impressive/difficult malicious labor in our scaffolds, we are safe

Premise 1 seems true for most catastrophes that require only outsider affordances (such as "order the proteins etc. for a pandemic or a nanofactory").  Such catastrophes can usually be presumed difficult, because rogue humans don't randomly end the world; if causing one took only outsider affordances and modest skill, some rogue human would already have done it.

However, this argument for #1 doesn't work for routes which use insider affordances or responsibilities.  Nor for routes which go through work done by future generations of AIs.  The central case of both is routes that involve using insider privileges to backdoor or intentionally misalign future AIs.  For example:
- Planting backdoors in the codebase which allow safety training code to be turned off or misconfigured at a later time
- Poisoning datasets (idk if insider access helps though)
- Intentionally creating and defending flawed alignment plans

These routes get future generations of AIs to do the difficult work required for the final catastrophe.  But they aren't necessarily rogue deployments.  Consider a backdoored future AI running only in authorized scaffolds, where the safety case for the future AI was an alignment case (not a control case).  The alignment case fails due to the subversive actions of the earlier AI.  So the earlier AI has caused a catastrophe without doing the difficult work of directly taking over the world, and also without a rogue deployment.

One could separately argue that these routes are also "fundamentally hard" (even if not as hard as directly causing a catastrophe), but I don't see a clear blanket reason.
