Vivek Hebbar

Comments
New Endorsements for “If Anyone Builds It, Everyone Dies”
Vivek Hebbar · 9d · 57

Wouldn't it be better to have a real nighttime view of North America?  I also found it jarring...

the void
Vivek Hebbar · 19d · Ω · 895

I sympathize somewhat with this complexity point but I'm worried that training will be extremely non-Bayesian in a way that makes complexity arguments not really work.  So I feel like the point about optimization power at best cuts the worry about hyperstition by about a factor of 2.  Perhaps there should be research on how "sticky" the biases from early in training can be in the face of later optimization pressure.

Toward A Mathematical Framework for Computation in Superposition
Vivek Hebbar · 22d · 20

In general, computing a boolean expression with k terms without the signal being drowned out by the noise will require ϵ < 1/k if the noise is correlated, and ϵ < 1/k² if the noise is uncorrelated.

Shouldn't the second one be 1/√k?
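For intuition, here is a quick numerical sanity check of that √k scaling (a sketch with arbitrary values of ϵ and k, not taken from the post): the sum of k independent zero-mean noise terms of size ϵ has standard deviation about ϵ√k, while k perfectly correlated terms sum to about ϵk, so keeping the noise below an O(1) signal needs roughly ϵ < 1/√k and ϵ < 1/k respectively.

```python
import numpy as np

# Toy check (arbitrary numbers): how does total noise scale with k?
rng = np.random.default_rng(0)
k, eps, trials = 1000, 0.01, 20_000

# k independent zero-mean noise terms of size ~eps
uncorrelated = rng.normal(0, eps, size=(trials, k)).sum(axis=1)
print(uncorrelated.std())  # ~ eps * sqrt(k) ≈ 0.32

# the same eps-sized term repeated k times (fully correlated)
correlated = k * rng.normal(0, eps, size=trials)
print(correlated.std())    # ~ eps * k = 10.0
```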

the past token can use information that is not accessible to the token generating the key (as it is in its “future” – this is captured e.g. by the attention mask)

Is this meant to say "last token" instead of "past token"?

How training-gamers might function (and win)
Vivek Hebbar · 2mo · Ω · 340

I made a few edits to this post today, mostly in response to feedback from Ryan and Richard:

  • Added 2 sentences emphasizing the point that schemers probably won't be aware of their terminal goal in most contexts.  I thought this was clear from the post already, but apparently it wasn't.
  • Modified "What factors affect the likelihood of training-gaming?" to emphasize that "sum of proxies" and "reward-seeker" are points on a spectrum.  We might get an in-between model where context-dependent drives conflict with higher goals and sometimes "win" even outside the settings they are well-adapted to.  I also added a footnote about this to "Characterizing training-gamers and proxy-aligned models".
  • Edited the "core claims" section (e.g. softening a claim and adding content).
  • Changed "reward seeker" to "training-gamer" in a bunch of places where I was using it to refer to both terminal reward seekers and schemers.
  • Miscellaneous small changes
Downstream applications as validation of interpretability progress
Vivek Hebbar · 2mo · Ω · 340

In the fictional dialogue, Claude Shannon's first answer is more correct -- info theory is useful far outside the original domain of application, and its elegance is the best way to predict that.

'Empiricism!' as Anti-Epistemology
Vivek Hebbar · 3mo · 42

Slightly more spelled-out thoughts about bounded minds:

  1. We can't actually run the hypotheses of Solomonoff induction.  We can only make arguments about what they will output.
  2. In fact, almost all of the relevant uncertainty is logical uncertainty.  The "hypotheses" (programs) of Solomonoff induction are not the same as the "hypotheses" entertained by bounded Bayesian minds.  I don't know of any published formal account of what these bounded hypotheses even are and how they relate to Solomonoff induction.  But informally, all I'm talking about are ordinary hypotheses like "the Ponzi guy only gets money from new investors".
  3. In addition to "bounded hypotheses" (of unknown type), we also have "arguments".  An argument is a thing whose existence provides fallible evidence for a claim.
  4. Arguments are made of pieces which can be combined "conjunctively" or "disjunctively".  The conjunction of two subarguments is weaker evidence for its claim than each subargument was for its subclaim.  This is the sense in which "big arguments" are worse (a toy numerical example follows below).
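A minimal numerical illustration of point 4 (the probabilities are arbitrary, not from the post): if two independent subarguments each support their subclaim with probability 0.9, the conjunction is only supported with probability

$$P(A \wedge B) = P(A)\,P(B) = 0.9 \times 0.9 = 0.81,$$

and a chain of ten such steps bottoms out at $0.9^{10} \approx 0.35$ -- so a long, highly conjunctive argument carries much less evidential weight than any of its individual pieces.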
'Empiricism!' as Anti-Epistemology
Vivek Hebbar · 3mo · 42

I suspect there is some merit to the Scientist's intuition (and the idea that constant returns are more "empirical") which nobody has managed to explain well.  I'll try to explain it here.[1]

The Epistemologist's notion of simplicity is about short programs with unbounded runtime which perfectly explain all evidence.  The [non-straw] empiricist notion of simplicity is about short programs with heavily-bounded runtime which approximately explain a subset of the evidence.  The Epistemologist is right that there is nothing of value in the empiricist's notion if you are an unbounded Solomonoff inductor.  But for a bounded mind, two important facts come into play:

  1. The implications of hypotheses can only be guessed using "arguments".  These "arguments" become less reliable the more conjunctive they are.
  2. Induction over runtime-bounded programs turns out for some reason to agree with Solomonoff induction way more than the maximum entropy prior does, despite not containing any "correct" hypotheses.  This is a super important fact about reality.

Therefore, a bounded mind will sometimes get more evidence from "fast-program induction on local data" (i.e. just extrapolating without a gears-level model) than from highly conjunctive arguments about gears-level models.
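As a toy quantification of that tradeoff (all numbers hypothetical), suppose each step of a gears-level argument is independently right with probability q, while naive extrapolation of the local trend is right with probability p_ext; the argument is only worth trusting over the extrapolation while qⁿ > p_ext.

```python
# Toy model (hypothetical numbers): n-step conjunctive argument vs. plain extrapolation.
q = 0.95      # assumed per-step reliability of the gears-level argument
p_ext = 0.80  # assumed reliability of "just extrapolate the local trend"

for n in (1, 3, 5, 10, 20, 40):
    p_arg = q ** n  # reliability of the whole conjunctive chain
    better = "argument" if p_arg > p_ext else "extrapolation"
    print(f"{n:2d} steps: argument reliability {p_arg:.2f} -> prefer {better}")
```

On these made-up numbers the crossover comes at four or five steps; the point is only that a crossover exists, not where it sits.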

  1. ^

    FWIW, I agree with the leading bit of Eliezer's position -- that we should think about the object-level and not be dismissive of arguments and concretely imagined gears-level models.

Shah and Yudkowsky on alignment failures
Vivek Hebbar · 3mo · Ω · 240

I'd be capable of helping aliens optimize their world, sure. I wouldn't be motivated to, but I'd be capable.

@So8res How many bits of complexity is the simplest modification to your brain that would make you in fact help them?  (asking for an order-of-magnitude wild guess)
(This could be by actually changing your values-upon-reflection, or by locally confusing you about what's in your interest, or by any other means.)

Arguments about fast takeoff
Vivek Hebbar · 3mo · 30

A sigmoid is usually what "straight line" should mean for a quantity bounded between 0 and 1.  It's a straight line in logit-space, the most natural space that respects that range restriction.
(Just as exponentials are often the correct form of "straight line" for things that are required to be positive but have no ceiling in sight.)
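A worked version of that claim (standard definitions, nothing specific to this thread): for $p \in (0,1)$ the logit is $\log\frac{p}{1-p}$, and a logistic curve in $t$ is exactly a straight line after that transform:

$$p(t) = \frac{1}{1 + e^{-(at+b)}} \quad\Longleftrightarrow\quad \operatorname{logit}(p(t)) = \log\frac{p(t)}{1-p(t)} = at + b.$$

Likewise, for a positive quantity with no ceiling, $\log y(t) = at + b$ (i.e. $y(t) = e^{at+b}$) is the analogous "straight line".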

Discussion with Nate Soares on a key alignment difficulty
Vivek Hebbar · 9mo · Ω · 330

Do you want to try playing this game together sometime?

Wikitag Contributions

Transformers · 3y

Posts

68 · When does training a model change its goals? · Ω · 19d · 2
40 · Political sycophancy as a model organism of scheming · Ω · 2mo · 0
52 · How can we solve diffuse threats like research sabotage with AI control? · Ω · 2mo · 1
107 · How training-gamers might function (and win) · Ω · 3mo · 5
69 · Different senses in which two AIs can be “the same” · Ω · 1y · 2
174 · Thomas Kwa's MIRI research experience · 2y · 53
46 · Infinite-width MLPs as an "ensemble prior" · Ω · 2y · 0
13 · Is EDT correct? Does "EDT" == "logical EDT" == "logical CDT"? · Q · 2y · 2
4 · Vivek Hebbar's Shortform · Ω · 3y · 5
68 · Path dependence in ML inductive biases · Ω · 3y · 13