porby

Comments

porby · 40

I sometimes post experiment ideas on my shortform. If you see one that seems exciting and you want to try it, great! Please send me a message so we can coordinate and avoid doing redundant work.

porby · 50

This is great research and I like it!

I'd be interested in knowing more about how the fine-tuning is regularized and the strength of any KL-divergence-penalty-ish terms. I'm not clear on how the OpenAI fine-tuning API works here with default hyperparameters.

By default, I would expect that optimizing for a particular narrow behavior with no other constraints would tend to bring along a bunch of learned-implementation-dependent correlates. Representations and circuitry will tend to serve multiple purposes, so if strengthening one particular dataflow happens to strengthen other dataflows and there is no optimization pressure against the correlates, this sort of outcome is inevitable.

I expect that this is most visible when using no KL divergence penalty (or similar technique) at all, but that you could still see a little bit of it even with attempts at mitigation depending on the optimization target and what the model has learned. (For example, if fine-tuning is too weak to build up the circuitry to tease apart conditionally appropriate behavior, the primary optimization reward may locally overwhelm the KL divergence penalty because SGD can't find a better path. I could see this being more likely with PEFT like LoRAs, maybe?)

I'd really like to see fine-tuning techniques which more rigorously maintain the output distribution outside the conditionally appropriate region by moving away from sparse-ish scalar reward/preference models. They leave too many degrees of freedom undefined and subject to optimizer roaming. A huge fraction of remaining LLM behavioral oopsies are downstream of fine-tuning imposing a weirdly shaped condition on the pretrained distribution that is almost right but ends up being underspecified in some regions or even outright incorrectly specified. This kind of research is instrumental in motivating that effort.
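
For concreteness, here's a minimal sketch of the kind of KL-anchored fine-tuning objective I have in mind. This is illustrative pseudocode under assumed names (`tuned_model`, `base_model`, `kl_weight`, HuggingFace-style causal LMs), not a description of what the OpenAI API actually does:

```python
import torch
import torch.nn.functional as F

def finetune_loss(tuned_model, base_model, input_ids, kl_weight=0.1):
    """Task loss plus a KL penalty anchoring the tuned model to its base.

    With kl_weight=0, nothing stops the optimizer from dragging correlated
    behaviors along with the target behavior; a nonzero penalty pushes back
    on distributional drift outside the fine-tuning target.
    """
    tuned_logits = tuned_model(input_ids).logits
    with torch.no_grad():
        base_logits = base_model(input_ids).logits

    # Standard next-token cross-entropy on the fine-tuning data.
    vocab = tuned_logits.size(-1)
    task_loss = F.cross_entropy(
        tuned_logits[:, :-1].reshape(-1, vocab),
        input_ids[:, 1:].reshape(-1),
    )

    # KL(tuned || base) per token: how far the tuned distribution has
    # moved away from the pretrained distribution.
    kl = F.kl_div(
        F.log_softmax(base_logits, dim=-1),   # "input" = base log-probs
        F.log_softmax(tuned_logits, dim=-1),  # "target" = tuned log-probs
        log_target=True,
        reduction="batchmean",
    )
    return task_loss + kl_weight * kl
```

Even with such a penalty, the failure mode in the parenthetical above can still apply: the penalty is only evaluated on whatever data the KL term sees, so the distribution can still drift in regions the fine-tuning batches never visit.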

porby · 40

These things are possible, yes. Those bad behaviors are not necessarily trivial to access, though.

  1. If you underspecify/underconstrain your optimization process, it may roam to unexpected regions permitted by that free space.
  2. It is unlikely that the trainer's first attempt at specifying the optimization constraints during RL-ish fine-tuning will precisely bound the possible implementations to their truly desired target, even if the allowed space does contain it; underconstrained optimization is a likely default for many tasks.
  3. Which implementations are likely to be found during training depends on what structure is available to guide the optimizer (everything from architecture, training scheme, dataset, and so on), and the implementations' accessibility to the optimizer with respect to all those details.
  4. Against the backdrop of the pretrained distribution of LLMs, low-level bad behavior (think Sydney Bing vibes) is easy to access, even accidentally. Agentic coding assistants are harder to access; it's very unlikely you will accidentally produce an agentic coding assistant. Likewise, it takes effort to specify an effective agent that pursues coherent goals against the wishes of its user. It requires a fair number of bits to narrow the distribution in that way.
  5. More generally, if you use N bits to try to specify behavior A, having a nonnegligible chance of accidentally specifying behavior B instead requires that those bits at minimum allow B; to make B probable, they would need to nearly imply it. (I think Sydney Bing is actually a good example case to consider here; see the toy calculation below.)
  6. For a single attempt at specifying behavior, it's vastly more likely that a developer trains a model that fails in uninteresting ways than for them to accidentally specify just enough bits to achieve something that looks about right, but ends up entailing extremely bad outcomes at the same time. Uninteresting, useless, and easy-to-notice failures are the default because they hugely outnumber 'interesting' (i.e. higher bit count) failures.
  7. You can still successfully specify bad behavior if you are clever, but malicious.
  8. You can still successfully specify bad behavior if you make a series of mistakes. This is not impossible or even improbable; it has already happened and will happen again. Achieving higher capability bad behavior, however, tends to require more mistakes, and is less probable.

Because of this, I expect to see lots of early failures, and that more severe failures will be rarer in proportion to the number of mistakes needed to specify them. I strongly expect the failures to be visible enough that a desire to make a working product, combined with something like liability frameworks, would have some iterations to work and spook irresponsible companies into putting nonzero effort into not making particularly long series of mistakes. This is not a guarantee of safety.
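
As a toy way to quantify points 5 and 6 (my own framing; M, N, and K are informal stand-ins, not rigorously defined quantities): treat specifying a behavior as supplying bits of selection pressure against the pretrained prior.

```latex
% M: bits needed to pin down behavior B against the pretrained prior.
% N: bits supplied while aiming at behavior A.
% K: how many of those N bits also count toward specifying B.
\[
  P(B \mid \mathrm{spec}_A) \approx 2^{-(M - K)}, \qquad 0 \le K \le \min(N, M)
\]
```

Under that framing, accidentally landing on a highly capable bad behavior requires K ≈ M, i.e. a specification that nearly implies it, while shallow failures with small M stay cheap to stumble into; that asymmetry is why uninteresting failures dominate.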

porby · 40

> Instrumentality exists on the simulacra level, not the simulator level. This would suggest that corrigibility could be maintained by establishing a corrigible character in context. Not clear on the practical implications.

That one, yup. The moment you start conditioning the predictor (through prompting, fine-tuning, or otherwise) into narrower spaces of action, you can induce predictions corresponding to longer-term goals and instrumental behavior. Effective longer-term planning requires greater capability, so one should expect this kind of thing to become more apparent as models get stronger, even as the base models can be correctly claimed to have 'zero' instrumentality.

In other words, the claims about simulators here are quite narrow. It's pretty easy to end up thinking this is useless if the apparently-nice property gets deleted the moment you use the thing, but I'd argue it's actually still a really good foundation. A longer version of the argument is in the goal agnosticism FAQ, and there's this RL comment poking at some adjacent and relevant intuitions, but I haven't written up how all the pieces come together. A short version: I'm pretty optimistic at the moment about which path to capabilities greedy incentives will push us down, and I strongly suspect that the scariest possible architectures/techniques are actually repulsive to the optimizer-that-the-AI-industry-is.

porby · 40

There are lots of little things when it's not at a completely untenable level. Stuff like:

  1. Going up a flight or three of steps and really feeling it in my knees, slowing down, and saying 'hoo-oof.'
  2. Waking up and stepping out of bed and feeling general unpleasantness in my feet, ankles, knees, hips, or back.
  3. Quickly seeking out places to sit when walking around, particularly if there were also longer periods of standing, because my back would become terribly stiff.
  4. Walking on uneven surfaces and having a much harder time catching myself when I stumbled, not infrequently causing me to tweak something in my ankle or knee.
  5. Always having something hurting a bit. Usually my knees, back, or ankles, but maybe I tried to catch myself with my arm and my elbow didn't like it because it wasn't ready to arrest that much mass that quickly.
  6. Trying to keep up on a somewhat sloped section of sidewalk while trying not to sound like I was struggling.
  7. Being unable to do basic motions like squatting (unweighted) without significant pain, and needing to use my arms to stand up.
  8. Being woken up by aches.
  9. Accumulated damage from mild and moderate sprains making itself known for years after the original incidents.
  10. Conscious avoidance. Looking at some stairs and having a pang of "ugh," and looking for an alternative.
  11. Subconscious avoidance. Reaching down to pick up a backpack from the floor, but bracing one arm against a desk to minimize how much load is carried by my knees or hips because my learned motor patterns took into account that going too much further than that was hard and would likely be painful.

When it progresses, it's hard to miss:

  1. Lying on the ground, trying not to laugh at the absurdity of how thoroughly stuck I was, because laughing would hurt too much. I tried motivating myself to move, but came to the conclusion that even if there were a knife-wielding madman sprinting toward me in that moment, the involuntary muscle spasms caused by the pain would not have let me escape.
  2. Walking around a corner, slowly, with bare feet, on level ground, indoors, and rolling my ankle.
  3. Sometimes being unable to walk normally for days at a time because one of my knees slipped out of position while walking, and putting weight on it afterwards squished a bunch of soft tissue that wasn't supposed to be pinched like that.
  4. Injuries becoming mentally routine. Getting too much practice breathing through acute pain, developing a dispassionate approach to checking the joint to see how bad it is, begrudgingly calling for help when it was clear I wasn't going to be able to walk.

porby · 20

Hey, we met at EAGxToronto : )

🙋‍♂️

> So my model of progress has allowed me to observe our prosaic scaling without surprise, but it doesn't allow me to make good predictions since the reason for my lack of surprise has been from Vingean prediction of the form "I don't know what progress will look like and neither do you".

This is indeed a locally valid way to escape one form of the claim: if no particular prediction carries extra weight, and reality has to go some way, then there isn't much surprise in finding yourself in any given world.

I do think there's value in another version of the word "surprise" here, though. For example: the cross-entropy between the predicted distribution and the observed distribution. Holding to a high-uncertainty model of progress will result in continuously high "surprise" in this sense, because it never narrows to a better generator of the observed distribution. It's a sort of overdamped epistemological process.
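
Concretely, writing p for the observed distribution and q for the predicted one, that notion of surprise is the standard cross-entropy:

```latex
\[
  H(p, q) = -\sum_x p(x) \log q(x)
\]
```

A maximally uncertain q keeps H(p, q) high no matter which way reality goes; the surprise only drops when the model commits to a narrower q that actually tracks p.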

I think we have enough information to make decent gearsy models of progress around AI. As a bit of evidence, some such models have already been exploited to make gobs of money. I'm also feeling pretty good[1] about many of my predictions (like this post) that contributed to me pivoting entirely into AI; there's an underlying model that has a bunch of falsifiable consequences which has so far survived a number of iterations, and that model has implications through the development of extreme capability.

> What I have been surprised about has been governmental reaction to AI...

Yup! That was a pretty major (and mostly positive) update for me. I didn't have a strong model of government-level action in the space and I defaulted into something pretty pessimistic. My policy/governance model is still lacking the kind of nuance that you only get by being in the relevant rooms, but I've tried to update here as well. That's also part of the reason why I'm doing what I'm doing now.

> In any case, I've been hoping for the last few years I would have time to do my undergrad and start working on the alignment without a misaligned AI going RSI, and I'm still hoping for that. So that's lucky I guess. 🍀🐛

May you have the time to solve everything!

  1. ^

     ... epistemically

porby · 121

I've got a fun suite of weird stuff going on[1], so here's a list of sometimes-very-N=1 data:

  1. Napping: I suck at naps. Despite being very tired, I do not fall asleep easily, and if I do fall asleep, it's probably not going to be for just 5-15 minutes. I also tend to wake up with a lot of sleep inertia, so the net effect of naps on alertness across a day tends to be negative. They also tend to destroy my sleep schedule. 
  2. Melatonin: probably the single most noticeable non-stimulant intervention. While I'm by-default very tired all the time, it's still hard to go to sleep. Without mitigation, this usually meant it was nearly impossible to maintain a 24 hour schedule. Melatonin helps a lot with going to sleep and mostly pauses the forward march (unless I mess up).[2]
  3. Light therapy: subtle, but seems to have an effect. It's more obvious when comparing 'effectively being in a cave' with 'being exposed to a large amount of direct sunlight.' I did notice that, when stacked on everything else, the period where I tried light therapy[3] was the first time I was able to intentionally wake up earlier over the course of several days.
  4. Avoiding excessive light near bed: pretty obviously useful. I've used blue-blocking glasses with some effect, though it's definitely better to just not be exposed to too much light in the first place. I reduce monitor brightness to the minimum if I'm on the computer within 3-5 hours of sleep.
  5. Consistent sleep schedule: high impact, if I can manage it. Having my circadian rhythm fall out of entrainment was a significant contributor[4] to my historical sleeping 10-12 hours a day.[5]
  6. Going to bed earlier: conditioning on waking up with no alarm, sleep duration was not correlated with daytime alertness for me according to my sleep logs. Going to bed early enough such that most of my sleep was at night was correlated.[6]
  7. CPAP: Fiddled with one off-prescription for a while since I had access to one and it was cheaper than testing for sleep apnea otherwise. No effect.[7]
  8. Nose strips: hard to measure impact on sleep quality, but subjectively nice! My nosetubes are on the small side, I guess.
  9. Changing detergents/pillows: I seem to react to some detergents, softeners, dust, and stuff along those lines. It's very obvious when I don't double-rinse my pillowcases; my nose swells up to uselessness.
  10. Sleeping room temperature: 62-66F is nice. 72F is not nice. 80F+ is torture.[8]
  11. Watercooled beds: I tried products like Eight Sleep for a while. If you don't have the ability to reduce the air temperature and humidity to ideal levels, it's worth it, but there is a comfort penalty. It doesn't feel like lying on a fresh and pleasantly cool sheet; it's weirdly like lying on somewhat damp sheets that never dry.[9] Way better than nothing, but way worse than a good sleeping environment.[10]
  12. Breathable bedding: surprisingly noticeable. I bought some wirecutter-reviewed cotton percale sheets and a latex mattress. I do like the latex mattress, but I think the sheets have a bigger effect. Don't have data on whether it meaningfully changed sleep quality, but it is nice.
  13. Caffeine: pretty standard. Helps a bit. Not as strong as prescription stimulants at reasonable dosages, can't use it every day without the effect diminishing very noticeably. And without tolerance, consuming it much later than immediately after getting out of bed disrupts my sleep the next night. I tend to drink some coffee in the morning on days where I don't take other stimulants to make the mornings suck less.
  14. Protriptyline: sometimes useful, but a very bad time for me. Got pretty much all the side effects, including the "stop taking and talk to your doctor immediately" kind and uncomfortably close to the "go to a hospital" kind.[11]
  15. Modafinil: alas, no significant effect. Maybe slightly clumsier, maybe slightly longer sleep, maybe slightly more tired. Best guess is that it interfered with my sleep a little bit.
  16. Ritalin: Works! I use a low dose (12.5 mg/day) of the immediate release generic. Pretty short half-life, but that's actually nice for being able to go to sleep. I often cut pills in half to manually allocate alertness more smoothly. I can also elect to just not take it before a plane flight or on days where being snoozey isn't a big problem.
  17. Stimulant juggling/off days: very hard to tell if there's an effect on tolerance with N=1 for non-caffeine stimulants at low therapeutic dosages. I usually do ~5 ritalin days and ~2 caffeine days a week, and I can say that ritalin does still obviously work after several years.[12]
  18. Creatine: I don't notice any sleep/alertness effect, though some people report it. I use it primarily for fitness reasons.[13]
  19. Exercise: hard to measure impact on alertness. Probably some long-term benefit, but if I overdo it on any given day, it's easy to ruin myself. I exercise a bit every day to try to avoid getting obliterated.[14]
  20. Cytomel: this is a weird one that I don't think will be useful to anyone reading this. It turns out that, while my TSH and T4 levels are normal, my untreated T3 levels are very low for still-unclear reasons. I had symptoms of hypothyroidism for decades, but it took until my late 20's to figure out why. Hypothyroidism isn't the same thing as a sleep disorder, but stacking fatigue on a sleep disorder isn't fun.[15]
  21. Meal timing: another weird one. I've always had an unusual tendency towards hypoglycemic symptoms.[16] In its milder form, this comes with severe fatigue that can seem a bit like sleepiness if you squint. As of a few weeks ago, with the help of a continuous glucose monitor, I finally confirmed I've got some very wonky blood sugar behavior despite a normal A1C; one notable bit is a pattern of reactive hypoglycemia. I can't avoid hypoglycemia during exercise by e.g. drinking chocolate milk beforehand. I've actually managed to induce mild hypoglycemia by eating a cinnamon roll pancake (and not exercising). Exercising without food actually works a bit better, though I do still have to be careful about the intensity × duration.

I'm probably forgetting some stuff.

  1. ^

    "Idiopathic hypersomnia"-with-a-shrug was the sleep doctor's best guess on the sleep side, plus a weirdo kind of hypothyroidism, plus HEDs, plus something strange going on with blood sugar regulation, plus some other miscellaneous and probably autoimmune related nonsense.

  2. ^

    I tend to take 300 mcg about 2-5 hours before my target bedtime to help with entrainment, then another 300-600 mcg closer to bedtime for the sleepiness promoting effect. 

  3. ^

    In the form of Luminette glasses. I wouldn't say they have a great user experience; it's easy to get a headache, and the nose doohicky broke almost immediately. That's part of why I didn't keep using them, but I may try again.

  4. ^

    But far from sole!

  5. ^

    While still being tired enough during the day to hallucinate on occasion.

  6. ^

    Implementing this and maintaining sleep consistency functionally requires other interventions. Without melatonin etc., my schedule free-runs mercilessly.

  7. ^

    Given that I was doing this independently, I can't guarantee that Proper Doctor-Supervised CPAP Usage wouldn't do something, but I doubt it. I also monitored myself overnight with a camera. I do a lot of acrobatics, but there was no sign of apneas or otherwise distressed breathing.

  8. ^

    When I was younger, I would frequently ask my parents to drop the thermostat down at night because we lived in one of those climates where the air can kill you if you go outside at the wrong time for too long. They were willing to go down to around 73F at night. My room was east-facing, theirs was west-facing. Unbeknownst to me, there was also a gap between the floor and wall that opened directly into the attic. That space was also uninsulated. Great times.

  9. ^

    It wasn't leaking!

  10. ^

     The cooling is most noticeable at pressure points, so there's a very uneven effect. Parts of your body can feel uncomfortably cold while you're still sweating from the air temperature and humidity.

  11. ^

    The "hmm my heart really isn't working right" issues were bad, but it also included some spooky brain-hijacky mental effects. Genuinely not sure I would have survived six months on it even with total awareness that it was entirely caused by the medication and would stop if I stopped taking it. I had spent some years severely depressed when I was younger, but this was the first time I viscerally understood how a person might opt out... despite being perfectly fine 48 hours earlier.

  12. ^

    I'd say it dropped a little in efficacy in the first week or two, maybe, but not by much, and then leveled out. Does the juggling contribute to this efficacy? No idea. Caffeine and ritalin both have dopaminergic effects, so there's probably a little mutual tolerance on that mechanism, but they do have some differences.

  13. ^

    Effect is still subtle, but creatine is one of the only supplements that has strong evidence that it does anything.

  14. ^

    Beyond the usual health/aesthetic reasons for exercising, I also have to compensate for joint loosey-gooseyness related to doctor-suspected hEDS. Even now, I can easily pull my shoulders out of socket, and last week I discovered that (with the help of some post-covid-related joint inflammation) my knees still do the thing where they slip out of alignment mid-step, and when I put weight back on them, various bits of soft tissue get crushed. Much better than it used to be; when I was ~18, there were many days where walking was uncomfortable or actively painful due to a combination of ankle, knee, hip, and back pain.

  15. ^

    Interesting note: my first ~8 years of exercise before starting cytomel, including deliberate training for the deadlift, saw me plateau at a 1 rep max on deadlift of... around 155 pounds. (I'm a bit-under-6'4" male. This is very low, like "are you sure you're even exercising" low. I was, in fact, exercising, and sometimes at an excessive level of intensity. I blacked out mid-rep once; do not recommend.)

    Upon starting cytomel, my strength increased by around 30% within 3 months. Each subsequent dosage increase was followed by similar strength increases. Cytomel is not an anabolic steroid and does not have anabolic effects in healthy individuals.

    I'm still no professional powerlifter, but I'm now at least above average within the actively-lifting population of my size. The fact that I "wasted" so many years of exercise was... annoying.

  16. ^

    Going too long without food or doing a little too much exercise is a good way for me to enter a mild grinding hypoglycemic state. More severely, when I went a little too far with intense exercise, I ended up on the floor unable to move while barely holding onto consciousness.

porby · 41

> But I disagree that there's no possible RL system in between those extremes where you can have it both ways.

I don't disagree. For clarity, I would make these claims, and I do not think they are in tension:

  1. Something being called "RL" alone is not the relevant question for risk. It's how much space the optimizer has to roam.
  2. MuZero-like strategies are free to explore more space than something like current applications of RLHF. Improved versions of these systems working in more general environments have the capacity to do surprising things and will tend to be less 'bound' in expectation than RLHF. Because of that extra space, these approaches are more concerning in a fully general and open-ended environment.
  3. MuZero-like strategies remain very distant from a brute-forced policy search, and that difference matters a lot in practice.
  4. Regardless of the category of the technique, safe use requires understanding the scope of its optimization. This is not the same as knowing what specific strategies it will use. For example, despite finding unforeseen strategies, you can reasonably claim that MuZero (in its original form and application) will not be deceptively aligned to its task.
  5. Not all applications of tractable RL-like algorithms are safe or wise.
  6. There do exist safe applications of RL-like algorithms.

porby · 62

It does still apply, though what 'it' is here is a bit subtle. To be clear, I am not claiming that a technique that is reasonably describable as RL can't reach extreme capability in an open-ended environment.

The precondition I included is important:

> in the absence of sufficient environmental structure, reward shaping, or other sources of optimizer guidance, it is nearly impossible for any computationally tractable optimizer to find any implementation for a sparse/distant reward function

In my frame, the potential future techniques you mention are forms of optimizer guidance. Again, that doesn't make them "fake RL"; I just mean that they are not doing a truly unconstrained search, and I assert that this matters a lot.

For example, take the earlier case of a hypercomputer that brute-forces all bitstrings corresponding to policies and evaluates them to find the optimum, no further guidance required. Compare the solution space for that system to one that incrementally explores in directions guided by, say, a strong future LLM. The guided system might achieve superhuman capability in open-ended domains, but its solution space is still strongly shaped by the structure available to the optimizer during training, and it is possible to make much better guesses about where the optimizer will go at various points in its training.

It's a spectrum. On one extreme, you have the universal-prior-like hypercomputer enumeration. On the other, stuff like supervised predictive training. In the middle, stuff like MuZero, but I argue MuZero (or its more open-ended future variants) is closer to the supervised side of things than the hypercomputer side of things in terms of how structured the optimizer's search is. The closer a training scheme is to the hypercomputer one in terms of a lack of optimizer guidance, the less likely it is that training will do anything at all in a finite amount of compute.
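
As a cartoon of the two ends of that spectrum (purely illustrative; `evaluate`, `propose_nearby`, `init`, and the sizes are made-up stand-ins, not anyone's actual training setup):

```python
import itertools

def brute_force_policy_search(evaluate, policy_bits=64):
    """Hypercomputer-style search: enumerate every bitstring policy.

    There are 2**policy_bits candidates and nothing shapes which optimum
    is found; for any policy big enough to matter, this never terminates
    in a finite amount of compute.
    """
    return max(
        itertools.product([0, 1], repeat=policy_bits),
        key=evaluate,
    )

def guided_policy_search(evaluate, propose_nearby, init, steps=10_000):
    """Structured search: incremental proposals around the current policy.

    All of the guidance lives in `propose_nearby` (architecture, curriculum,
    reward shaping, a strong LLM prior, ...). Only policies reachable
    through that structure can ever be found, which is what makes the
    outcome far more predictable than enumeration.
    """
    policy = init
    for _ in range(steps):
        candidate = propose_nearby(policy)
        if evaluate(candidate) >= evaluate(policy):
            policy = candidate
    return policy
```

The first treats all 2^64 policies as equally reachable, so nothing about its trajectory is predictable; the second can only reach what its proposal structure makes accessible, which is the sense in which guided optimizers are 'bound.'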

porby · 61

Calling MuZero RL makes sense. The scare quotes are not meant to imply that it's not "real" RL, but rather that the category of RL is broad enough that belonging to it does not constrain expectations much in the relevant way. The thing that actually matters is how much the optimizer can roam in ways that are inconsistent with the design intent.

For example, MuZero can explore the superhuman play space during training, but it is guided by the structure of the game and how it is modeled. Because of that structure, we can be quite confident that the optimizer isn't going to wander down a path to general superintelligence with strong preferences about paperclips.
