porby

Comments

porby

I sometimes post experiment ideas on my shortform. If you see one that seems exciting and you want to try it, great! Please send me a message so we can coordinate and avoid doing redundant work.

porby

Hey, we met at EAGxToronto : )

🙋‍♂️

So my model of progress has allowed me to observe our prosaic scaling without surprise, but it doesn't allow me to make good predictions since the reason for my lack of surprise has been from Vingean prediction of the form "I don't know what progress will look like and neither do you".

This is indeed a locally valid way to escape one form of the claim: if no particular prediction carries extra weight, and reality has to go some way, then there isn't much surprise in finding yourself in any given world.

I do think there's value in another version of the word "surprise" here, though. For example: the cross-entropy of the observed outcomes under the predicted distribution. Holding to a high-uncertainty model of progress results in continuously high "surprise" in this sense, because it struggles to narrow toward a better distribution generator. It's a sort of overdamped epistemological process.
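To make that concrete, here's a toy sketch of the comparison I have in mind; the outcome categories and probabilities are invented purely for illustration:

```python
import math

# Toy comparison of average "surprise" (cross-entropy, in bits) for two
# predictive distributions against a hypothetical sequence of observed
# outcomes. All numbers are made up for illustration.

observed = ["fast_progress", "fast_progress", "moderate", "fast_progress"]

# A maximally uncertain model: "I don't know what progress will look like."
uniform_model = {"slow": 0.25, "moderate": 0.25, "fast_progress": 0.25, "discontinuous": 0.25}

# A gearsier model that narrows its distribution (numbers invented).
gearsy_model = {"slow": 0.10, "moderate": 0.25, "fast_progress": 0.60, "discontinuous": 0.05}

def avg_surprise(model, outcomes):
    """Average negative log2 probability assigned to the observed outcomes."""
    return sum(-math.log2(model[o]) for o in outcomes) / len(outcomes)

print(f"uniform model: {avg_surprise(uniform_model, observed):.2f} bits/outcome")  # 2.00
print(f"gearsy model:  {avg_surprise(gearsy_model, observed):.2f} bits/outcome")   # ~1.05
```

The maximally uncertain model pays a constant 2 bits per outcome no matter what happens; a model that successfully narrows pays less, and that persistent gap is the kind of "surprise" I mean.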

I think we have enough information to make decent gearsy models of progress around AI. As a bit of evidence, some such models have already been exploited to make gobs of money. I'm also feeling pretty good[1] about many of my predictions (like this post) that contributed to my pivoting entirely into AI; the underlying model has a bunch of falsifiable consequences, has so far survived a number of iterations, and has implications through the development of extreme capability.

What I have been surprised about has been governmental reaction to AI...

Yup! That was a pretty major (and mostly positive) update for me. I didn't have a strong model of government-level action in the space and I defaulted into something pretty pessimistic. My policy/governance model is still lacking the kind of nuance that you only get by being in the relevant rooms, but I've tried to update here as well. That's also part of the reason why I'm doing what I'm doing now.

In any case, I've been hoping for the last few years I would have time to do my undergrad and start working on the alignment without a misaligned AI going RSI, and I'm still hoping for that. So that's lucky I guess. 🍀🐛

May you have the time to solve everything!

  1. ^

     ... epistemically

porby

I've got a fun suite of weird stuff going on[1], so here's a list of sometimes-very-N=1 data:

  1. Napping: I suck at naps. Despite being very tired, I do not fall asleep easily, and if I do fall asleep, it's probably not going to be for just 5-15 minutes. I also tend to wake up with a lot of sleep inertia, so the net effect of naps on alertness across a day tends to be negative. They also tend to destroy my sleep schedule. 
  2. Melatonin: probably the single most noticeable non-stimulant intervention. While I'm by default very tired all the time, it's still hard to go to sleep. Without mitigation, this usually made it nearly impossible to maintain a 24-hour schedule. Melatonin helps a lot with going to sleep and mostly pauses the forward march (unless I mess up).[2]
  3. Light therapy: subtle, but seems to have an effect. It's more obvious when comparing 'effectively being in a cave' with 'being exposed to a large amount of direct sunlight.' I did notice that, when stacked on everything else, the period where I tried light therapy[3] was the first time I was able to intentionally wake up earlier over the course of several days.
  4. Avoiding excessive light near bed: pretty obviously useful. I've used blue-blocking glasses with some effect, though it's definitely better to just not be exposed to too much light in the first place. I reduce monitor brightness to the minimum if I'm on the computer within 3-5 hours of sleep.
  5. Consistent sleep schedule: high impact, if I can manage it. Having my circadian rhythm fall out of entrainment was a significant contributor[4] to my historical sleeping 10-12 hours a day.[5]
  6. Going to bed earlier: conditioning on waking up with no alarm, sleep duration was not correlated with daytime alertness for me according to my sleep logs. Going to bed early enough such that most of my sleep was at night was correlated.[6]
  7. CPAP: Fiddled with one off-prescription for a while, since I had access to one and it was cheaper than getting formally tested for sleep apnea. No effect.[7]
  8. Nose strips: hard to measure impact on sleep quality, but subjectively nice! My nosetubes are on the small side, I guess.
  9. Changing detergents/pillows: I seem to react to some detergents, softeners, dust, and stuff along those lines. It's very obvious when I don't double-rinse my pillowcases; my nose swells up to uselessness.
  10. Sleeping room temperature: 62-66F is nice. 72F is not nice. 80F+ is torture.[8]
  11. Watercooled beds: I tried products like Eight Sleep for a while. If you can't reduce the air temperature and humidity to ideal levels, it's worth it, but there is a comfort penalty. It doesn't feel like lying on a fresh and pleasantly cool sheet; it's weirdly like lying on somewhat damp sheets that never dry.[9] Way better than nothing, but way worse than a good sleeping environment.[10]
  12. Breathable bedding: surprisingly noticeable. I bought some wirecutter-reviewed cotton percale sheets and a latex mattress. I do like the latex mattress, but I think the sheets have a bigger effect. Don't have data on whether it meaningfully changed sleep quality, but it is nice.
  13. Caffeine: pretty standard. Helps a bit. Not as strong as prescription stimulants at reasonable dosages, and I can't use it every day without the effect diminishing very noticeably. Without tolerance, consuming it much later than immediately after getting out of bed disrupts my sleep the next night. I tend to drink some coffee in the morning on days when I don't take other stimulants, to make the mornings suck less.
  14. Protriptyline: sometimes useful, but a very bad time for me. Got pretty much all the side effects, including the "stop taking and talk to your doctor immediately" kind and uncomfortably close to the "go to a hospital" kind.[11]
  15. Modafinil: alas, no significant effect. Maybe slightly clumsier, maybe slightly longer sleep, maybe slightly more tired. Best guess is that it interfered with my sleep a little bit.
  16. Ritalin: Works! I use a low dose (12.5 mg/day) of the immediate release generic. Pretty short half-life, but that's actually nice for being able to go to sleep. I often cut pills in half to manually allocate alertness more smoothly. I can also elect to just not take it before a plane flight or on days where being snoozey isn't a big problem.
  17. Stimulant juggling/off days: very hard to tell if there's an effect on tolerance with N=1 for non-caffeine stimulants at low therapeutic dosages. I usually do ~5 ritalin days and ~2 caffeine days a week, and I can say that ritalin does still obviously work after several years.[12]
  18. Creatine: I don't notice any sleep/alertness effect, though some people report it. I use it primarily for fitness reasons.[13]
  19. Exercise: hard to measure impact on alertness. Probably some long-term benefit, but if I overdo it on any given day, it's easy to ruin myself. I exercise a bit every day to try to avoid getting obliterated.[14]
  20. Cytomel: this is a weird one that I don't think will be useful to anyone reading this. It turns out that, while my TSH and T4 levels are normal, my untreated T3 levels are very low for still-unclear reasons. I had symptoms of hypothyroidism for decades, but it took until my late 20's to figure out why. Hypothyroidism isn't the same thing as a sleep disorder, but stacking fatigue on a sleep disorder isn't fun.[15]
  21. Meal timing: another weird one. I've always had an unusual tendency towards hypoglycemic symptoms.[16] In its milder form, this comes with severe fatigue that can seem a bit like sleepiness if you squint. As of a few weeks ago with the help of a continuous glucose monitor, I finally confirmed I've got some very wonky blood sugar behavior despite a normal A1C; one notable bit is a pattern of reactive hypoglycemia. I can't avoid hypoglycemia during exercise by e.g. drinking chocolate milk beforehand. I've actually managed to induce mild hypoglycemia by eating a cinnamon roll pancake (and not exercising). Exercising without food actually works a bit better, though I do still have to be careful about the intensity * duration.

I'm probably forgetting some stuff.

  1. ^

    "Idiopathic hypersomnia"-with-a-shrug was the sleep doctor's best guess on the sleep side, plus a weirdo kind of hypothyroidism, plus HEDs, plus something strange going on with blood sugar regulation, plus some other miscellaneous and probably autoimmune related nonsense.

  2. ^

    I tend to take 300 mcg about 2-5 hours before my target bedtime to help with entrainment, then another 300-600 mcg closer to bedtime for the sleepiness-promoting effect.

  3. ^

    In the form of Luminette glasses. I wouldn't say they have a great user experience; it's easy to get a headache, and the nose doohicky broke almost immediately. That's part of why I didn't keep using them, but I may try again.

  4. ^

    But far from sole!

  5. ^

    While still being tired enough during the day to hallucinate on occasion.

  6. ^

    Implementing this and maintaining sleep consistency functionally requires other interventions. Without melatonin etc., my schedule free-runs mercilessly.

  7. ^

    Given that I was doing this independently, I can't guarantee that Proper Doctor-Supervised CPAP Usage wouldn't do something, but I doubt it. I also monitored myself overnight with a camera. I do a lot of acrobatics, but there was no sign of apneas or otherwise distressed breathing.

  8. ^

    When I was younger, I would frequently ask my parents to drop the thermostat down at night because we lived in one of those climates where the air can kill you if you go outside at the wrong time for too long. They were willing to go down to around 73F at night. My room was east-facing, theirs was west-facing. Unbeknownst to me, there was also a gap between the floor and wall that opened directly into the attic. That space was also uninsulated. Great times.

  9. ^

    It wasn't leaking!

  10. ^

     The cooling is most noticeable at pressure points, so there's a very uneven effect. Parts of your body can feel uncomfortably cold while you're still sweating from the air temperature and humidity.

  11. ^

    The "hmm my heart really isn't working right" issues were bad, but it also included some spooky brain-hijacky mental effects. Genuinely not sure I would have survived six months on it even with total awareness that it was entirely caused by the medication and would stop if I stopped taking it. I had spent some years severely depressed when I was younger, but this was the first time I viscerally understood how a person might opt out... despite being perfectly fine 48 hours earlier.

  12. ^

    I'd say it dropped a little in efficacy in the first week or two, maybe, but not by much, and then leveled out. Does the juggling contribute to this efficacy? No idea. Caffeine and ritalin both have dopaminergic effects, so there's probably a little mutual tolerance on that mechanism, but they do have some differences.

  13. ^

    Effect is still subtle, but creatine is one of the only supplements that has strong evidence that it does anything.

  14. ^

    Beyond the usual health/aesthetic reasons for exercising, I also have to compensate for joint loosey-gooseyness related to doctor-suspected hEDS. Even now, I can easily pull my shoulders out of socket, and last week I discovered that (with the help of some post-covid-related joint inflammation) my knees still do the thing where they slip out of alignment mid-step; when I put weight back on them, various bits of soft tissue get crushed. It's much better than it used to be; when I was ~18, there were many days where walking was uncomfortable or actively painful due to a combination of ankle, knee, hip, and back pain.

  15. ^

    Interesting note: my first ~8 years of exercise before starting cytomel, including deliberate training for the deadlift, saw me plateau at a 1 rep max on deadlift of... around 155 pounds. (I'm a bit-under-6'4" male. This is very low, like "are you sure you're even exercising" low. I was, in fact, exercising, and sometimes at an excessive level of intensity. I blacked out mid-rep once; do not recommend.)

    Upon starting cytomel, my strength increased by around 30% within 3 months. Each subsequent dosage increase was followed by similar strength increases. Cytomel is not an anabolic steroid and does not have anabolic effects in healthy individuals.

    I'm still no professional powerlifter, but I'm now at least above average within the actively-lifting population of my size. The fact that I "wasted" so many years of exercise was... annoying.

  16. ^

    Going too long without food or doing a little too much exercise is a good way for me to enter a mild grinding hypoglycemic state. More severely, when I went a little too far with intense exercise, I ended up on the floor unable to move while barely holding onto consciousness.

porby

But I disagree that there’s no possible RL system in between those extremes where you can have it both ways.

I don't disagree. For clarity, I would make these claims, and I do not think they are in tension:

  1. Something being called "RL" alone is not the relevant question for risk. It's how much space the optimizer has to roam.
  2. MuZero-like strategies are free to explore more space than something like current applications of RLHF. Improved versions of these systems working in more general environments have the capacity to do surprising things and will tend to be less 'bound' in expectation than RLHF. Because of that extra space, these approaches are more concerning in a fully general and open-ended environment.
  3. MuZero-like strategies remain very distant from a brute-forced policy search, and that difference matters a lot in practice.
  4. Regardless of the category of the technique, safe use requires understanding the scope of its optimization. This is not the same as knowing what specific strategies it will use. For example, despite finding unforeseen strategies, you can reasonably claim that MuZero (in its original form and application) will not be deceptively aligned to its task.
  5. Not all applications of tractable RL-like algorithms are safe or wise.
  6. There do exist safe applications of RL-like algorithms.

porby

It does still apply, though what 'it' is here is a bit subtle. To be clear, I am not claiming that a technique that is reasonably describable as RL can't reach extreme capability in an open-ended environment.

The precondition I included is important:

in the absence of sufficient environmental structure, reward shaping, or other sources of optimizer guidance, it is nearly impossible for any computationally tractable optimizer to find any implementation for a sparse/distant reward function

In my frame, the potential future techniques you mention are forms of optimizer guidance. Again, that doesn't make them "fake RL," I just mean that they are not doing a truly unconstrained search, and I assert that this matters a lot.

Take the earlier example of a hypercomputer that brute forces all bitstrings corresponding to policies and evaluates them to find the optimum, with no further guidance required. Compare the solution space for that system to one that incrementally explores in directions guided by, say, a strong future LLM. The LLM-guided RL system might achieve superhuman capability in open-ended domains, but the solution space is still strongly shaped by the structure available to the optimizer during training, and it is possible to make much better guesses about where the optimizer will go at various points in its training.

It's a spectrum. At one extreme, you have the universal-prior-like hypercomputer enumeration; at the other, stuff like supervised predictive training. In the middle sits stuff like MuZero, though I argue MuZero (or its more open-ended future variants) is closer to the supervised side than the hypercomputer side in terms of how structured the optimizer's search is. The closer a training scheme is to the hypercomputer one in its lack of optimizer guidance, the less likely it is that training will do anything at all in a finite amount of compute.

porby

Calling MuZero RL makes sense. The scare quotes are not meant to imply that it's not "real" RL, but rather that the category of RL is broad enough that belonging to it does not constrain expectations much in the relevant way. The thing that actually matters is how much the optimizer can roam in ways that are inconsistent with the design intent.

For example, MuZero can explore the superhuman play space during training, but it is guided by the structure of the game and how it is modeled. Because of that structure, we can be quite confident that the optimizer isn't going to wander down a path to general superintelligence with strong preferences about paperclips.

porby

I do think that if you found a zero-RL path to the same (or better) endpoint, it would often imply that you've grasped something about the problem more deeply, and that would often imply greater safety.

Some applications of RL are also just worse than equivalent options. As a trivial example, using reward sampling to construct a gradient to match a supervised loss gradient is adding a bunch of clearly-pointless intermediate steps.

I suspect there are less trivial cases, like how a decision transformer isn't just learning an optimal policy for its dataset but rather a supertask: what different levels of performance look like on that task. By subsuming an RL-ish task in prediction, the predictor can/must develop a broader understanding of the task, and that understanding can interact with other parts of the greater model. While I can't currently point to strong empirical evidence here, my intuition would be that certain kinds of behavioral collapse would be avoided by the RL-via-predictor because the distribution is far more explicitly maintained during training.[1][2]
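To gesture at what I mean by "supertask" (a minimal sketch of decision-transformer-style sequence construction; the types and names here are placeholders rather than any particular implementation):

```python
# Minimal sketch of decision-transformer-style sequence construction.
# The point: every trajectory in the dataset, good or bad, becomes training
# signal, because the target return is part of the model's input. The
# predictor has to learn what *each* performance level looks like, not just
# the best one. (Toy types/names for illustration only.)

from dataclasses import dataclass

@dataclass
class Trajectory:
    states: list      # observations, in order
    actions: list     # actions taken
    rewards: list     # per-step rewards

def to_training_sequence(traj: Trajectory):
    """Interleave (return-to-go, state, action) so action prediction is
    conditioned on the performance level actually achieved from that point."""
    sequence = []
    remaining = sum(traj.rewards)
    for state, action, reward in zip(traj.states, traj.actions, traj.rewards):
        sequence.append(("return_to_go", remaining))  # conditioning token
        sequence.append(("state", state))
        sequence.append(("action", action))           # prediction target
        remaining -= reward
    return sequence

# At inference time, you pick the return-to-go value yourself: ask for a
# mediocre return and you get the model's idea of mediocre play; ask for a
# high return and you get its idea of strong play. The distribution over
# behaviors is maintained rather than collapsed to a single policy.
```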

But there are often reasons why the more-RL-shaped thing is currently being used. It's not always trivial to swap over to something with potential theoretical benefits when training at scale. So long as the RL-ish stuff fits within some reasonable bounds, I'm pretty okay with it; I'd treat it as a sufficiently low-probability threat that you would want to be very careful about how you replaced it, because the alternative might be sneakily worse.[3]

  1. ^

    KL divergence penalties are one thing, but it's hard to do better than the loss directly forcing adherence to the distribution.

  2. ^

    You can also make a far more direct argument about model-level goal agnosticism in the context of prediction.

  3. ^

    I don't think this is likely, to be clear. They're just both pretty low probability concerns (provided the optimization space is well-constrained).

Answer by porby

"RL" is a wide umbrella. In principle, you could even train a model with RL such that the gradients match supervised learning. "Avoid RL" is not the most directly specified path to the-thing-we-actually-want.

The source of spookiness

Consider two opposite extremes:

  1. A sparse, distant reward function. A biped must successfully climb a mountain 15 kilometers to the east before getting any reward at all.
  2. A densely shaped reward function. At every step during the climb up the mountain, there is a reward designed to induce gradients that maximize training performance. Every slight mispositioning of a toe is considered.

Clearly, number 2 is going to be easier to train, but it also constrains the solution space for the policy.
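Here's a cartoon of the two extremes in code (invented details; the point is only how much of the solution space each one pins down):

```python
import numpy as np

# Cartoon versions of the two reward functions described above. The details
# are invented for illustration.

GOAL_EAST_METERS = 15_000.0

def sparse_reward(state) -> float:
    """Extreme 1: nothing until the biped reaches the summit, 15 km east."""
    return 1.0 if state["east_position"] >= GOAL_EAST_METERS else 0.0

def shaped_reward(state, reference_gait) -> float:
    """Extreme 2: dense shaping toward one particular notion of good climbing.
    Every joint angle is compared against a reference at every step; progress
    is rewarded continuously. (reference_gait stands in for whatever imitation
    data or hand-designed targets the shaping is built from.)"""
    progress_term = state["east_velocity"]
    posture_term = -np.sum((state["joint_angles"] - reference_gait) ** 2)
    upright_term = -abs(state["torso_tilt"])
    return progress_term + 0.1 * posture_term + 0.1 * upright_term
```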

If number 1 somehow successfully trained, what's the probability that the solution it found would look like number 2's imitation data? What's the probability it would look anything like a bipedal gait? What's the probability it just exploits the physics simulation to launch itself across the world?

If you condition on a sparse, distant reward function training successfully, you should expect the implementation found by the optimizer to sample from a wide distribution of possible implementations that are compatible with the training environment.

It is sometimes difficult to predict what implementations are compatible with the environment. The more degrees of freedom exist in the environment, the more room the optimizer has to roam. That's where the spookiness comes from.

Is RL therefore spooky?

RL appears to make this spookiness more accessible. It's difficult to use (un)supervised learning in a way that gives a model great freedom of implementation; it's usually learning from a large suite of examples.

But there's a major constraint on RL: in the absence of sufficient environmental structure, reward shaping, or other sources of optimizer guidance, it is nearly impossible for any computationally tractable optimizer to find any implementation for a sparse/distant reward function. It simply won't sample the reward often enough to produce useful gradients.[1]

In other words, practical applications of RL are computationally bounded to a pretty limited degree of reward sparsity/distance. All the examples of "RL" doing interesting things that look like they involve sparse/distant reward involve enormous amounts of implicit structure of various kinds, like powerful world models.[2] 

Given these limitations, the added implementation-uncertainty of RL is usually not so massive that it's worth entirely banning it. Do be careful about what you're actually reinforcing, just as you must be careful with prompts or anything else, and if you somehow figure out a way to make from-scratch sparse/distant rewards work better without a hypercomputer, uh, be careful?

A note on offline versus online RL

The above implicitly assumes online RL, where the policy is able to learn from new data generated by the policy as it interacts with the environment.

Offline RL that learns from an immutable set of data does not allow the optimizer as much room to explore, and many of the apparent risks of RL are far less accessible.

Usage in practice

The important thing is that the artifact produced by a given optimization process falls within some acceptable bounds. Those bounds might arise from the environment, computability, or something else, but they're often available.

RL-as-it-can-actually-be-applied isn't that special here. The one suggestion I'd have is to try to use it in a principled way. For example: doing pretraining but inserting an additional RL-derived gradient to incentivize particular behaviors works, but it's just arbitrarily shoving a bias/precondition into the training. The result will be at some equilibrium between the pretraining influence and the RL influence. Perhaps the weighting could be chosen in an intentional way, but most such approaches are just ad hoc.

For comparison, you could elicit similar behavior by including a condition metatoken in the prompt (see decision transformers for an example). With that structure, you can be more explicit about what exactly the condition token is supposed to represent, and you can do fancy interpretability techniques to see what the condition is actually causing mechanistically.[3]
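A minimal sketch of the metatoken idea (the token names and helper functions are hypothetical, just to show the shape of it):

```python
# Minimal sketch of prompt-level condition metatokens: instead of baking a
# bias into the weights with an extra RL-derived gradient, prepend an explicit
# token that the model learns to associate with a behavior during
# (pre)training. Token names and functions are placeholders.

CONDITION_TOKENS = {
    "high_quality": "<|rating_5|>",   # hypothetical metatokens marking the
    "low_quality": "<|rating_1|>",    # quality bucket of each example
}

def build_training_example(document: str, quality_bucket: str) -> str:
    """During training, each document is prefixed with the token for the
    bucket it actually came from, so the model learns what each condition
    'means'."""
    return CONDITION_TOKENS[quality_bucket] + document

def build_prompt(user_prompt: str) -> str:
    """At inference, the condition is chosen explicitly; it's a single,
    inspectable lever in the input rather than a bias smeared across the
    weights, which is what makes its mechanistic effect easier to probe."""
    return CONDITION_TOKENS["high_quality"] + user_prompt
```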

  1. ^

    If you could enumerate all possible policies with a hypercomputer and choose the one that performs the best on the specified reward function, that would train, and it would also cause infinite cosmic horror. If you have a hypercomputer, don't do that.

  2. ^

    Or in the case of RLHF on LLMs, the fine-tuning process is effectively just etching a precondition into the predictor, not building complex new functions. Current LLMs, being approximators of probabilistic inference to start with, have lots of very accessible machinery for this kind of conditioning process.

  3. ^

    There are other options here, but I find this implementation intuitive.

porby

Stated as claims that I'd endorse with pretty high, but not certain, confidence:

  1. There exist architectures/training paradigms within 3-5 incremental insights of current ones that directly address most incapabilities observed in LLM-like systems. (85%; if false, my median strong-AI timeline estimate would jump by a few years, and the p(doom) effect would vary depending on how it was falsified)
  2. It is not an accident that the strongest artificial reasoners we have arose from something like predictive pretraining. In complex and high dimensional problem spaces like general reasoning, successful training will continue to depend on schemes with densely informative gradients that can constrain the expected shape of the training artifact. In those problem spaces, training that is roughly equivalent to sparse/distant reward in naive from-scratch RL will continue to mostly fail.[1] (90%; if false, my p(doom) would jump a lot)
  3. Related to, and partially downstream of, #2: the strongest models at the frontier of AGI will continue to be remarkably corrigible (in the intuitive colloquial use of the word, but not strictly MIRI's use). That is, the artifact produced by pretraining and non-malicious fine-tuning will not be autonomously doomseeking even if it has the capability. (A bit less than 90%; this being false would also jump my p(doom) by a lot)
  4. Creating agents out of these models is easy and will get easier. Most of the failures in current agentic applications are not fundamental, and many are related to #1. There are no good ways to stop a weights-available model from, in principle, being used as a potentially dangerous agent, and outcome variance will increase as capabilities increase. (95%; I'm not even sure what the shape of this being false would be, but if there was a solution, it'd drop my current p(doom) by at least half)
  5. Scale is sufficient to bypass the need for some insights. While a total lack of insights would make true ASI difficult to reach in the next few years, the hardware and scale of 2040 are very likely enough to do it the dumb way, and physics won't get in the way soon enough. (92%; falsification would make the tail of my timelines longer. #1 and #5 being falsified together could jump my median by 10+ years.)
  6. We don't have good plans for how to handle a transition period involving widely available high-capability systems, even assuming that those high-capability systems are only dangerous when intentionally aimed in a dangerous direction.[2] It looks an awful lot like we're stuck with usually-reactive muddling, and maybe some pretty scary sounding defensive superintelligence propositions. (75%; I'm quite ignorant of governance and how international coordination could actually work here, but it sure seems hard. If this ends up being easy, it would also drop my p(doom) a lot.)
  1. ^

    Note that this is not a claim that something like RLHF is somehow impossible. RLHF, and other RL-adjacent techniques that have reward-equivalents that would never realistically train from scratch, get to select from the capabilities already induced by pretraining. Note that many 'strong' RL-adjacent techniques involve some form of big world model, operate in some constrained environment, or otherwise have some structure to work with that makes it possible for the optimizer to take useful incremental steps.

  2. ^

    One simple story of many, many possible stories:

    1. It's 20XY. Country has no nukes but wants second strike capacity.

    2. Nukes are kinda hard to get. Open-weights superintelligences can be downloaded.

    3. Country fine-tunes a superintelligence to be an existential threat to everyone else that is activated upon Country being destroyed.

    4. Coordination failures occur; Country gets nuked or invaded in a manner sufficient to trigger second strike.

    5. There's a malign superintelligence actively trying to kill everyone, and no technical alignment failures occurred. Everything AI-related worked exactly as its human designers intended.

porby

Yup, exactly the same experience here.

Load More