Lorxus

Mathematician, alignment researcher, doctor. Reach out to me on Discord and tell me you found my profile on LW if you've got something interesting to say; you have my explicit permission to try to guess my Discord handle if so. You can't find my old abandoned-for-being-mildly-infohazardously-named LW account but it's from 2011 and has 280 karma.

A Lorxus Favor is worth (approximately) one labor-day's worth of above-replacement-value specialty labor, given and received in good faith, and used for a goal approximately orthogonal to one's desires, and I like LessWrong because people here will understand me if I say as much.

Apart from that, and the fact that I am under no NDAs, including NDAs whose existence I would have to keep secret or lie about, you'll have to find the rest out yourself.

Comments

Got it, thanks. I'll see if I can figure out who that was or where to find that claim. Cheers.

Maybe this is the right place to ask/discuss this, and maybe not - if it's not, say so and I'll stop.

IIRC you (or maybe someone else?) once mentioned hearing about people who try to [experience the first jhana][1] and then feel pain as a result, and that you didn't really understand why that happened. There was maybe also a comment to the effect of "don't do that, that sounds like you were doing it wrong".

After some time spent prodding at myself and pulling threads and seeing where they lead... I am not convinced that they were doing it wrong at all. There's a kind of state you can end up in where the application of that kind of comfort/pleasure/positive-valence is, of itself, painful/aversive/immiserating, if not necessarily enduringly so.

I don't have a full explicit model for it, so here's some metaphors that hopefully collectively shed light:

  • Hunger beyond reason, hunger for weeks, hunger to the point of starvation. A rich and lavish meal set before you, of all your favorite foods and drinks, prepared expertly. A first bite - overwhelming; so perfect and so intense. You toy with the idea of eating nothing more and find you can neither eat nor decline - at least not comfortably. You gorge yourself and die of refeeding syndrome.
  • Dreams of your childhood home, of the forests around it, of the sparkling beauty of the night sky. The building was knocked down years ago, the forest cut, the sky bleached with light pollution, all long after you moved away anyway.
  • Like the itch/pain of a healing wound, or of a limb fallen asleep, or an amputated limb. Like internal screaming or weeping, suddenly given voice.
  • Like staring at something dazzlingly bright and incomparably precious, even coveted; especially one that you can't touch or even reach - a sapphire the size of your fist, say, or the sun. What would you even do with those, really, if you could grab them?
  1. Not sure if my terminology is correct here - I'm talking about doing the meditation/mental-action process itself. You know, the one which causes you tons of positive valence in a way you like but don't want.

Here's a game-theory game I don't think I've ever seen explicitly described before: Vicious Stag Hunt, a two-player non-zero-sum game elaborating on both Stag Hunt and Prisoner's Dilemma. (Or maybe Chicken? It depends on the obvious dials to turn. This is frankly probably a whole family of possible games.)

The two players can pick from among 3 moves: Stag, Hare, and Attack.

Hunting stag is great, if you can coordinate on it. Playing Stag costs you 5 coins, but if the other player also played Stag, you make your 5 coins back plus another 10.

Hunting hare is fine, as a fallback. Playing Hare costs you 1 coin, and assuming no interference, makes you that 1 coin back plus another 1.

But to a certain point of view, the richest targets are your fellow hunters. Preparing to Attack costs you 2 coins. If the other player played Hare, they escape you, barely recouping their investment (0 payoff), and you get nothing for your boldness. If they played Stag, though, you can backstab them right after securing their aid, taking their 10 coins of surplus destructively, costing them 10 coins on net. Finally, if you both played Attack, you both starve for a while waiting for the attack, you heartless fools. Your payoffs are symmetric, though this is one of the most important dials to turn: if you stand to lose less in such a standoff than you would by getting suckered, then Attack dominates Stag. My scratchpad notes have payoffs at (-5, -5), for instance.

To resummarize the payoffs:

  • (H, H) = (1, 1)
  • (H, S) = (1, -5)
  • (S, S) = (10, 10)
  • (H, A) = (0, -2)
  • (S, A) = (-10(*), 20)
  • (A, A) = (-n, -n); whenever n ≤ 10 (losing no more in the standoff than the starred -10 from getting suckered), Attack dominates Stag - strictly if n < 10, weakly if n = 10

So what happens? Disaster! Stag is dominated by Attack, so no one plays it; and with Stag gone, Hare dominates Attack, so everyone converges to Hare forever.

And what of the case where n > 10? While initially I'd expected a mixed equilibrium, I should have expected the actual outcome: the sole Nash equilibrium is still the pure all-Hare strategy - after all, we've made Attacking strictly worse than in the previous case! (As given by https://cgi.csc.liv.ac.uk/~rahul/bimatrix_solver/ ; I tested n = 12.)
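As a quick sanity check on both claims, here's a short sketch that enumerates the pure-strategy Nash equilibria of the bimatrix game for a given standoff cost n. The payoff numbers are the ones from the table above; everything else is generic enumeration. Note that it only covers pure strategies, so it complements rather than replaces the linked solver:

```python
from itertools import product

MOVES = ["Hare", "Stag", "Attack"]

def payoffs(n: float) -> dict:
    """Row player's payoff for each (row move, column move) pair, per the
    table above. The game is symmetric, so the column player's payoff at
    (r, c) is the row player's payoff at (c, r)."""
    return {
        ("Hare", "Hare"): 1,    ("Hare", "Stag"): 1,    ("Hare", "Attack"): 0,
        ("Stag", "Hare"): -5,   ("Stag", "Stag"): 10,   ("Stag", "Attack"): -10,
        ("Attack", "Hare"): -2, ("Attack", "Stag"): 20, ("Attack", "Attack"): -n,
    }

def pure_nash(n: float) -> list:
    """All pure-strategy profiles where neither player gains by deviating."""
    u = payoffs(n)
    return [
        (r, c)
        for r, c in product(MOVES, repeat=2)
        if all(u[(r, c)] >= u[(d, c)] for d in MOVES)   # row player can't improve
        and all(u[(c, r)] >= u[(d, r)] for d in MOVES)  # column player, by symmetry
    ]

for n in (5, 12):
    print(n, pure_nash(n))  # both cases print only ('Hare', 'Hare')
```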

A snowclone summarizing a handful of baseline important questions-to-self: "What is the state of your X, and why is that what your X's state is?" There are obviously also versions that are less general and more naturally phrased; that's just the most obviously parametrized form of the snowclone.

Classic(?) examples:
"What do you (think you) know, and why do you (think you) know it?" (X = knowledge/belief)
"What are you doing, and why are you doing it?" (X = action(-direction?)/motivation?)

Less classic examples that I recognized or just made up:
"How do you feel, and why do you feel that way?" (X = feelings/emotions)
"What do you want, and why do you want it?" (X = goal/desire)
"Who do you know here, and how do you know them?" (X = social graph?)
"What's the plan here, and what are you hoping to achieve by that plan?" (X = plan)

I think this post is pretty cool, and represents good groundwork on sticky questions of bioethics and the principles that should underpin them that most people don't think about very hard. Thanks for writing it.

The phrasing I got from the mentor/research partner I'm working with is pretty close to the former, but closer in attitude and effective result to the latter. Really, the major issue is that string diagrams for a flavor of category and commutative diagrams for the same flavor of category are straight-up equivalent, but explicitly showing this is very, very messy. Even explicitly describing Markov categories - the flavor of category I picked as likely the right one to use, between their good modelling of Markov kernels and the role those kernels play in causal theories (themselves the categorification of "Bayes nets up to actually specifying the kernels and states numerically") - is probably too much to put anywhere in a post but an appendix or the like.
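(For the curious, the one-paragraph version of the definition - standard textbook material following Fritz's formulation, nothing specific to my setup:)

```latex
\textbf{Markov category (standard definition).} A Markov category is a
symmetric monoidal category $(\mathcal{C}, \otimes, I)$ in which every object
$X$ carries a commutative comonoid structure
\[
  \mathrm{copy}_X \colon X \to X \otimes X,
  \qquad
  \mathrm{del}_X \colon X \to I,
\]
compatible with $\otimes$, and in which $I$ is terminal (equivalently,
$\mathrm{del}$ is natural, i.e. every morphism can be discarded). Morphisms
$f \colon X \to Y$ play the role of Markov kernels; the paradigm example is
$\mathbf{FinStoch}$: finite sets, with stochastic matrices as morphisms.
```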

...if there is no available action like snapping a photo that takes less time than writing the reply I'm replying to did...

There is not, but that's on me. I'm juggling too much and having trouble packaging my research in a digestible form. Precarious/lacking funding and consequent binding demands on my time really don't help here either. I'll add you to the long long list of people who want to see a paper/post when I finally complete one.

I guess a major blocker for me is - I keep coming back to the idea that I should write the post as a partially-ordered series of posts instead. That certainly stands out to me as the most natural form for the information, because there are three near-totally separate branches of context - Bayes nets, the natural latent/abstraction agenda, and (monoidal category theory/)string diagrams - of which you need to somewhat understand some pair in order to understand major necessary background (causal theories, motivation for Bayes net algebra rules, and motivation for string diagram use), and all three to appreciate the research direction properly. But I'm kinda worried that if I start this partially-ordered lattice of posts, I'll get stuck somewhere. Or run up against the limits of what I've worked out so far. Or run out of steam with all the writing and just never finish. Or just plain "no one will want to read through it all".

I guess? I mean, there's three separate degrees of "should really be kept contained"-ness here:

  • Category theory -> string diagrams, which pretty much everyone keeps contained, including people who know the actual category theory
  • String diagrams -> Bayes nets, which is pretty straightforward if you sit and think for a bit about the semantics you accept/are given for string diagrams generally, and maybe also look at a picture of generators and rules - not something anyone needs to wrap up nicely, but it's also a pretty thin layer.
  • [Causal theory/Bayes net] string diagrams -> actual statements about (natural) latents, which is something I am still working on; it's turning out to be pretty effortful to grind through all the same transcriptions again with an actually-proof-usable string diagram language this time. I have draft writeups of all the "rules for an algebra of Bayes nets" - a couple of which have turned out to have subtleties that need working out - and will ideally be able to write down and walk through proofs entirely in string diagrams while/after finishing specifications of the rules.

So that's the state of things. Frankly I'm worried and generally unhappy about the fact that I have a post draft that needs restructuring, a paper draft that needs completing, and a research direction to finish detailing, all at once. If you want some partial pictures of things all the same, let me know.

Not much to add apart from "this is clean and really good, thanks!".

I promise I am still working on working out all the consequences of the string diagram notation for latential Bayes nets, since the guts of the category theory are all fixed (and can, as a mentor advises me, be kept out of the public eye as they should be). Things can be kept (basically) purely in terms of string diagrams. In whatever post I write, they certainly will be.

I want to be able to show that isomorphism of natural latents is the categorical property I'm ~97% sure it is (and likewise for minimal and maximal latents). I need to sit myself down and at least fully transcribe the Fundamental Theorem of Latents in preparation for supplying the proof to that.

Mostly I'm spending a lot of time on a data science bootcamp and an AISC track and taking care of family and looking for work/funding and and and.

Because RLHF works - that is, it actually succeeds at optimizing model outputs for human approval - we shouldn't be surprised when AI models output wrong answers which are specifically hard for humans to distinguish from right answers.

This observably generalizes (or seems to) to all humans: it's not, say, somehow trivial to train an AI on feedback from only some strict and distinguished subset of humanity such that any wrong answers it produces can be easily spotted by the excluded humans.

Such wrong answers which look right (at first glance) also observably exist, so we should expect that if there's anything like a projection-onto-subspace going on here, then our "viewpoint" for the projection, given any adjudicating human mind, is likely clustered in some low-dimensional subspace of all possible viewpoints - maybe even just around a single point.
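(A toy numeric sketch of that projection picture - my construction for illustration, not anything the above commits to: model answers as vectors, and the evaluating human as a projection onto a low-dimensional "viewpoint" subspace. Then any two answers differing only orthogonally to that subspace score identically, however different they are.)

```python
import numpy as np

rng = np.random.default_rng(0)

dim = 50           # dimensionality of answer-space (arbitrary)
viewpoint_dim = 3  # the evaluator only sees a 3-dimensional shadow

# Orthonormal basis for the evaluator's viewpoint subspace.
V, _ = np.linalg.qr(rng.normal(size=(dim, viewpoint_dim)))

def human_score(answer: np.ndarray) -> float:
    """The score depends only on the answer's projection onto the viewpoint."""
    return float(np.linalg.norm(V.T @ answer))

right_answer = rng.normal(size=dim)

# A "wrong" answer: identical projection, wildly different residual.
residual = rng.normal(size=dim)
residual -= V @ (V.T @ residual)  # strip the component the evaluator can see
wrong_answer = V @ (V.T @ right_answer) + 5.0 * residual

print(human_score(right_answer))                    # same score...
print(human_score(wrong_answer))                    # ...same score...
print(np.linalg.norm(right_answer - wrong_answer))  # ...very different answers
```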

This is why I'd agree that RLHF specifically was such a bad tradeoff of capabilities improvement against safety/desirability outcomes, while still remaining agnostic as to the absolute size of that tradeoff.
