Mathematician, alignment researcher, doctor. Reach out to me on Discord and tell me you found my profile on LW if you've got something interesting to say; you have my explicit permission to try to guess my Discord handle if so. You can't find my old abandoned LW account but it's from 2011 and has 280 karma.
A Lorxus Favor is worth (approximately) one labor-day's worth of above-replacement-value specialty labor, given and received in good faith, and used for a goal approximately orthogonal to one's desires, and I like LessWrong because people here will understand me if I say as much.
Apart from that, and the fact that I am under no NDAs, including NDAs whose existence I would have to keep secret or lie about, you'll have to find the rest out yourself.
Sure - I can believe that that's one way a person's internal quorum can be set up. In other cases, or for other reasons, they might be instead set up to demand results, and evaluate primarily based on results. And that's not great or necessarily psychologically healthy, but then the question becomes "why do some people end up one way and other people the other way?" Also, there's the question of just how big/significant the effort was, and thus how big of an effective risk the one predictor took. Be it internal to one person or relevant to a group of humans, a sufficiently grand-scale noble failure will not generally be seen as all that noble (IME).
This makes some interesting predictions re: some types of trauma: namely, that they can happen when someone was (probably even correctly!) pushing very hard towards some important goal, and then either they ran out of fuel just before finishing and collapsed, or they achieved that goal and then - because of circumstances, just plain bad luck, or something else - that goal failed to pay off in the way that it usually does, societally speaking. In either case, the predictor/pusher that burned down lots of savings in investment doesn't get paid off. This is maybe part of why "if trauma, and help, you get stronger; if trauma, and no help, you get weaker".
I didn't enjoy this one as much, though that's likely down to not having had the time/energy to think it through deeply. Still, I mostly feel like garbage for having done literally worse than chance, and I suspect it would have been better if I hadn't participated at all.
Let me see if I've understood point 3 correctly here. (I am not convinced I have actually found a flaw, I'm just trying to reconcile two things in my head here that look to conflict, so I can write down a clean definition elsewhere of something that matters to me.)
$P$ factors over $G$. In $G$, $X$ and $Y$ were conditionally independent of each other, given $Z$. Because $P$ factors over $G$, and because in $G$, $X$ and $Y$ were conditionally independent of each other given $Z$, we can very straightforwardly show that $P$ factors over $G'$, too. This is the stuff you said above, right?
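(For concreteness, here's the forward direction in the smallest case I can think of - my own toy notation, not yours: take $G$ to be two unlinked vertices $X, Y$, and $G'$ to be $G$ plus the arrow $X \to Y$. Then:

```latex
P(x, y) = P(x)\,P(y) = P(x)\,P(y \mid x)
```

where the first equality is the factorization over $G$, the second uses the independence $P(y \mid x) = P(y)$ that $G$ encodes, and the right-hand side is exactly the factorization $G'$ demands.)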
But if we go the other direction, assuming that some arbitrary $Q$ factors over $G'$, I don't think that we can then still derive that $Q$ factors over $G$ in full generality, which was what worried me. But that break of symmetry (and thus lack of equivalence) is... genuinely probably fine, actually - there's no rule for arbitrarily deleting arrows, after all.
That's cleared up my confusion/worries, thanks!
We’ll refer to these as “Bookkeeping Rules”, since they feel pretty minor if you’re already comfortable working with Bayes nets. Some examples:
- We can always add an arrow to a diagram (assuming it doesn’t introduce a loop), and the approximation will get no worse.
Here's something that's kept bothering me on and off for the last few months: This graphical rule immediately breaks Markov equivalence. Specifically, two DAGs are Markov-equivalent only if they share an (undirected) skeleton. (Lemma 6.1 at the link.)
If the major/only thing we care about here regarding latential Bayes nets is that our Grand Joint Distribution $P$ factorize over (that is, satisfy) our DAG $G$ (and all of the DAGs we can get from it by applying the rules here), then by Thm 6.2 in the link above, $P$ is also globally/locally Markov wrt $G$. This holds even when $P > 0$ is not guaranteed for some of the possible joint states, unlike Hammersley-Clifford would require.
That in turn means that (Def 6.5) there are some distributions $P$ such that $P$ factors over $G'$, but not over $G$ (where $G'$ trivially has the same vertices as $G$ does); specifically, because $G$ and $G'$ don't (quite) share a skeleton, they can't be Markov-equivalent, and because they aren't Markov-equivalent, $P$ no longer needs to be (locally/globally) Markov wrt $G$ (and in fact there must exist some $P$ which explicitly break this), and because of that, such $P$ need not factor over $G$. Which I claim we should not want here, because (as always) we care primarily about preserving which joint probability distributions factorize over/satisfy which DAGs, and of course we probably don't get to pick whether our $P$ is one of the ones where that break in the chain of logic matters.
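If it helps to see the asymmetry numerically: here's a minimal sketch (my own toy example, not from the post), with two binary variables, $G$ having no edge between $X$ and $Y$, and $G'$ being $G$ plus the arrow $X \to Y$. Every joint distribution factors over $G'$ (as $P(x)P(y \mid x)$), but only independent ones factor over $G$:

```python
import itertools

# G  : no edge between X and Y  ->  P factors over G  iff  P(x, y) = P(x) * P(y).
# G' : G plus the arrow X -> Y  ->  P factors over G' as P(x) * P(y | x),
#      which *every* joint distribution satisfies.

def factors_over_G(P, tol=1e-12):
    """P maps (x, y) -> probability; G demands X and Y be independent."""
    px = {x: sum(P[(x, y)] for y in (0, 1)) for x in (0, 1)}  # marginal of X
    py = {y: sum(P[(x, y)] for x in (0, 1)) for y in (0, 1)}  # marginal of Y
    return all(abs(P[(x, y)] - px[x] * py[y]) < tol
               for x, y in itertools.product((0, 1), repeat=2))

# Independent P: factors over G, so by the add-an-arrow rule it factors over G' too.
P_indep = {(x, y): [0.3, 0.7][x] * [0.6, 0.4][y]
           for x in (0, 1) for y in (0, 1)}
assert factors_over_G(P_indep)

# Correlated P: factors over G' (every joint distribution does), but not over G.
P_corr = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
assert not factors_over_G(P_corr)
```

So a $P$ like `P_corr` is exactly one of "the ones where that break in the chain of logic matters".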
I'm going to start by attacking this a little on my own before I even look much at what other people have done.
Some initial observations from the SQL+Python practice this gave me a good excuse to do:
So my current best guess (pending understanding which gear is best for which class/race) is:
Willow v Adelon, Varina v Bauchard, Xerxes v Cadagal, Yalathinel v Deepwrack.
If I had to guess what gear to give to who: Warrior v Knight is a rough matchup, so Varina's going to need the help; the rest of my assignments are based thus far on ~vibes~ for whether speed or power will matter more for the class. Thus:
Willow gets +2 Boots and +1 Gauntlets, Varina gets +4 Boots and +3 Gauntlets, Xerxes gets +1 Boots and +2 Gauntlets, and Yalathinel gets +3 Boots.
Some theories I need to test:
Wild speculation:
I'm gonna leave my thoughts on the ramifications for academia, where a major career step is to repeatedly join and leave different large bureaucratic organizations for a decade, as an exercise to the reader.
Like, in a world where the median person is John Wentworth (“Wentworld”), I’m pretty sure there just aren’t large organizations of the sort our world has.
I have numerous thoughts on how Lorxusverse Polity handles this problem but none of it is well-worked-out enough to share. In sum though: Probably cybernetics (in the Beer sense) got discovered way earlier and was actually used as stated, and that was that - no particular need for dominance-status as glue or desire for it as social-good. (We'd be way less social overall, though, too, and less likely to make complex enduring social arrangements. There would be careful Polity-wide projects for improving social-contact and social-nutrition. They would be costly and weird. Whether that's good or bad on net, I can't say.)
Sure, but you obviously don't (and can't even in principle) turn that up all the way! The key is to make sure that that mode still exists and that you don't simply amputate and cauterize it.
[2.] maybe one could go faster by trying to more directly cleave to the core philosophical problems.
...
An underemphasized point that I should maybe elaborate more on: a main claim is that there's untapped guidance to be gotten from our partial understanding--at the philosophical level and for the philosophical level. In other words, our preliminary concepts and intuitions and propositions are, I think, already enough that there's a lot of progress to be made by having them talk to each other, so to speak.
OK but what would this even look like?
Toss away anything amenable to testing and direct empirical analysis; it's all too concrete and model-dependent.
Toss away mathsy proofsy approaches; they're all too formalized and over-rigid and can only prove things from starting assumptions we haven't got yet and maybe won't think of in time.
Toss away basically all settled philosophy, too; if there were answers to be had there rather than a few passages which ask correct questions, the Vienna Circle would have solved alignment for us.
What's left? And what causes it to hang together? And what causes it not to vanish up its own ungrounded self-reference?
Do you know when you started experiencing having an internal music player? I recall that that started for me when I was about 6. Also, do you know whether you can deliberately pick a piece of music, or some other nonmusical sonic experience, to play back internally? Can you make them start up from internal silence? Under what conditions can you make them stop? Do you ever experience long stretches where you have no internal music at all?