felt as though the framework of these books provides an interesting lens for modeling systems and agents that could be of interest, and subsequently for proving various properties that are necessary/favorable
Your feelings might be right! I don't have a strong prior, and in general I'd say that people should follow their inner compass and work on what they're excited about. It's very hard to convey your illegible intuitions to others, and all too easy for social pressure to squash them. Not sure what someone should really do in this situation, beyond keeping your eyes on the hard problems of alignment and finding ways to get feedback from reality on your ideas as fast as possible.
There was a bit more use of the formalism of those theories in the 2010s, like using modal logics to investigate cooperation/defection in logical decision theories. As for dynamic epistemic logic, well, the blurb does make it look sort of relevant.
Perhaps it might have something interesting to say on the tiling agents problem, or on decision theory, or so on. But other things have looked superficially relevant in the past, too. E.g. fuzzy logics, category theory, homotopy type theory, etc. And AFAICT, no one has used the practical tools of these theories to make any legible advances. What was legibly impressive didn't seem to be due to the machinery of those theories, but rather the cleverness of the people using them. Likewise for the past work in alignment using modal logics.
So I'm not sure what advantage you're seeing here, because I haven't read the books and don't have the evidence you do. But my priors are that if you have any good ideas about how to make progress in alignment, it's not going to be downstream of using the formalism in the books you mentioned.
We imagine all possible quantum observables as having marginal distributions that obey the Born rule
Dumb question, but does this approach of yours cash out to representing quantum states as probability distribution functions? How is that rich enough to represent interference of states and all the other quantum phenomena absent from stochastic dynamics?
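To make the interference worry concrete, here's a minimal toy illustration (my own, not the construction being asked about): in a two-slit setup, a single classical probability distribution over "which slit" adds the per-slit probabilities, whereas the Born rule adds complex amplitudes first and then squares, producing an interference term the classical mixture can't reproduce.

```python
import numpy as np

# Two-slit toy model: each slit contributes a complex amplitude at
# the same detector position. (Illustrative numbers only.)
a1 = 1 / np.sqrt(2)                    # amplitude via slit 1
a2 = np.exp(1j * np.pi) / np.sqrt(2)   # amplitude via slit 2, phase-shifted by pi

# Quantum: add amplitudes, then apply the Born rule |a1 + a2|^2.
p_quantum = abs(a1 + a2) ** 2          # destructive interference: probability ~ 0

# Classical stochastic mixture: add the per-slit probabilities.
p_classical = abs(a1) ** 2 + abs(a2) ** 2  # = 0.5 + 0.5 = 1, no interference term

print(p_quantum, p_classical)  # ~0.0 vs 1.0
```

The gap between the two numbers is exactly the cross term 2·Re(a1·a2*) that a single distribution over outcomes has no slot for, which is presumably what the question is probing.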
It is an idiosyncratic mental technique. Look up trigger action plans, say. What you're doing there is a variant of what EY describes.
Hmm, interesting. I think what confused me is: 1) Your warning. 2) You sound like you have deeper access to your unconscious, somehow "closer to the metal", rather than what I feel like I do, which is submitting an API request of the right type. 3) Your use cases sound more spontaneous.
I'm not referring to more advanced TAPs, just the basics, which I also haven't got much mileage out of. (My bottleneck is that a lot of the most useful actions require pretty tricky triggers. Usually, I can't find a good cue to anchor on, and have to rely on more delicate or abstract sensations, which are too subtle for me to really notice in the moment, recall or simulate. I'd be curious to know if you've got a solution to this problem.)
That said, playing with TAPs helped me realize what type of conscious signals my unconscious can actually pick up on, which is useful. For me, a big use case is updating my value estimator for various actions. I query my estimator, do the action, reflect on the experience, and submit it to my unconscious and blam! Suddenly I'm more enthusiastic about pushing through confusion when doing maths.
BTW, is this class of skills we're discussing all that you meant by "thinking at the 5-second level"? Because for some reason, I thought you meant I should reconstruct your entire mental stack-trace during the 5 seconds I made an error, simulate plausible counterfactual histories, and upvote the ones that avoid the error. This takes like an hour to do, even for chains of thought that last like 10 seconds, which is entirely impractical. Yet, I've just been assuming you could somehow do this in like 30s, which meant I had a massive skill issue. It would be good to know if that's not the case, so I can avoid a dead end in the cognitive-surgery skill tree.
Besides being a thing I can just decide, my decision to stay sane is also something that I implement by not writing an expectation of future insanity into my internal script / pseudo-predictive sort-of-world-model that instead connects to motor output.
Does implementing a trigger action plan by simulating observing the trigger and then taking the action, which needs to call up your visual, kinaesthetic, and other senses, route through similar machinery to what you're describing here? Because it sounds vaguely similar, but: A) I wouldn't describe what I do the way you did, B) the interpretation I'm making feels vague and free-floating instead of rigidly binding to my experience of interfacing with my unconscious cognition. So I suspect we're talking about different things, even if the rest of your description (e.g. the brain having a muddled type system) felt familiar.
$100 for what I already got. I could pay less, but I am not sure if that would make the signal/noise ratio too low to be worthwhile. Maybe @yams could tell us?
That sounds reasonable, but how do you know this? Also, any recommendations for better ways to get this information w/o being more than 2x as costly? (Cost $100 for 10 people, who spent 35 minutes reading and giving feedback).
The snack bar was so tempting, I gained about 10 pounds. I just don't have willpower when it comes to snacks.
And you were doing so well, brother.
EDIT: What fiction have Ozy and Daystar written?
Thank you for posting this. It's like a warped reflection of my own experiences with people who were/are mentally unwell. Though the intra-masculine competition thing confused me for a bit, till I realized it was David the psychotic talking about Edward.