Keenan Pepper


(As far as I’ve been told, left-TMS is for depression, right-TMS is for anxiety. Why that’s the case, I have no idea.)

As I was reading this, I intuited there would be something to predict here, so I successfully stopped reading before the "left-TMS is for depression, right-TMS is for anxiety" part and scrolled it out of view in order to make the prediction myself, based on what I understand to be the roles of the right and left hemispheres.

As I understand it, the left hemisphere of the brain is sort of focused "forwards": on whatever tool you're using or prey you're hunting, and in particular on the words you're speaking or hearing and the plans you're making to pursue some goal. In contrast, the right hemisphere is focused "outwards" on the environment as a whole, on guard for any threats or interesting irregularities that ought to pull your attention away.

Therefore I predicted that left-TMS would be for general depression stuff (all your plans seeming like bad ideas, that kind of thing), and right-TMS would be for worrying about a bunch of stuff in your environment and being distracted, which sounds more like either an ADHD kind of thing or anxiety!

So you'll have to take my word for it, but I got it right.

...never making excuses to myself such as "I wanted to do A, but I didn't have the willpower so I did B instead", but rather owning the fact I wanted to do B and thinking how to integrate this...


AKA integrating the ego-dystonic into the homunculus

I think what’s happening in this last one is that there’s a salient intuitive model where your body is part of “the crowd”, and “the crowd” is the force controlling your actions.

This strongly reminds me of this excellent essay: https://meltingasphalt.com/music-in-human-evolution/

Can we expect to see code for this on https://github.com/agencyenterprise sometime soon? I'm excited to fiddle with this.

In HCH, the human user does a little work then delegates subquestions/subproblems to a few AIs, which in turn do a little work then delegate their subquestions/subproblems to a few AIs, and so on until the leaf-nodes of the tree receive tiny subquestions/subproblems which they can immediately solve.

This does not agree with my understanding of what HCH is at all. HCH is the definition of an abstract process for use in thought experiments, much like AIXI. It's defined as the fixed point of an iterative process of delegation expanding out into a tree. Like AIXI, it's not something you could actually implement; it's a platonic form, like "circle" or "integral".

This has nothing to do with how an HCH-like process would be implemented. You could easily have something that's designed to mimic HCH but is implemented as a single monolithic AI system.
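
To make the delegation-tree picture (as opposed to any particular implementation) concrete, here's a minimal toy sketch in Python. The names `toy_human_policy` and `delegate` are made up for illustration; this is just recursive delegation on a trivial problem, not HCH itself, which has an actual human at every node and is an idealization rather than a program.

```python
# Toy illustration of the delegation tree, NOT HCH itself (HCH is the idealized
# fixed point of this process with an actual human at every node).

def toy_human_policy(question):
    """Stand-in for the human at a node: answer a small question directly, or
    decompose it into subquestions plus a rule for recombining their answers.
    Here a 'question' is just a list of numbers to be summed."""
    if len(question) <= 2:
        return sum(question), []              # answer directly, no delegation
    mid = len(question) // 2
    subquestions = [question[:mid], question[mid:]]
    recombine = lambda sub_answers: sum(sub_answers)
    return recombine, subquestions

def delegate(question):
    """One node of the tree: consult the policy, recurse on any subquestions it
    hands back, then recombine the sub-answers into a final answer."""
    answer_or_recombine, subquestions = toy_human_policy(question)
    if not subquestions:
        return answer_or_recombine            # leaf node: direct answer
    sub_answers = [delegate(q) for q in subquestions]
    return answer_or_recombine(sub_answers)   # internal node: recombine

print(delegate(list(range(1, 101))))          # -> 5050
```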

Okay, so what do most of your hopes route through, then?

I'm just here to note that the "canonical example" thing mentioned here is very similar to the "nothing-up-my-sleeve numbers" used in the definition of some cryptographic protocols.
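
A standard example, for the curious: the SHA-256 initial hash values H0..H7 are defined as the first 32 bits of the fractional parts of the square roots of the first eight primes, precisely so that anyone can recompute them and verify the designers didn't hide anything in them. Here's a quick sketch of that recomputation in Python, using exact integer arithmetic:

```python
# Nothing-up-my-sleeve example: the SHA-256 initial hash values H0..H7 are
# defined (FIPS 180-4) as the first 32 bits of the fractional parts of the
# square roots of the first eight primes, so anyone can recompute them and
# check that there's no hidden structure.
from math import isqrt

FIRST_EIGHT_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]

def frac_sqrt_bits(p, bits=32):
    """First `bits` bits of the fractional part of sqrt(p), computed exactly:
    isqrt(p << 2*bits) is floor(sqrt(p) * 2**bits); masking drops the integer part."""
    return isqrt(p << (2 * bits)) & ((1 << bits) - 1)

print(" ".join(f"{frac_sqrt_bits(p):08x}" for p in FIRST_EIGHT_PRIMES))
# -> 6a09e667 bb67ae85 3c6ef372 a54ff53a 510e527f 9b05688c 1f83d9ab 5be0cd19
```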

I think you may have meant this as a top-level comment rather than a reply to my comment?
