One thing I'm confused about in this post is whether constructivism and subjectivism count as forms of realism. The cited realists (Enoch and Parfit) are substantive realists.
I agree that substantive realists are a minority in the rationality community, but not that constructivists, subjectivists, and substantive realists combined are a minority.
Re: Neuroticism.
Let's just consider emotions. A really simple model of emotions is that they're useful because they provide information and because they have motivational power. Neurotic emotions are useful when they provide valuable information or motivate valuable actions.
If you're wondering whether a negative emotion is useful, check whether it's providing valuable information or motivating useful action. I think Internal Family Systems might be especially useful for this.
Of course, sometimes you can get the valuable information or motivation without experiencing the negative emotion (see Replacing Guilt).
Many negative emotions are hypersensitive, which is why we see the trend toward limiting them. That is, most of the time anxiety is not providing useful information or motivating useful action. The hypersensitivity would be justified if the costs of missing a real threat were very high, but for many of the things we now experience anxiety about, this is no longer the case. That said, I imagine that for some people negative emotions can play a useful role in some contexts, but one needs to be concrete here.
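To make the hypersensitivity point concrete, here's a toy signal-detection framing (my own formalization, not from the post): treat anxiety as an alarm that should fire only when the expected cost of missing a real threat exceeds the expected cost of a false alarm,

$$
p \cdot C_{\text{miss}} > (1 - p)\, C_{\text{fa}}
\quad\Longleftrightarrow\quad
p > p^{*} = \frac{C_{\text{fa}}}{C_{\text{fa}} + C_{\text{miss}}}.
$$

When $C_{\text{miss}}$ is huge (predators, exile from the tribe), the optimal threshold $p^{*}$ is close to zero, so firing on weak evidence is rational. When $C_{\text{miss}}$ shrinks (a curt email), the same low threshold mostly produces false alarms, which is the hypersensitivity above.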
Weekly review:
The ontology is key goals and weekly goals; each goal is grouped under a broad project. I like flexibility, so the ontology is as general as it sounds.
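For concreteness, here's a minimal sketch of that ontology in Python (the names `Project`, `Goal`, and `Horizon` are my own illustration, not from the original):

```python
from dataclasses import dataclass, field
from enum import Enum


class Horizon(Enum):
    KEY = "key"        # long-running goals
    WEEKLY = "weekly"  # goals for the current week


@dataclass
class Goal:
    name: str
    horizon: Horizon
    done: bool = False


@dataclass
class Project:
    """A broad project grouping related goals."""
    name: str
    goals: list[Goal] = field(default_factory=list)


# Example: one project with a key goal and a weekly goal under it.
writing = Project("Writing")
writing.goals.append(Goal("Finish draft of post series", Horizon.KEY))
writing.goals.append(Goal("Outline post 3", Horizon.WEEKLY))
```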
Daily review:
What I could be better at:
I have compiled all of these pieces rather slowly.
Thanks!
I agree that the return to "learning to navigate moods" varies by person.
It sounds to me, from your report, that you tend to be in moods conducive to learning. My sense is that many people are often in unproductive moods, and many are aware that they spend too much time in them. These people would find learning to navigate moods valuable.
Awesome, thanks for the super clean summary.
I agree that the model doesn't show that AI will need both asocial and social learning. Moreover, there is a core difference in how the cost of brain size grows between humans and AI (superlinear vs. linear). But in a world where AI development faces hardware constraints, social learning will be much more useful. So AI development could involve significant social learning, as described in the post.
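A toy illustration of that cost-growth difference (my own sketch with made-up exponents, not numbers from the model): under superlinear costs, the marginal cost of extra "brain size" keeps climbing, so past some point social learning becomes the cheaper route to capability; under linear costs, that pressure only appears once hardware constraints bite.

```python
# Toy comparison: marginal cost of one more unit of "brain size"
# under superlinear (human-like) vs. linear (AI-like) cost growth.

def human_cost(n: float) -> float:
    return n ** 1.5  # superlinear: each additional unit gets pricier

def ai_cost(n: float) -> float:
    return n         # linear: cost scales directly with size

for n in [10, 100, 1000]:
    marginal_human = human_cost(n + 1) - human_cost(n)
    marginal_ai = ai_cost(n + 1) - ai_cost(n)
    print(f"n={n:5d}  marginal human cost={marginal_human:8.2f}  "
          f"marginal AI cost={marginal_ai:.2f}")
```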
Thanks, fixed.
Can you say more about what you mean by "I. metaphilosophy" in relation to AI safety? Thanks.
Have there been explicit requests for web apps that might solve operations bottlenecks at x-risk organisations? Pointers towards potential projects would be appreciated.
Lists of operations problems at x-risk orgs would also be useful.
There's a sense in which debates over the concept "perception" are purely conceptual, but they're not just a matter of using a different word. This is what I was getting at with "cognitive qualia": conceptual debates can shape how you experience the world.