Agent foundations, AI macrostrategy, human enhancement.
I endorse and operate by Crocker's rules.
I have not signed any agreements whose existence I cannot mention.
In the case of engineering humans for increased IQ, surveys show broad support among Indians for such technology (even in rather extreme forms of intelligence enhancement), so one might focus on doing research there and/or lobbying the Indian public and government to fund it. High-impact Indian citizens interested in this topic seem like very good candidates for funding, especially those with the potential to snowball internal funding sources that would be insulated from Western media bullying.
I've also heard that AI x-risk is much more viral in India than EA in general is (in comparative terms, i.e. relative to how the two spread in the West).
And in terms of "anything right-leaning": a parallel EA culture, preferably with a different name, able to cultivate right-wing funding sources, might be effective.
Progress studies? Not that they are necessarily right-leaning themselves, but if you integrate support for [progress-in-general and doing a science of it] over the intervals of the political spectrum, you might find that center-right-and-righter supports it more than center-left-and-lefter (though this is low-confidence, and it might flip if you exclude the degrowth crowd).
With the exception of avoiding rationalists (and can we really blame Moskovitz for that?)
care to elaborate?
Some amphetamines kinda solve akrasia-in-general to some extent (much more so than caffeine), at least for some people.
I'm not claiming that they're worth it.
I imagine "throw away your phone" will get me 90% of the way there.
I recommend https://www.minimalistphone.com/
It didn't get me 90% of the way there ("there" being eliminating akrasia) but it probably did reduce [spending time on my phone in ways I don't endorse] by at least one order of magnitude.
Active inference is an extension of predictive coding in which some beliefs are so rigid that, when they conflict with observations, it’s easier to act to change future observations than it is to update those beliefs. We can call these hard-to-change beliefs “goals”, thereby unifying beliefs and goals in a way that EUM doesn’t.
You're probably aware of it, but it makes sense to make explicit that this move also puts many biases, addictions, and maladaptive/disendorsed behaviors in the goal category.
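For readers less familiar with the formalism: the unification described above can be sketched with the standard variational free-energy objective (this is the textbook presentation, not anything specific to the post being discussed):

```latex
% Variational free energy over hidden states s, observations o,
% approximate posterior q(s), generative model p(o, s):
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
   \;=\; D_{\mathrm{KL}}\!\left[\,q(s)\,\|\,p(s \mid o)\,\right] \;-\; \ln p(o)

% Perception: minimize F by updating the belief q(s).
% Action:     minimize F by changing future o (selecting policies).
% A "goal" is then a prior p so rigid that, given conflicting o,
% acting to change o is cheaper than updating the belief.
```

The last comment is the move the quoted passage makes: preferences enter as strong priors over observations, so beliefs and goals share one representation, which is exactly why the biases and addictions mentioned above land in the goal category too.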
EUM treats goals and beliefs as totally separate. But in practice, agents represent both of these in terms of the same underlying concepts. When those concepts change, both beliefs and goals change.
Active inference is one framework that attempts to address it. Jeffrey-Bolker is another one, though I haven't dipped my toes into it deep enough to have an informed opinion on whether it's more promising than active inference for the thing you want to do.
Based on similar reasoning, Scott Garrabrant rejects the independence axiom. He argues that the axiom is unjustified because rational agents should be able to lock in values like fairness based on prior agreements (or even hypothetical agreements).
I first thought that this introduces epistemic instability, because vNM EU theory rests on the independence axiom (so it looked like: to unify EU theory with active inference, you wanted to reject one of the things defining EU theory qua EU theory). But then I realized that you hadn't assumed vNM as a foundation for EU theory, so maybe it's irrelevant. Still, as far as I remember, different foundations of EU theory give you slightly different implications, and many of them have some equivalent of the independence axiom (at least Savage's does), so it might be good for you to think explicitly about what kind of EU foundation you're assuming. But it also might be irrelevant. I don't know. I'm leaving this thought-train-dump in case it might be useful.
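For reference, the axiom under discussion is standard and can be stated precisely (this is the usual vNM formulation; Savage's analogue is the sure-thing principle):

```latex
% vNM independence: for all lotteries p, q, r over outcomes
% and all mixing weights \alpha \in (0, 1]:
p \succeq q
\;\iff\;
\alpha p + (1-\alpha)\, r \;\succeq\; \alpha q + (1-\alpha)\, r
```

Garrabrant's rejection, as summarized above, is that an agent who has locked in a fairness agreement may rank the mixtures differently from the unmixed lotteries, violating the biconditional.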
I have Ubuntu and I also find myself opening apps mostly by searching. I think the only reason I put anything on desktop is to be reminded that these are the things I'm doing/reading at the moment (?).
There's a psychotherapy school called "metacognitive therapy", and some people swear by it, saying it is simple and solves >50% of psychological problems because it targets their root causes. (I'm recalling this from a podcast I listened to in the summer of 2023 and never researched further, so my description might be off, but maybe somebody will find some value in it.)
https://podcast.clearerthinking.org/episode/173/pia-callesen-using-metacognitive-therapy-to-break-the-habit-of-rumination/