The Navy SEALs not only put a great deal of effort, funding, and selection into training individuals; they also spend a good portion on researching how to train.
One aspect of researching how to train an ability is having a way to measure progress. The Navy SEALs put their trainees through tests at the end of training to decide whether to grant them full SEAL status, so the training can likely be focused on improving clear metrics.
If we had clear metrics for measuring progress in rationality training, we could put our effort into maximizing them.
I think I would be a much better-trained rationalist if I did my basic rationality practices as regularly as I do physical exercise. The practices are:
So yeah, those are my rationality exercises, and I really wish I practiced them more regularly. It's not exactly high-level SEAL-inspired training, and it's pretty hard to verify, but...it feels like it makes me more rational.
This may be a result of selection - the military is a couple of orders of magnitude bigger than the rationalist community, and what you heard was the best of the best that it has.
True, but the mechanisms that cause people to want to join the military (and elite military units in particular) are in my view in scope for this discussion. What would it look like for the rationalist community to be a thing that many intelligent, highly motivated people aspire to join?
My impression is that SEALs are exceptional as a team, much less individually. Their main individual skill is extreme team-mindedness.
This post inspired https://www.lesswrong.com/posts/RdCb8EGEEdWbwvqcp/why-not-more-small-intense-research-teams
Maybe if we can identify an enemy who's going to shoot at us, we can select and instill that level of commitment. I suspect it comes from a pre-rational part of human motivation, and is not available to the vast majority of rationalists.
After the training begins, something like 80% of the recruits drop out during Hell Week. SEALs are selected for their motivation, which is not available to everyone headed for a warzone.
On the other hand, if you'd really like an existential threat to get you going, you might consider looking into the problem of goal alignment in AGI, or aging.
I listened to David Goggins' account of Navy SEAL training last year. They encourage you to push yourself so hard that you are at genuine risk of death or permanent disability. The first two times Goggins tried to get through he failed out because of injuries, even though he was willing to — and did — run many miles on literally broken legs. He only made it through the third time because hell week got cut short due to someone in their cohort DYING (from participating in some kind of swimming exercise while very sick with pneumonia).
I actually found the book incredibly inspiring, though it did not make me think anyone should model themselves after the Navy SEALs in particular. I also don't think someone should run 100 miles in 24 hours with zero training and continue despite the fact that their legs are breaking and they're shitting and pissing blood while they run, which is another thing that Goggins did.
One training exercise in the book that seemed more reasonable to me (more like an exercise and less like abject torture) was an orienteering-type thing (for I think the Army Rangers?), where the terrain was treacherous and unfamiliar and the weather dangerously cold at night. I think it's a good test of rationality to put yourself in a genuinely high-stakes situation like that — as long as one of the choices you're allowed to make is to call for help if you are genuinely afraid for your life. That was an option in the case of the Rangers orienteering challenge, but my point is that the thing that's bad about SEAL hell week is that you're considered a pussy if you quit, even if it's out of genuine and reasonable fear for your life.
The book overall is about the idea that your limits are fake, and humans can accomplish things that seem like they should be physically impossible as long as they just don't give up. I think that's a concept we could work with.
I think there are quite a few rationalists who challenge themselves to do fairly hard things, like founding a successful startup, putting together a large conference on short notice at the age of 18, or publishing a good post on rationality every day for a month, things kind of like that. I think I've challenged myself a lot more than I would have if I weren't in the rationalist community, but I don't think I've ever tried to do something that I felt was impossible. (I think a precious few rationalists have faced the impossible — probably Holden and Eliezer, to name any at all — but they're very much the exception rather than the rule.)
Here are some things that feel impossible:
And here are some things where I can see a path to accomplishing them, but where that path feels incredibly hard and scary — these examples are specific to me:
Again, these will be different for different people. I think Eliezer's quest to lose weight qualifies somewhere around here. I think things in this class are probably better candidates for serious rationality training exercises than the first list, though maybe that's wrong.
Anyway the goal is not to teach object-level skills, but to cause people to change their outlook on tasks that seem impossible. I think that's one really important skill for rationalists/EAs to have, though not the only important skill. In any given quest you will probably learn additional useful object-level skills.
So idk those are some thoughts on one aspect of the thing. Didn't properly feel like an answer so here it is as a comment instead.
Become fluent in Mandarin, both speaking/listening AND reading/writing, in the next three months
I have a lifetime of failure to learn Mandarin behind me, including one academic year when I really actually tried; also, Mandarin is just really fucking hard.
I wrote software that's designed for this specific application. It's basically homebrew Anki with the brakes removed, hooked up to a tokenizer, a dictionary, machine translation, and a text-to-speech API. The system is unpolished, but it is in a usable state. (I use it every day.) The whole thing is a web app, so it requires no technical knowledge to use. I'm looking for beta users in case anyone wants to try something "incredibly hard".
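For anyone curious what the tokenizer-plus-dictionary half of that pipeline might look like, here's a minimal sketch. It is not the actual app's code: it assumes the `jieba` tokenizer and a tiny stand-in for a CC-CEDICT-style dictionary, and it leaves out the spaced-repetition, machine-translation, and text-to-speech pieces entirely.

```python
# Minimal sketch of a "tokenize, then gloss each token" step for Mandarin.
# Requires: pip install jieba
import jieba

# Tiny stand-in for a real CC-CEDICT-style dictionary: token -> (pinyin, gloss).
CEDICT = {
    "我": ("wo3", "I; me"),
    "学习": ("xue2 xi2", "to study; to learn"),
    "中文": ("zhong1 wen2", "Chinese (language)"),
}

def gloss(sentence: str):
    """Split a Chinese sentence into tokens and attach pinyin + meaning to each known one."""
    for token in jieba.lcut(sentence):
        pinyin, meaning = CEDICT.get(token, ("?", "(not in dictionary)"))
        yield token, pinyin, meaning

if __name__ == "__main__":
    for token, pinyin, meaning in gloss("我学习中文"):
        print(f"{token}\t{pinyin}\t{meaning}")
```

A real system would swap the stand-in dictionary for a full CC-CEDICT dump and feed the glossed tokens into the spaced-repetition scheduler, but the core loop is roughly this shape.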
I'm very interested, but only if I don't have to pay for it, since I have literally no money. I've been thinking of learning Mandarin.
Specifically for Mandarin, but I can add additional major languages just by writing a tokenizer for them. I'm working on a new system built around GPT-3 that I hope to launch August 14th. The new system should be able to support any major language right out of the box. (I don't know if I can meet this ship date. The schedule is extremely ambitious. Moreover, OpenAI might reject the use case on the grounds it is too free-form.) It'll also be orders of magnitude more expensive to use. Right now, I'm estimating $6 per hour.
Found a new country that gets recognized by the UN
Given currently available crypto-technology, I have the impression that there's a window right now for founding states, but I'm uncertain whether talking about how to do it is a good idea, given that it possibly gives AGIs more power.
I was recently listening to a podcast discussion that included two people who had been involved in military special operations units -- one in the Navy SEALs and one in the US Army Special Forces. I was struck by their extremely high level of training, dedication, commitment, and overall ability -- but also by how this had in large part been squandered on fighting a destructive and unproductive war in Afghanistan, supporting questionable CIA operations, and so on.
It occurs to me that people in the rationalist community are at least notionally working on much more important causes, but with far less training, commitment, personal skill, etc.
This leads to my question -- what would it look like if levels of training effort, funding, selection, etc. comparable to what currently goes into preparing elite military units were going into preparing rationalists to do as much good in the world as possible? (I don't think that this would necessarily look much like elite military training, to be clear!)
If this first exercise comes up with anything that seems promising -- are there ways that we could potentially 80/20 this and get most of the good without the high costs?
(nb: this post is just personal musings and not "official" on behalf of CFAR or any other organization.)