Epistemic status: Anger. Not edited.
TL;DR The hamster wheel is bad for you. Rationalists often see participation in the hamster wheel as instrumentally good. I don't think that is true.
Meet Alice. Alice is a bright high school student with a mediocre GPA and a very high SAT score. She has had the opportunity to learn many skills in her school years, but she doesn't particularly enjoy school and has no real interest in engaging with the notoriously soul-crushing college admissions treadmill.
Meet Bob. Bob understands that AGI is an imminent existential threat. Bob thinks AI alignment is not only urgent and pressing but also tractable. Bob is a second-year student at Ivy League U studying computer science.
Meet Charlie. Charlie is an L4 engineer at Google. He works on applied machine learning for the Maps team. He is very good at what he does.
Each of our characters has approached you for advice. Their terminal goals might be murky, but they all care deeply about the AI alignment problem. They'd like to do their part in decreasing X-risk.
You give Alice the following advice:
It's statistically unlikely that you're the sort of genius who'd be highly productive without at least undergraduate training. At a better college, you will not only receive better training and have better peers; you will also have access to opportunities and signalling advantages that will make you much more useful.
I understand your desire to change the world, and it's a wonderful thing. If you'd just endure the boredom of school for a few more years, you'll have much more impact.
Right now, MIRI wouldn't even hire you. I mean, look at the credentials most AI researchers have!
Statistically, you are not Eliezer.
You give Bob the following advice:
Graduating is a very good signal. An Ivy League U degree carries a lot of signalling value! Have you gotten an internship yet? It's great that you are looking into alignment work, but it's also important that you take care of yourself.
It's only your second year. If the college environment does not seem optimal to you, you can certainly change that. Do you want study tips?
Listen to me. Do not drop out. All those stories you hear about billionaires who dropped out of college might be somewhat relevant if you actually wanted to be a billionaire. If you're optimizing for social impact, you do not do capricious things like that.
Remember, you must optimize for expected value. Seriously consider grad school, since it's a great place to improve your skills for AI alignment work.
You give Charlie the following advice:
Quit your job and go work on AI Alignment. I understand that Google is a fun place to work, but seriously, you're not living your values.
But it is too late, because Charlie has already been injected with a deadly neurotoxin which removes his soul from his skeleton. He is now a zombie, only capable of speaking to promo committees.
--
You want geniuses, yet you despise those who attempt to attain genius.
It seems blatantly obvious to you that the John von Neumanns and Paul Erdőses of the world do not beg for advice on internet forums. They must have already built a deep confidence in their capabilities from fantastical childhood endeavors.
And even if Alice wrote a working C++ compiler in Brainfuck at 15 years old, it's unlikely that she can solve such a momentous problem alone.
Better to keep your head down. Follow the career track. Deliberate. Plan. Optimize.
So, with your reasonable advice, Alice went to Harvard and Bob graduated with honors. Both of them now wish to incrementally contribute to the important project of building safe AI.
They're capable people now. They understand jargon like prosaic alignment and myopic models. They're good programmers, though paralyzed whenever they are asked the Hamming questions. They're not too far off from a job at MIRI or FHI or OpenPhil or Redwood. They made good, wise decisions.
--
I hate people like you.
You say things like, "if you need to ask questions like this, you're likely not cut out for it. That's ok, I'm not either."
I want to grab you by your shoulders, shake you, and scream. Every time I hear the phrase "sphere of competence," I want to cry. Are you so cynical as to assume that people cannot change their abilities? Do you see people rigid as stone, grey as granite?
Do I sound like a cringey, irrational liberal for my belief that people are not stats sheets? Is this language wishful and floaty and dreamy? Perhaps I am betraying my young age, and reality will set in.
Alternatively, perhaps you have Goodharted. You saw cold calculation and wistful "acceptance" as markers of rationality and adopted them. In your wise, raspy voice you killed dreams with jargon.
Don't drop out. Don't quit your job. Don't get off the hamster wheel. Don't rethink. Don't experiment. Optimize.
You people hate fun. I'd like to package this in nice-sounding mathematical terms, but I have nothing for you. Nothing except for a request that you'd be a little less fucking cynical. Nothing except, reflect on what Alice and Bob could've accomplished if you hadn't discouraged them from chasing their dreams.
This is slightly complicated.
If your goal is something like "become wealthy while having free time," the Prep school->Fancy college->FIRE in finance or software path is actually pretty darn close to perfect.
If your goal is something like "make my parents proud" or "gain social validation," you probably go down the same route too.
If your goal is something like "seek happiness" or "live a life where I am truly free" I think that the credentialing is something you probably need to get off ASAP. It confuses your reward mechanisms. There's tons of warnings in pop culture about this.
If you have EA-like goals, you have a "maximize objective function"-type goal. It's the same shape as "become as rich as possible" or "make the world as horrible as possible." Basically, the conventional path is highly, highly unlikely to get you all the way there. In this case, you probably want to get into the Loop.
For a lot of important work, the resources required are minimal and you already have them. You only need skills. If you have skills, people will also give you resources.
It shouldn't matter how much money you have.
Also, even if you were totally selfish, stopping the apocalypse is better for you than earning extra money right now. If you believe the sort of AI arguments made on this forum, then it is probably directly irrational for you to optimize for anything other than saving the world.
So, do you think focusing on credentials is instrumental to saving the world? Perhaps it's even a required part of the Loop? (Perhaps you need credentials to get opportunities to get skills?)
I basically don't think that is true. Even accepting that colleges teach more effectively than people can learn autodidactically, the amount of time wasted on bullshit and the amount of health wasted on stress probably make this not true. It seems like you'd have to get very lucky for the credential mill not to be a significant skill cost.
--
I guess it's worthwhile for me to reveal some weird personal biases too.
I'm personally a STEM student at a fancy college with a fancy (non-alignment) internship lined up. I actually love and am very excited about the internship (college somewhat less so; I might just be the wrong shape for college), because I think it'll give me a lot of extra skills.
My satisfaction with this doesn't negate the fact that I mostly got those things by operating in a slightly different (more wheeled) mental model. A side effect of my former self being a little more wheeled is that I'd have to mess up even more badly to get into a seriously precarious situation. It's probably easier for me to de-wheel at this point, already having some signalling tools, than it is for the average person to de-wheel.
I'm not quite sure what cycles you were referring to (do you have examples?), but this might be me having a bad case of "this doesn't apply to me so I will pay 0 thought to it," and thus inadvertently burning a massive hole in my map.
Despite this, though, I mostly wish I had de-wheeled earlier (middle school, when I first started thinking somewhat along these lines) rather than later. I'd be better at programming, better at math, and probably more likely to positively impact the world, at the expense of being less verbally eloquent and having less future money. I can't honestly say that I would take the trade, but certainly a very large part of me wants to.
Certainly, I'd at least argue that Bob should de-wheel. The downside is quite limited.
--
There definitely is a middle path, though. Most of the AI alignment centers pay salaries comparable to those at top tech companies. You can start AI alignment companies and get funding, etc. There's an entire gradient there. I also don't entirely see how that is relevant.
ranked-biserial's point was largely about Charlie, who wasn't really a focus of the essay. What they said about Charlie may well be correct. But it's not a given that Alice and Bob secretly want this. They might very well have done something else if they hadn't been given such conservative advice.
I'll reply to ranked-biserial later.
Edit: typo