Any particular reason you've linked all those tweets, but blocked general access to them? I'd probably be interested in reading some of those threads just going by the titles.
Oh, it's not clear but this thread was originally written over 6(?) months ago (my Twitter was public back then).
I just updated it today to add the top line.
I have this thread linked in my Twitter bio (and I guess that's where most visitors to it come from), so that's the main use case.
I don't really blog outside LW, so don't have a separate home page.
Sorry that you can't see the linked tweets but:
I do not plan to make my Twitter account public (I accept follow requests by default, but public Twitter comes with many bad incentives that I enjoy being shielded from).
I aspire to become an alignment theorist; all other details are superfluous, but I leave them here anyway for historical purposes.
Introduction
I have a set of questions I habitually ask online acquaintances who pique my interest or whom I want to get to know better. Many want to know my answers to those same questions.
It would be nice to have a central repository introducing myself that I can keep up to date.
Questions
A. What do you care about?
B. What do you think is important?
C. What do you want/hope to do with your life?
D. What do you want/hope to get out of life?
E. Where are you coming from?
F. How do you spend your time?
G. What do you do for recreation/leisure/pleasure/fun time?
Answers
A.
I care about creating a brighter future for humanity. I believe a world far better than any known to man is possible, and I am willing to fight for it.
I want humanity to be fucking awesome. To take dominion over the natural world and remake it in our own image, better configured to serve our values.
I want us to be as gods.
I outlined what godhood means for me here.
I think that vision is largely what drives me, what pushes me forward and keeps me going.
B.
Mitigating Existential Risk/Pursuing Existential Security
The obvious reasons are obvious.
But I am personally swayed by astronomical waste. I don't want us to squander our cosmic endowment. Especially because our future could be so wonderful, I think it would be very sad if we never realised it.
Promoting Existential Hope
I want to give people a positive vision of the future they can rally around and get excited by. Something that makes them glad to be alive. Eager to wake up each day. A goal to yearn for and aspire to.
To reach out to with relentless determination.
I'd like to communicate that:
AI Safety
I believe that safely navigating the development of transformative artificial intelligence may be the most important project of the century.
Transformative AI could plausibly induce a paradigm shift in the human condition.
To illustrate what I mean by "paradigm shift in the human condition": I think we may see GDP doubling multiple times a year later this century.
(Depending on timelines and takeoff dynamics, doubling periods of a month or even shorter seem plausible.)
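To make those doubling periods concrete, here's a minimal sketch of the compounding arithmetic (the numbers are illustrative, not forecasts from the text):

```python
# Illustrative arithmetic: how a fixed doubling period compounds into
# an annual growth factor. These are toy numbers, not predictions.

def annual_growth_factor(doubling_period_months: float) -> float:
    """Growth factor over 12 months, given a fixed doubling period in months."""
    return 2 ** (12 / doubling_period_months)

# Doubling once a year is a 2x annual factor; doubling every month
# compounds to 2^12 = 4096x per year.
for months in (12, 6, 3, 1):
    factor = annual_growth_factor(months)
    print(f"doubling every {months:>2} months -> {factor:,.0f}x per year")
```

The point of the sketch is just that shortening the doubling period compounds multiplicatively: a monthly doubling period is not 12 times faster than an annual one, but thousands of times faster.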
I'd like to approach AI safety from an agent foundations perspective (I think agent foundations work is neglected relative to its potential value and is a better fit for me). In particular, agent foundations solutions to alignment seem more likely to be:
Agent foundations style approaches aren't the only ones I'm considering, but they're where I plan to start.
C.
I want to maximise my expected positive impact on the world, conditional on who I am and the resources available to me:
(The net present values thereof.)
And on the broader effective altruist community. (I consider myself a member of the effective altruist community, so I should act to maximise the community's positive impact, not just my personal positive impact. Or rather, I believe that, as a matter of normative decision theory, this is how best to maximise my values.)
I think my comparative advantage is something like:
All things considered, my plan to improve the world is something like "learn a lot of useful stuff, then exploit that knowledge to make intellectual contributions to the most important problems confronting human civilisation".
Mathematics
I plan to learn a fuckton of abstract mathematics.
I want to learn the fundamental structure of reality. To become good at abstract thinking and formal reasoning. To abstract about abstraction itself.
Studying abstract maths seems like it would be useful.
By "formal reasoning", I'm referring to reasoning within formal systems: logics, mathematical models, computer programs and algorithms, explicit ontologies, other abstractions, etc.
I want to become good at building up formal models of a domain: picking "good" models ("all models are wrong, but some are useful"; maps that better reflect the territory; carving reality at its joints), and navigating those models (making correct inferences; deriving insight, predictions, explanations, and intuitions about the underlying domain; building up knowledge of the domain via the model).
I'd basically try to autodidact my way to the competence of a professional mathematician (e.g., an algebraic abstractologist).
I expect this would make me more proficient at agent foundations work, and some of the other important problems confronting human civilisation (now or later this century).
Computation
I have a bunch of questions that I want to resolve:
I stumbled on the above questions when trying to dissolve: "is consciousness reducible?"
General
I want to learn how the world works. I want to build a rich and coherent world model of human civilisation and of the physical reality we inhabit.
I would like to use that model to figure out how to uplift said civilisation. I'll advocate for said uplifting (ideally through my writing).
Artificial Intelligence
Stuff I'd like to understand on a fundamental level:
I'd use this understanding to assist in the project of safely developing transformative artificial intelligence.
I want to take a stab at agent foundations style approaches to alignment while I'm still young (< 40; do mathematicians really stop making novel contributions after their 40th birthday? I don't know, but I'll try to milk my cognitive youth for all it's worth).
Digital Minds
Stuff I'd like to learn at a fundamental level:
I'd try to use this understanding (coupled with an understanding of computation) to solve technical and practical problems related to digital minds, especially around safety, security, robustness, reliability, and assurance.
I'll probably write whitepapers specifying infrastructure to support human uploads. Depending on how AGI goes, I might get involved in human upload projects.
Eventually, advocate for transitioning to primarily digital substrate.
I expect to switch to working on digital minds after AI safety. Ideally, I'd switch when my marginal impact from AI safety work falls below my marginal impact from digital minds work. This might be because:
Future
The career trajectories I described above are conditional on a normal human lifespan (e.g. I expect to have retired in 50 years). If I was able to attain indefinite life, there's a lot of stuff I'd like to do. But it's still the same basic template of "learn a lot of useful stuff then exploit that knowledge to make intellectual contributions to the most important problems confronting human civilisation".
I covered them in this Twitter thread.
D.
In no particular order:
E.
(Oh wow, this one is pretty extensive. This section took me the longest to complete.)
I'm an immature brat who hasn't lost their childlike wonder and enthusiasm. I cherish the dreams of childhood ambition and reach out towards a brighter world.
I've written a few reflections on different aspects of my person. Some of them that I don't currently disavow:
F.
Currently (the last week or two):
I have a lot I want to learn about, and so I let my muse and mental stamina dictate what I learn about.
Curiosity-driven learning is probably more productive, I guess.
(I also have severe attention challenges, so forcing myself to focus against my muse is probably unproductive.)
G.
I don't have a meatspace social life and haven't had one since I graduated university. I want to change that soon though.
(One of the reasons I was excited about returning to formal education was to have a meatspace social life again. Returning to school did not change this.) I (still) expect I'd enjoy meatspace human contact.