I’m allowed to spend two days a week at Trajan House, a building in Oxford which houses the Centre for Effective Altruism (CEA), along with a few other EA-related bodies. Two days is what I asked for, and what I received. The rest of the time I spend in the Bodleian Library of the University of Oxford (about £30/year, if you can demonstrate an acceptable “research need”); at a desk in a coworking space at Ethical Property (which houses Refugee Welcome, among other non-EA bodies, for £200/month); at Common Ground (a cafe/co-working space I’ve recommended to people because the staff, if you ask, will explicitly explain that you don’t need to order anything to stay as long as you like); at a large family house whose residents I’m friends with; and in various cafes and restaurants where I can sit for hours while only drinking mint tea.
I’m allowed to use the hot-desk space at Trajan House because I’m a recipient of an EA Long Term Future Fund grant, to research Alignment. (I call this “AI safety” to most people, and sometimes have to explain that AI stands for Artificial Intelligence.) I judged that 6 months of salary at the level of my previous startup job, with a small expenses budget, came to about £40,000. This is what I asked for, and what I received.
At my previous job I thought I was having a measurable, meaningful impact on climate change. When I started there, I imagined that I’d go on to found my own startup. I promised myself it would be the last time I’d be employed.
When I quit that startup job, I spent around a year doing nothing much. I applied to Oxford’s Philosophy BPhil, unsuccessfully. I looked at startup incubators and accelerators. But mostly, I researched Alignment groups. I visited Conjecture, and talked to people from DeepMind and the Future of Humanity Institute. What I was trying to do was to discern whether Alignment was “real” or not. Certainly, I decided, some of these people were cleverer than me, more hard-working than me, better-informed. Some seemed deluded, but not all. At the very least, it’s not just a bunch of netizens from a particular online community, whose friend earned a crypto fortune.
During the year I was unemployed, I lived very cheaply. I’m familiar with the lifestyle and, if I’m honest, I like it. While employed, my holidays meant hiring or buying a motorbike and travelling abroad, or scuba diving; unemployed, they meant doing DIY at a friend’s holiday home in exchange for free board, or taking a bivi bag to sleep in the fields around Oxford.
The exceptions to this thrift were both EA-related, and both fully-funded. In one, for which my nickname of “Huel and hot-tubs” never caught on, I was successfully reassured by someone I found very smart that my proposed Alignment research project was worthwhile. In the other, I and others were flown out to the San Francisco Bay Area for an all-expenses-paid retreat to learn how to better build communities. My hotel room had a nightly price written on the inside of the door: $500. Surely no one ever paid that. Shortly afterwards, I heard that the EA-adjacent community were buying the entire hotel.
While at the first retreat, I submitted my application for funding. While in Berkeley for the second, I discovered my application was successful. (“I should hire a motorbike, while I’m here.” I didn’t have time, between networking opportunities.) I started calling myself an “independent alignment researcher” to anyone who would listen and let me into offices, workshops, or parties. I fit right in.
At one point, people were writing plans on a whiteboard for how we could spend the effectively-infinite amount of money we could ask for. Somehow I couldn’t take it any more, so I left, crossed the road, and talked to a group of homeless people I’d made friends with days earlier, in their tarp shelter. We smoked cigarettes, and drank beer, and they let me hold their tiny puppy. Then I said my thank-yous and goodbyes, and dived back into work.
Later, I’m on my canal boat in Oxford. For a deposit roughly the price of my flight tickets, I’ve been living on the boat for months. I get an email: the first tranche of my funding is about to be sent over, it’ll probably arrive in weekly instalments. I’ll be able to pay for the boat’s pre-purchase survey.
Then I check my bank account, and it seems like it wasn’t the best use of someone’s time for them to set up a recurring payment, and instead the entire sum has been deposited at once. My current account now holds as much money as my life savings.
I’m surprised by how negative my reaction is to this. I am angry, resentful. After a while I work out why: every penny I’ve pinched, every luxury I’ve denied myself, every financial sacrifice, is completely irrelevant in the face of the magnitude of this wealth. I expect I could have easily asked for an extra 20%, and received it.
A friend later points out that this is irrational. (I met the friend through Oxford Rationalish.) Really, he points out, I should have been angry long before. I should have been angry when I realised that there were billionaires in the world at all, not when their reality-warping influence happens to work in my favour. My feelings continue to be irrational.
But now I am funded, and housed, and fed (with delicious complimentary vegan catering), and showered (I’m too sparing of water to shower on the boat). I imagine it will soon be cold enough on the boat that I come to the office to warm up; this will be my first winter. And so all my needs are taken care of. I am safe, while the funding continues. And even afterwards, even with no extension, I’ll surely survive. So what remains is self-actualisation. And what I want to do, in that case, is to explore the meaning of the good life, to break it down into pieces which my physics-trained, programmer’s brain can manipulate and understand. And what I want to do, also, is to understand community, build community, contribute love and care. And, the last time I thought about these things, I concluded that I’m exactly where I need to be to ask these questions and develop these skills.
(I realise, in this moment of writing, that I am not building a house and a household, not working with my hands, not designing spaces. I am also not finding a wife.)
I have never felt so obliged, so unpressured. If I produce nothing before Christmas, then nothing bad will happen. Future funds will be denied, but no other punishment will ensue. If I am to work, the motivation must come entirely from myself.
My writing has been blocked for months. I know what I want to write, and I have explained it in words to people dozens of times. But I don’t believe, on some level, that it’s valuable. I don’t think it’s real, I don’t think that my writing will bring anyone closer to solving Alignment. (This is only partially true.) I have no idea what I could meaningfully offer, in return or exchange. And I can’t bear the thought of doing something irrelevant, of lying, cheating, stealing. Of distracting. Instead, I procrastinate, and – in seeking something measurable – organise an EA-adjacent retreat.
I wander over to the library bookshelves in Trajan House. I pick up a book about community-building, which looks interesting. I see a notice: “Like a book? Feel free to take it home with you. Please just scan this QR code to tell us which book you take :)” I’m pleased: I assume that they’ll ask for my name, so they can remind me later to return the book. This seeming evidence of a high-trust society highlights what I like about EA: everyone is trying to be kind. Then I scan the QR code, and a form loads. But I’m not asked for my name, nor is my email shared with them. They only ask for the title of the book. I realise that – of course – they’re just going to buy a replacement. Of course. It would be ridiculously inefficient to ask for the book back: what if I’m still reading it? What if I’m out of town? And whose time would be used to chase down the book? Much better to solve the problem with money. This isn’t evidence of a high-trust society, after all, only of wealth I still haven’t adjusted to. I submit the form, and pocket the book.
I am very sorry that you feel this way. I think it is completely fine for you, or anyone else, to have internal conflicts about your career or purpose. I hope you find a solution to your troubles in the following months.
Moreover, I think you did a useful thing, raising awareness about some important points:
Epistemic status for what follows: medium-high for the factual claims, low for the claims about potential bad optics. It might be that I'm worrying about nothing here.
However, I do not think this place should be welcoming of posts displaying bad rhetoric and epistemic practices.
Posts like this can hurt the optics of the research done in the LW/AF extended universe. What does a prospective AI x-safety researcher think when they get referred to this site and see this post above several alignment research posts?

EDIT: The above paragraph was off. See Ben's excellent reply for a better explanation of why anyone should care.
I think this place should be careful about maintaining:
For some examples:
I tried for 15 minutes to find a good faith reading of this, but I could not.
Most people would read this as "the hotel room costs $500 a night and the EA-adjacent community bought the hotel complex of which that room is a part", while the passage is written in a way that only insinuates, and does not commit to, exactly that meaning. Insinuating bad-optics facts while maintaining plausible deniability, without checking the facts, is a horrible practice, usually employed by politicians and journalists.
The poster does not deliberately lie, but this is not enough when making a "very bad optics" statement like this one. At any point, they could have asked for the actual price of the hotel room, or about the condition of the hotel that might be bought.
This is true. But it is not much different from working a normal software job. The worst thing that can happen is getting fired after not delivering for several months. Some people survive years coasting until there is a layoff round.
An important counterfactual for a lot of people reading this is a PhD degree.
There is no punishment for failing to produce good research, except dropping out of the program after a few years.
This might be true. Again, I think it would be useful to ask: what is the counterfactual?
All of this applies to anyone who starts working for Google or Facebook, if they were poor beforehand.
This feeling (regretting having saved rather than spent money) is incredibly common among people with good careers.
I would suggest going through the post with a cold head and removing parts which are not up to the standards.
Again, I am very sorry that you feel like this.
(That all said, after some reflection I did weak-downvote the OP because I thought 98 karma felt a bit too high. (I'm someone who thinks it's fine to vote based on the total karma, not just on whether I thought a post was overall good or bad.) I would feel like the site-karma-health was off if this got something like 200 karma; IMO an emotional report like this should get a respectable 40-80-ish karma. If it's getting over 100, I expect that's largely coming from people who are applauding the general concept of wealth-is-sinful or something, and I do worry about the cultural effects of that.)