I hope you've smiled today :)
I really want to experience and learn about as much of the world as I can, and I pride myself on working to become a sort of modern-day renaissance man, a bridge-builder between very different people, if you will. Some not-commonly-seen-in-the-same-person things: I've slaughtered pigs on my family farm and become a vegan, done HVAC (manual labor) work and academic research, and been a member of both the Republican and Democratic clubs at my university.
Discovering EA has been one of the best things to happen to me in my life. I think I likely share something really important with all the people who consider themselves under this umbrella. EA can be a question, sure, but more than that, I hope EA can be a community, one that really works toward making the world a little better than it was.
Below are some random interests of mine. I'm happy to connect over any of them, or over anything EA; please feel free to book any open time on my Calendly.
How you can help me: I've done some RA work in AI policy now, so I'd be eager to continue that in a more permanent position (or at least a longer funded period), and any help bettering myself (e.g. how can I do research better?) or finding a position like that would be much appreciated. Otherwise, I'm on the lookout for good opportunities in the EA community building or general longtermism research space, so again any help upskilling or breaking into those spaces would be wonderful.
Of much lower importance: I'm still not sure which cause area I'd like to go into, so if you have any information on the following, especially regarding a career in it, I'd love to hear about it: general longtermism research, EA community building, nuclear, AI governance, and mental health.
How I can help others: I don't have domain expertise by any means, but I have thought a good bit about AI policy and its next best steps, which I'd be happy to share (e.g. how bad is the risk from AI misinformation, really?). Beyond EA-related things, I have deep knowledge of philosophy, psychology, and meditation, and can potentially help with questions generally related to these disciplines. I would say the best thing I can offer is a strong desire to dive deeper into EA, preferably with others who are also interested. I can also offer my experience with personal cause prioritization and help others on that journey (as well as connect with those trying to find work).
When you think this through, do you seriously not believe that one candidate will be better than the other? Doesn't your worldview lead you to see one as even slightly better?
Mmm, okay, I'm a bit confused by the thrust of the first bit. Is it that you wish to set yourself apart from my view because you see it unavoidably leading to untenable positions (like self-extinguishing)?
Jumping to the rest of it, I liked how you put the latter option for the positioning of the shepherd. But I'm not sure the feeling-out of the "shepherd impulse" has yet brought out the full sort of appreciation I think is important.
But I think you're right to point to a general libertarian viewpoint as a crux here, because I'm relatively willing to reason through what's good and bad for the community and work toward designing a world more in line with that vision, even if it's more choice-constrained.
But yeah, the society is a good example to help us figure out where to draw that line. It most immediately makes me wonder: is there anything so bad that you'd want to restrict people from doing it, even if they entered into it voluntarily? Is creating lives such a key good to you that most forms of life would be worth it just for existing?
To answer your last question, it's the latter: a world where synthetic alternatives and work on ecological stability yield a possible future for predators who no longer must kill to survive. It would certainly mean far fewer cows and chickens exist, but my own conclusions from the above questions lead me to think this would be a better world.
Thanks for the continued dialogue, happy to jump back in :)
I think it's very reasonable to take a "what would they consent to" perspective, and I do think this sort of setup would likely lead you to a world where humane executions and lives off the factory farm were approved of. But I'd turn back to my original point: this sort of arrangement seems apt to encourage a relation to the animal that is inherently unstable and will undermine a caring relationship with it.
Perhaps I just have a dash too much of deontology in me, but if you asked me to choose between a world where many people had kids but ate them in the end, and a world of significantly fewer kids but no such chowing down at the end of their lives, I'd be apt to choose the latter. But deontology isn't exactly the right frame, because again, I think this will just naturally encourage relationships that aren't whole, relationships where you have to do the complicated emotional gymnastics of saying you love an animal like they're your friend one day, then chopping their head from their body the next and savoring the flavor of the flesh on the grill.
Maybe my view of love is limited, but I also think nearly every example you could give me of people who've viewed animals as "sacred or people" but still ate them likely involved a deficient relationship with the animal. Take goats and the Islamic faith, for example. Goats aren't fully in the "sacred" category, like cows for Hindus, but they have come to play a ritual role in various celebrations of the religion, and when I've talked to Muslims about the reasons for this treatment, or for things being halal, they will normally point out that this is a more humane relation to have with the animal. The meat being "clean" is supposed to imply, to some degree, "moral," but I don't think this relation is quite there. I've seen throat cuttings at Eid in which younger members of the family were brought into the fold by serving as axeman, often taking multiple strikes to sever the head, a manner of slaughter that seems quite far from caring. One friend of mine grew up in India with his family raising a number of goats for this occasion, and he often saw the children loving the goats and giving them names. But on Eid this would stop, and I think what the tradition left my friend with is a far friendlier view of meat consumption than he would have developed otherwise.
My last stab at a response might be an analogy to slavery. I take the equivalent of your position here to be: "Look, if each slave can look at the potential life he will hold and prefer that life to no life at all, then isn't that better than his not existing?" And to me it seems I'd again be called to say "no." We can create the life of a slave, we can create the life of a cow we plan to eat in the end, but I'd rather just call off the suffering altogether and refuse to create beings who will be shackled to such a life. It's not a perfect analogy, but I hope it illustrates that we can deny the category entirely, and that that denial can open us up to a better future: one without slaves who prefer their lives to not existing, but with fellow citizens; one without farmed animals who prefer their lives to not existing, but with pets we happily welcome into our families. That is the sort of world I hope for.
Garrett responded to the main thrust well, but I will say that watermarking synthetic media seems like a fairly good next step for combating misinformation from AI, imo. It's certainly widely applicable (I'm not even sure what the thrust of this distinction was), because it is meant to apply to nearly all synthetic content. Why exactly do you think it won't be helpful?
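For intuition on how this can work for text, here's a minimal toy sketch in the spirit of a keyed "green-list" watermark. To be clear, this is my own illustration, not any particular deployed scheme: the function names and the 50/50 split are assumptions. The idea is that the generator biases its sampling toward a pseudo-random, key-derived subset of the vocabulary, and a detector holding the key checks whether tokens land in that subset more often than chance:

```python
import hashlib

def green_list(prev_token: str, vocab: list[str], key: str) -> set[str]:
    """Derive a keyed, pseudo-random ~half of the vocabulary from the previous token."""
    chosen = set()
    for tok in vocab:
        digest = hashlib.sha256(f"{key}:{prev_token}:{tok}".encode()).digest()
        if digest[0] % 2 == 0:  # roughly 50% of tokens count as "green"
            chosen.add(tok)
    return chosen

def green_fraction(tokens: list[str], vocab: list[str], key: str) -> float:
    """Fraction of tokens in the keyed green list; ~0.5 for unwatermarked text."""
    hits = sum(tok in green_list(prev, vocab, key) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

# A fraction well above 0.5 over a long passage suggests the text came from a
# generator that biased its sampling toward the keyed green list.
```

The point relevant to the policy question is just that detection only needs a statistical signal plus the key, which is part of why the approach can apply across nearly all synthetic content; the usual caveats (paraphrasing, open-weight models that skip the biasing step) are where I'd expect the real debate.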
Yeah, I think the reference class for me here is other things the executive branch might have done, which leads me to "wow, this was way more than I expected".
It's worth noting that they at least try to address deception by including it in the full readout. The types of models they hope to regulate here include those that permit "the evasion of human control or oversight through means of deception or obfuscation". The director of the OMB also has to come up with tests and safeguards for "discriminatory, misleading, inflammatory, unsafe, or deceptive outputs".
(k) The term “dual-use foundation model” means an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by:
(i) substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons;
(ii) enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or
(iii) permitting the evasion of human control or oversight through means of deception or obfuscation.
Models meet this definition even if they are provided to end users with technical safeguards that attempt to prevent users from taking advantage of the relevant unsafe capabilities.
Hmm, I get that people value succinctness a lot with these sorts of things, since there's so much AI information to take in now, so I'm not so sure about the net effect. But I'm wondering if I could get at your concern here by mocking up a percentage (e.g. what percentage of the proposals were risk-oriented vs. progress-oriented)?
It wouldn't tell you what kind of stuff the Biden administration is pushing, but it would tell you the ratio, which seems to be what you're most concerned with.
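Concretely, the mock-up I have in mind is just a labeled tally. A quick sketch below, with entirely made-up provision names and labels standing in for the real list:

```python
from collections import Counter

# Hypothetical labels for illustration only -- not the EO's actual provisions.
provisions = [
    ("safety test reporting", "risk"),
    ("watermarking guidance", "risk"),
    ("AI talent immigration", "progress"),
    ("federal AI hiring push", "progress"),
    ("CBRN screening standards", "risk"),
]

counts = Counter(label for _, label in provisions)
total = sum(counts.values())
for label, n in counts.items():
    print(f"{label}-oriented: {n}/{total} ({n / total:.0%})")
# e.g. "risk-oriented: 3/5 (60%)", "progress-oriented: 2/5 (40%)"
```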
[Edit] this is included now
What alternative would you propose? I don't really like "mundane risk" either, but I agree an alternative would be better. For now I'll just change it to "non-existential risk actions".
This post didn't do well in the games of LessWrong karma, but it was probably the most personally fruitful use of my time on the site in 2023. It helped me clarify views I had already formed but hadn't yet put to paper or made properly coherent.
I also got to think about the movement as a whole, and I really enjoyed some of what Elizabeth had to share. In particular, I remember her commentary on the lack of positivity in the movement; I've taken that to heart and thought hard about how I can add more positivity myself.