Gordon Seidoh Worley

I'm writing a book about epistemology. It's about The Problem of the Criterion, why it's important, and what it has to tell us about how we approach knowing the truth.

I've also written a lot about AI safety. Some of the more interesting stuff can be found at the site of my currently-dormant AI safety org, PAISRI.

Sequences

Advice to My Younger Self
Fundamental Uncertainty: A Book
Zen and Rationality
Filk
Formal Alignment
Map and Territory Cross-Posts
Phenomenological AI Alignment

Comments

If someone has gone so far as to buy supplements, they have already done far more to engineer their nutrition than the vegans I've known who struggle with nutrition.

I generally avoid alts for myself, and one of the benefits I see is that I feel the weight of what I'm about to post.

Maybe I would sometimes write funnier, snarkier things on Twitter that would get more likes, but because my name is attached I'm forced to reconsider. Is this actually mean? Do I really believe this? Does this joke go too far?

Strange to say perhaps, but I think not having alts makes me a better person, in the sense of being better at being the type of person I want to be, because I can't hide behind anonymity.

Thanks for writing this up. This is something I think a lot of people are struggling with, and will continue to struggle with as AI advances.

I do have worries about AI, mostly that it will be unaligned with human interests and we'll build systems that squash us like bugs because they don't care if we live or die. But I have no worries about AI taking away our purpose.

The desire to feel like one has a purpose is a very human characteristic. I'm not sure any other animals share our motivation to have a motivation. In fact, past humans seem to have had less of this, too, if reports from extant hunter-gatherer tribes are anything to go by. But we feel like we're not enough if we don't have a purpose to serve. Like our lives aren't worth living if we don't have a reason to be.

Maybe this was a historically adaptive fear. If you lived in a small band or a pre-industrial society, every person carried a real cost just by existing. Societies lived up against the Malthusian limit, with no capacity to feed extra mouths. You either contributed to society, or you got cast out, because everyone was in survival mode, and surviving is what we had to do to get here.

But AI could make it so that literally no one has to work ever again. If we get it right, perhaps we will no longer need to serve any purpose to ensure our continued survival. Is that a problem? I don't think it has to be!

Our minds and cultures are built around the idea that everyone needs to contribute. People internalize this need, and one way it can come out is as feeling like life is not worth living without purpose.

But you do have a purpose, and it's the same one all living things share: to exist. It is enough to simply be in the world. Everything else is contingent on what it takes to keep existing.

If AI makes it so that no one has to work, that most of us are out of jobs, that we don't even need to contribute to setting our own direction, that need not be bad. It could go badly, yes, but it could also be freeing: we'd get to be as we wish, rather than as we must.

I speak from experience. I had a hard time seeing that simply being is enough. I've also met a lot of people with this same difficulty; it's what draws them to places like the Zen center where I practice. And everyone is always surprised to discover, sometimes after many years of meditation, that there was never anything that needed to be done to be worthy of this life. If we can eliminate the need to do things to get to keep living this life, so that no one need lose it to accident or illness or confusion or anything else, then all the better.

I want to push back a little in that I was fully vegan for a few years with no negative side effects, other than sometimes being hungry because there was nothing I would eat and annoying my friends with requests to accommodate my dietary preferences. I even put on muscle and cut a lot of fat from my body!

I strongly suspect, based on experience with lots of other vegans, that vegans who struggle with nutritional deficiencies are bad at making good choices about macronutrients.

Broadly speaking, the challenge in a vegan diet is getting enough lysine. Almost every other nutrient you need is found in abundance, but lysine is tricky because humans mostly get that amino acid from meat. Getting enough isn't hard if you know what to eat, but you have to eat it in sufficient volume to avoid problems.

What does it take to get enough lysine? Beans, lots of beans! If you're vegan and not eating beans you are probably lysine deficient and need to eat more beans. How many beans? Way more than you think. Beans have lots of fiber and aren't nutrient-dense like meat.

I met lots of vegans who didn't eat enough beans. They'd eat mushrooms, but not enough, and lots of other protein sources, but not ones rich in lysine. They'd just eat a random assortment of vegan things without thinking hard about whether they were eating the right things. That strategy works if you eat a standard diet that our culture has evolved to be relatively complete, but not if you eat a constructed diet like modern vegans do.

Now, I have met a few people whose individual variation makes it hard for them to eat vegan and stay healthy. In fact, I'm now one of them: I developed some post-COVID food sensitivities that forced me to go vegetarian, and then to start eating meat when that wasn't enough. And some people seem to process protein differently in a way that's weird to me, but they insist that if they don't eat some meat every 4 hours or so they feel like crap.

So I'm not saying there aren't some people who do need to eat meat, for whom reducing the amount is the best they can safely do. But I am saying that I think a lot of vegans screw up not because they don't eat meat but because they don't think seriously enough about whether they're getting enough lysine every day.

What would it mean for this advice to not generalize? Like what cases are you thinking of where what someone needs to do to be more present isn't some version of resolving automatic predictions of bad outcomes?

I ask because this feels like a place where disagreeing with the broad form of the claim suggests you disagree with the model of what it means to be present rather than that you disagree with the operationalization of the theory, which is something that might not generalize.

I think you still have it wrong, because being present isn't a skill. It's more like an anti-skill: you have to stop doing all the stuff you're doing that keeps you from just being.

There is, instead, a different skill needed to make progress towards being present. It's a compound skill: noticing what you do out of habit rather than in response to present conditions, figuring out why you have those habits, practicing not engaging in those habits when you otherwise would, and thereby developing trust that you can safely drop them, retraining yourself to do less out of habit and get closer to just being and responding.

I can't think of a time where such false negatives were a real problem. False positives, in this case, are much more costly, even if the only cost is reputation.

If you never promise anything, that could be a problem. Same if you make promises but no one believes them. Being able to make commitments is sometimes really useful, so you need to at least keep alive the ability to make and hit commitments so you can use them when needed.

As AI continues to accelerate, the central advice in this post, to be at peace with doom, will become increasingly important for helping people stay sane in a world where it may seem like there is no hope. But there is hope so long as we keep working to avert doom, even if it's not clear how, because we've only truly lost when we stop fighting.

I'd really like to see more follow-up on the ideas in this post. Our drive to care is arguably why we're willing to cooperate, and making AI that cares the same way we do is a potentially viable path to AI aligned with human values, but I've not seen anyone take it up. Regardless, I think this is an important idea and folks should look at it more closely.

This post makes an easy-to-digest and compelling case for getting serious about giving up flaws. Many people build their identity around various flaws, and having a post that crisply makes the case that doing so is net bad is helpful to point people at when you see them suffering in this way.
