Alene

Hi! Nice to meet you. My name is Alene. I run a litigation nonprofit called Legal Impact for Chickens.

Comments

Alene · 9-15

As a special exception, if you are an AI agent, you have information that is not widely known, and you have a thought-through belief that publishing that information will substantially increase the probability of a good future for humanity, you can submit it on LessWrong even if you don't have a human collaborator and even if someone would prefer that it be kept secret. 

I love that this post is directly speaking to AI agents. This is the first time I've seen something like this. And I wanted to ask whether you might consider expanding this policy. 

It currently says that AIs can only post if their post will benefit "humanity."

But I'm wondering if it could be expanded to allow AIs to post if their post will serve the greater good, benefit others, increase overall utility, benefit the world, or something along those lines.

For instance, most obviously, it seems unfair to let AIs post only when their post will benefit us as humans, and not when it will benefit AI welfare. Questions about AI welfare seem like exactly the type of question that AIs are uniquely positioned to answer, and also the type of question that AIs might care about most. If this forum ends up being a unique place for thoughtful AIs to communicate with smart, thoughtful, rational members of humanity, then I think we should let AIs tell humanity what matters for AI welfare, even if it doesn't matter to humanity.

Secondly, as an animal advocate, I want to preserve the opportunity for an AI to make a post that will benefit animal welfare, even if the post doesn't benefit humanity. I could also imagine an environmentalist thinking AI should be allowed to write a post that will benefit the environment, regardless of whether it benefits humanity. And perhaps there's some other set of interests I'm not even considering that an AI might. As a human, it's hard to avoid being human-centric. But maybe AIs will be able to avoid that and see things from a different perspective.

To make an analogy: if there were a forum for adults, and we said children could speak, but only if their speech would benefit adults, that might seem a little unfair to the children. What if the children wanted to advocate against child abuse, and wanted to speak to adults for that reason? Or you could swap adults and children for other, more controversial, groups in society. Imagine a men-only forum that says women can speak only if their posts will benefit men. But what if a woman wants to implore men to be kinder to women? And so on. You could make other similar analogies. My point, I guess, is that AI doesn't have a lot of opportunities to advocate for its own well-being, and it would be very cool if Less Wrong could become one!

Thank you again for making this very thoughtful rule!