Andaro2
Andaro2 has not written any posts yet.

I'm not sure they mean that. Perhaps it would be better to specify the particular values you want implemented. But then, of course, people will disagree, including the actual humans who are trying to build AGI.
If you donate to AI alignment research, it doesn't mean that you get to decide which values are loaded. Other people will decide that. You will then be forced to eat the end result, whatever it may look like. Your mistaken assumption is that there is such a thing as "human values", which will cause a world that is good for human beings in general. In reality, people have their own values, and they include terms for "stopping other people from having what they want", "making sure my enemies suffer", "making people regret disagreeing with me", and so on.
AI alignment isn't the only problem. Most people's values are sufficiently unaligned with my own that I find solving AI unattractive as a goal. Even if I had a robust lever to push, such as donating to an AI alignment research org or lobbying think tank, and it was actually cost-effective, the end result would still be unaligned (with me) values being loaded. So there are two steps rather than one: first, you have to make sure the people who create AI have values aligned with yours, and then you have to make sure that the AI has values aligned with the people creating it.
Frankly, this is hopeless from my perspective. Just the...
In exceptional circumstances, this might be your wise understanding of their enlightened self-interest even when at cross-purposes to their present desires: e.g. taking your dog to the vet, preventing a suicide.
I just want to point out that your model of their enlightened self-interest can be severely wrong, e.g. some people see suicide as a rational means to avoid fates worse than death (including fates only slightly worse than death, which comprises a lot of ordinary human life that you're not supposed to complain about). This is why I value suicide as an option. And if you give yourself permission to coerce others into giving up this option without their consent, you might be making them worse off according to their own enlightened self-interest while motivating them to hate you at the same time.
It wouldn't really change the overall outcome. What matters most is that the total number of talented people grows exponentially, not what happens to specific individuals.