ChristianKl comments on Earning money with/for work in AI safety - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Consider your ranking of preferences more carefully, and expand your horizons quite a bit. There are lots of ways to improve the predicted future state of humanity that are less direct, but possibly more effective, than this particular topic.
That's sweet of you. I'm glad.
That's a pretty big jump. I'll grant that human existential risk is important, but why is working directly on it your best contribution? Perhaps you'd do a lot more good with a slight reduction in shipping costs or tiny improvements in the safety or enjoyment of some consumer product. In the likely case that your marginal contribution to x-risk doesn't save the world, a small improvement for a large number of people does massively more good.
Regardless of whether you focus on x-risk or something else valuable, the fact that you won't consider leaving Kagoshima is an indication that you aren't as fully committed as you claim. IMO, that's ok: we all have personal desires that we put ahead of the rest of the world. But you should acknowledge it and include it in your calculations.
Perhaps you would also do more good by working on a slight increase in shipping costs.
Quite. Whatever you consider an improvement to be. Just don't completely discount small, likely improvements in favor of large (existential) unlikely ones.