If someone is altruistic because they've maxed out their own egoistic values (or have gotten to severely diminishing returns), I certainly wouldn't count that against their rationality. But if "egoistic returns" include abstract values that the rest of humanity doesn't necessarily share, "large apparent asymmetry" is unclear to me.
I just meant that it seems to be possible to improve a lot of other people's expected quality of life at the expense of relatively small decreases to one's own (but that people are generally not doing so), and that this seems like it should cause the outcome of a process with moral uncertainty between egoism and altruism to skew more toward the altruist side in some sense, though I don't understand how to deal with moral uncertainty (if anyone else does, I'd be interested in your answers to this). If by "abstract values" you mean something like making the universe as simple as possible by setting all the bits to zero, then I agree there's no asymmetry, but I wouldn't call that "egoistic" as such.
Where did you say that? (I wrote Shut Up and Divide?, which may or may not be relevant depending on what you mean by "the topic".)
Here. Yes, SUAD was a good and relevant contribution.
Why "surely", given that I'm not a random member of humanity, and may have more values in common with a less altruistic candidate than a more altruistic candidate?
You're right that it's not certain that altruism in a FAI team candidate is, all else equal, more desirable. I guess I'm just saying that if it is, then sufficiently large differences in altruism outweigh sufficiently small differences in rationality.
I have written a few more posts that are relevant to the "egoism vs altruism" question:
I guess we don't have more discussions of altruism vs egoism because making progress on the problem is hard. Typical debates about moral philosophy are not very productive, and it's p...
Series: How to Purchase AI Risk Reduction
A key part of SI's strategy for AI risk reduction is to build toward hosting a Friendly AI development team at the Singularity Institute.
I don't take it to be obvious that an SI-hosted FAI team is the correct path toward the endgame of humanity "winning." That is a matter for much strategic research and debate.
Either way, I think that building toward an FAI team is good for AI risk reduction, even if we decide (later) that an SI-hosted FAI team is not the best thing to do. Why is this so?
Building toward an SI-hosted FAI team means: (1) growing SI into a better, more effective organization, and (2) attracting and creating superhero mathematicians committed to AI risk reduction.
Both (1) and (2) are useful for AI risk reduction even if an SI-hosted FAI team turns out not to be the best strategy.
This is because achieving part (1) would make SI more effective at whatever it is doing to reduce AI risk, and achieving part (2) would bring great human resources to the cause of AI risk reduction, which will be useful for a wide range of purposes (FAI team or otherwise).
So, how do we accomplish both these things?
Growing SI into a better organization
Like many (most?) non-profits with less than $1m/yr in funding, SI has had difficulty attracting the top-level executive talent often required to build a highly efficient and effective organization. Luckily, we have made rapid progress on this front in the past 9 months. For example, we now have (1) a comprehensive donor database, (2) a strategic plan, (3) a team of remote contractors used to more efficiently complete large and varied projects requiring many different skill sets, (4) an increasingly "best practices" implementation of central management, (5) an office we actually use to work together on projects, and many other improvements.
What else can SI do to become a tighter, larger, and more effective organization?
The key point, of course, is that all these things cost money. They may be "boring," but they are incredibly important.
Attracting and creating superhero mathematicians
The kind of people we'd need for an FAI team are: (1) extraordinarily good at math, and (2) committed to AI risk reduction.
There are other criteria, too, but those are some of the biggest.
We can attract some of the people meeting these criteria by using the methods described in Reaching young math/compsci talent. The trouble is that the number of people on Earth who qualify may be very close to 0 (especially given the "committed to AI risk reduction" criterion).
Thus, we'll need to create some superhero mathematicians.
Math ability seems to be even more "fixed" than the other criteria, so a (very rough) strategy for creating superhero mathematicians might look like this:
All these steps, too, cost money.