What's your evidence that you're a marginal IMO medalist?
I only ask because I've noticed that my perception of a person's actual ability and my perception of their ego seem to be negatively correlated among the people I've met, including Less Wrong users. For example, I once met a guy at a party who told me he wasn't much of a coder; next semester he left undergrad to be the CTO of a highly technical Y Combinator startup.
This is part of the reason I'm a little skeptical of SI telling people "send us an e-mail if you did well on the Putnam"--I would guess a large fraction of those who did well on the Putnam think they did well by pure luck. (Impostor syndrome.) SI might be better off trying to collect info on everyone who thinks they might want to work on FAI, no matter how untalented, and judging relative competence for themselves instead of letting FAI contributor wannabes judge themselves. (Or at least specifying a score above which one should definitely make contact, regardless of how lucky one feels one got.)
Less Wrong post on mathematicians and status:
http://lesswrong.com/lw/2vb/vanity_and_ambition_in_mathematics/
IAWYC, and so does Wikipedia:
One of the main effects of illusory superiority in IQ is the Downing effect. This describes the tendency of people with a below average IQ to overestimate their IQ, and of people with an above average IQ to underestimate their IQ.
(I personally am a very good example of this, because although I think I'm not terribly bright, I am in fact a genius.)
Series: How to Purchase AI Risk Reduction
A key part of SI's strategy for AI risk reduction is to build toward hosting a Friendly AI development team at the Singularity Institute.
I don't take it to be obvious that an SI-hosted FAI team is the correct path toward the endgame of humanity "winning." That is a matter for much strategic research and debate.
Either way, I think that building toward an FAI team is good for AI risk reduction, even if we decide (later) that an SI-hosted FAI team is not the best thing to do. Why is this so?
Building toward an SI-hosted FAI team means: (1) growing SI into a tighter, larger, and more effective organization, and (2) attracting and creating superhero mathematicians committed to AI risk reduction.
Both (1) and (2) are useful for AI risk reduction even if an SI-hosted FAI team turns out not to be the best strategy.
This is because: Achieving part (1) would make SI more effective at whatever it is doing to reduce AI risk, and achieving part (2) would bring great human resources to the cause of AI risk reduction, which will be useful to a wide range of purposes (FAI team or otherwise).
So, how do we accomplish both these things?
Growing SI into a better organization
Like many (most?) non-profits with less than $1M/year in funding, SI has had difficulty attracting the top-level executive talent often required to build a highly efficient and effective organization. Luckily, we have made rapid progress on this front in the past 9 months. For example, we now have (1) a comprehensive donor database, (2) a strategic plan, (3) a team of remote contractors used to more efficiently complete large and varied projects requiring many different skillsets, (4) an increasingly "best practices" implementation of central management, (5) an office we actually use to work together on projects, and many other improvements.
What else can SI do to become a tighter, larger, and more effective organization?
The key point, of course, is that all these things cost money. They may be "boring," but they are incredibly important.
Attracting and creating superhero mathematicians
The kind of people we'd need for an FAI team are, among other things, (1) extraordinarily talented in math and computer science, and (2) committed to AI risk reduction.
There are other criteria, too, but those are some of the biggest.
We can attract some of the people meeting these criteria by using the methods described in Reaching young math/compsci talent. The trouble is that the number of people on Earth who qualify may be very close to 0 (especially given the "committed to AI risk reduction" criterion).
Thus, we'll need to create some superhero mathematicians.
Math ability seems to be even more "fixed" than the other criteria, so a (very rough) strategy for creating superhero mathematicians might look like this:
All these steps, too, cost money.