Let's think of examples of groups of ten thousand genius-level people working together towards a common, narrowly defined goal.
Wikipedia claims that the LHC “was built in collaboration with over 10,000 scientists and engineers from over 100 countries, as well as hundreds of universities and laboratories”. I doubt they were all von Neumann-level, and I imagine most of them weren’t working exclusively on the LHC. And no matter how nice scientists and engineers are, the group probably didn’t cooperate as well as the one Wei proposed. (Although diversity probably does count for something.)
Other groups of similar size I can think of are NASA, IBM, Google and Microsoft. (Though, like the LHC, I don’t think they hire only von Neumann-level geniuses. Many multinational companies probably exceed the size requirement but fall even further short of the genius requirement.) But they don’t quite work in a single direction: NASA has many missions, and Google and Microsoft have many products.
That said, I wouldn’t object strongly to calling such groups weakly superintelligent. Building something like the LHC or the Apollo program in ten years is so vastly beyond the ability of a single person that I don’t quite classify an entity that can do it as a “human-level intelligence”, even though it is assembled from humans.
(Also, I could see a group like this building an MSI-2, though it’d take more than ten years if starting now.)
The "working in a single direction" part seems hard: are you so single-minded? I know I'm not.
I'm skeptical about trying to build FAI, but not about trying to influence the Singularity in a positive direction. Some people may be skeptical even of the latter because they don't think the possibility of an intelligence explosion is a very likely one. I suggest that even if intelligence explosion turns out to be impossible, we can still reach a positive Singularity by building what I'll call "modest superintelligences", that is, superintelligent entities, capable of taking over the universe and preventing existential risks and Malthusian outcomes, whose construction does not require fast recursive self-improvement or other questionable assumptions about the nature of intelligence. This helps to establish a lower bound on the benefits of an organization that aims to strategically influence the outcome of the Singularity.
(To recall what the actual von Neumann, who we might call MSI-0, accomplished, open his Wikipedia page and scroll through the "known for" sidebar.)
Building an MSI-1 seems to require a total cost on the order of $100 billion (assuming $10 million for each clone), which is comparable to the Apollo project, and about 0.25% of the annual Gross World Product. (For further comparison, note that Apple has a market capitalization of $561 billion, and annual profit of $25 billion.) In exchange for that cost, any nation that undertakes the project has a reasonable chance of obtaining an insurmountable lead in whatever technologies end up driving the Singularity, and with that a large measure of control over its outcome. If no better strategic options come along, lobbying a government to build MSI-1 and/or influencing its design and aims seems to be the least that a Singularitarian organization could do.
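For the curious, the cost estimate above can be checked with a quick back-of-the-envelope calculation (the ~$40 trillion Gross World Product figure is not stated in the text; it is the value implied by the quoted 0.25%, and is roughly consistent with early-2010s estimates):

\[
10{,}000 \text{ clones} \times \$10\text{ million each} = \$100\text{ billion}
\]
\[
\frac{\$100\text{ billion}}{\$40\text{ trillion GWP}} = 0.25\%
\]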