I highly doubt most people reading this are "around 2-4 sigmas above the mean", if that's even a meaningful concept.

The choice between earning to give and direct work is definitely nontrivial though: there are many precedents of useful work done by "average" individuals, even in mathematics.

But I do get the feeling that MIRI thinks the marginal value of hiring random expensive people would be negative, which seems consistent with how other groups trying to solve hard problems approach things.
E.g. I don't see Tesla paying billions to famous mathematicians/smart people to "solve self-driving".

Edit: Yudkowsky answered here: https://www.lesswrong.com/posts/34Gkqus9vusXRevR8/late-2021-miri-conversations-ama-discussion?commentId=9K2ioAJGDRfRuDDCs. Apparently I was wrong: it's because you can't just pay top people to work on problems that don't interest them.

They would need to compete with lots of other projects working on AI Alignment.
But yes, I fundamentally agree: if there were a project that convincingly had a >1% chance of solving AI Alignment, it seems very likely it would be able to raise ~$1M/year (maybe even ~$10M?),

not just sit on piles of cash because it would be "weird" to pay a Fields medalist $500k a year.

They literally paid Kmett $400k/year for years to work on an approach to explainable AI in Haskell.

I think people in this thread vastly overestimate how much money MIRI has (they have ~$10M; see the 990s and the donations page https://intelligence.org/topcontributors/), and underestimate how much top people would cost.
I think the top 1% of earners in the US all make >$500k/year? If not the top 1%, then maybe the top 0.5%?


Even Kmett (who is famous in the Haskell community, but is no Terence Tao) is almost certainly making way more than $500k now.