That is certainly a broad remit, but the grants so far are quite safe and conservative:
To coincide with the launch of AI2050, the initiative has also announced an inaugural cohort of AI2050 Fellows, who collectively showcase the range of research that will be critical toward answering our motivating question. This inaugural cohort includes Erik Brynjolfsson, Professor at Stanford and Director of the Stanford Digital Economy Lab; Percy Liang, Associate Professor of Computer Science and Director of the Center for Research on Foundation Models at Stanford University; Daniela Rus, Professor of Electrical Engineering and Computer Science and Director of the Computer Science and AI Laboratory at MIT; Stuart Russell, Professor of Computer Science and Director of the Center for Human-Compatible Artificial Intelligence at UC Berkeley; and John Tasioulas, Professor of Ethics and Legal Philosophy and Director of the Institute for Ethics in AI at the University of Oxford.
Some of the problems these fellows are working on: Percy Liang is studying and improving massive "foundation models" for AI; Daniela Rus is developing and studying brain-inspired algorithms called liquid neural networks; and Stuart Russell is studying probabilistic programming with the goal of improving AI's interpretability, provable safety, and performance.
Also, note that Percy Liang's Stanford Center for Research on Foundation Models seems to have a strong focus on potential risks as well as potential benefits. At least, that was my impression from their inaugural paper and from many of the talks at the associated workshop last year.
It appears that this initiative will advance capabilities as well. I'm glad to see that at least some of the funds look likely to go to safety researchers, but it's unclear whether the net result will be positive or negative.
This is a linkpost for https://www.schmidtfutures.com/schmidt-futures-launches-ai2050-to-protect-our-human-future-in-the-age-of-artificial-intelligence/
I am posting this here as it may be of interest to some members.
I was particularly interested to see the following items listed in their Hard Problems Working List:
...
...
...