Schmidt Futures commits $125 million to ensure AI’s positive impact

Schmidt Futures has announced a five-year, $125 million commitment in support of research into issues around artificial intelligence “that are critical to get right for society to benefit from AI.”

The AI2050 initiative will support researchers from around the globe in undertaking ambitious multidisciplinary work that is critical yet typically difficult to fund. Conceived and co-chaired by Eric Schmidt and James Manyika, the initiative has developed a working list of the “hard problems” to be addressed collaboratively. Themes include “develop more capable and more general AI that is safe and earns public trust”; “leverage AI to address humanity’s greatest challenges and deliver positive benefits for all”; and “develop, deploy, use, and compete for AI responsibly.”

“Our motivating question for AI2050 has been this,” said Manyika, a senior partner emeritus at McKinsey & Company and chair and director emeritus of the McKinsey Global Institute, who has served as an unpaid senior advisor to Schmidt Futures since 2019. “It’s 2050. AI has turned out to be hugely beneficial to society. What happened? What are the most important problems we solved and the opportunities and possibilities we realized to ensure this outcome? We hope that this list of problems and opportunities in AI will drive discovery and crowd talent into the field today—when we can still shape how AI will affect society.”

AI2050 also announced its inaugural cohort of AI2050 Fellows, which includes Erik Brynjolfsson, professor at Stanford University and director of the Stanford Digital Economy Lab; Percy Liang, associate professor of computer science and director of the Center for Research on Foundation Models at Stanford University; Daniela Rus, professor of electrical engineering and computer science and director of the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology; Stuart Russell, professor of computer science and director of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley; and John Tasioulas, professor of ethics and legal philosophy and director of the Institute for Ethics in AI at the University of Oxford. A second cohort of AI2050 Fellows will be announced later this year.

“As we chart a path forward to a future with AI, we need to prepare for the unintended consequences that might come along with doing so,” said Schmidt. “In the early days of the internet and social media, no one thought these platforms would be used to disrupt elections or to shape every aspect of our lives, opinions, and actions. Lessons like these make it even more urgent to be prepared moving forward. Artificial intelligence can be a massive force for good in society, but now is the time to ensure that the AI we build has human interests at its core.”
