Grants

This Fund launched in December 2023 and hasn’t made any grants yet. We hope to start making grants in early 2024. Read about the team behind this fund here.

Past grants

Below are AI safety grants our grantmaking team has recommended under the Long-Term Future Fund. As we launch the ARM Fund, we plan to take a similar approach, focused solely on mitigating catastrophic risks from AI.

| Grantee | Institution | Category | Purpose | Amount |
| --- | --- | --- | --- | --- |
| David Krueger | University of Cambridge | Building research capacity | Start-up funds for computing resources for a deep learning and AI alignment research group at the University of Cambridge | $200,000 |
| Noemi Dreksler | Centre for the Governance of AI | Policy | Two-year funding to conduct public and expert surveys on AI governance and forecasting | $231,608 |
| Caleb Withers | Various | Policy | Exploratory research on machine learning, China's attitude towards AI, and cybersecurity | $15,000 |
| Marius Hobbhahn | International Max Planck Research School for Intelligent Systems | Technical research | 9-month stipend for independent AI safety research in parallel with their PhD | $30,103 |
| Alan Chan | Mila | Building research capacity | 4-month stipend for a research visit to collaborate with academics in Cambridge on evaluating non-myopia in language models and RLHF systems | $12,321 |
| Viktor Warlop and Oliver Zhang | ML Alignment & Theory Scholars | Building research capacity | Funding to run the second iteration of SERI MATS, an alignment research seminar and mentorship program | $300,000 |
| Alexander Turner | Oregon State University | Technical research | Year-long stipend for research into shard theory and mechanistic interpretability in reinforcement learning | $220,000 |
| Sage Bergerson | Independent | Policy | 5-month part-time stipend to collaborate on a research paper analyzing the implications of compute access, with Epoch, FutureTech (MIT CSAIL), and GovAI | $2,500 |
| Morgan Simpson | Independent | Policy | Research on AI safety infrastructure and legal instruments to contain technical knowledge | $31,600 |
| Jessica Rumbelow | Leap Laboratories | Technical research | Seed funding for a new AI interpretability research organization | $195,000 |
| Arthur Conmy | Independent | Technical research | 6 months of funding to work with Neel Nanda on mechanistic interpretability research | $52,000 |
| Akbir Khan | University College London | Technical research | Compute for empirical work on AI Safety Via Debate | $55,000 |