Our grantmaking process

Evaluation criteria

Our grantmakers have a broad range of views on the specific dangers posed by AI, when certain risks might arise, and the most promising mechanisms for mitigating them. We embrace this diversity in the projects we fund and in our process for choosing projects. That said, we broadly agree on the importance of the following factors:

  • Theory of change: We are most excited about projects and people with a clear theory of change – a concrete vision for how their work can contribute to a safe transition to powerful advanced AI or otherwise reduce AI risk. That said, we are still willing to fund projects we believe are sufficiently positive, even if the case for impact is vaguer or the theory of change less direct.

  • Track record: We consider the past successes and expertise of people and projects, favoring applicants with a track record of valuable prior work, especially in an area related to their grant application. However, we are very open to applicants with no prior experience in AI safety specifically, provided there are other signals indicating that they may do great work in the field.

  • Marginal impact and room for more funding: Often, even successful projects struggle to use additional funding as productively as their earlier funding: a researcher who can do good work with $100K might not be able to productively use millions. Thus, we look at how effectively additional funding would be used at the margin, rather than looking solely at the cost-effectiveness of existing work.

  • Hits-based giving: We’re open to funding work that has a high risk of failing to accomplish its goals if the value upon success could be very large. For instance, a niche policy proposal may be completely ignored, but it may alternatively have outsized impact by influencing national governments. We think it's important to bet on such projects, and we ultimately think this approach leads to a higher-impact portfolio. In this way, our grantmaking approach looks less like typical philanthropy and more like a hedge fund or a venture capital firm that invests in many startups, hoping to profit from the few that succeed.

  • Information value: Relatedly, some grants help us explore new areas, providing insights that could inform future grantmaking decisions by us and other grantmakers in the space. For example, our team was the first group to fund AI safety explainer videos by Robert Miles (under the Long-Term Future Fund), at a time when other funders were skeptical of public, technical communication outside of academic papers and dense blog posts. Our initial grant was based not just on the immediate value of the project, but also on the prospect of learning whether this approach could be impactful. We consider it one of our better grants historically and have continued to fund Robert Miles’s work, which has reached hundreds of thousands of new people with accurate messages on AI risk.

  • Field-wide effects: In addition to the direct impact of our grants, we want to be mindful of the growth and health of the AI safety field more broadly, and to fund projects that help it grow sustainably. We also want to avoid funding individuals who might engage in academic fraud or otherwise create a hostile environment for intellectual inquiry, and to promote high-integrity behavior (academic and otherwise) wherever possible.

  • A careful eye towards dangerous downside risks: Unlike in traditional venture capital, where (from the investor’s perspective) the worst outcome of a failed startup investment is usually a return of zero, bad grants can be actively harmful. Thus, we try to screen out grants with a sufficiently high probability of large downside risks. We are particularly concerned about work that may contribute to significantly faster AI progress, reducing the time remaining to develop and implement risk-mitigation measures. We are also concerned about work that could easily contribute to “safety-washing” (akin to greenwashing) of dangerous AI, potentially leading to complacency about AI risk.

Process

Each grant application we receive is assigned to a Principal Investigator (PI), who assesses its potential benefits, drawbacks, and financial cost. The PI writes up their assessment and scores the application on a scale from -5 to +5.

Subsequently, other fund managers are invited to read the grant application and the PI’s assessment and to add their own scores. The grant is approved if its average score surpasses a funding threshold, which is likely to be set between 2.5 and 3.
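To illustrate this decision rule, here is a minimal Python sketch. The scores, default threshold, and function name are hypothetical examples chosen for clarity; they are not our actual tooling, and the real threshold may differ.

# Illustrative sketch of the approval rule described above. The scores,
# default threshold, and function name are hypothetical, not our actual tooling.
def is_approved(scores: list[float], threshold: float = 2.5) -> bool:
    """Approve a grant if the average of fund managers' scores
    (each on the -5 to +5 scale) surpasses the funding threshold."""
    if not scores:
        return False
    return sum(scores) / len(scores) > threshold

# Example: the PI scores an application 4, and two other fund managers score it 3 and 2.
print(is_approved([4, 3, 2]))  # average 3.0 surpasses a 2.5 threshold, so True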

Approved grants are then sent to additional reviewers, who conduct due diligence on aspects of the grant proposal that fall outside the core expertise of our fund managers (in particular, the nuances of charity law). Almost all approved applications pass these due diligence checks.

We then inform successful applicants of our decision and ask them to work with our grant administrators to sign the relevant paperwork and receive their funding.
