Our Regrants

You can use this database to see information about a wide selection of the grants and investments recommended by our regrantors.1

Our hope is to empower a range of interesting, ambitious, and altruistic people to drive funding decisions through a rewarding, low-friction process. Thus far they have recommended grants and investments totaling over $30 million.2

We’re currently running a six-month test of this model. We gave over 100 people access to a discretionary budget, and another group of over 60 access to a streamlined grant recommendation form but no discretionary budget. Read more about our regranting program here.

The specific projects or organizations supported by these grants are not necessarily endorsed by the Future Fund. For regrants from discretionary pots, we screen for downsides, community effects, conflicts of interest, and similar concerns, but otherwise give the regrantors broad autonomy.

We are only publishing regrants above $25k. We had 87 regrants below $25k, totaling ~$1M. Many of these were stipends for summer research work, support to attend conferences, funding for talent development, or funding for writing or other media on impactful topics.


Last updated: June 2022

May 2022

School of Thinking

This regrant will support a global media outreach project to create high-quality video and social media content about rationalism, longtermism, and Effective Altruism.


$250,000
May 2022

Legal Services Planning Grant

This regrant will support six months of research on topics including how legal services can be effectively provided to the Effective Altruism community, materials to be included in a legal services handbook for EA organizations, novel legal questions particular to the EA community that might benefit from further research initiatives, and ways to create an effective EA professional network for practicing lawyers.

$100,000
March 2022

Manifold Markets

This regrant will support Manifold Markets in building a play-money prediction market platform. The platform is also experimenting with impact certificates and charity prediction markets.

$1,000,000
manifold.markets
March 2022

David Xu

This regrant will support six months of research on AI safety.
$50,000
May 2022

Trojan Detection Challenge at NeurIPS 2022

This regrant will support prizes for a trojan detection competition at NeurIPS, which involves identifying whether a deep neural network will suddenly change behavior if certain unknown conditions are met.

$50,000
May 2022

Effective Altruism Office Zurich

This regrant will support renting and furnishing an office space for a year.
$52,000
March 2022

Akash Wasil

This regrant will support an individual working to help students focus their careers on the world’s most pressing problems.
$26,000
April 2022

Fiona Pollack

This regrant will support six months of salary for an individual working to support Harvard students interested in working on the world’s most pressing problems and protecting and improving the long term future.
$30,000
April 2022

Peter McLaughlin

This regrant will support six months of research on criticisms of effective altruism.
$46,000
April 2022

Dwarkesh Patel

This regrant will support a promising podcaster to hire a research assistant and editor, purchase equipment, and cover travel to meet guests in person. The podcast covers technological progress, existential risk, economic growth, and the long term future.
$76,000
May 2022

ALERT

This regrant will support the creation of the Active Longtermist Emergency Response Team (ALERT), an organization to respond rapidly to emerging global crises like COVID-19.

$150,000
forum.effectivealtruism.org
May 2022

EA Critiques and Red Teaming Prize

This regrant will support prize money for a writing contest for critically engaging with theory or work in Effective Altruism. The goal of the contest is to produce thoughtful, action-oriented critiques.

$100,000
forum.effectivealtruism.org
May 2022

Federation for American Scientists

This regrant will support a researcher and research assistant to work on high-skill immigration and AI policy at FAS for three years.

$1,000,000
fas.org
May 2022

Ought

This regrant will support Ought’s work building Elicit, a language-model based research assistant. This work contributes to research on reducing alignment risk through scaling human supervision via process-based systems.

$5,000,000
April 2022

ML Safety Scholars Program

This regrant will fund a summer program for up to 100 students to spend 9 weeks studying machine learning, deep learning, and technical topics in safety.

$490,000
course.mlsafety.org
March 2022

AntiEntropy

This regrant will support a project to create and house operations-related resources and guidance for EA-aligned organizations.

$120,000
resourceportal.antientropy.org
May 2022

Everett Smith

This regrant will support a policy retreat on governing artificial intelligence.
$35,000
May 2022

Olle Häggström, Chalmers University of Technology

This regrant will support research on statistical arguments relating to existential risk and work on risks from artificial intelligence, as well as outreach, supervision, and policy work on these topics.
$380,000
May 2022

Essay Contest on Existential Risk in US Cost Benefit Analysis

This regrant will support an essay contest on “Accounting for Existential Risks in US Cost-Benefit Analysis,” with the aim of contributing to the revision of OMB Circular A-4, a document which guides US government cost-benefit analysis. The Legal Priorities Project is administering the contest.

$137,500
legalpriorities.org
May 2022

MineRL BASALT competition at NeurIPS

This regrant will support a NeurIPS competition applying human feedback in a non-language-model setting, specifically pretrained models in Minecraft. The grant will be administered by the Berkeley Existential Risk Initiative.

$155,000
minerl.io
May 2022

QURI

This regrant will support QURI in developing a programming language called "Squiggle" as a tool for probabilistic estimation. The hope is that this will be a useful tool for forecasting and Fermi estimates.

$200,000
squiggle-language.com
May 2022

Andi Peng

This regrant will support four months of salary and compute for research on AI alignment.
$42,600
May 2022

CSIS

This regrant will support initiatives including a CSIS public event focused on the importance of investments in human capital to ensure US national security; roundtables with policymakers, immigration experts, national security professionals, and company representatives to discuss key policy actions that should be taken to bolster US national security through immigration reform; and two episodes of the “Vying for Talent” podcast focusing on the importance of foreign talent in bolstering America’s innovative capacity.
$75,000
May 2022

Aaron Scher

This regrant will support a summer of research on AI alignment in Berkeley.
$28,500
April 2022

Kris Shrishak

This regrant will support research on how cryptography might be applied to AI safety research.
$28,000
June 2022

AI Impacts

This regrant will support rerunning the highly cited 2016 survey “When Will AI Exceed Human Performance? Evidence from AI Experts,” along with analysis and publication of the results.

$250,000
May 2022

Chinmay Ingalagavi

This regrant will support a master’s degree at LSE for a talented STEM student.
$50,000
May 2022

Apart Research

This regrant will support the creation of an AI Safety organization which will create a platform to share AI safety research ideas and educational materials, connect people working on AI safety, and bring new people into the field.

$95,000
apartresearch.com aisafetyideas.com
May 2022

Tereza Flidrova

This regrant will support a one-year master’s program in architecture for a student interested in building civilizational shelters.
$32,000
May 2022

J. Peter Scoblic

This regrant will fund a nuclear risk expert to construct nuclear war-related forecasting questions and provide forecasts and explanations on key nuclear war questions.
$25,000
April 2022

AI Risk Public Materials Competition

This regrant will support two competitions to produce better public materials on the existential risk from AI.
$40,000
May 2022

Moncef Slaoui

This regrant will fund the writing of Slaoui's memoir, in particular covering his experience directing Operation Warp Speed.

$150,000
May 2022

Artificial Intelligence Summer Residency Program

This regrant will support a six-week summer residency in Berkeley on AI safety.
$60,000
March 2022

Public Editor

This regrant will support a project to use a combination of human feedback and machine learning to label misinformation and reasoning errors in popular news articles.

$500,000
publiceditor.io
May 2022

The Good Ancestors Project

This regrant will support the creation of The Good Ancestors Project, an Australia-based organization to host research and community building on topics relevant to making the long term future go well.

$75,000
goodancestorsproject.org.au
April 2022

Thomas Kwa

This regrant will support three months of research on AI safety.
$37,500
March 2022

Joshua Greene, Harvard University

This regrant will support the real-world testing and roll-out of 'Red Brain, Blue Brain', an online quiz designed to reduce negative partisanship between Democrats and Republicans in the US.
$250,000
April 2022

Braden Leach

This regrant supported a recent law school graduate to work on biosecurity. Braden will research and write at the Johns Hopkins Center for Health Security.
$175,000
April 2022

Adversarial Robustness Prizes at ECCV

This regrant will support three prizes for the best papers on adversarial robustness research at a workshop at ECCV, the main fall computer vision conference. Papers will be selected for having greater relevance to long-term threat models than typical adversarial robustness work.

$30,000
May 2022

Confido Institute

The Confido Institute is working on developing a user-friendly interactive app, Confido, for making forecasts and communicating beliefs and uncertainty within groups and organizations. They are also building interactive educational programs about forecasting and working with uncertainty based around this app.

$190,000
confido.tools
April 2022

Supporting Agent Foundations AI safety research at ALTER

This regrant will support 1.5-3 years of salary for a mathematics researcher to work with Vanessa Kosoy on the learning-theoretic AI safety agenda.

$200,000
lesswrong.com
May 2022

Modeling Transformative AI Risks (Aryeh Englander, Sammy Martin, Analytica Consulting)

This regrant will support two AI researchers, one or two additional assistants, and a consulting firm to continue to build out and fully implement the quantitative model for how to understand risks and interventions around AI safety, expanding on their earlier research on “Modeling Transformative AI Risk.”

$272,000
alignmentforum.org
March 2022

Impact Markets

This regrant will support the creation of an “impact market.” The hope is to improve charity fundraising by allowing profit-motivated investors to earn returns by investing in charitable projects that are eventually deemed impactful.

$215,000
impactmarkets.io
May 2022

AI Alignment Prize on Inverse Scaling

This regrant will support prizes for a contest to find tasks where larger language models do worse (“inverse scaling”).
$250,000
March 2022

Swift Centre for Applied Forecasting

This regrant will support the creation of the Swift Centre for Applied Forecasting, including salary for a director and a team of expert forecasters. They will forecast trends from Our World in Data charts, as well as other topics related to ensuring the long term future goes well, with a particular focus on explaining the “why” of forecast estimates.

$2,000,000
swiftcentre.org
March 2022

Lawrence Newport

This regrant will support the launch and first year of a YouTube channel featuring video essays presented by Dr. Lawrence Newport on longtermism, the future of humanity, and related topics.
$95,000
May 2022

Aidan O’Gara

This regrant will fund salary, compute, and a scholarship for an undergraduate student doing career development and research on language model safety.
$46,000

1. All grantees and investees were given an opportunity to review their listing and offer corrections before this list was published. As with our direct grants and investments, we sometimes do not publish grants because the grantee asks us not to or because we believe it would undermine our or the grantee’s work. We also do not necessarily publish all grants that are small, initial, or exploratory.

2. The Future Fund is a project of the FTX Foundation, a philanthropic collective. Grants and donations are made through various entities in our family of organizations, including FTX Philanthropy Inc., a nonprofit entity. Investment profits are reserved for philanthropic purposes.