Our Regrants
School of Thinking
This regrant will support a global media outreach project to create high-quality video and social media content about rationalism, longtermism, and Effective Altruism.
Legal Services Planning Grant
This regrant will support six months of research on topics including: how legal services can be effectively provided to the Effective Altruism community; materials to include in a legal services handbook for EA organizations; novel legal questions particular to the EA community that might benefit from further research; and ways to create an effective EA professional network for practicing lawyers.
Manifold Markets
This regrant will support Manifold Markets in building a play-money prediction market platform. The platform is also experimenting with impact certificates and charity prediction markets.
David Xu
Trojan Detection Challenge at NeurIPS 2022
This regrant will support prizes for a trojan detection competition at NeurIPS, which involves identifying whether a deep neural network will suddenly change behavior if certain unknown conditions are met.
Effective Altruism Office Zurich
Akash Wasil
Fiona Pollack
Peter McLaughlin
Dwarkesh Patel
ALERT
This regrant will support the creation of the Active Longtermist Emergency Response Team, an organization designed to respond rapidly to emerging global emergencies such as COVID-19.
EA Critiques and Red Teaming Prize
This regrant will support prize money for a writing contest for critical engagement with theory or work in Effective Altruism. The goal of the contest is to produce thoughtful, action-oriented critiques.
Federation for American Scientists
This regrant will support a researcher and research assistant to work on high-skill immigration and AI policy at FAS for three years.
Ought
This regrant will support Ought’s work building Elicit, a language-model-based research assistant. This work contributes to research on reducing alignment risk by scaling human supervision via process-based systems.
ML Safety Scholars Program
This regrant will fund a summer program for up to 100 students to spend 9 weeks studying machine learning, deep learning, and technical topics in safety.
AntiEntropy
This regrant will support a project to create and house operations-related resources and guidance for EA-aligned organizations.
Everett Smith
Olle Häggström, Chalmers University of Technology
Essay Contest on Existential Risk in US Cost Benefit Analysis
This regrant will support an essay contest on “Accounting for Existential Risks in US Cost-Benefit Analysis,” with the aim of contributing to the revision of OMB Circular A-4, a document that guides US government cost-benefit analysis. The Legal Priorities Project is administering the contest.
MineRL BASALT competition at NeurIPS
This regrant will support a NeurIPS competition applying human feedback in a non-language-model setting, specifically pretrained models in Minecraft. The grant will be administered by the Berkeley Existential Risk Initiative.
QURI
This regrant will support QURI in developing a programming language called "Squiggle" as a tool for probabilistic estimation. The hope is that it will be a useful tool for forecasting and Fermi estimates.
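To illustrate the kind of probabilistic estimation a tool like Squiggle targets (this sketch uses plain Python with Monte Carlo sampling, not Squiggle syntax; the classic piano-tuner question and all interval values are hypothetical examples, not from the grant):

```python
import math
import random

rng = random.Random(42)

def lognormal_90ci(low, high):
    """Sample from a lognormal distribution whose 90% credible
    interval is (low, high) -- a common Fermi-estimate primitive."""
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * 1.645)  # 1.645 = z for 90% CI
    return math.exp(rng.gauss(mu, sigma))

def piano_tuners():
    # Classic Fermi question: how many piano tuners work in a large city?
    population = lognormal_90ci(2e6, 4e6)
    pianos_per_person = lognormal_90ci(0.002, 0.02)
    tunings_per_piano_per_year = lognormal_90ci(0.5, 2)
    tunings_per_tuner_per_year = lognormal_90ci(500, 1500)
    return (population * pianos_per_person * tunings_per_piano_per_year
            / tunings_per_tuner_per_year)

# Propagate uncertainty by sampling the whole estimate many times.
samples = sorted(piano_tuners() for _ in range(10_000))
median = samples[len(samples) // 2]
p5 = samples[int(0.05 * len(samples))]
p95 = samples[int(0.95 * len(samples))]
print(f"median ~ {median:.0f}, 90% interval ~ ({p5:.0f}, {p95:.0f})")
```

The point of a dedicated language is to make this pattern (distributions as first-class values, uncertainty propagated automatically) far more concise than the explicit sampling shown here.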
Andi Peng
CSIS
Aaron Scher
Kris Shrishak
AI Impacts
This regrant will support rerunning the highly cited 2016 survey “When Will AI Exceed Human Performance? Evidence from AI Experts,” along with analysis and publication of the results.
Chinmay Ingalagavi
Apart Research
This regrant will support the creation of an AI safety organization that will build a platform to share AI safety research ideas and educational materials, connect people working on AI safety, and bring new people into the field.
Tereza Flidrova
J. Peter Scoblic
AI Risk Public Materials Competition
Moncef Slaoui
This regrant will fund the writing of Slaoui's memoir, with particular focus on his experience directing Operation Warp Speed.
Artificial Intelligence Summer Residency Program
Public Editor
This regrant will support a project to use a combination of human feedback and machine learning to label misinformation and reasoning errors in popular news articles.
The Good Ancestors Project
This regrant will support the creation of The Good Ancestors Project, an Australian-based organization to host research and community building on topics relevant to making the long-term future go well.
Thomas Kwa
Joshua Greene, Harvard University
Braden Leach
Adversarial Robustness Prizes at ECCV
This regrant will support three prizes for the best papers on adversarial robustness at a workshop at ECCV, the main fall computer vision conference. Winning papers will be selected for greater relevance to long-term threat models than typical adversarial robustness work.
Confido Institute
The Confido Institute is developing Confido, a user-friendly interactive app for making forecasts and communicating beliefs and uncertainty within groups and organizations. It is also building interactive educational programs, based around the app, about forecasting and working with uncertainty.
Supporting Agent Foundations AI safety research at ALTER
This regrant will support 1.5-3 years of salary for a mathematics researcher to work with Vanessa Kosoy on the learning-theoretic AI safety agenda.
Modeling Transformative AI Risks (Aryeh Englander, Sammy Martin, Analytica Consulting)
This regrant will support two AI researchers, one or two additional assistants, and a consulting firm to continue to build out and fully implement the quantitative model for how to understand risks and interventions around AI safety, expanding on their earlier research on “Modeling Transformative AI Risk.”
Impact Markets
This regrant will support the creation of an “impact market.” The hope is to improve charity fundraising by allowing profit-motivated investors to earn returns by investing in charitable projects that are eventually deemed impactful.
AI Alignment Prize on Inverse Scaling
Swift Centre for Applied Forecasting
This regrant will support the creation of the Swift Centre for Applied Forecasting, including salary for a director and a team of expert forecasters. They will forecast trends from Our World in Data charts, as well as other topics related to ensuring the long-term future goes well, with a particular focus on explaining the “why” behind forecast estimates.
Lawrence Newport
Aidan O’Gara
1. All grantees and investees were given an opportunity to review their listing and offer corrections before this list was published. As with our direct grants and investments, we sometimes do not publish grants because the grantee asks us not to or because we believe it would undermine our or the grantee’s work. We also do not necessarily publish all grants that are small, initial, or exploratory.
2. The Future Fund is a project of the FTX Foundation, a philanthropic collective. Grants and donations are made through various entities in our family of organizations, including FTX Philanthropy Inc., a nonprofit entity. Investment profits are reserved for philanthropic purposes.