Our Grants, Investments, and Regrants
School of Thinking
This regrant will support a global media outreach project to create high-quality video and social media content about rationalism, longtermism, and Effective Altruism.
Legal Services Planning Grant
This regrant will support six months of research on topics including: how legal services can be effectively provided to the Effective Altruism community; materials to be included in a legal services handbook for EA organizations; novel legal questions particular to the EA community that might benefit from further research; and ways to create an effective EA professional network for practicing lawyers.
Manifold Markets
This regrant will support Manifold Markets in building a play-money prediction market platform. The platform is also experimenting with impact certificates and charity prediction markets.
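As a rough illustration of how a play-money prediction market can price a binary question, here is a minimal sketch using a simple constant-product market maker. This is an illustrative assumption for exposition only, not Manifold's actual pricing mechanism; the class name, liquidity figures, and trade sizes are all hypothetical.

```python
class BinaryMarket:
    """Toy constant-product market maker (CPMM) for a YES/NO question.

    Illustrative sketch only: real platforms use more elaborate mechanisms.
    """

    def __init__(self, liquidity=100.0):
        # Equal pools of YES and NO shares imply a 50% starting probability.
        self.yes = liquidity
        self.no = liquidity

    @property
    def prob(self):
        # Implied probability of YES: the NO pool's share of total liquidity.
        # (A large YES pool means YES shares are cheap, i.e. low probability.)
        return self.no / (self.yes + self.no)

    def buy_yes(self, amount):
        """Spend play-money `amount` on YES; return the shares received."""
        k = self.yes * self.no       # invariant before the trade
        self.no += amount            # the payment is added to the NO pool
        new_yes = k / self.no        # shrink the YES pool to keep yes * no == k
        shares = self.yes + amount - new_yes
        self.yes = new_yes
        return shares


m = BinaryMarket()
m.buy_yes(50)
print(round(m.prob, 2))  # → 0.69; buying YES pushes the price above 0.5
```

Traders who buy the side that later resolves true redeem their shares at face value, so accurate forecasters accumulate play money and gain market influence.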
David Xu
Trojan Detection Challenge at NeurIPS 2022
This regrant will support prizes for a trojan detection competition at NeurIPS, which involves detecting whether a deep neural network contains a hidden "trojan" that will suddenly change its behavior if certain unknown conditions are met.
Effective Altruism Office Zurich
Akash Wasil
Fiona Pollack
Peter McLaughlin
Dwarkesh Patel
ALERT
This regrant will support the creation of the Active Longtermist Emergency Response Team, an organization to rapidly respond to emerging global crises like Covid-19.
EA Critiques and Red Teaming Prize
This regrant will support prize money for a writing contest that critically engages with the theory or work of Effective Altruism. The goal of the contest is to produce thoughtful, action-oriented critiques.
Federation for American Scientists
This regrant will support a researcher and research assistant to work on high-skilled immigration and AI policy at FAS for three years.
Ought
This regrant will support Ought’s work building Elicit, a language-model-based research assistant. This work contributes to research on reducing alignment risk by scaling human supervision via process-based systems.
ML Safety Scholars Program
This regrant will fund a summer program for up to 100 students to spend 9 weeks studying machine learning, deep learning, and technical topics in safety.
AntiEntropy
This regrant will support a project to create and house operations-related resources and guidance for EA-aligned organizations.
Everett Smith
Olle Häggström, Chalmers University of Technology
Essay Contest on Existential Risk in US Cost Benefit Analysis
This regrant will support an essay contest on “Accounting for Existential Risks in US Cost-Benefit Analysis,” with the aim of contributing to the revision of OMB Circular A-4, a document which guides US government cost-benefit analysis. The Legal Priorities Project is administering the contest.
MineRL BASALT competition at NeurIPS
This regrant will support a NeurIPS competition applying human feedback in a non-language-model setting, specifically pretrained models in Minecraft. The grant will be administered by the Berkeley Existential Risk Initiative.
QURI
This regrant will support QURI in developing a programming language called “Squiggle” as a tool for probabilistic estimation. The hope is that this will be a useful tool for forecasting and Fermi estimates.
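Squiggle itself is a domain-specific language; as a rough illustration of the kind of Fermi estimate such a tool supports, here is a Monte Carlo sketch in Python. The classic piano-tuner question, the 90% credible intervals, and all quantities below are made-up assumptions for illustration.

```python
import math
import random


def lognormal_90ci(low, high, rng):
    """Sample a lognormal whose 90% credible interval is (low, high)."""
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * 1.645)  # z-score for 90% CI
    return rng.lognormvariate(mu, sigma)


def estimate(n=100_000, seed=0):
    """Median of n Monte Carlo samples of a toy Fermi estimate."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        # Hypothetical estimate: full-time piano tuners in a mid-sized city.
        population = lognormal_90ci(8e5, 1.2e6, rng)
        pianos_per_person = lognormal_90ci(0.002, 0.02, rng)
        tunings_per_piano_per_year = lognormal_90ci(0.5, 2, rng)
        tunings_per_tuner_per_year = lognormal_90ci(500, 1500, rng)
        samples.append(population * pianos_per_person
                       * tunings_per_piano_per_year / tunings_per_tuner_per_year)
    samples.sort()
    return samples[len(samples) // 2]  # median estimate


print(f"median tuners: {estimate():.0f}")
```

Propagating full distributions rather than point estimates is what distinguishes this style of estimation: the output is a distribution whose spread reflects the compounded uncertainty of each input.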
Andi Peng
CSIS
Aaron Scher
Kris Shrishak
AI Impacts
This regrant will support rerunning the highly cited 2016 survey “When Will AI Exceed Human Performance? Evidence from AI Experts,” along with analysis and publication of the results.
Chinmay Ingalagavi
Apart Research
This regrant will support the creation of an AI Safety organization which will create a platform to share AI safety research ideas and educational materials, connect people working on AI safety, and bring new people into the field.
Tereza Flidrova
J. Peter Scoblic
AI Risk Public Materials Competition
Moncef Slaoui
This regrant will fund the writing of Slaoui's memoir, with particular focus on his experience directing Operation Warp Speed.
Artificial Intelligence Summer Residency Program
Public Editor
This regrant will support a project to use a combination of human feedback and machine learning to label misinformation and reasoning errors in popular news articles.
The Good Ancestors Project
This regrant will support the creation of The Good Ancestors Project, an Australia-based organization to host research and community building on topics relevant to making the long-term future go well.
Thomas Kwa
Joshua Greene, Harvard University
Braden Leach
Adversarial Robustness Prizes at ECCV
This regrant will support three prizes for the best papers on adversarial robustness research at a workshop at ECCV, the main fall computer vision conference. Winning papers will be selected for greater relevance to long-term threat models than is typical of adversarial robustness research.
Confido Institute
The Confido Institute is working on developing a user-friendly interactive app, Confido, for making forecasts and communicating beliefs and uncertainty within groups and organizations. They are also building interactive educational programs about forecasting and working with uncertainty based around this app.
Supporting Agent Foundations AI safety research at ALTER
This regrant will support 1.5-3 years of salary for a mathematics researcher to work with Vanessa Kosoy on the learning-theoretic AI safety agenda.
Modeling Transformative AI Risks (Aryeh Englander, Sammy Martin, Analytica Consulting)
This regrant will support two AI researchers, one or two additional assistants, and a consulting firm to continue building out and fully implementing a quantitative model of AI safety risks and interventions, expanding on their earlier research on “Modeling Transformative AI Risk.”
Impact Markets
This regrant will support the creation of an “impact market.” The hope is to improve charity fundraising by allowing profit-motivated investors to earn returns by investing in charitable projects that are eventually deemed impactful.
AI Alignment Prize on Inverse Scaling
Swift Centre for Applied Forecasting
This regrant will support the creation of the Swift Centre for Applied Forecasting, including salary for a director and a team of expert forecasters. They will forecast trends from Our World in Data charts, as well as other topics related to ensuring the long-term future goes well, with a particular focus on explaining the “why” behind forecast estimates.
Lawrence Newport
Aidan O’Gara
Legal Priorities Project
We recommended a grant to support the Legal Priorities Project’s ongoing research and outreach activities. This will allow LPP to pay two new hires and to put on a summer institute for non-US law students in Oxford.
Oded Galor, Brown University
We recommended a grant to support two years of academic research on long-term economic growth.
The Atlas Fellowship
We recommended a grant to support scholarships for talented and promising high school students to use toward educational opportunities and enrollment in a summer program.
Sherlock Biosciences
We recommended an investment to support the development of universal CRISPR-based diagnostics, including paper-based diagnostics that can be used in developing-country settings without electricity.
Rethink Priorities
SecureBio
We recommended a grant to support the hiring of several key staff for Dr. Kevin Esvelt’s pandemic prevention work. SecureBio is working to implement universal DNA synthesis screening, build a reliable early warning system, and coordinate the development of improved personal protective equipment and its delivery to essential workers when needed.
Lionel Levine, Cornell University
We recommended a grant to Cornell University to support Prof. Levine, as well as students and collaborators, to work on alignment theory research at the Cornell math department.
Claudia Shi, Academic CS Research at Columbia University
We recommended a grant to pay for research assistants over three years to support the work of a PhD student working on AI safety at Columbia University.
Institute for Progress
We recommended a grant to support the Institute’s research and policy engagement work on high-skilled immigration, biosecurity, and pandemic prevention.
Good Judgment Project
Peter Hrosso, Researcher
We recommended a grant to support a project aimed at training large language models to represent the probability distribution over answers to prediction market questions.
Michael Jacob, MITRE
We recommended a grant to support research that we hope will help strengthen the Biological Weapons Convention and guide proactive actions to better secure facilities conducting dangerous work, or to stop that work altogether.
Charity Entrepreneurship
Michael Robkin
Legal Priorities Project
This grant will support one year of operating expenses and salaries at the Legal Priorities Project, a longtermist legal research and field-building organization.
AI Safety Camp
Anca Dragan, UC Berkeley
Association for Long Term Existence and Resilience
We recommended a grant to support ALTER, an academic research and advocacy organization, which hopes to investigate, demonstrate, and foster useful ways to improve the future in the short term, and to safeguard and improve the long-term trajectory of humanity. The organization's initial focus is on building bridges to academia, via conferences and grants, to find researchers who can focus on AI safety, and on policy for reducing biorisk.
Manifold Markets
We recommended a grant to support Manifold Markets in building a charity prediction market, as an experiment for enabling effective forecasters to direct altruistic donations.
Guoliang (Greg) Liu, Virginia Tech
Stimson South Asia Program
Prometheus Science Bowl
We recommended a grant to support a competition on Eliciting Latent Knowledge, an open problem in AI alignment, for talented high school and college students participating in Prometheus Science Bowl.
Maxwell Tabarrok
HelixNano
We recommended an investment to support Helix Nano running preclinical and Phase 1 trials of a pan-variant Covid-19 vaccine.
Giving What We Can
We recommended a grant to support Giving What We Can’s mission to create a world in which giving effectively and significantly is a cultural norm.
Gabriel Recchia, University of Cambridge
We recommended a grant to support research on how to fine-tune GPT-3 models to identify flaws in other fine-tuned language models' arguments for the correctness of their outputs, and to test whether these critiques help nonexpert humans successfully judge such arguments.
Simon Institute for Longterm Governance
We recommended a grant to support SI’s policy work with the United Nations system on the prevention of existential risks to humanity.
Centre for Effective Altruism
We recommended a grant for general support for their activities, including running conferences, supporting student groups, and maintaining online resources.
Nonlinear
Konstantinos Konstantinidis
Apollo Academic Surveys
We recommended a grant to support Apollo’s work aggregating the views of academic experts in many different fields and making them freely available online.
AI Safety Support
Daniel Brown, University of Utah
Khalil Lab at Boston University
We recommended a grant to support the development of a cheap, scalable, and decentralized platform for the rapid generation of disease-neutralizing therapeutic antibodies.
Sergey Levine, UC Berkeley
Non-trivial Pursuits
We recommended a grant to support outreach to help students learn about career options, develop their skills, and plan their careers to work on the world’s most pressing problems.
Rational Animations
Justin Mares, Biotech Researcher
We recommended a grant to support research on the feasibility of inactivating viruses via electromagnetic radiation.
Lightcone Infrastructure
We recommended a grant to support Lightcone’s ongoing projects including running the LessWrong forum, hosting conferences and events, and maintaining an office space for Effective Altruist organizations.
Confirm Solutions
High Impact Athletes
We recommended a grant to support HIA’s work encouraging professional athletes to donate more of their earnings to high impact charities and causes, and to promote a culture of giving among their fans.
High Impact Professionals
Berkeley Existential Risk Initiative
We recommended a grant to support BERI in hiring a second core operations employee to contribute to BERI’s work supporting university research groups.