Project Ideas

There are lots of projects we’d love to see launched within our Areas of Interest. This page is a longlist of ideas.

If you want to launch one of these projects, please apply for funding! And if you don’t have a ready-to-go project proposal but are excited by what you see here, please submit an expression of interest.

We want to fund ambitious projects, even if they’re fairly small. But we’re particularly interested in funding massively scalable projects.

Please don’t feel limited to the specific projects we describe—we want to fund whatever projects will do the most to positively impact the long term, and that will include many projects not on this list.

AI alignment prizes

Artificial Intelligence

What if you could make lots of money by reducing existential risk, just as easily as you can make lots of money by increasing clicks on internet ads? We want to experiment with this vision.

In particular, we’d love to see well-designed prizes for solving open problems in AI alignment. Some examples of projects that we would consider prize-worthy include: reverse engineering 100% of a neural network the size of GoogLeNet; solving Redwood Research’s current project; achieving near-perfect robustness on adversarial examples in vision without lowering average performance; solving the Unrestricted Advex challenge; training a version of GPT-3 that performs equally well but only very rarely outputs sentences that humans label as false; a convincing demonstration of an AI deliberately deceiving its human overseers despite not being explicitly trained to do so; and excellent work related to Eliciting Latent Knowledge (such as a general solution to its basic puzzle).

We think it would be worth paying tens of millions of dollars—or more—to teams that solve these problems. We’d consider running this operation in-house, or funding someone else to do it. We’re especially interested to hear great ideas for prize-worthy targets.

AI-based cognitive aids

Artificial Intelligence, Values and Reflective Processes

One of the great hopes for advanced AI systems is that they might enhance human reason—allowing people to explore lines of argument more carefully and efficiently, and to detect important errors in their reasoning. We’d like to kickstart this work, so that it keeps up with AI progress as much as possible. For example, could a fine-tuned version of GPT-3 be trained to identify misleading statements, and provide the best arguments and counterarguments for different views? We’d love to see products with this sort of technology that people will actually want to use.

AI ethics

Artificial Intelligence, Values and Reflective Processes

Advanced AI will pose novel quandaries. Could AIs have consciousness, and does that make them deserving of legal protection? What should law and global governance do about autonomous weapons? We think analytic philosophers, economists, and people from the effective altruism community could make strong contributions in this area, and we expect that building up capacity, expertise, and reputation in AI ethics could become important in the long run.

We’d be particularly interested in approaches that integrate with the existing AI ethics landscape, discussing, for example, fairness and transparency in current ML systems alongside risks from misaligned superintelligence. We’d be excited to see new textbooks in this area and/or integration with major AI labs.

High-quality human data for AI alignment 

Artificial Intelligence

Many proposals for aligning advanced AI require high-quality human data on complex tasks such as evaluating whether an argument is valid, breaking down a difficult question into easier subquestions, or examining the output of interpretability tools. Data collected from humans is also used in current alignment research, e.g., a project by Redwood Research.

Some alignment research teams currently manage their own contractors because existing services (such as scale.ai) don’t fully address their needs. We therefore might like to see an organization collecting and selling alignment-relevant human data that prioritizes data quality over cost minimization, is able to meet complex customer needs, and optimizes its operations to save the client researchers as much time as possible.

Such an organization could also build capacities that might be required at ‘crunch time’ – i.e., when safety-critical AI capability advances are imminent. This might include rapidly producing large amounts of human data or checking a large number of outputs from interpretability tools with very high reliability.

(This project idea is based on submissions by Marc-Everin Carauleanu and Beth Barnes to our Project Ideas Competition, with contributions by Jonas Vollmer.)

Biological weapons shelters

Biorisk and Recovery from Catastrophe

One thing that would be helpful to protect against biorisk would be shelters that are optimized to defend against worst-case WMD attacks. See here for an explanation of the idea. 

Infrastructure to recover after catastrophes

Biorisk and Recovery from Catastrophe

We want to ensure that humanity is in a position to recover from worst-case catastrophes. For example, we’d like to make sure that humanity has reliable access to the tools, resources, skills and knowledge necessary to rebuild industrial civilization if there were a global nuclear war or a worst-case global pandemic. 

We’d be especially keen to see “civilizational recovery drills”: attempts to rebuild key industrial technology with only the tools and knowledge available to survivors.

Early detection center

Biorisk and Recovery from Catastrophe

By the time we find out about novel pathogens, they’ve already spread far and wide, as we saw with Covid-19. Earlier detection would increase the amount of time we have to respond to biothreats. Moreover, existing systems are almost exclusively focused on known pathogens—we could do a lot better by creating pathogen-agnostic systems that can detect unknown pathogens. We’d like to see a system that collects samples from wastewater or travelers, for example, and then performs a full metagenomic scan for anything that could be dangerous.

Better PPE

Biorisk and Recovery from Catastrophe

We’d love to see personal protective equipment (PPE) that’s abundant, super easy to use, reusable, comfortable enough that you barely notice it, and designed to withstand the most extreme events. Current PPE tends to be a pain, especially the small minority of PPE designed for the most extreme use cases (e.g. BSL4 suits or military-grade PPE). Ultimately, we hope to have a massive stockpile of easy-to-use PPE, to protect societal functioning even in the face of worst-case attacks. We are keen to experiment with different approaches to funding the development and production of this PPE, such as advance purchase commitments.

One concrete commercial goal would be to produce a suit designed to allow severely immunocompromised people to lead relatively normal lives, at a cost low enough that the US government could acquire 100 million units for the Strategic National Stockpile. Another goal would be for the suit to simultaneously meet military-grade specifications, e.g. protecting against a direct hit of anthrax. (Credit to this post by Andrew Snyder-Beattie and Ethan Alley for this idea, and for a bunch of the other biorisk project ideas on this page.)

Pathogen sterilization technology

Biorisk and Recovery from Catastrophe

Convenient, practical, and cheap pathogen sterilization technology is an underexplored strategy for biodefense. In particular, pathogen sterilization techniques that rely on physical principles (e.g. ionizing radiation) or broadly antiseptic properties (e.g. hydrogen peroxide, bleach), rather than molecular details (e.g. gram-negative antibiotics), could be broadly applicable, difficult for attackers to circumvent, and have little dual-use downside potential.

Strengthening the Bioweapons Convention

Biorisk and Recovery from Catastrophe

The Biological Weapons Convention (BWC) is staffed by just four people and lacks any form of verification. We’re interested in creative ways of making it more difficult to get away with violations of the treaty—like whistleblowing prizes, for example, or teams of people who scour open sources (publication records, job specs, equipment supply chains). We’re also interested in exploring more fundamental reforms to the international governance of bioweapons.

Crisis-aware regulation for global catastrophic biological risks

Biorisk and Recovery from Catastrophe

Preparedness for global catastrophic biological risks may require a flexible regulatory approach: public health measures that would have a poor risk-benefit balance in normal times may be warranted or even critical in a crisis in which the benefits of eradicating or delaying a public health threat are unusually large.

This was exemplified by emergency use authorizations for vaccines and non-pharmaceutical interventions such as mask mandates during the COVID-19 pandemic. Yet the pandemic response also highlighted instances of insufficiently flexible regulation, such as when the declaration of a public health emergency led to a severe shortage of COVID-19 tests in the US in early 2020.

To improve our capacity to respond to future pandemics, we’d be interested in supporting work that aims to research, develop, or advocate for regulatory approaches that better accommodate the particular needs of crisis scenarios.

(This project idea is based on a submission by Mackenzie Arnold to our Project Ideas Competition, with contributions by Kyle Fish.)

Rapid development and approval of emergency vaccination and therapeutics 

Biorisk and Recovery from Catastrophe

Imagine a team of high-powered vaccine developers, large-scale manufacturing capability on standby, and ready-to-go infrastructure for conducting rapid human challenge trials in a regulatorily compliant manner. For any new pathogen, this team would be able to develop, test, and produce hundreds of millions of vaccine doses within weeks or months. We are very excited about that prospect. More generally, we are interested in projects that build and massively scale our civilization’s capacity to deploy medical countermeasures against biological threats.

Demonstrate the ability to rapidly scale food production in the case of nuclear winter

Biorisk and Recovery from Catastrophe

In addition to quickly killing hundreds of millions of people, a nuclear war could cause a nuclear winter that stunts agricultural production by blocking sunlight for years. We’re interested in funding demonstration projects that are part of an end-to-end operational plan for scaling backup food production and feeding the world in the event of such a catastrophe. Thanks to Dave Denkenberger and ALLFED for inspiring this idea.

Talent search

Economic Growth, Values and Reflective Processes, Empowering Exceptional People, Effective Altruism

We’re keen on finding and empowering the very most talented people in the world, especially those born into poverty in low-income countries. Imagine a program that finds outstandingly gifted adolescents, wherever they may be, and then offers them full scholarships to attend accelerated academic programs with a hybrid of high school and college coursework. By age seventeen, these youths might be pushing forward the frontiers of their fields—thereby addressing the most important problems for the future of humanity.

Innovative educational experiments

Economic Growth, Values and Reflective Processes, Empowering Exceptional People, Effective Altruism

We’re interested in bold experiments with new educational institutions, including new summer courses, schools, or colleges especially targeted at the most talented youths. We think there are a lot of ways existing institutions could be better: exceptionally able students could skip high school, one-on-one tutoring could be offered more readily, teacher compensation could be increased in order to recruit the very best teachers, and educational curricula could be redesigned to pay more attention to the most important problems and most useful tools for reasoning.

A new university or institute

Values and Reflective Processes, Research That Can Help Us Improve, Epistemic Institutions, Empowering Exceptional People

Much of academia has become too incremental, overly bureaucratic, and preoccupied with status, rather than focused on the most important problems. We’d love to see people try new approaches.

In particular, the research we most want to see often doesn’t fit well into academia, because it’s too messy or too interdisciplinary, or isn’t likely to publish well in existing journals. For example, we doubt we’d get great answers from current institutions if we asked them this question: “Is all that stuff about the most important century basically right or not? Please explain without appealing to authority.” 

We’d be excited to fund new academic institutes, or a wholly new university. We’d also be excited about trying to attract the most exceptional talent with new incentive strategies, for example by paying salaries competitive with tech and finance jobs.

Fellowships to work on pressing problems

Economic Growth, Values and Reflective Processes, Empowering Exceptional People, Effective Altruism

We’re interested in projects that give people with outstanding skills and initiative—whether bright-eyed twenty-year-olds, seasoned executives, or professors at the top of their fields—the time and financial freedom to work on especially pressing problems. We’d love for them to be able to step back from their ordinary professional careers and throw themselves into learning, writing, and launching entrepreneurial projects that will secure the future of human civilization.

Infrastructure to support independent researchers

Epistemic Institutions, Empowering Exceptional People, Effective Altruism, Research That Can Help Us Improve

Independent researchers have made valuable contributions to our areas of interest while receiving little supervision and without being affiliated with an academic institution or other research organization. Examples include early work on AI alignment by Paul Christiano, research relevant to priority-setting by Carl Shulman, and a comparative analysis of primate versus bird brains by Tegan McCaslin.

We believe that there is continued potential for independent research since work at many institutions can suffer from poor incentives (e.g. ‘publish or perish’), and organizations focused on our areas of interest can only absorb a fraction of the available research talent because of management bottlenecks.

We therefore think it’s valuable to identify and remove challenges faced by independent researchers, such as difficulties accessing paywalled publications, lack of mentorship, and limited opportunities for personal development. This could both enable valuable research and increase the demographic diversity and geographical inclusivity of the research communities working in our areas of interest.

We would be excited to see infrastructure for independent researchers, such as a scalable mentor-mentee matching process or a virtual research organization.

(This project idea is based on a submission by gavintaylor to our Project Ideas Competition.)

Advocacy for US high-skill immigration

Economic Growth, Empowering Exceptional People

We think there’s room for a bipartisan consensus that high-skill immigrants benefit the United States and the global economy. We’d be excited to see new think tanks, grassroots campaigns, and other approaches to forging this consensus. And we’d love to see creative approaches, such as executive action that widens the criteria for the O-1 visa, to enable more high-skilled immigration to the US.

Population decline

Economic Growth

Population decline could lead to economic and technological stagnation. So we’re interested in hearing proposals for how to reckon with this challenge. 

Institutional experimentation

Values and Reflective Processes

We think the best ideas are more likely to win out if there is more experimentation. For example, we’d be interested to see new political jurisdictions that try different experiments in governance.

Alternative voting systems

Values and Reflective Processes

We’re excited about civic advocacy for alternative voting systems, like approval voting.

A constitution for the future

Values and Reflective Processes, Space Governance

We’d love to see workshops or a mock constitutional convention where sharp people think hard about how to structure international governance institutions for the long-term future, or how to govern space settlement. As explained in more detail on our areas of interest page, we believe that the onset of space settlement could be a watershed moment in human history. We want people to start thinking about how it should work.

Prediction markets

Epistemic Institutions

We’re excited about new prediction market platforms that can acquire regulatory approval and widespread usage. We’re especially keen if these platforms include key questions relevant to our priority areas, such as questions about the future trajectory of AI development.

Forecasting Our World in Data

Epistemic Institutions

We’d love to see a project that takes one hundred of the most important charts in Our World in Data (we think the Technological Progress charts would be especially interesting) and employs superforecasters to forecast how each chart’s trends will evolve over the next one, three, ten, thirty, and one hundred years. Ideally, the output would be well-presented and easily understandable, and display probability distributions for each year.

Forecasting that will affect important decisions

Epistemic Institutions

We think a key challenge for making forecasting organizations better is ensuring that the questions asked are interesting and important. We’d be especially excited about forecasting projects that have a great plan for ensuring that the questions asked are of significant interest to influential and altruistic actors, potentially including thoughtful government officials and large funders in the EA ecosystem.

More generally, we’re interested in a “superforecasting institute.” Few jobs are more important than rigorously forecasting the future, but currently it’s hard to do that job full-time. We want to allow excellent forecasters to make superforecasting their career. And we want to explore creating prizes and fellowships that will optimally incentivize outstanding forecasting work.

Expert polling for everything

Epistemic Institutions

We think it would be great if it were easy to know the distribution of opinion from top experts on the questions within their expertise that are (a) most important and (b) the most common focal points of public debate. Model examples we like are the IGM Economic Experts Panel, and this survey by Grace et al. 2017. We’d love to see someone create and maintain panels like this spanning a variety of fields (including economics, philosophy, computer science, physics, biology, and history), and continually ask these panels important and interesting questions of this type. In the world where this project succeeds, “do the experts really believe X?” would no longer be the crux of any serious argument. Perhaps one could generate a sustainable business model where customers who want to know what experts really think pay to generate this data.

Epistemic appeals process

Epistemic Institutions

We wonder if it would be possible to create a for-hire epistemic appeals process that is widely known for its impartiality, transparency, and reliability, and use it to deliver trusted and trustworthy verdicts on a wide set of consequential questions, such as “How likely is an 8°C global mean temperature increase to cause human extinction?” or “Will the total economic losses from phosphorus shortages by 2050 exceed $1 trillion?”

Cost-benefit analysis for everything

Epistemic Institutions, Effective Altruism

We’d be interested to see comprehensive and standardized cost-benefit analysis of all major categories of government spending and philanthropy, judged from an impartial perspective. We would be excited to fund an organization that does this kind of analysis for all programs that the federal government spends over 1% of its annual expenditures on, clearly presents the results, and then advocates for scaling spending up or down until marginal cost per unit benefit equalizes across programs.

Policy evaluation and forecasting

Epistemic Institutions

We’re interested in creative experiments with forecasting and policy evaluation. For example, we’re quite interested in the following idea: 

  • Run periodic surveys with retrospective evaluations of policy. For example, each year we pick some policy decisions from ten, twenty, or thirty years ago and ask “Was this policy a mistake?”, “Did the government do too much, or too little?”, and so on.
  • Subsidize liquid prediction markets about the results of these surveys in all future years. For example, we could bet on how people in 2045 will answer “Did we do too much or too little about climate change in 2015–2025?”
  • We will get to see market odds on what people in 10, 20, or 30 years will say about our current policy decisions. For example, people arguing against a policy can cite facts like “The market expects that in 20 years we will consider this policy to have been a mistake.”

We might start by running this kind of poll a few times; then opening a prediction market on next year’s poll about policy decisions from a few decades ago; then lengthening the time horizon. (Credit for this idea to a comment by Paul Christiano.)

More competition in the EA ecosystem

Effective Altruism

There are a number of organizations in the EA ecosystem that are doing good work, but we think more is possible and wonder if some additional competition would be healthy. For example, we’d be excited to see people try to make alternative versions of 80,000 Hours, CEA’s student groups work, CFAR, EA Funds, FHI, or GiveWell. We’d also be excited to see organizations that provide targeted career advice (for example, an organization that specializes in advice about careers in government, politics, or law).

Increasing diversity in EA

Effective Altruism

We think the effective altruism movement would benefit from a broader set of perspectives and experiences. We’re interested in proposals for increasing racial, gender, geographical, ideological, and educational diversity in EA.

Translating key content into Spanish, Mandarin, and other languages

Effective Altruism

Most work in our areas of interest is published in English, and the effective altruism community (which conducts much of this work) is concentrated in English-speaking countries.

We think that making this work more widely accessible is valuable for growing and increasing the diversity of the effective altruism community. It could also help build the international support required to make progress on some of our areas of interest such as reducing the risk of great power war, nuclear arms control, or improving the Biological Weapons Convention.

We’d therefore be excited to see translations of key materials, especially into major languages such as Spanish and Mandarin Chinese (both of which have more native speakers than English). Publications we might start with include effectivealtruism.org, the 80,000 Hours ‘key idea’ series, and Toby Ord’s The Precipice.

We believe it’s challenging to do this well, but that these challenges can be overcome by teams that combine a deep understanding of the source material and the target cultures.

(This project idea is based on a submission by Konstantin Pilz to our Project Ideas Competition.)

EA ops

Effective Altruism

There are lots of exciting projects that could be launched in the EA/longtermist space (e.g., the ones on this page!). But they’re all bottlenecked on finding really capable operations staff. We’re not sure what the right solution is, and we’re open to all proposals. One idea might be to launch organizations that help provide relevant services: legal, immigration, HR, tax, managing office space, organizing events, etc. Another idea could be providing headhunting services to find people with relevant skills. Solutions here could unlock a lot of value, by substantially reducing the friction between having a good idea and building a new project. 

New publishing houses and publications

Values and Reflective Processes, Epistemic Institutions, Effective Altruism

We’d love to see more books—fiction and nonfiction—by thoughtful people on the most important topics. A new publishing house could financially and operationally support this. It could have an in-house team of generalist researchers and fact-checkers, ensuring the books meet an especially high standard of epistemic rigor. Because the aim would be impact rather than profit, books could be sold at zero or close to zero cost, and marketing budgets and advances on sales could be much larger than is typical.

Similarly, we’d be excited to see a news publication that places an unusually strong emphasis on formulating its claims precisely (and probabilistically), transparently distinguishing between fact and inference, discussing the weak points in its own analysis, and discussing opposing viewpoints charitably. We’d love to see this publication highlight the very most important issues for understanding and improving the world. See The Scout Mindset and Open Philanthropy on reasoning transparency for illustrations of what we have in mind.

A fund for movies and documentaries

Values and Reflective Processes, Effective Altruism

Participant Media has already demonstrated impact by funding movies such as Contagion and Countdown to Zero, and the documentary An Inconvenient Truth. We’d be interested in a project that funds and helps create new movies and documentaries—aimed squarely at raising public consciousness of issues relevant to our priority areas. We’d also directly fund impactful movies and documentaries ourselves.

New publications on the most pressing issues

Values and Reflective Processes, Epistemic Institutions, Effective Altruism

We’d be excited to see newspapers, magazines, and other media outlets that (i) focus on especially pressing issues for protecting humanity’s long-term future, and (ii) promote thoughtful and reasoned discourse about them. We’d also be excited about verticals within existing outlets, like Future Perfect, that do the same.

Detailed stories about the future

Artificial Intelligence, Epistemic Institutions, Values and Reflective Processes

We’d like to see stories about how the present evolves into the future that are as specific and realistic as possible. Such stories should be set in a world that is “on trend” with respect to technological development and aim to consider realistic sets of technologies coexisting in a global economy. We are particularly interested in stories of this kind that are especially thoughtful about the development of artificial intelligence. We think such stories might help make it easier to feel, rather than just abstractly understand, that this might be the most important century.

(This project idea is based on a submission by Mark Xu to our Project Ideas Competition.)

EA-relevant Substacks, Youtube, social media, etc. 

Effective Altruism

We’re interested in directly funding blogs, Substacks, or channels on YouTube, TikTok, Instagram, Twitter, etc. that help to grow the effective altruism movement or call attention to issues of major significance for the long-term future.

Critiquing our approach

Research That Can Help Us Improve

We’d love to fund research that changes our worldview—for example, by highlighting a billion-dollar cause area we are missing—or significantly narrows down our range of uncertainty. We’d also be excited to fund research that tries to identify mistakes in our reasoning or approach, or in the reasoning or approach of effective altruism or longtermism more generally.