Funding

From Stampy's Wiki

LessWrong Tag

Description

Many LessWrong readers actively rely on grants or fundraising opportunities to support their work (for example by running non-profits or startups, working as independent researchers, or being supported by academic grants). 

This tag lists concrete funding opportunities of interest to the LessWrong community. This typically means projects working on improving the long-term future, refining the art of rationality, and related missions.

This tag should not be used for meta-discussion, e.g. of fundraising strategies, coordination between funders, or cost-benefit analyses of particular funding opportunities. 

Canonically answered

The organizations which most regularly give grants to individuals working towards AI alignment are the Long-Term Future Fund, the Survival and Flourishing Fund (SFF), the OpenPhil AI Fellowship and early-career funding, the Future of Life Institute, the Future of Humanity Institute, and the Center on Long-Term Risk Fund. If you're able to relocate to the UK, CEEALAR (aka the EA Hotel) can be a great option, as it offers free food and accommodation for up to two years, as well as contact with others who are thinking about these issues. There are also opportunities from smaller grantmakers which you might be able to pick up if you get involved.

If you want to work on support or infrastructure rather than directly on research, the EA Infrastructure Fund may be able to help. In general, you can talk to EA Funds before applying.

Each grant source has its own criteria for funding, but in general they are looking for candidates with evidence that they're keen and able to do good work towards reducing existential risk (for example, having completed an AI Safety Camp project). The EA Hotel in particular has less stringent requirements, as it can support people at very low cost. If you'd like to talk to someone who can offer advice on applying for funding, AI Safety Support offers free calls.

Another option is to get hired by an organization which works on AI alignment; see the follow-up question for advice on that.

Unanswered canonical questions