Mission

BERI is an independent 501(c)(3) public charity. Our mission is to improve human civilization’s long-term prospects for survival and flourishing. Currently, our main strategy is collaborating with university research groups working to reduce existential risk (“x-risk”) by providing them with free services and support.

What we do

We try to help each of our collaborators work more effectively by spending money to support them in whatever way they need to pursue their mission, which we’ve vetted for relevance to our own. The idea is to make operations faster and more flexible for these groups: not only to make them more directly effective, but also to improve morale by unblocking tasks and projects they care about that are hard to do efficiently through other means (e.g. existing university administration channels).

Currently, our main collaborators are:
  • CHAI — the Center for Human-Compatible AI at UC Berkeley
  • CSER — the Centre for the Study of Existential Risk at the University of Cambridge
  • FHI — the Future of Humanity Institute at the University of Oxford
  • SERI — the Stanford Existential Risks Initiative
  • ALL — the Autonomous Learning Laboratory at UMass Amherst
  • The Sculpting Evolution Group at the MIT Media Lab
  • InterACT — the Interactive Autonomy and Collaborative Technologies Laboratory at UC Berkeley
  • KASL — the Krueger AI Safety Lab at the University of Cambridge
  • CLTC — the Center for Long-Term Cybersecurity at UC Berkeley
  • MATS — the ML Alignment & Theory Scholars Program
  • OCPL — the Oxford China Policy Lab
  • The Safe Robotics Laboratory at Princeton University

We think of these organizations and their surrounding networks as forming what we call “the x-risk ecosystem”: a network of think tanks, non-profits, individual researchers, philanthropists, and others, all working to reduce existential risk. Our goal is to support this ecosystem by providing flexible funding and helpful services for its most important and neglected projects. By finding and solving problems that are common across organizations, we hope to add another vector of coordination to the x-risk ecosystem, accelerating humanity’s progress toward eliminating existential risks.