Foresight Institute's AI Safety Grants Program has added a new focus area in response to the continually evolving field of AI safety. Moving forward, our funding ($1.5M-$2M annually) will be allocated across the following four focus areas:

1. Automating AI-relevant research and forecasting

  • Scaling AI-enabled research to support safe AGI development
  • Scaling efficient forecasting methods relevant to safe AGI
  • Other approaches in this area

2. Neurotech to integrate with or compete against AGI

  • Brain-Computer Interfaces (BCIs) to enhance human cognition or facilitate human-AGI collaboration
  • Whole Brain Emulations (WBEs), which might function as human-like general intelligences that are more interpretable and alignable than AGI
  • Lo-fi emulations that apply deep learning to behavioral and neural data, potentially offering a cost-effective alternative to full WBEs
  • Other approaches in this area

3. Technologies for securing AI systems

  • Implementations of computer security techniques (including the Principle of Least Authority (POLA), seL4-inspired systems, and hardened hardware security) to safeguard AI systems
  • Automated red-teaming for AI security and capabilities
  • Cryptographic and related techniques to enable trustworthy coordination architectures
  • Other concrete approaches in this area

4. Safe multipolar human-AI scenarios

  • Game theory that addresses interactions between multiple humans, AIs, and ultimately AGIs
  • Avoiding collusion and deception and/or encouraging Pareto-preferred, positive-sum dynamics
  • Approaches for addressing principal-agent problems in multi-agent systems
  • Other concrete approaches in this area

Application Process

We accept applications on a quarterly cycle, with deadlines at the end of March, June, September, and December. Decisions are made within 8 weeks of each deadline.

Next Deadline: December 31st, 2024.

For more information and to apply: https://foresight.org/ai-safety
