Laude Institute announces first batch of ‘Slingshots’ AI grants

The Laude Institute has announced its first Slingshots grants: 15 research projects focused on advancing AI evaluation, including Terminal Bench, ARC-AGI, and CodeClash, a new project from SWE-Bench co-founder John Boda Yang.

Nov 6, 2025

The Laude Institute has announced the first round of its Slingshots grants, a new funding initiative designed to advance the science and practice of artificial intelligence by supporting ambitious research projects that might otherwise struggle to find resources in traditional academic settings.

The Slingshots program serves as an accelerator for AI researchers, offering a combination of funding, compute power, and engineering support. In return, each recipient agrees to produce a tangible outcome — whether that’s a startup, an open-source codebase, or a technical artefact that contributes to the broader AI community.

Fifteen Projects in the Inaugural Cohort

The first cohort comprises 15 projects, many of which focus on one of AI’s most persistent challenges: evaluation — the ability to assess model performance, reasoning, and reliability rigorously.

Among the more familiar initiatives are Terminal Bench, a command-line coding benchmark, and the latest iteration of ARC-AGI, a long-running test of general intelligence in AI systems.

Other projects are exploring novel approaches to long-standing evaluation problems:

  • Formula Code, led by researchers from Caltech and UT Austin, aims to measure AI agents’ ability to optimise and improve existing codebases.
  • BizBench, a project from Columbia University, proposes a comprehensive benchmark to assess “white-collar AI agents” — tools designed for professional and administrative work.
  • Several additional projects will tackle challenges in reinforcement learning and model compression, two key areas for efficiently scaling AI.

Building the Next Generation of AI Evaluation Tools

Among the cohort’s most notable names is John Boda Yang, co-founder of the popular SWE-Bench benchmark for evaluating software engineering performance in AI models. Yang now leads a new project called CodeClash, which introduces a competition-based framework for dynamic code assessment.

Inspired by SWE-Bench’s impact, Yang hopes CodeClash will continue pushing evaluation research forward:

“I do think people continuing to evaluate on core third-party benchmarks drives progress,” Yang said. “I’m a little bit worried about a future where benchmarks just become specific to companies.”

A Push for Open, Independent AI Evaluation

The Laude Institute’s Slingshots program arrives at a time when independent benchmarking and transparency are increasingly seen as essential to the field’s health. As major AI companies move toward proprietary in-house evaluation systems, academic and open-source initiatives like Slingshots help ensure the community continues to have credible, third-party measures of AI capability.

By funding and supporting projects that promote openness, reproducibility, and interdisciplinary collaboration, the Laude Institute hopes to accelerate progress toward trustworthy and scientifically grounded AI systems.
