Anthropic hires a top Biden official to lead its new ‘AI for social good’ team (exclusive)

Anthropic is turning to a Biden administration alum to run its new Beneficial Deployments team, which is tasked with helping extend the benefits of its AI to organizations focused on social good—particularly in areas such as health research and education—that may lack market-driven incentives.
The new team will be led by Elizabeth Kelly, who in 2024 was tapped by the Biden administration to lead the U.S. AI Safety Institute within the National Institute of Standards and Technology (NIST). Kelly helped form agreements with OpenAI and Anthropic that let NIST safety-test the companies’ new models prior to their deployment. She left the government in early February, and in mid-March joined Anthropic.
“Our mission is to support the development and deployment of AI in ways that are good for the world but might not be incentivized by the market,” Kelly tells Fast Company.
Anthropic views the new group as a reflection of its mission as a public benefit corporation, which commits it to distribute the advantages of its AI equitably, not just to deep-pocketed corporations. In an essay he published last year, Anthropic CEO Dario Amodei emphasized AI’s potential to drive progress in areas like life sciences, physical health, education, and poverty alleviation.
The Beneficial Deployments team sits within Anthropic’s go-to-market organization, which it says ensures that the company’s AI software and services are designed and deployed with customer needs in mind. Kelly says her team will collaborate across departments—including with Anthropic’s Applied AI group and science and social impact specialists—to help mission-aligned customers build successful products and services powered by Anthropic models.
“We need to treat nonprofits, ed techs, health techs, those organizations that are developing really transformative solutions the same way that we treat our biggest enterprise customers,” Kelly says. In fact, the smaller organizations, which often lack budget and in-house AI expertise, may get a level of support that’s not considered standard for Anthropic’s larger customers.
“Our primary focus here is making sure that . . . the work that we’re doing has the biggest impact in terms of lives that we’re improving, diseases that we’re curing, educational outcomes we’re improving,” Kelly says. When considering new beneficiaries, Kelly says she’ll take input from members of Anthropic’s “long-term benefit trust,” an independent governance body whose five trustees have experience in global development.
The Beneficial Deployments team will also grant partner organizations free access to Anthropic’s models. One of the team’s first initiatives is an “AI for Science” program, which will provide up to $20,000 in API credits over a six-month period to qualifying scientific research organizations, with the possibility of renewal. Anthropic plans to start by working with at least 25 science organizations that use its large language model (LLM) Claude, then expand the program to additional industry verticals.
“As publicly funded support for scientific endeavors faces increasing challenges, this program aims to democratize access to cutting-edge AI tools for researchers working on topics with meaningful scientific impact, particularly in biology and life sciences applications,” Anthropic said in a statement.
From special cases to a new program
Anthropic began piloting the Beneficial Deployments concept earlier this year, providing API credits and consulting to several ed-tech organizations. Amira Learning, for example, uses Anthropic’s AI to teach reading comprehension to millions of students. With the advent of sophisticated new LLMs like Claude, Amira saw the possibility of an AI tool that could hold deeper, humanlike conversations with students about the context and meaning of words. Amira uses Claude to generate dialogues that are personalized to students and designed to measure and enhance reading comprehension skills. The AI can create custom instructional content for students, such as questions and hints. Amira says that more than 90% of its users approve of their interactions with the AI.
Anthropic then began engaging with other types of organizations using the same model. FutureHouse, for example, is an Eric Schmidt-backed nonprofit dedicated to automating scientific research, particularly in biology, with the help of AI systems. Modern biological research is often stalled by information overload, with researchers spending countless hours combing through papers in order to avoid duplicating existing work. Fortunately, this information comes mainly in the form of text and graphs—both of which are right in Claude’s wheelhouse. FutureHouse has used Anthropic’s Claude models (alongside models from OpenAI and Google) to underpin a suite of agents that can help with science and drug discovery research.
“We’ve recently been working with the Beneficial Deployments team at Anthropic to share how we’ve been using their models to build our scientific agents on our platform,” says Michael Skarlinski, head of platform at FutureHouse. “Their team has been interested in learning which use cases Anthropic models are uniquely capable of, and how they can help improve our development process.”
Another partner, Benchling, operates a cloud-based data management platform that helps life sciences researchers manage and share (often fragmented and complex) scientific data and collaborate efficiently. Benchling is using Anthropic’s AI, accessed through Amazon’s Bedrock cloud service, to embed AI agents directly into scientific workflows, targeting the tedious data tasks that can consume up to 25% of scientists’ time.
“AI will transform the biotech industry: automating toil, improving experiment design, and even generating novel hypotheses,” says Ashu Singhal, Benchling’s cofounder and president. “But today, only a handful of R&D teams—with the budget, tooling, and technical expertise—are at the frontier.”
With the Beneficial Deployments team now in place, the terms of those earlier engagements will be formalized, expanded, and offered to more qualifying organizations—most of them academic and nonprofit groups. The size of the new team hasn’t been disclosed, but Anthropic has already posted several open roles within the group, including specialists in public health and economic mobility.
“I’m incredibly excited about the potential of these efforts to support organizations, companies, and causes that are sometimes left behind and need to really be part of the AI transformation,” Kelly says.