
At The Mox, Bay Area animal-welfare advocates turned AI into practical tools, new funding channels, and governance questions — not just sentience speculation

In February 2025 at The Mox in San Francisco, animal-welfare advocates and AI researchers convened to move beyond simplistic narratives about AI either replacing human advocacy or only raising sci‑fi questions about machine minds. The meeting laid out concrete deployments, shifts in philanthropic supply, and governance gaps that could matter within years, not decades.

A focused convening, not a tech utopia

The summit, hosted by Sentient Futures, brought together advocates like Constance Li and Jasmine Brazilek with AI practitioners and donors on one floor of the shoes‑free coworking space. The agenda centered on immediate projects—hackathon prototypes, automation of advocacy tasks with coding assistants—and benchmarks for how models should reason about animals, not abstract proclamations that AGI will solve everything.

That practical posture was visible in the Code for Compassion hackathon that ran alongside the summit: 81 coders produced demos ranging from AI legal assistants to welfare monitoring tools in hours. Those prototypes illustrated a working pipeline—idea to prototype to potential deployment—that advocates want to scale, rather than an airy faith that machines alone will fix systemic problems.

How tools and training data become policy levers

Speakers described two concrete mechanisms for influence. Jasmine Brazilek of Compassion in Machine Learning proposed a benchmark to test large language models’ reasoning about animal welfare and urged adding synthetic documents that reflect concern for animals into training data; that is a lever to nudge future models’ priors. Separately, attendees sketched how scientific AI—such as AlphaFold‑style protein models—could lower the cost of cultivated meat development, addressing supply‑side barriers in food systems.

Those mechanisms change what advocacy looks like. If models routinely surface animal‑welfare considerations in legal analysis or policy briefs, advocates can scale campaigns. If computational biology tools materially reduce cultivated‑meat costs, lobbying and regulatory strategy shift from persuasion toward industrial facilitation. Both routes create tangible policy and market consequences in the near term.

Where money is moving and why that matters

Philanthropy is shifting in measurable ways. Historically, farm‑animal welfare received gifts from established tech billionaires; speakers at The Mox pointed to a new donor layer emerging inside AI companies. Lewis Bollard of Coefficient Giving warned that Anthropic employees could be a significant source of capital once employee equity becomes liquid—he cited Anthropic’s roughly $380 billion valuation and recent equity cash‑out options as a concrete trigger for donations.

The consequence is twofold: first, more funds could flow into tech‑savvy, scalable interventions rather than traditional grantmaking; second, donor preferences from AI researchers may prioritize projects that either integrate AI tech or target long‑term, model‑compatible risks (including speculative sentience concerns). That reallocation will change which strategies get staffed and which problems attract sustained funding.

Governance edges and practical checkpoints to watch


The summit did not sidestep ethical edge cases. Discussions ranged from adjusting animal‑welfare frameworks to include non‑organic entities to debating whether future AIs could experience suffering. Those conversations are not mere thought experiments: they create questions for regulators and grantmakers about scope, standing, and enforcement.

| Area | Near‑term indicator (6–18 months) | Why it matters |
| --- | --- | --- |
| Model training & benchmarks | Public release of Brazilek’s benchmark, or its adoption by an LLM developer | Signals whether animal‑welfare reasoning will be represented in base models |
| Philanthropic flows | Notable grants from AI‑company employees or Anthropic‑linked funds | Determines the scale and strategic orientation of funded projects |
| Technical prototypes | Deployment of advocacy automation (e.g., Claude Code assistants) or AlphaFold‑derived cultivated‑meat methods | Shows whether prototypes translate into cost reductions and operational capacity |
| Governance engagement | Regulators or major NGOs publish guidelines addressing AI sentience or welfare‑aligned model training | Sets boundaries for obligations and enforcement |

Quick Q&A

When will this matter beyond the Bay Area? Watch grant announcements and model benchmark adoptions over the next 6–18 months; if Anthropic‑linked employees or major labs fund projects, the effects go national.

Are advocates prioritizing machine sentience over animal suffering? No — organizers explicitly balanced immediate deployments (tools, cultivated‑meat R&D) with longer ethics debates; the movement is adding sentience to the agenda, not replacing existing priorities.

What’s a clear warning sign? If funds concentrate only on speculative sentience research while fieldwork and direct welfare interventions see declining support, that would indicate a problematic shift away from measurable impact.
