Animal welfare advocates and AI researchers met in early February at Mox, a shoe-free coworking space in the Bay Area, in what organizers described as a deliberate effort to bridge two communities that have rarely overlapped. The gathering, reported by MIT Technology Review, signals a calculated push by parts of the animal welfare movement to embed their concerns in a technology sector that commands enormous cultural influence and, increasingly, political capital.
The strategy is rooted in a simple observation: the people building advanced AI systems tend to think seriously about consciousness, moral status, and the boundaries of ethical consideration. Animal welfare advocates believe that makes AI researchers unusually receptive to arguments about non-human sentience — arguments that have struggled to gain traction in mainstream policy circles for decades.
"If you're already asking whether a large language model might have experiences worth caring about, it's a short logical step to ask the same question about a chicken or a pig," one attendee told MIT Technology Review, capturing the movement's core pitch.
The Bay Area is a natural venue for this experiment. It is simultaneously the geographic center of the effective altruism movement, which has long treated animal welfare as a priority cause area, and the home of virtually every major AI laboratory of consequence. Organizations including Open Philanthropy, which funds both AI safety research and animal welfare work, have helped cultivate networks that make such cross-sector conversations possible.
The February meeting was not a one-off event but part of a longer-term organizing effort. Advocates say they want AI researchers to consider animal welfare in how they design AI systems — for example, in how AI might be used to improve or monitor conditions in industrial farming — but also to use their personal platforms and philanthropic resources to elevate the issue.
Whether the courtship will produce concrete results is unclear. The AI research community is already navigating intense pressure from governments, investors, and the public on questions of safety, bias, and economic disruption, and animal welfare must compete with all of those concerns for a limited supply of attention.
Still, there are early signs of traction. Several AI researchers who attended the February event have since agreed to advise animal welfare organizations, according to MIT Technology Review, and at least two have made personal donations to groups working on alternatives to factory farming.
White House Unveils Formal AI Policy
The same week as the Bay Area gathering, the White House released its formal AI policy, a document that sets out the federal government's approach to governing artificial intelligence across civilian agencies and, in part, the private sector.
The policy, which builds on earlier executive orders and voluntary commitments extracted from major AI companies in 2023, codifies the Biden-era regulatory architecture into standing guidance. It requires federal agencies to designate chief AI officers, conduct risk assessments for high-impact AI applications, and report publicly on how they are using AI systems.
For the private sector, the policy reaffirms the administration's reliance on voluntary safety commitments from companies including OpenAI, Google DeepMind, Anthropic, and Meta, while stopping short of imposing binding legislative mandates — a step that would require Congressional action.
Critics on the left argue the policy lacks enforcement teeth. "Voluntary commitments from companies with trillion-dollar incentives to move fast are not a substitute for law," said one policy analyst cited by MIT Technology Review. Critics on the right, meanwhile, have raised concerns that even the existing guidance places unnecessary compliance burdens on agencies and could slow beneficial AI adoption in government.
The White House framed the policy as a balancing act. "We are committed to harnessing the benefits of AI while managing its risks," a senior administration official told reporters, a formulation that has become something close to a regulatory mantra across democratic governments.
The policy arrives at a moment of acute uncertainty about who will ultimately set the rules for AI. The European Union's AI Act is moving toward enforcement, establishing the most comprehensive binding framework currently in existence. China has issued its own generative AI regulations. The United States has so far opted for a lighter-touch, agency-by-agency approach, and the new White House policy maintains that posture.
Two Movements, One Technology
Taken together, the animal welfare meeting and the White House policy announcement illustrate how AI has become a contested terrain on which a remarkable range of social and political actors are now staking claims.
For animal welfare advocates, AI represents both a tool — drones and computer vision systems are already being deployed to monitor farm conditions — and a cultural moment. The people with the most power to shape how AI develops are, for perhaps the first time, publicly wrestling with questions of moral status and the ethics of creating entities that might suffer. Advocates want to be in the room while those conversations are happening.
For policymakers, the challenge is more immediate and more conventional: how to write rules for a technology that is changing faster than regulatory processes are designed to accommodate, while balancing innovation, safety, and geopolitical competition.
What connects both stories is the recognition that AI is no longer a self-contained technical project. It is becoming infrastructure for nearly every domain of human — and, some would argue, non-human — life. The question of who gets to shape its values and its governance, and by what process, is now genuinely open.
