The fight over who gets to regulate AI is far from over

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.
The AI regulation freeze that almost silenced the states
The Republicans’ One Big Beautiful Bill Act has passed the Senate and is now headed for a final vote in the House before reaching the president’s desk. But before its passage, senators removed a controversial amendment that would have imposed a five-year freeze on state-level regulation of AI models and apps. (The bill also includes billions in funding for new AI initiatives across federal departments, including Defense, Homeland Security, Commerce, and Energy.)
Had the amendment survived, it could have been disastrous for states, according to Michael Kleinman, policy lead at the Future of Life Institute. “This is the worst possible way to legislate around AI for two reasons: First, it’s making it almost impossible to do any kind of legislation, and second, it’s happening in the most rushed and chaotic environment imaginable,” he says. The bill is over 900 pages long, and the Senate had just 72 hours to review it before debate and voting began.
The original proposal called for a 10-year freeze, but the Senate reduced it to five years and added exceptions for state laws protecting children and copyrights. However, it also introduced vague language barring any state law that places an “undue or disproportionate” burden on AI companies. According to Kleinman, this actually made the situation worse. “It gave AI company lawyers a chance to define what those terms mean,” he says. “They could simply argue in court that any regulation was too burdensome and therefore subject to the federal-level freeze.”
States are already deep into the process of regulating AI development and use. California, Colorado, Illinois, New York, and Utah have been especially active, but all 50 states introduced new AI legislation during the 2025 session. So far, 28 states have adopted or enacted AI-related laws. That momentum is unlikely to slow, especially as real job losses begin to materialize from AI-driven automation.
AI regulation is popular with voters. Supporters argue that it can mitigate risks while still allowing for technological progress. The “freeze” amendment, however, would have penalized states financially—particularly in broadband funding—for attempting to protect the public.
Kleinman argues that no trade-off is necessary. “We can have innovation, and we can also have regulations that protect children, families—jobs that protect all of us,” he says. “AI companies will say [that] any regulation means there’s no innovation, and that is not true. Almost all industries in this country are regulated. Right now, AI companies face less regulation than your neighborhood sandwich shop.”
The “new precedent” for copyrighted AI training data may contain a poison pill
On June 23, Judge William Alsup ruled in Bartz v. Anthropic that Anthropic’s training of its model Claude on lawfully purchased and digitized books is “quintessentially transformative” (meaning Anthropic used the material to make something other than more books) and thus qualifies as fair use under U.S. copyright law. (While that’s a big win for Anthropic, the court also said the firm likely violated copyright by including 7 million pirated digital books in its training data library. That issue will be addressed in a separate trial.)
Just two days later, in Kadrey v. Meta Platforms, Judge Vince Chhabria dismissed a lawsuit filed by 13 authors who claimed that Meta had trained its Llama models on their books without permission. In his decision, Chhabria said the authors failed to prove that Meta’s use of their works had harmed the market for those works. But in a surprisingly frank passage, the judge noted that the plaintiffs’ weak legal arguments played a major role in the outcome. They could have claimed, for example, that sales of their books would suffer in a marketplace flooded with AI-generated competitors.
“In cases involving uses like Meta’s, it seems like the plaintiffs (copyright holders) will often win, at least where those cases have better-developed records on the market effects of the defendant’s use,” Chhabria wrote in his decision. “No matter how transformative LLM training may be, it’s hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars while enabling the creation of a potentially endless stream of competing works that could significantly harm the market for those books.”
Chhabria may have laid out a legal recipe for future victories by copyright holders against AI firms. Copyright attorneys around the country surely took note that they may need only present as evidence the thousands of AI-generated books currently for sale on Amazon. In a legal sense, every one of those titles competes with the human-written books that were used to train the models. Chhabria said news publishers (like The New York Times in its case against OpenAI and Microsoft) could have even more success using this "market dilution" argument than book authors.
Apple is bringing in its ace to rally its troubled AI effort
Siri has a new owner within Apple, and it could help the company finally deliver the AI-powered personal assistant it promised in 2024.
By March, Tim Cook had lost faith that the core Apple AI group led by John Giannandrea could finish and release a new, smarter Siri powered by generative AI, Bloomberg’s Mark Gurman reported. Cook decided to move control of Siri development to a new group reporting to Apple’s software head, Craig Federighi. He also brought in a rising star at the company, Mike Rockwell, to build and manage the new team—one that would sit at the nexus of Apple’s AI, hardware, and software efforts, and aim to bring the new Siri to market in 2026. Apple announced the new Siri features in 2024 but has so far been unable to deliver them.
Rockwell joined Apple in 2015 from Dolby Labs. He first worked on the company’s augmented reality initiatives and helped release ARKit, which enabled developers to build 3D spatial experiences. As pressure mounted for Apple to deliver a superior headset, the company tapped Rockwell to assemble a team to design and engineer what would become the Vision Pro, released in February 2024. The Vision Pro wasn’t a commercial hit—largely due to its $3,500 price tag—but it proved Rockwell’s ability to successfully integrate complex hardware, software, and content systems.
Rockwell may have brought a new sense of urgency to Apple’s AI-Siri effort. Recent reports say that Rockwell’s group is moving quickly to decide whether Siri should be powered by Apple’s own AI models or by more mature offerings from companies like OpenAI or Anthropic. Apple has already integrated OpenAI’s ChatGPT into iPhones, but one report says that Apple was impressed by Anthropic’s Claude models as a potential brain for Siri. It could also be argued that Anthropic’s culture and stance on safety and privacy are more in line with Apple’s.
Whatever the case, it seems the company is set to make some big moves.
More AI coverage from Fast Company:
- AI chatbots are breaking the web—and forcing a 404 makeover
- Inside Wikipedia’s AI revolt—and what it means for the media
- Why this bank is hiring full-time AI employees
- How to tell if the article you’re reading was written by AI
Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.