X vs. Minnesota: Elon Musk’s Platform Challenges Deepfake Crackdown

Elon Musk’s social media platform X has filed a federal lawsuit against the state of Minnesota, challenging a new state law that prohibits the use of AI-generated deepfakes to influence elections. The company argues that the legislation violates constitutional protections of free speech and could open the door to sweeping censorship of political content.
The lawsuit, filed Wednesday in a Minnesota federal court, claims the law unlawfully transfers responsibility for policing political content from the platforms themselves to the state, while exposing platforms to criminal liability if they fail to comply. “This system will inevitably result in the censorship of wide swaths of valuable political speech and commentary,” X stated in its complaint.
The controversial law, part of a growing wave of AI regulation in the U.S., bans the creation or distribution of manipulated videos, images, or audio clips made to appear authentic with the intent of swaying an election. According to advocacy group Public Citizen, at least 22 U.S. states have introduced similar restrictions in response to concerns over AI’s potential to deceive voters.
Elon Musk, who acquired Twitter in 2022 and rebranded it as X, has long positioned himself as a staunch free speech advocate. Under his ownership, the platform has sharply scaled back its content moderation, a move that has drawn both praise and criticism in tech and policy circles.
X’s legal challenge calls on the court to strike down the Minnesota law, arguing it violates both the First Amendment of the U.S. Constitution and the Minnesota state constitution. The company also contends the law is impermissibly vague and conflicts with Section 230 of the Communications Decency Act, a federal statute that shields social media companies from liability for user-generated content.
The Minnesota Attorney General’s office, led by Keith Ellison, was named in the suit but has not yet issued a public response.
This isn’t the first time the law has faced legal scrutiny. Earlier this year, Republican state lawmaker Mary Franson and social media influencer Christopher Kohls filed a similar challenge. Their bid for a preliminary injunction was denied by U.S. District Judge Laura Provinzino, though an appeal is currently underway.
As concerns over AI-driven misinformation mount ahead of upcoming U.S. elections, the case could set a significant precedent for how states and platforms navigate the delicate balance between curbing digital manipulation and protecting free expression online.