In February 1795, the 11th Amendment to the US Constitution was ratified, limiting the federal courts' jurisdiction over cases brought by private citizens against a state in which they did not live. The amendment was a response to Chisholm v. Georgia and superseded that ruling. Hans v. Louisiana later solidified the amendment by holding that a citizen cannot sue even their own state in federal court.


  • Meta wants to tackle "the most urgent task" facing the tech industry
  • A new report forecasts global antitrust trends in 2024
  • Tips for how to craft your corporate AI policy

Join 7,000+ subscribers getting the 4-minute monthly newsletter with fresh takes on the legal news and industry trends that matter.


Meta Gets Real About AI

With the US presidential race squarely underway, and the recent controversy surrounding deepfake nude videos of Taylor Swift circulating online, it comes as no surprise that Meta is getting serious about AI-generated content and is calling for an industry-wide effort to detect such content.

According to the New York Times, Nick Clegg, Meta's president of global affairs, told the crowd at Davos last month that such efforts are “the most urgent task” facing tech, and promoted a set of detection standards developed by the Partnership on AI, a non-profit group funded by Meta and OpenAI, among others.

Meta's worries are not abstract. Earlier this week, the FCC issued a cease-and-desist order to Lingo Telecom, a Texas-based company that was making robocalls using an AI-generated Joe Biden voice to voters in New Hampshire ahead of that state's primary. “The FCC’s partnership and fast action in this matter sends a clear message that law enforcement and regulatory agencies are staying vigilant and are working closely together to monitor and investigate any signs of AI being used maliciously to threaten our democratic process,” New Hampshire's attorney general said in a statement, reports Politico.

Meta is not alone in its push to detect and identify AI-generated content. In September, Alphabet introduced a policy requiring users who post AI-altered voices or images to disclose such information (a policy that Meta enforces across its platforms as well). These policies came after the Republican National Committee released an entirely AI-generated attack ad in April depicting an imagined future if Joe Biden is reelected president, notes PBS.

"Since at least July 2023, Russia-affiliated actors have utilized innovative methods to engage audiences in Russia and the west with inauthentic, but increasingly sophisticated, multimedia content," Microsoft stated in a November 2023 report. " As the election cycle progresses, we expect these actors’ tradecraft will improve while the underlying technology becomes more capable."

Deepfake Harassment

Of course, deepfakes are not just a political problem. As Taylor Swift's recent experience highlights, deepfakes are increasingly used for harassment—especially of women. "In cases where deepfakes are used for harassment or stalking, existing laws in this domain might provide legal recourse for victims," writes Reuters. "Criminal laws are also inadequate as they do not explicitly cover the creation or distribution of deepfakes, other than to the extent they would fall under the traditional categories of cybercrimes or sexual offenses." To address such inadequacies in the law, Congress introduced the bipartisan Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, which would allow victims to sue the creators of such content.


No doubt AI has some deeply troubling potential that needs to be monitored and regulated. That the tech industry (especially its giants) is waking up to this threat and coming together to tackle it is promising. That said, the pace has been painfully slow, and the effort comes amid government scrutiny of past enforcement by platforms like Facebook, Instagram, and TikTok.


The Eye of Global Antitrust Authorities

AmLaw 100 firm Morgan Lewis has released a report detailing global antitrust actions in 2023 and forecasting trends for 2024.

For 2023, the firm saw record fines by authorities in Japan, the United States, Australia, and Canada. Meanwhile, no-poach and wage-fixing agreements continued to draw the enforcement spotlight from authorities around the world.

In 2024, "we expect to see an uptake in cases involving labor market collusion in particular, given proposed changes in national laws on both sides of the Atlantic, which may make it more difficult for companies to restrict the freedom of employees to switch jobs, resulting in businesses seeking alternative, unlawful ways of achieving the same end," the predictions begin.

The firm also sees agencies taking on "the role of AI in leading to cartel or collusive outcomes," and (ironically enough) sees international antitrust agencies joining forces more frequently to investigate and prosecute global cartels and corporate collusion.

As details,"the report showed fines from antitrust authorities across a dozen global jurisdictions last year went up 7.7% to $1.4 billion from 2022. While the numbers are below the $4.3 billion in  2021, enforcers are sending a message that 'we’re back.'"

Mark Katz, a competition attorney and partner at Canadian firm Davies Ward Phillips & Vineberg, said that actual cases and wins by agencies have dropped in recent years, but credited the decline to previous enforcement: those earlier cases “created such a stir of publicity” that they've deterred new violations.

The DOJ's Docket

The DOJ has been busy with major antitrust cases over the last few years, and 2024 stands to be no different. Its case against Google's ad business, which the DOJ says is anticompetitive, will go to trial in September of this year. An antitrust suit against Apple for its own anticompetitive behavior in the App Store could come as soon as March, reports Bloomberg. And Amazon is embroiled in an antitrust case brought by the FTC, which alleges the retail titan "prevents sellers from hawking their merchandise at lower prices on other sites," notes the AP.

But the Biden Administration isn't done. As Politico writes: "after three years of pushing hard against some of the world’s largest companies, the Biden administration is set to accelerate several of its biggest antitrust fights in 2024 with an intense lineup of lawsuits and investigations." Ryan Sandrock, an antitrust lawyer at Shook, Hardy & Bacon and a former DOJ staff attorney, told the site that “we’re going to see more new actions in terms of investigations and litigation than any prior year during the administration.” In fact, he added that the current administration has ramped up antitrust activity beyond anything he's seen since 2003.


If 2021 and 2022 were the era of unionization, 2023 and 2024 seem to be the time for antitrust action. Firms should review their compliance with existing labor standards, especially no-poach and wage-fixing agreements, and stay up to date with AI regulations as they continue to emerge.


How To Establish Guardrails For Generative AI Usage

Betty Louie, general counsel for marketing tech firm The Brandtech Group, penned a recent op-ed in which she asked the question: "how can we establish guardrails while supporting creative exploration" of new generative AI tools in marketing? To that end, she offered five tips that define a comprehensive AI policy.

First, Louie suggests defining terms. "Don’t assume that everyone is familiar with generative AI terms," she writes in AdExchanger. "When employees agree on common language, it eliminates misunderstandings or varied notions of definitions."

Next, set up a list of do's and don'ts that help frame ethical behavior. Some examples: "Do keep comprehensive records of data sources, licenses, permissions, inputs and outputs," and "Don’t include your company’s or client’s proprietary, confidential or sensitive information in the input or training data."

Third, clearly communicate to your team which tools are approved for use and which are not. "At my company, we performed initial due diligence on generative AI solutions and created a green list of approved tools that is continuously reviewed and updated," Louie notes. This vetting matters: some AI solutions may include copyrighted training data or other legally dubious inputs and outputs.
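For teams that want to operationalize such a green list in their tooling, a simple approved-tools check might look like the sketch below. This is purely a hypothetical illustration (the tool names, review interval, and function are invented for this example, not drawn from Louie's op-ed or any Brandtech system):

```python
# Hypothetical sketch: enforcing a "green list" of approved generative AI tools.
# Tool names and the 90-day review policy are illustrative assumptions.
from datetime import date, timedelta

# Tool name -> date the tool was last reviewed by the responsible team
APPROVED_TOOLS = {
    "internal-copy-assistant": date(2024, 1, 15),
    "licensed-image-generator": date(2024, 2, 1),
}

# Green lists should be "continuously reviewed and updated"; here we
# assume an approval lapses if not re-reviewed within 90 days.
REVIEW_INTERVAL = timedelta(days=90)

def is_tool_approved(tool: str, today: date) -> bool:
    """A tool is usable only if it is on the green list and its review is current."""
    last_review = APPROVED_TOOLS.get(tool)
    return last_review is not None and today - last_review <= REVIEW_INTERVAL

print(is_tool_approved("internal-copy-assistant", date(2024, 2, 10)))  # True
print(is_tool_approved("unvetted-chatbot", date(2024, 2, 10)))         # False
```

The design choice to tie approval to a review date, rather than a static list, mirrors the "continuously reviewed and updated" requirement: a tool silently drops off the green list if nobody re-vets it.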

Louie's fourth tip is to create a policy that clearly states how to use AI tools in line with the "moral compass of the company." The policy should articulate tenets like not creating harmful content, misleading claims, or outputs that violate privacy laws.

Finally, specify the chain of command and responsible parties for your company's AI tools. "Specify who within the organization is responsible for overseeing your generative AI green list and the deployment of approved tools," explains Louie, "who is responsible for reviewing potential new and upcoming tools and features and who will be tasked with answering legal questions and handling incident response."

Implementation Framework

For firms and corporate entities looking to craft AI policies specifically for their legal department, Reuters adds a few ethical considerations to the mix.

"First, lawyers have an ethical duty to understand the risks and benefits the use of AI tools present for both lawyers and clients, and how they may be used (or should not be used) to provide competent representation to clients," Reuters begins. It should also be made clear to clients if an AI tool is being considered for use in a specific case or in general representation, and what are the potential benefits and risks of such tools. Reuters further notes that fees for such services that utilize AI should be considered in such a policy, as well as confidentiality standards and supervision hierarchies. Finally, "lawyers may, at times, need to consult with technology experts to understand an AI tool, how it works, and whether it can be usefully deployed in a particular client matter."


There are still plenty of ethical gray zones and unanswered questions in the AI realm. But these ambiguities should not deter organizations from considering their AI policy and code of conduct now. Louie’s tips provide a basis for these policies, which can then be tailored to your organization’s unique needs and challenges.