In February 1795, the 11th Amendment to the US Constitution was ratified, limiting the federal courts' jurisdiction over cases brought by private citizens against a state in which they did not live. The amendment was a response to Chisholm v. Georgia and effectively overturned that ruling. Hans v. Louisiana later reinforced the amendment by holding that a citizen of a state cannot sue that state in federal court either.
- Meta wants to tackle "the most urgent task" facing the tech industry
- A new report forecasts global antitrust trends in 2024
- Tips for how to craft your corporate AI policy
🤖 ARTIFICIAL INTELLIGENCE
How To Establish Guardrails For Generative AI Usage
Betty Louie, general counsel for marketing tech firm The Brandtech Group, penned a recent op-ed asking: "how can we establish guardrails while supporting creative exploration" of new generative AI tools in marketing? She offered five tips that together define a comprehensive AI policy.
First, Louie suggests defining terms. "Don’t assume that everyone is familiar with generative AI terms," she writes in AdExchange. "When employees agree on common language, it eliminates misunderstandings or varied notions of definitions."
Next, set up a list of Do's and Don'ts that help frame ethical behaviors. Some examples: "Do keep comprehensive records of data sources, licenses, permissions, inputs and outputs," and "Don’t include your company’s or client’s proprietary, confidential or sensitive information in the input or training data."
Third, clearly communicate to your team which tools are approved for use and which are not. "At my company, we performed initial due diligence on generative AI solutions and created a green list of approved tools that is continuously reviewed and updated," Louie notes. That due diligence matters because some AI solutions may include copyrighted training data or other legally dubious inputs and outputs.
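For teams that want to make the green list machine-checkable rather than a static document, the idea can be sketched in a few lines of code. This is a minimal illustration, not anything from Louie's op-ed: the tool names and the 90-day review cadence are hypothetical assumptions.

```python
from datetime import date
from typing import Optional

# Hypothetical green list: approved generative AI tools mapped to the date
# each was last reviewed. Names and dates are illustrative, not real tools.
GREEN_LIST = {
    "internal-copywriter": date(2024, 1, 15),
    "image-gen-sandbox": date(2024, 2, 1),
}

# Assumed review cadence for "continuously reviewed and updated".
REVIEW_INTERVAL_DAYS = 90

def is_approved(tool: str, today: Optional[date] = None) -> bool:
    """A tool is usable only if it is on the green list and its review is current."""
    today = today or date.today()
    last_review = GREEN_LIST.get(tool)
    if last_review is None:
        return False  # not on the green list at all
    return (today - last_review).days <= REVIEW_INTERVAL_DAYS
```

A check like this could gate internal tooling or onboarding workflows, so a tool silently drops off the approved list once its review lapses instead of lingering indefinitely.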
Louie's fourth tip is to create a policy that clearly states how to use AI tools in line with the "moral compass of the company." The policy should articulate tenets like not creating harmful content, misleading claims, or outputs that violate privacy laws.
Finally, specify the chain of command and responsible parties for your company's AI tools. "Specify who within the organization is responsible for overseeing your generative AI green list and the deployment of approved tools," explains Louie, "who is responsible for reviewing potential new and upcoming tools and features and who will be tasked with answering legal questions and handling incident response."
For firms and corporate entities looking to craft AI policies specifically for their legal department, Reuters adds a few ethical considerations to the mix.
"First, lawyers have an ethical duty to understand the risks and benefits the use of AI tools present for both lawyers and clients, and how they may be used (or should not be used) to provide competent representation to clients," Reuters begins. It should also be made clear to clients when an AI tool is being considered for use in a specific case or in general representation, and what the potential benefits and risks of such tools are. Reuters further notes that fees for services that utilize AI should be addressed in such a policy, as well as confidentiality standards and supervision hierarchies. Finally, "lawyers may, at times, need to consult with technology experts to understand an AI tool, how it works, and whether it can be usefully deployed in a particular client matter."
There are still plenty of ethical gray areas and unanswered questions in the AI realm. But these ambiguities should not deter organizations from drafting their AI policy and code of conduct now. Louie’s tips provide a foundation for these policies, which can then be tailored to your organization’s unique needs and challenges.