Evolving Norms: Considerations on Emerging AI Jurisprudence

As governments wrestle with how to regulate artificial intelligence (AI), and in the resulting gap in AI-specific regulations, the courts are stepping in to draw the first lines. Recent cases across different jurisdictions show that, despite the absence of AI regulations in some countries, courts are applying existing laws on intellectual property, consumer protection and data protection to cases involving various aspects of AI. The effect is that legal uncertainty is slowly starting to narrow, signalling to organisations that compliance and risk management cannot be put off until legislators catch up.

We have highlighted a few cases below from two jurisdictions, the United States of America and South Africa, showing how courts have dealt with AI, particularly its impact on data protection, intellectual property and legal practice.

Cases in the United States

In the American case Brewer v. Otter.ai, Otter.ai is being sued for allegedly recording and processing conversations through its transcription tool without the required consent from all parties, and subsequently using that data to train its models. The legal question is whether AI notetaker technologies can rely on consent from one party, i.e. the registered user, or whether consent is required from all parties on the call. The case, which is still ongoing, shows that when dealing with such tools, organisations need to be cognisant of consent and privacy risks, and of the manner in which personal and sensitive data collected on such calls is used or held. Businesses using these tools ought to assess vendors against legal and compliance risks, train their staff on emerging technology risks while promoting sound data handling practices and safeguards, and develop a governance framework aligned with best practices in AI and data governance.

Another American case is Bartz v. Anthropic, where the court addressed whether training generative AI tools on copyrighted works without authorisation constitutes infringement, and how far the 'fair use' exception extends when datasets include lawfully purchased material. It was alleged that Anthropic's generative AI models were trained on copyrighted works without authorisation, thereby producing outputs that replicate protected content. The court held that where copyrighted material has been legally acquired, training AI on it qualifies as 'fair use.' However, where the material is pirated or otherwise illegally obtained, the training infringes copyright.

The Anthropic case raises a central question in AI discourse about accountability for the training of models and the outputs they generate, especially when those outputs infringe copyright. It further exemplifies how contracts with AI developers or vendors can no longer overlook the origin of the training data used in AI systems. For businesses, this means contracts with AI vendors now need to spell out warranties and indemnities on vendor compliance with existing legal obligations, including IP and data protection, as well as ongoing monitoring of the authorised uses of customer data and AI outputs as part of risk management processes.

Cases in South Africa

In South Africa, the High Court ruled on the citation of fictitious case law generated by an AI research tool used by one of the litigating parties. In Northbound Processing v. South African Diamond & Precious Metals Regulator, the Court noted the fictitious citation and did not let the misconduct slide: the matter was referred to the Legal Practice Council (LPC) for disciplinary review. Notably, the fictitious citation was produced by an AI research tool that was allegedly trained exclusively on South African legal judgments and legislation. AI hallucinations are therefore a risk that must be considered in the adoption of AI tools, and automation of legal research should be accompanied by human oversight, particularly in high-risk use cases such as legal and judicial settings, where robust obligations of professional and ethical conduct apply.

In another South African case, Mavundla v MEC, the Court termed the presentation of fictitious case law by counsel, who allegedly relied on AI tools, unacceptable and irresponsible, with that matter also being referred to the LPC for disciplinary action. These judgments highlight the safeguards needed when using AI and the pressing need for judicial guidelines on the use of AI in legal practice.

While the above-mentioned cases are not binding in other jurisdictions, they may serve as persuasive authority or a guiding force for adjudicators globally when dealing with emerging technologies and legal concerns such as data protection and intellectual property violations. In particular, it remains uncertain whether the rationale of the court in the Anthropic case will be adopted by courts elsewhere, as copyright laws and their exceptions vary by jurisdiction, whereas data protection principles remain largely similar across most jurisdictions globally.

For businesses, due diligence in the procurement and adoption of new AI technologies and vendors is imperative to avert the significant risks of infringing existing laws, including regulatory and reputational repercussions. As such, despite the absence of AI-specific regulations, businesses that move early to align with these emerging norms will be in a stronger position than those waiting for regulators to lay down the rules.
