Generative AI raises significant copyright issues and is attracting extensive attention.
- AI training may use copyrighted works, risking infringement during the process.
- Identifying copied work in AI output remains a complex challenge under UK law.
- Copyright defences differ between jurisdictions such as the UK and the US.
- Forthcoming EU legislation will require AI services to comply with EU copyright law.
The rise of generative AI systems, capable of producing new text, images, and other content, is provoking substantial legal debate, particularly concerning potential copyright infringement. The crux of the issue lies in the fact that these AI systems, during their training phases, may utilise copies of third-party texts or images to develop and refine their models. This use could lead to copyright violations, particularly when the AI is tasked with creating new work. The extent of this risk is significantly influenced by the technical design and configuration of the AI system. For instance, if training data is strictly limited to public domain and licensed works, the risk diminishes but does not disappear entirely. Even transient copies made during AI model training may infringe copyright unless the underlying works are in the public domain or their use is licensed.
Post-training, an AI system might be configured to avoid replicating the style of the works it was trained on. However, the complexity of the process may obscure copied material within new AI-generated content. Where such copying does occur, UK law offers few robust defences. This contrasts with jurisdictions such as the US, where copyright law incorporates a more lenient 'fair use' doctrine.
Within the framework of UK law, particularly the Copyright, Designs and Patents Act 1988 (CDPA), sections 28A and 29A are most pertinent to AI training. These sections permit transient or incidental reproductions necessary for lawful use, and computational analysis of a work's content solely for non-commercial research, known as text and data mining (TDM). However, those developing commercial AI systems may find it difficult to rely on these exceptions.
Specific applications of AI systems, particularly those that generate outputs mimicking an artist's style, might invoke a defence of pastiche under section 30A of the CDPA. Yet the paucity of case law renders the interpretation of this defence uncertain.
Intellectual property specialists are closely monitoring Getty Images (US) Inc v Stability AI Ltd, which promises to be a landmark case should it go to trial. The judgment is anticipated to address copyright infringement issues arising from AI use in the UK. Stability AI's defence that its training occurred outside the UK, and therefore did not infringe UK copyright given its territorial nature, will be particularly scrutinised. Notwithstanding these factual nuances, the judgment is expected to provide crucial insights into both primary and secondary copyright infringement and the applicable defences.
Meanwhile, the EU Artificial Intelligence Act, soon to be enacted, will require AI services operating within the EU to comply with EU copyright law. The EU's comparatively permissive stance on TDM, which extends to commercial purposes unless the copyright holder opts out, means UK developers seeking entry to the EU market must respect these opt-outs. Moreover, the EU Act's requirement for AI providers to disclose the data used for training could offer enhanced transparency to copyright owners, giving them a clearer understanding of how their works are being utilised.

For businesses engaging with generative AI, a prudent step is to draft comprehensive policies aimed at identifying and managing the risk of inadvertently infringing third-party intellectual property, coupled with a strategic allocation of risk within supplier and client contracts.
Navigating the legal landscape of AI and copyright necessitates careful consideration and proactive policy-making.
