AI and IP: copyright infringement by AI-systems (UK law)
Can AI systems infringe copyright?
27 November 2017
"Artificial intelligence" is a broad term used to describe a range of software functionality. At present, it is used almost synonymously with "machine-based learning". In this series of Talking Tech posts we consider some of the intellectual property law issues arising from the increasing prevalence, and increasing capability, of AI software. In this post we consider whether from an English law perspective AI systems can infringe third party copyright and who might be responsible for the system's infringing acts?
Can AI infringe copyright?
Section 16(1) of the Copyright, Designs and Patents Act 1988 (CDPA) sets out the acts restricted by copyright in a work. These acts include reproducing a copyright work, which – crucially for AI – includes the making of temporary copies by electronic means. Whilst there are some permitted uses of copyright works, these are unlikely to apply where AI is used in a commercial context. Section 16(2) CDPA suggests that AI cannot of its own volition infringe the copyright in a work, as copyright can only be infringed by a "person" who does, or authorises another to do, one of the acts restricted by copyright.
Section 16(2) CDPA does not, however, provide carte blanche for developers and operators of AI software to allow the AI to use third-party works with impunity. In case law where similar issues under Section 16(2) CDPA have been raised, the court has looked for the person most closely connected with the infringing act. If the creator of the AI and the user of the AI are separate, the degree of control the user has over the actions of the AI is likely to dictate whether a court would find the closest person to be the operator of the AI, or the developer who provided the AI in the first place. The outcome will be highly fact-dependent.
Who is responsible for the AI's actions?
If an operator used AI software in such a way that the copying of third-party works was inevitable, there would be a strong argument that the operator had caused the infringement. As AI advances, the question of whether an operator is responsible for third-party infringement will become much more opaque, particularly if the AI decides to copy or adapt a particular work absent any specific instruction from the operator to do so (or not to do so). There is a risk for developers that a court would look behind the operator to the design of the system. If, for example, the design of the AI system made the unauthorised use of third-party copyright inevitable, developers may find it difficult to argue successfully that they do not have legal responsibility.
In the short term, serious consideration needs to be given to the appropriate allocation of risk in a licence of AI software (if applicable), as discussed further in this post. Doing so will require an understanding of how the AI operates and how it is intended to be used. Particular care should be taken if the AI will have access to public sources, such as the internet.
In the long term, developers should look to control the risk by embedding recognition of intellectual property rights within the AI's code itself. Developers of AI will need to teach their software to respect the rights of third parties, particularly if the AI is so advanced that the process by which it completes tasks is outside the control of the operator. A failure to do so could lead to a future finding that the developer, rather than the user, is the person "most closely connected" to the infringing act.