Working with the Machines: 5 key things to consider when dealing with AI

This week, Alan Turing, the father of modern computing and artificial intelligence, was announced as the face of the new £50 note. Computing and artificial intelligence (AI) have come a long way since Turing’s codebreaking work at Bletchley Park. AI is now used to power personal assistants, identify and diagnose diseases, improve logistics and transportation, draft legal documents, and predict an offender’s likelihood of reoffending. Yet, despite the impressive pace at which the technology is developing, AI continues to present challenges for legislators and regulators.

The following are key points that programmers and users of AI should note when using the technology:

  1. AI challenges basic intellectual property assumptions: English copyright law currently distinguishes between human-generated and computer-generated work. The author of a computer-generated literary, dramatic, musical or artistic work is “the person by whom the arrangements necessary for the creation of the work are undertaken”. In the context of AI, however, the identity of this person is unclear. Separately, the law is silent on whether AI-generated work even meets the “originality” requirement needed for copyright protection. Beyond that, a number of questions remain. Should AI-generated work be protected by copyright at all, given the extensive period of protection potentially provided? Who is the author? What is, or should be, the term of protection? Where the AI process creates something patentable, can, and should, this be the subject of a patent? Should the “inventor” be the AI?
  2. Does the provision of data to an AI algorithm in itself infringe copyright? AI works by analysing and learning from datasets. Users should make sure that they have the right to provide any data to the AI algorithm for it to copy and manipulate; otherwise, they risk infringing the copyright or database rights in the underlying datasets. Providers of datasets should note that it may not always be possible to determine the extent to which the AI algorithm mines and uses their data, given the lack of transparency around how the algorithm works (the “black box” problem). Challenging any use of datasets may therefore be difficult.
  3. How do we allocate risk and reward to AI’s actions? Three “entities” are responsible for the outputs and actions of an AI process: the programmer, the user and the algorithm itself. Our current legal framework does not attribute ownership and liability to algorithms; but should it? In the meantime, how do we fairly allocate liability between the programmer and the user? Do we risk stifling innovation by attributing liability to the programmer and user? Where there are multiple programmers, all providing separate lines of code for the AI algorithm, how do we allocate risk and reward between them? These, alongside other questions thrown up by the points summarised here, were discussed at the recent “AI: Decoding IP conference”, held by the UK’s Intellectual Property Office and the World Intellectual Property Organisation, which Mishcon de Reya attended.
  4. Data protection: Because AI processes train with, and make decisions based on, datasets, there is a risk that algorithms will make decisions based solely on automated processing. Where the subject of such a decision is an individual, the individual’s enhanced data protection rights under the GDPR must be considered. The Information Commissioner’s Office is developing its approach to auditing and supervising AI applications, so further data protection regulation is expected, in the form of measures that organisations using AI must have in place to comply with their data protection obligations.
  5. AI and ethics: Users and programmers of AI technology should be mindful of algorithmic bias (which is largely a function of the quality of the data provided to the algorithm) and of the various international AI ethical frameworks. As an example, the EU’s guidelines require AI to be “trustworthy” and are being piloted this year.
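The link between data quality and algorithmic bias can be illustrated with a toy example. The sketch below (in Python, using only the standard library, with an entirely hypothetical dataset and a deliberately naive “model”) shows how a decision rule learned from skewed historical records simply reproduces that skew, regardless of an individual’s actual merits:

```python
# A minimal sketch of how algorithmic bias can arise from training data.
# The groups, records and decision rule are hypothetical, for illustration only.

from collections import defaultdict

# Hypothetical historical decisions: (group, qualified, hired)
history = [
    ("A", True,  True),  ("A", True,  True),
    ("A", False, True),  ("A", True,  True),
    ("B", True,  False), ("B", True,  True),
    ("B", True,  False), ("B", False, False),
]

# "Train" a naive model: learn the per-group hire rate from the history.
hires = defaultdict(int)
totals = defaultdict(int)
for group, _qualified, hired in history:
    totals[group] += 1
    hires[group] += hired

rates = {g: hires[g] / totals[g] for g in totals}

# The model predicts "hire" whenever the learned group rate exceeds 0.5,
# taking no account of individual qualification - the skew is reproduced.
def predict(group: str) -> bool:
    return rates[group] > 0.5

print(rates)         # {'A': 1.0, 'B': 0.25}
print(predict("A"))  # True  - even an unqualified candidate from group A
print(predict("B"))  # False - even a qualified candidate from group B
```

The point is not that real systems are this crude, but that a model can only be as fair as the data it learns from: the historical preference for group A is baked into the learned rates and re-applied to every future decision.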

These legal issues are just a selection of the hurdles to overcome as the use of AI becomes more prevalent. Conquering these issues will require a joint effort between regulators, policymakers and business, and we fully expect the introduction of legislation in this area in the near future.