AI systems, however otherworldly their brilliance may appear, are of course built by developers with human fallibilities that are liable to be reflected in the systems they design and build. However, many forms of AI, in particular in the influential field of machine learning and its sub-field of deep learning, are notoriously resistant to clear explanations of how the system arrived at a particular result. As autonomous decision-making becomes increasingly mainstream, there is a danger that more and more of the decisions in our day-to-day lives will be taken by computer systems whose decision-making processes are very hard, or even impossible, to interrogate in conventional ways.
Terence Bergin QC is one of the country’s pre-eminent tech disputes practitioners. He has appeared in many of the most important cases relating to the supply of computer systems over the past 20 years, was twice awarded Chambers & Partners IT Junior of the Year before taking silk, and, together with Quentin, has recently published on the subject of explaining AI in light of the relevant ICO guidance. Quentin Tannock’s past lives include turns as a solicitor, a venture capital executive, and a teacher at Cambridge University. He has a broad commercial practice, with particular expertise in tech and contentious IP, in which areas he may well be among the most technologically knowledgeable juniors at the bar.
You can listen to this new episode on the player below, or find all episodes on our Podcasts page, as well as on Spotify and iTunes.
We very much hope you enjoy this week’s podcast and thank you for your continued support of the TDN.