10 December 2019 | By Michael Thaidigsmann, Marion Lupin
“AI is nothing to be afraid of”
At a panel discussion hosted in the European Parliament, experts discussed the challenges posed by AI in the legal field.
“Artificial Intelligence is the talk of the town at the moment, but very few people know what it means,” says Karen Melchior, a Danish member of the European Parliament. On Friday 6 December, she hosted a breakfast meeting in the European Parliament for 40 company lawyers, at which participants discussed the implications of AI for politicians and corporate legal departments. The event was jointly organised by the European Company Lawyers Association (ECLA), ACC Europe and the Joint Law Societies Office in Brussels.
The debate was chaired by Ilse de Loof, General Counsel of LBC group. Panellists included Helena Raulus, Head of the Joint Law Societies Office; Aswin Parkunantharan from legal tech company Wolters Kluwer; and Quang-Minh Lepescheux, Director for EU Government Affairs at Microsoft in Brussels.
As Lepescheux immediately pointed out, there is currently no standard definition of AI, and the general term can easily mean different things to different people. This difficulty in defining AI may be linked to the struggle to define what constitutes intelligence in the first place. Regardless, there is no doubt that AI is already a reality in much of our daily lives, from social media algorithms to facial recognition tools, which regularly raise concerns about the right to privacy, as well as discrimination and possible human rights violations. Still, in Lepescheux’s opinion, as of the end of 2019 most existing AI tools can be satisfactorily regulated through existing legislation, from the GDPR and anti-discrimination laws to human rights conventions. Nevertheless, given the typically exponential pace of technological innovation, it is impossible to know today what AI will look like five or ten years from now, which is why the adoption of a “future-proof” regulatory framework, ideally at the international level, is becoming increasingly urgent.
Helena Raulus then addressed some of the key opportunities that AI presents for the legal services sector, in particular how algorithms could help identify which legal documents are relevant to a given problem or, specifically for in-house lawyers, scan a wider job market to identify the best-fitting candidates. Nevertheless, transparency remains the primary concern with these algorithms, as it can be difficult to determine whether they operate within ethical and legal boundaries or comply with privacy and data protection law. For this reason, in Raulus’ opinion, the next step in the evolution of AI, especially within the legal sector, should be to have algorithms explain and show how they concretely reach their decisions, so that any misconstructions, biases or programming errors can be identified.
Still, as Quang-Minh Lepescheux pointed out, in practice a human being is still very much needed behind an algorithm to control, assess and, if necessary, adjust the formula, and the fictional scenarios in which AI one day “takes over” from humans and starts making decisions independently remain, at the very least, far off. “AI will do the dirty job and the heavy lifting, but it will maybe allow lawyers to focus more on the human review and analysis. We have a lot of talk in the legal profession about easier decision-making. However, each case is unique.”
Perhaps surprisingly, the Microsoft manager also called for a regulatory push at the European level, in order to better define the space and usage of AI within society. “It is very easy to have flaws in the system, so it is very important to make sure that data fed to algorithms takes into account principles such as fairness, inclusiveness and transparency,” he said, adding that accountability is also paramount for a good and transparent use of AI: “We should have independent third parties that can audit AI systems and see how they have been built.”
Likewise, all panellists raised the problem of potential discrimination through AI applications, for instance against people with dark skin or against women. They cited recent news stories revealing how implicit bias can easily, and sometimes unnoticeably, be passed on from programmers to AI through compromised data sets, leading to a wide range of malfunctions, from faulty facial recognition to skewed credit limits.
The panel also discussed whether an impact assessment for AI applications is currently feasible. MEP Melchior pointed out that, unlike with other pieces of EU legislation such as the GDPR, there is still a chance to impose a regulatory framework before machine-learning-based technology becomes too widespread. Recently, the Legal Affairs Committee of the European Parliament, of which Melchior is a member, was tasked with assessing the ethical aspects of AI, but other committees also aspire to take the lead on the subject, so a definitive position from the Parliament cannot be expected in the immediate future.
In conclusion, Aswin Parkunantharan said AI would “change our lives, but not in a way that we have to be afraid of.” The Wolters Kluwer representative elaborated: “We need communication, we need a combination between innovation and human beings. AI cannot be sensitive like humans, it will never be a substitution for the interaction between machines and humans.” He called for regulation at all levels: international, European and local.
Quang-Minh Lepescheux outlined the challenges for such a new regulatory structure: “We need to shape a framework that allows companies to develop their own markets.” On this point he was also critical of EU lawmakers: “I hear a lot of MEPs holding very protectionist speeches. But we should think of something more open, something that allows for partnerships between internet giants and policymakers and smaller companies.” He praised the multi-stakeholder approach, arguing that “with digital topics, you really need to have everyone on board.”
Melchior added that, when planning to regulate AI, it is also important to look at the field of competition law, especially regarding the link between data and market dominance. She voiced optimism that the EU would have a basic legislative framework in place within the next five years on how to use AI, especially with respect to machine learning and deep learning.
Helena Raulus also observed that there is still time to come up with appropriate solutions: “Things are moving a lot more slowly than we think.” Lepescheux agreed: “Microsoft developers have a difficult time when it comes to making predictions. But maybe we will get machines soon with intuition, too.”