Analysis Archives - Society for Computers & Law
https://www.scl.org/tag/analysis/

Exploring Competition in Cloud and AI Podcast: Episode 4: The EU Data Act and Cloud Analogies (30 April 2025)
https://www.scl.org/exploring-competition-in-cloud-and-ai-podcast-episode-4-the-eu-data-act-and-cloud-analogies/

We have teamed up with the LIDC (International League of Competition Law) to share a series of podcasts examining some of the increasingly pressing questions around cloud computing, AI and competition law.

Over seven episodes, recorded in November 2024, Ben Evans, Shruti Hiremath and guests will look beyond the current position to identify some of the pressures the changing landscape will bring to bear.

Episode 4: The EU Data Act and Cloud Analogies

Are analogies between cloud and open banking and telecoms appropriate? A deep dive into the EU Data Act and the potential unintended consequences

Building on the discussion in episode 3, this episode 4 analyses the cloud provisions of the EU Data Act with reference to an influential and widely cited paper co-authored by Ben Evans and Sean Ennis. The panel explore the concept of ‘equivalence’ between cloud services and question the merits of the controversial ‘functional equivalence’ requirement, which is designed to boost switching between cloud providers. This leads to a discussion over whether the analogy between cloud computing services, which exhibit high degrees of feature complexity and innovation, and banking services, which exhibit both a limited number of key features and a relatively low level of innovation, is appropriate. As articulated by the authors in an earlier SCL article, it is suggested that these two differences are critical for considering the nature and focus of future cloud regulation and may limit the value of analogies to prior experiences with portability and interoperability.

Moreover, the panel considers the authors’ observation that a significant number of cloud customers already have the possibility and incentive to account ex ante at contract stage for the trade-off between complexity and customisation in service functionality and ease of portability and interoperability. The discussion turns attention to profound concerns that the Data Act may have the unintended consequences of disincentivising innovation, strengthening the position of incumbents, and harming smaller cloud service providers by inter alia effectively commoditising cloud services to the extent that competition is reduced to price competition.

Panel

Ben Evans (Chair) is a Postgraduate Researcher at the School of Law and Centre for Competition Policy, University of East Anglia. He is a member of the LIDC Scientific Committee.

Shruti Hiremath is Counsel in the Clifford Chance Antitrust Team in London.

Lauren Murphy is Founder and CEO of Friday Initiatives.

Sean Ennis is Director of the Centre for Competition Policy and a Professor of Competition Policy at Norwich Business School, University of East Anglia.

The LIDC NEX GEN Podcast Series on ‘Competition in Cloud and AI’ explores some of the most topical and hotly debated questions with a panel of leading international experts from academia, legal practice and industry.

The series was recorded on 7 November 2024, and the views and opinions expressed therein reflect the legal context and state of affairs up to that date.

You can also watch or listen via the LIDC website, YouTube and Spotify.

Exploring Competition in Cloud and AI Podcast: Episode 3 – Dissecting Cloud Competition (25 April 2025)
https://www.scl.org/exploring-competition-in-cloud-and-ai-podcast-episode-3-dissecting-cloud-competition/

We have teamed up with the LIDC (International League of Competition Law) to share a series of podcasts examining some of the increasingly pressing questions around cloud computing, AI and competition law.

Over seven episodes, recorded in November 2024, Ben Evans, Shruti Hiremath and guests will look beyond the current position to identify some of the pressures the changing landscape will bring to bear.

Episode 3: Dissecting Cloud Competition

The investigations of the UK CMA and an introduction to the EU Data Act.

In episode three, the panel begin exploring the five-fold concerns raised by the UK CMA in its issues statement in relation to its cloud market investigation. First, the authority has expressed concern that potential market concentration may be limiting choice. Although a number of large firms hold substantial market share in public cloud, the existence of on-premises and hybrid cloud solutions may temper concerns. Second, the CMA is worried that data transfer fees may deter switching, an issue that has been addressed in the EU under the cloud provisions of the recently enacted Data Act. Third, there is a concern that impediments to portability and interoperability may create dependencies or impair customers’ ability to move assets and to integrate across providers. Although such concerns may be valid, the panel considers the reality that market-based solutions are already developing, with industry consortia and voluntary standards bodies emerging without the need for regulatory interference. Fourth, the CMA has considered whether committed spend agreements limit customer flexibility and cause lock-in. Any intervention should be mindful of the benefits of such agreements to consumers in terms of cost savings and price stability. Finally, unfair licensing practices have come under scrutiny and there is a legitimate question as to whether some large providers may restrict competition by, for example, requiring additional fees or adherence to restrictive terms when customers use software from rival providers.[1]

While there has been substantial regulatory interest in Japan, the Netherlands, South Korea and France, all of which have completed cloud market studies, and in Spain and the USA, which have started investigations, the UK authority has advanced arguably the most detailed research and analysis of competition in the sector. The panel observes that despite this, the initial conclusions reached by the CMA and the referring authority Ofcom do not necessarily follow from the empirical market research that underpins their respective studies. Indeed, this is an issue that has been raised by Ben Evans and Sean Ennis in their co-authored consultation responses to the CMA and Ofcom. The evidence suggests that customers are generally still on the ‘way in’ on their cloud journey and that, as opposed to provider restrictions, one of the key factors leading to lock-in may be that those firms do not yet have the in-house technical capability to initiate a cost- and time-efficient switch.


[1]  Since the recording of the podcast, the CMA has published its Provisional Decision Report on 28 January 2025. Further details are available at: https://www.gov.uk/cma-cases/cloud-services-market-investigation#provisional-findings.

Panel

Ben Evans (Chair) is a Postgraduate Researcher at the School of Law and Centre for Competition Policy, University of East Anglia. He is a member of the LIDC Scientific Committee.

Shruti Hiremath is Counsel in the Clifford Chance Antitrust Team in London.

Lauren Murphy is Founder and CEO of Friday Initiatives.

Sean Ennis is Director of the Centre for Competition Policy and a Professor of Competition Policy at Norwich Business School, University of East Anglia.

The LIDC NEX GEN Podcast Series on ‘Competition in Cloud and AI’ explores some of the most topical and hotly debated questions with a panel of leading international experts from academia, legal practice and industry.

The series was recorded on 7 November 2024, and the views and opinions expressed therein reflect the legal context and state of affairs up to that date.

You can also watch or listen via the LIDC website, YouTube and Spotify.

Cybersecurity Monitoring Centre: Bringing greater legal clarity to complex cyber events (24 April 2025)
https://www.scl.org/cybersecurity-monitoring-centre-bringing-greater-legal-clarity-to-complex-cyber-events/

Edward Lewis, CEO of CyXcel, on the genesis of the Cyber Monitoring Centre

Without question, cybercrime is one of the leading threats facing every industry today.

Ransomware remains not only rampant but devastatingly expensive, with average ransomware payments having increased 500% year over year to $2 million in 2024. What’s more, these payments account for just part of the cost. Excluding ransoms, the average cost of recovery now stands at $2.73 million.

For organisations to withstand such significant financial impacts, cyber insurance has become invaluable. However, from a legal perspective, this is a landscape that has continued to throw up challenges, debate, ambiguity and several headaches in recent years.

Lloyd’s of London’s 2023 introduction of a policy requiring insurance group members to exclude liability for losses arising from state-backed cyberattacks is a prime example – one that remains contentious even today, owing both to attribution challenges and to its conflation of systemic cyber risk with cyber war.

The former of these challenges has proven to be particularly troublesome. Given the potential costs of recovery involved in cyberattacks, many small- and medium-sized businesses are simply unable to cope with delayed cyber policy payouts resulting from disputes over attribution. These are organisations that need rapid financial support in days or weeks, not months or years.

What is the CMC?

This is where the newly launched Cyber Monitoring Centre (CMC) aims to provide a solution by enhancing legal clarity.

An independent non-profit led by a technical committee comprising non-insurance experts from across academia, cybersecurity, public policy, defence and law, the CMC has developed a standardised scale that categorises the impact of cyber incidents. I have been privileged to be a part of the leadership driving the initiative.

The CMC framework works in a similar way to the Saffir-Simpson Hurricane Wind Scale, assigning a severity rating to cyber incidents using a simple five-point scale ranging from one (least severe) to five (most severe). These ratings are based on the economic impacts of incidents, starting at £100 million for category one events and rising to more than £5 billion for category five. Further, each categorisation is supported by an event report, all of which will be available freely.   

Using a wide range of data and analysis to assess incidents, a key goal of the CMC is to address the long-standing challenge of legal ambiguity in the cyber insurance landscape by providing a consistent, market-wide framework for defining systemic cyber events.

Until now, the severity of cyber incidents has been notoriously difficult to quantify for several reasons.

First, there is no universal impact metric in relation to cyber incidents. While the financial loss, casualties and recovery times of physical disasters are well understood, cyberattacks can impact organisations in a variety of different ways. While a ransomware attack might cripple one company, the same attack may cause only minor problems for another.

Secondly, there are significant challenges around the availability of data. Indeed, many incidents are never disclosed due to reputational risks and legal concerns. Even when they are, organisations often underreport the impact, downplaying the full extent of the damage. As a result, building an accurate severity model becomes more difficult.

Thirdly, cyberattacks are rarely one-off events that end with a single victim. Supply chains, financial markets and critical infrastructure may all be impacted by attacks in ways that are tricky to quantify, with traditional methods of measuring impact focusing too much on direct costs while not considering the wider consequences.

These hurdles have made underwriting challenging in relation to cyber insurance – until now. By establishing a common framework for measuring severity, and aggregating data across sectors, the CMC is striving to overcome these existing challenges and provide a clearer, more quantifiable picture of cyber risks.

What are the benefits it provides?

The benefits of a consistent standard to measure the severity of cyber incidents can be significant for a variety of different stakeholders, bringing clarity to what has historically been a complex process.

Policymakers and regulators will gain a much clearer view into cyber risks at scale, ensuring that resources can be better allocated to combat threats and regulations can be introduced that more effectively enhance nationwide resilience.

Organisations, meanwhile, will be able to assess incidents with a standardised method, helping them to identify and eliminate potential vulnerabilities across their network. Again, this will enhance long-term resilience planning.

For insurers, the CMC’s classifications can help to improve the way in which they cover systemic cyber incidents: attacks that impact large parts of the business community and that are difficult to insure against due to their scale.

At present, insurers that do offer cyber solutions have typically relied upon multiple exclusions to define the events that they will cover. However, this can lead to the development of cumbersome, complex and confusing policies. Moving forward, it is envisioned that insurers could eventually simplify policy language by referring directly to the CMC classification to define the limits of their cover.

Challenges include scope limitations, data availability and model evolution

Critically, these changes may serve to make cover more attractive and accessible to businesses, especially SMEs, helping to address the growing challenge of attribution issues and policy disputes. However, no initiative of this scale is without its hurdles.

A key risk that may determine the effectiveness of the CMC relates to scope limitations. While the CMC is primarily focused on financial and operational impact, some cyber incidents, such as those affecting the health and transport sectors, could have life-threatening consequences, which must also be considered.

Keeping up with evolving threats will also be a challenge. With cyberattacks constantly changing and shifting, the CMC will need to tweak its models over time to ensure relevancy. And in large part, that relevancy will rely on industry buy-in and data availability. If participation is patchy, or companies hold back key details, the CMC’s outputs may be less reliable.

Despite these challenges, the CMC holds major promise. Indeed, it has the potential to transform cyber insurance by providing a consistent, market-wide framework for defining systemic cyber events and bringing greater clarity to the understanding of often complex cyber events.

However, that success ultimately depends on continued collaboration between government, industry, and cybersecurity professionals, with widespread adoption key to ensuring the framework’s relevance and effectiveness for years to come.

Edward Lewis, CEO of CyXcel

Exploring Competition in Cloud and AI Podcast: Episode 2 – Alternative Visions (18 April 2025)
https://www.scl.org/exploring-competition-in-cloud-and-ai-podcast-episode-2-alternative-visions/

We have teamed up with the LIDC (International League of Competition Law) to share a series of podcasts examining some of the increasingly pressing questions around cloud computing, AI and competition law.

Over seven episodes, recorded in November 2024, Ben Evans, Shruti Hiremath and guests will look beyond the current position to identify some of the pressures the changing landscape will bring to bear.

Episode 2: Alternative Visions

A look at the emerging alternative visions of the AI stack around the world.

Episode 2 considers alternative visions for the AI stack. The discussion begins with the emergent ‘EuroStack’, a strategic initiative launched in the European Parliament in 2024 to develop independent digital infrastructure across all layers of the stack and reduce reliance on non-EU technologies. At a high level, this approach represents a significant transition away from the prevailing regulatory approach focussed on competition in certain components of the stack towards an infrastructural approach driven by ambitious industrial policy. The panel proceeds to reflect on the approaches of different international jurisdictions, focussing in particular on the development of digital public infrastructure in emerging markets, and the issue of sovereignty. Crucially, the Indian examples of the Unified Payments Interface and the Open Network for Digital Commerce provide evidence that digital public infrastructure can promote significant competition. This prompts the panel to question whether regulatory intervention is necessary if there exists a sufficiently developed digital public infrastructure. Of course, it is essential that government initiatives are not mandated to the detriment of market-based solutions and are instead offered as alternatives. Ultimately, the co-existence of digital public infrastructure and private firm offerings may lead to a healthy competitive market.

Panel

Ben Evans (Chair) is a Postgraduate Researcher at the School of Law and Centre for Competition Policy, University of East Anglia. He is a member of the LIDC Scientific Committee.

Shruti Hiremath is Counsel in the Clifford Chance Antitrust Team in London.

Lauren Murphy is Founder and CEO of Friday Initiatives.

Sean Ennis is Director of the Centre for Competition Policy and a Professor of Competition Policy at Norwich Business School, University of East Anglia.

The LIDC NEX GEN Podcast Series on ‘Competition in Cloud and AI’ explores some of the most topical and hotly debated questions with a panel of leading international experts from academia, legal practice and industry.

The series was recorded on 7 November 2024, and the views and opinions expressed therein reflect the legal context and state of affairs up to that date.

You can also watch or listen via the LIDC website, YouTube and Spotify.

Another Chinese court finds that AI-generated images can be protected by copyright: the Changshu People’s Court and the ‘half heart’ case (15 April 2025)
https://www.scl.org/another-chinese-court-finds-that-ai-generated-images-can-be-protected-by-copyright-the-changshu-peoples-court-and-the-half-heart-case/

Chinese courts take a different approach to the issue of AI generating copyright protected images, the DLA Piper team reports.

On 7 March 2025, the Changshu People’s Court (in China’s Jiangsu province) announced that it had recently concluded a case on the topical issue of whether AI-generated works can be protected by copyright. In the case, a plaintiff surnamed Lin used the AI tool Midjourney to create an image, and then Photoshop to further refine it. The image depicted a half-heart structure floating on the water in front of a cityscape, in which the other half of the heart was ‘completed’ by its reflection in the water. The plaintiff posted the image on social media and also obtained copyright registration for the image in China. An inflatable model company and a real estate company posted images substantially similar to the plaintiff’s image on their social media accounts and the inflatable model company’s 1688 online store, and also created a real 3D installation based on the image at one of the real estate company’s projects. The court found for the plaintiff, requiring that the inflatable model company publicly apologise to the plaintiff on its Xiaohongshu (RedNote) account for three consecutive days, and that the defendants compensate the plaintiff for economic losses and reasonable expenses totalling RMB 10,000. Although both the plaintiff and the defendants had rights of appeal, neither party appealed and the decision is now effective.

In reaching its decision, the court first examined the Midjourney user agreement which stipulates that the rights in outputs prompted by users belong to the user with very few exceptions. The court then examined the iterative process by which Midjourney users can modify the prompt text and other details of the output images. On this basis, the court held that the plaintiff’s crafting of their prompt and subsequent modification of the image reflected their unique choices and arrangement, making the ultimate image an original work of fine art protected by copyright. The defendants infringed the copyright in that image by disseminating it online without the plaintiff’s permission and using it without naming the plaintiff as the author. However, the court held that the copyright enjoyed by Lin was limited to the 2D image as recorded in the copyright registration certificate (rather than the idea of the 3D half-heart art installation as depicted in the image); the construction of the physical 3D installation by the defendants based on the central idea of Lin’s work (i.e. a half-heart floating on the water, an idea used by many prior works) did not infringe Lin’s copyright.

In the court’s WeChat post, some illustrative comments were shared by Hu Yue, Deputy Director of the court’s Intellectual Property Tribunal. “The premise for AI-generated content to be recognised as a work is that it should be able to reflect the original intellectual input of a human,” Hu states. He comments that “for creators, this judgement is a ‘reassurance’. It clarifies that creators who use AI tools to create have legal copyright over their works provided that the works have innovative design and expression (…) In addition, this case lawfully determined that the use of the ideas and concepts of another person’s work does not constitute infringement, which avoids overprotection of copyrights and abuse of rights, and is conducive to guiding the people on how to further innovate on the basis of using AI.”

Our comments

Cases involving generative AI and IP issues are going through courts around the world. US cases dominate, particularly on the issue of whether use of copyright works to train an AI model constitutes copyright infringement. However, courts in China have been notable for their boldness on the issue of copyright subsistence. Decisions in 2019 and 2020 from the Shenzhen City Nanshan District People’s Court, the Beijing Internet Court and the Beijing Intellectual Property Court have all found that AI-assisted text-based works could be protected by copyright. Most importantly, the Beijing Internet Court in November 2023 issued a significant decision in which it held that the plaintiff enjoyed copyright in an image generated using the AI tool Stable Diffusion. It was critical to the decision that the plaintiff had engaged in a process of “intellectual creation” by independently designing and refining the features of the image through several rounds of input prompts and parameter adjustments, and by making artistic choices regarding the final outcome. Applying similar reasoning, this latest case from the Changshu People’s Court is the second in China granting copyright protection to AI-generated images reflecting the “original intellectual input of a human”.

The relative willingness of Chinese courts to find subsistence of copyright in AI-generated works created by user prompts can be compared with the position in the United States, where the United States Copyright Office has refused protection for AI-generated visual artworks in at least four cases. Guidance issued by the Office in March 2023 and January 2025 reiterates that: copyright protects only materials that are the product of human creativity; copyright protection is not available for purely AI-generated content, but human contributions to AI-assisted works are protectable, with protection analyzed on a case-by-case basis; and user prompts alone are insufficient to justify copyright protection for the output. The importance attributed to human input is shared with China; however, it is safe to say that a global consensus on this issue has yet to emerge.

In the meantime, China is becoming a world leader in both AI innovation and regulation. China’s National Intellectual Property Administration in December 2024 issued guidelines on patent applications for AI-related inventions, providing welcome guidance to firms seeking IP protection for innovations involving or assisted by AI. This follows the National Technical Committee 260 on Cybersecurity’s September 2024 release of an AI Safety Governance Framework, outlining principles for tackling AI-related risks in accordance with a “people-centered approach” and the “principle of developing AI for good.”

Edward Chatterton is a Partner at DLA Piper where he is Global Co-Chair of Trademark, Copyright and Media Group and Co-Head of IPT, Asia

Joanne Zhang is a Registered Foreign Lawyer (New York, USA) in the Intellectual Property & Technology team based in DLA Piper’s Hong Kong office. She is dually qualified in New York, USA, and China.

Liam is a Knowledge Development Lawyer in DLA Piper’s Intellectual Property and Technology group. He is based in the APAC region and focuses on trademark, copyright, media and artificial intelligence issues across the international practice.

Exploring Competition in Cloud and AI Podcast: Episode 1 – The Status Quo (11 April 2025)
https://www.scl.org/exploring-competition-in-cloud-and-ai-podcast-episode-1-the-status-quo/

We have teamed up with the LIDC (International League of Competition Law) to share a series of podcasts examining some of the increasingly pressing questions around cloud computing, AI and competition law.

Over seven episodes, recorded in November 2024, Ben Evans, Shruti Hiremath and guests will look beyond the current position to identify some of the pressures the changing landscape will bring to bear.

Episode 1: The Status Quo

The current state of competition law for cloud computing and what the regulators are up to now.

Episode 1 sets the listener up for a deep dive into cloud computing and AI later in the series with a high-level discussion of the key competition concerns that have been raised across the AI stack.

The AI stack broadly comprises four components: data, compute (encompassing chips and cloud computing), foundation models, and AI applications. The panel reflect on the recent media and policy focus on the compute component and the widely reported chip shortages that have led competition authorities in the EU and USA to investigate how supply is being allocated. While there may have been shortages, these shortages – and any related competition concerns – should be considered against the backdrop of a sudden surge in AI product development, which may not represent a forward-looking picture of chip supply. Indeed, the recent proliferation of new chip development from firms including AMD, Intel, Google, OpenAI and Amazon suggests that competition for the supply of chips is fierce.[1] Authorities around the world are also showing considerable interest in cloud competition, focussing in particular on potential barriers to switching and interoperability. Episodes 3 and 4 are dedicated to exploring these issues in depth.

Turning attention to foundation models, the panel introduces concerns raised in particular by the UK Competition and Markets Authority (CMA) and the French Competition Authority (FCA) that firms perceived as controlling key inputs – principally data, cloud and skills – may restrict access in order to shield themselves from competition. Further concerns raised by authorities include the risk that cloud providers could exploit their market positions to distort foundation model choice, potentially engaging in self-preferencing à la Google Shopping (Case C-48/22 P Google and Alphabet v Commission). This discussion whets the appetite for a dissection of AI competition in a later episode.

Bringing the introductory session to a close, the panel also touches on concerns being raised by competition authorities that firms may be using strategic partnerships to reinforce, expand or extend existing market power through the value chain. This thorny issue is explored in greater detail later in the podcast series in an episode focussed on mergers and acquisitions, but at the outset thought is given to the importance of protections for investors in nascent technologies, with a parallel drawn to the pharmaceutical industry.

Panel

Ben Evans (Chair) is a Postgraduate Researcher at the School of Law and Centre for Competition Policy, University of East Anglia. He is a member of the LIDC Scientific Committee.

Shruti Hiremath is Counsel in the Clifford Chance Antitrust Team in London.

Lauren Murphy is Founder and CEO of Friday Initiatives.

Sean Ennis is Director of the Centre for Competition Policy and a Professor of Competition Policy at Norwich Business School, University of East Anglia.


[1]  Further recent developments such as the development of more efficient models like DeepSeek’s R1 have also raised questions on the continued need for a large number of chips.

The LIDC NEX GEN Podcast Series on ‘Competition in Cloud and AI’ explores some of the most topical and hotly debated questions with a panel of leading international experts from academia, legal practice and industry.

The series was recorded on 7 November 2024, and the views and opinions expressed therein reflect the legal context and state of affairs up to that date.

You can also watch or listen via the LIDC website, YouTube or Spotify.

Software Quality and Testing: A Primer (2 April 2025)
https://www.scl.org/software-quality-and-testing-a-primer/

William Hooper asks: What do lawyers need to know about the assurance of quality in software to contract for it effectively? How do litigators draw on this to prove or defend a claim? His view is that avoiding “system melt-down” seems wiser than dealing with it afterwards.

What is Software Testing?

Suppliers test systems to assess whether they do what they should do (functional testing) in a way that meets the customer’s need (non-functional testing). As such, it is the principal approach used to assure quality. Consideration of testing is useful both to transactional lawyers seeking to draft agreements that protect their clients’ interests and to contentious lawyers seeking to establish a claim.

If you have developed a spreadsheet and want to check whether it adds correctly, you may enter input data of 2 and 3, expecting to get the answer 5. If the actual result is what was expected, you call it a “pass.” If not, it is a “defect.” A useful “test report” contains details of the steps taken in testing by reference to the “test case,” of the input data, the result, and the deviation that leads you to believe it to be defective. In this way, when a developer is passed the defect for resolution, they may replicate the test as an early step in their triage, diagnosis, and fix.
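To make the example concrete, here is a minimal sketch of the same check expressed as an automated test, using Python's built-in unittest module. The add() function is purely illustrative and stands in for the spreadsheet logic; nothing here is specific to any particular product.

```python
# A minimal sketch of the spreadsheet check above as an automated test.
# The add() function is hypothetical and stands in for the cell logic under test.
import unittest

def add(a, b):
    """Component under test: expected to return the sum of its two inputs."""
    return a + b

class AdditionTest(unittest.TestCase):
    def test_two_plus_three_is_five(self):
        expected = 5          # expected result from the test case
        actual = add(2, 3)    # input data: 2 and 3
        # A match is a "pass"; a mismatch is a "defect", and the message below
        # records the deviation that would go into the test report.
        self.assertEqual(actual, expected,
                         f"Defect: expected {expected}, got {actual}")

if __name__ == "__main__":
    unittest.main()
```

Run with python -m unittest, a pass produces nothing beyond the summary; a failure reproduces the deviation that a developer would replicate during triage.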

Why Test?

The fundamental assumption is that if one looks for trouble before launching a product, one can address it before it harms anyone. Thus, the product is more likely to be satisfactory for users than if testing is inadequate.

Software engineers have long been aware that if they identify defects early in the process of development, they can fix them more cheaply than if the work has advanced. The reason is that the process of delivery involves bringing many components together. When a defect is discovered early on, just one component (that being developed) is affected. When found later, many others have been closely crafted to fit with the first, so each of these needs to be adapted and re-tested, first in isolation, then in combination. So, the impact is magnified. This is not a linear increase. If a fault is found only in live operation, the user population, support staff, documentation, data for processed transactions may all be affected. There can also be commercial fall-out as compensation or reputation are damaged. In this way, good testing is related to commercial success and profitability for the developing organisation and customer.

Risk and Testing

The aim of testing is to give reasonable assurance to those charged with developing and launching the system that it is ready for use and is likely to deliver greater benefit than harm.

This does not assure that the system is free of defects. No such guarantee can be given. Because of this, there is a residual level of risk. Managers decide whether testing has been appropriately rigorous to reduce the risk of harm to an acceptable level. If they delay launch to conduct more testing, there can be competitive and commercial consequences from this. So there are trade-offs to be made.

The conscious assessment and containment of risk is at the heart of good test design. This is assisted by the test managers’ having a good understanding of the intended business context of use, so that they focus their efforts on what is most important. The place to look for this is an over-arching document describing the project’s approach to testing, often called the “test strategy.”

In the most egregious cases, a system may be launched with little, or inadequate testing. The press, social media, customers, and regulators can be brutal in response.[1]

Some industries have developed sophisticated methods to address risks. Nuclear, aerospace and pharmaceuticals feature prominently. Such methods combine advanced management of the delivery process with considerations of risk and rigorous testing. West-Coast software developers have typically taken this on-board, moderated by methods such as progressive deployment and real-time monitoring of early responses to detect, react to and contain defects when they do occur.[2]

Types of Testing

There is a variety of types of testing with differing objectives. This results in each component being tested many times. When introducing a change, it is normal to repeat many of these. Types of testing that you may encounter include:

Functional

Unit – This is a set of tests normally performed by the person developing the component to validate that it performs the required function, such as the spreadsheet example above. One component may need to deliver several functions, each of which should have an associated test case. The unit is tested in isolation. Anything else the unit relies upon to function is simulated by programmes called “stubs” that deliver the result required from interfacing units and systems.

System – A system normally consists of more than one unit. In system testing, all the units are gathered and tested together, rather than relying on stubs. So, this encompasses looking at the interaction between component units.

Integration – A major system may have multiple elements, some from other suppliers or already in place within the customer’s environment. So, a finance system may interact with payroll and HR systems. Integration testing is a technical validation of the interactions and data flows.

Regression – Sometimes, when changing an element to fix one defect, it has unintended consequences, breaking another part of the system. Regression testing looks for such defects.

UAT – User Acceptance Testing is usually a late phase and is designed to address the question “is the system ready for business use?” It is not an exhaustive set of functional tests but is normally based on a few end-to-end scenarios.

It is normally required that functional testing should assure that the system does do what it should, or “positive testing.” It is wise also to check that it does not do what it should not, or “negative testing.” So, if you expect an input to the earlier spreadsheet example to be a positive integer, and the entry is either “-3” or “Friday” what does the system do? A helpful error message suggesting what is required is a good reaction; crashing is less good; producing an irrational answer is worse.
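As a hedged illustration of the positive and negative cases just described, the sketch below assumes a hypothetical component that should accept only positive integers and reject anything else with a clear error rather than crashing or producing an irrational answer.

```python
# Positive and negative tests for a hypothetical add_positive_integers() component.
import unittest

def add_positive_integers(a, b):
    # Illustrative validation: reject anything that is not a positive integer.
    for value in (a, b):
        if not isinstance(value, int) or value <= 0:
            raise ValueError(f"Expected a positive integer, got {value!r}")
    return a + b

class PositiveAndNegativeTests(unittest.TestCase):
    def test_valid_input(self):
        # Positive test: the system does what it should.
        self.assertEqual(add_positive_integers(2, 3), 5)

    def test_rejects_negative_number(self):
        # Negative test: input of -3 should raise a helpful error, not crash.
        with self.assertRaises(ValueError):
            add_positive_integers(-3, 3)

    def test_rejects_text_input(self):
        # Negative test: input of "Friday" should likewise be rejected.
        with self.assertRaises(ValueError):
            add_positive_integers("Friday", 3)

if __name__ == "__main__":
    unittest.main()
```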

A complex system is likely to support many processes. Each may have an expected path and various exceptional cases. Each should be tested to assure it works as expected. It is likely to be infeasible to test all combinations, hence the use of risk to prioritise what are selected.

Non-Functional

Security – The project’s security lead should have conducted a security risk assessment. This will assess the value of the system’s function and data, consider vulnerability, the risks of attack and the means by which these may occur. From that, counter-measures may be constructed and their efficacy tested. One commonly adopted type of security test is “penetration” or “pen” testing. In this, a trusted person is hired to attempt to penetrate the system’s defences and review its construction.

Performance – Express non-functional requirements as testable performance parameters. These can include elements such as response time; languages supported; support to disabled users; availability; capacity. Each parameter will have its own test.

User

Usability – Many systems need to operate effectively on a range of platforms such as mobile, PC, tablet. It is wise to validate that the system works effectively for the intended users, that they find the flow of interaction to be understandable and that it is effective in supporting them in their “jobs to be done.” [3] Usability testing explores aspects of the user experience.

Operational

Data Migration – If the new system is to take over from an existing one, there is likely to be data on historic transactions and assets that the new system will need access to. Assume that existing data has faults such as missing or corrupt fields. Permitted values may also differ between the old system and the new. Data migration testing runs alongside iterative cleansing of the data and its treatment to prepare it for the new system, and validates that transactions that are in-flight can be handled. A short illustrative validation check appears after the Support entry below.

Deployment – Users may need material such as documentation and training to prepare them for the new system. Assess the efficacy of such preparation before rolling it out.

Support – Conduct “Operational Acceptance Test” or “OAT” through the repeated review of checklists. Questions may include “do we have a set of knowledge articles prepared to support the service desk with known issues?” It validates that the support organisation is ready for the system’s launch.
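As flagged under Data Migration above, the following is a hedged sketch of the kind of automated check used when validating migrated data. The field names and permitted values are hypothetical and would in practice come from the new system's data model.

```python
# Hypothetical data-migration check: required fields present, values permitted.
REQUIRED_FIELDS = {"customer_id", "status", "balance"}
PERMITTED_STATUSES = {"open", "closed", "in_flight"}

def validate_record(record: dict) -> list:
    """Return the data-quality faults found in one migrated record."""
    faults = []
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            faults.append(f"missing or empty field: {field}")
    status = record.get("status")
    if status and status not in PERMITTED_STATUSES:
        faults.append(f"status {status!r} is not permitted in the new system")
    return faults

# Usage: run against a sample of cleansed legacy data and report the fault rate.
sample = [
    {"customer_id": "C001", "status": "open", "balance": "100.00"},
    {"customer_id": "", "status": "pending", "balance": "250.00"},  # two faults
]
for record in sample:
    print(record.get("customer_id") or "<missing>", validate_record(record))
```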

Code Inspection

It used to be widespread practice to require developers to submit code for human review. Whilst this is still used, it is normally now by exception and based on small sample sizes once a developer has established competence. Automated tools have taken over the bulk of the work.

Static Test – Automated tools are used to test the code for its ability to compile and for conformance to coding standards. The best organisations take this much further, using it to promote good practices in areas such as making code readable and declaring classes. Some tools automatically correct non-conformances.

The Limitations of Testing

Testing should be risk-based. It can provide assurance within the scope of the risks considered. It can say nothing of unimagined “black swan” combinations of behaviour and data modes.

The aim of testing is to establish “does the system behave as intended?” A frequent source of contention is that tests are designed against the designer’s intent. This may differ from that of the user and of the commissioning customer, especially where requirements are inarticulately expressed. Most software testing is silent on the quality of the specification and of associated design until user testing.[4] The modern use of iterative design brings this process forward to avoid unwelcome surprises later.

Managers may consciously or ignorantly limit the scope of test. Often, they do this to accelerate launch. Sometimes the bet pays off. Sometimes not.[5]

Test Systems, Data, Environments

Setting up systems that replicate the production environment, or a part of it, can be expensive in labour, hosting, and maintenance charges. This is less of an issue in these days of virtualised and containerised systems than it was when everything was physical. But it still has costs.

For a test to operate, it must have access to:

The system – at the appropriate release level for every component required (or stubs).

An environment – loaded with the system to be evaluated, all pre-requisites and data.

Test data – Getting hold of enough of the right data can be a real problem. The contract often defines this as a customer obligation, one that can be difficult, causing delay. Then the customer’s security staff object to putting sensitive live customer data into an unsecured environment.

Modern software engineering promotes “test driven development” (TDD). Under this approach a developer first writes the test cases, then develops the code to satisfy them. This puts testing at the heart of the development process. Automated testing assists greatly.
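A minimal sketch of that test-first cycle follows; the function name and the 20% rate are illustrative assumptions, not taken from any particular project.

```python
# Test-driven development in miniature: the test exists before the code it exercises.
import unittest

class VatTest(unittest.TestCase):
    # Step 1: write the test case first; it fails while add_vat() is unwritten.
    def test_vat_added_at_twenty_percent(self):
        self.assertEqual(add_vat(100.0), 120.0)

# Step 2: write just enough code to satisfy the test.
def add_vat(net_amount):
    return round(net_amount * 1.20, 2)

if __name__ == "__main__":
    # Step 3: re-run the tests and refactor while they stay green.
    unittest.main()
```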

So What? For Transactional Lawyers

Transactional lawyers are rightly reluctant to impose schedules defining detailed operational methods on the supplier. The tendering and selection process should have asked the supplier to describe what they will do and the ways in which they will assure quality. An informed advisor with operational experience of testing should review the submission and raise the right awkward clarification questions during negotiations. Then upload the combined method statement, questions, and responses to become a schedule of the agreement. I hope to assist the drafting lawyer by providing introductory context and understanding to detect distracting waffle and to focus on what matters to the client.

The diligence of risk assessment heavily influences the level of assurance provided by test, making risk an area to prioritise. It is also worth considering how to report the progress and outcome of test. A key measure is test coverage, being the proportion of planned cases that are assessed.

So What? For Contentious Lawyers

Should the quality of testing or the treatment of defects become the subject of dispute, the contentious lawyer will be working alongside an expert whom they need to instruct. There may be issues of breach and of tortious negligence, alongside consideration of associated loss. I hope that this article provides a guide to areas of test and their relation to the case that supports the lawyer in their management of the matter.

Once testing detects a defect, those investigating the case will be interested in whether the rate of fix is consistent with the planned schedule. They also look at whether defects accumulated in an uncontrolled manner or were simply and effectively despatched. Your expert should roll up their sleeves and mine the defect data for patterns that indicate systematic trends, bringing clarity on the issues to the court.

If experts differ, it is likely that the supplier’s expert will seek to give the impression that overall, quality was good despite obstacles erected by the customer. The supplier was heroic. The customer’s expert may bemoan the manifold and serious failings encountered across delivery and the accumulated defects that took months to resolve.

Conclusions

A good and diligent programme of testing gives useful assurance that software is likely to be dependable. It complements good design, resourcing, and delivery methods. Where testing is appropriate in coverage and diligence, strong assurance follows and decisions are sound. Where testing is unreliable, so are its results.

Good delivery organisations embrace thorough testing and weave it into their development plans. The poor postpone the day of reckoning. Is your head high, scanning for threats, or buried in the sand?


William Hooper acts as an expert witness in IT and Outsourcing disputes and a consultant in service delivery. He is a member of the Society for Computers & Law and a director of Oareborough Consulting. He may be reached on +44 7909 958274 or William@Oareborough.com


[1] https://www.bbc.co.uk/news/business-50471919

[2] Software Engineering at Google, Titus Winters, Tom Manshreck, Hyrum Wright, 2020, O’Reilly Pages 301-303

[3] Know your customers’ “Jobs to be done”, Clayton M. Christensen, Taddy Hall, Karen Dillon, David S. Duncan, Harvard Business Review, September 2016 https://hbr.org/2016/09/know-your-customers-jobs-to-be-done

[4] https://oareborough.com/Insights/assessing-design-quality/

[5] https://www.fca.org.uk/news/press-releases/tsb-fined-48m-operational-resilience-failings and

https://www.forbes.com/sites/kateoflahertyuk/2024/08/07/crowdstrike-reveals-what-happened-why-and-whats-changed

The ability of AI to increase access to justice (19 March 2025)
https://www.scl.org/the-ability-of-ai-to-increase-access-to-justice/

Beth Gilmour explores the potential benefits and limitations of using AI to increase access to justice in the winning article of the SCL AI Group Junior Lawyer Article Competition

Introducing Disruption
Imagine someone, sitting anxiously in a waiting room at a solicitor’s office they hastily found online. They clutch a notice of eviction in their hands, confused as to how their landlord can remove them from their home of ten years. Somewhere else, a woman waits on hold with a personal injury helpline after months of excruciating pain from slipping on an uneven paving stone, leaving her unable to work. Another man is on his commute home from work for the last time, frantically googling employment law, after being unexpectedly dismissed; he does not know what he can do or where next month’s rent will come from. Despite their differences, they all share one common question:


“Am I going to win?”


It is an age-old query coming in different forms: can I stay in my home; how much will I get in damages? Ultimately, they want to know about the outcome.


When considering how AI can increase access to justice, we must start there—with the outcome. Too often, solutions to the crisis of access to justice are framed within the constraints of existing systems, which rely on the current process being supported by technology (sustaining technologies). While these efforts are valuable, artificial intelligence (“AI”) offers an opportunity to work outside of these constraints and construct new routes to the same just outcomes (disruptive technologies).1 This account will consider how access to justice has already benefitted from AI and what could be done next.

Organising Disruption: Efficiency
For AI-driven disruption to meaningfully increase access to justice, its implementation must be guided by clear principles with efficiency as the cornerstone. In this context, efficiency encompasses both expediency and accuracy in results. Timely justice has been a consistent guiding ethos, with the Magna Carta stating, “To no one will we sell, to no one deny or delay right or justice.”2 This remains pertinent today: as explained by Zuckerman, the passage of time can diminish the value and enforceability of rights, making speed an essential element not just in procedure but in the dispensing of justice.3
Guided by efficiency and a willingness to move beyond traditional processes, AI can increase access to justice by providing faster, and still accurate, resolution that would otherwise take months of litigation.


Implementing Disruption
One clear example of AI systems enhancing access to justice is the rise of chatbots, such as AccessAva, which streamline legal information for those who need it most.4 AccessAva, developed by Carers UK in partnership with Access Social Care, is an online tool designed specifically for unpaid carers in the UK. It empowers users by providing easy-to-understand legal information, along with templates and resources which reduce the need for professional legal assistance. Other models include DoNotPay, which offers a similar service aimed at consumers. These chatbots are examples of Large Language Models (“LLMs”), which are trained on large amounts of text data and, in turn, generate natural language responses to a wide range of inputs.5 What sets models like DoNotPay and AccessAva apart is their focus on supporting litigants directly. They disrupt by allowing computers to speak the language of lawyers, which was previously unachievable. Further, they increase efficiency by centralising the information needed by the individual and making justice more accessible to those who might otherwise struggle with traditional systems due to financial constraints or a lack of understanding.

This exemplifies AI as a solution to problems highlighted by the 2023 Legal Needs Survey by the Law Society. A key finding was that, of people who faced a legal issue between 2019 and 2023, only 52% received professional help, with the rest either relying on family and friends or not seeking assistance at all.6 The survey highlights two key barriers: the cost of legal advice and a lack of understanding or confidence in engaging with the law. AI-driven platforms like AccessAva are precisely the kind of innovation that can overcome these obstacles, closing the gap and providing essential support to those who would otherwise struggle to access justice.

They can also be taken further. AI systems, particularly those using machine learning, can analyse patterns in large datasets to predict outcomes, which has the potential to take chatbots beyond the provision of legal information and into the realm of advice.7 For instance, researchers have used AI to predict the outcomes of European Court of Human Rights cases with 79% accuracy.8 A predictive capability like this has the potential to disrupt as it would allow an individual not only to understand what the next steps are but to make a well-informed decision on whether to pursue a claim at all. Users could ask whether they have a strong case, whether pursuing it is cost-effective, or what outcomes they might expect. Accessing legal advice is a key element of access to justice and an AI system which combines predictive outputs with user-friendly interfaces, like AccessAva and DoNotPay, has the potential to increase the number of people that such advice is available to.
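For readers curious what such a predictive system involves in practice, the sketch below is a deliberately simplified, hypothetical illustration of supervised text classification. It is not the model used in the ECHR study cited above, and the toy case summaries, labels and probabilities are invented for illustration only.

```python
# A toy illustration of outcome prediction as supervised text classification.
# Assumes scikit-learn is installed; all data here is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Past "cases": short summaries paired with known outcomes (1 = claim succeeded).
case_texts = [
    "tenant evicted without the notice period required by statute",
    "claimant slipped on an uneven paving stone maintained by the council",
    "employee dismissed after raising a grievance about unpaid wages",
    "claim issued years after the limitation period had expired",
]
outcomes = [1, 1, 1, 0]

# Learn word patterns associated with each outcome from the past decisions.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(case_texts, outcomes)

# A new enquiry: the model estimates a probability of success, which a lawyer
# would still need to audit before anyone relies on it.
new_case = ["worker dismissed without notice and without any prior warning"]
print(model.predict_proba(new_case)[0][1])
```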

Challenges
There are concerns which should be met head-on, the primary one being accuracy. If the architecture which underlies any predictive technology is wrong, the output will be too. Thus, any such model would have to be tightly regulated by humans (lawyers) with the knowledge of the underlying area and the ability to understand the dispute to ensure the algorithm does not result in litigants abandoning worthy claims. Legal minds will have a role at the point of data entry and in auditing the output. The changing role of the lawyer and the need for the legal sector to be reflexive with technological advancements is part of the disruption that access to justice solutions which use AI will bring about. The lawyer’s auditing role also extends to identifying “hallucinations”, whereby chatbots generate responses that are incorrect or fabricated. The risk is lower with bespoke systems using specialised legal data than with general-purpose chatbots like ChatGPT.9 Despite the reduced risk, verification by a qualified legal mind is still necessary to ensure accuracy, and therefore efficiency. With this safeguard, chatbots can help democratise legal assistance.
Another significant concern regarding the use of AI in legal practice is that it could stunt the growth of the common law. A classic illustration of this concern comes from considering Donoghue v Stevenson,10 a seemingly simple case where a woman drank from a bottle which, unbeknownst to her, had a decomposing snail inside. While the case involved a straightforward fact pattern, it went all the way to the House of Lords and ultimately established the "neighbour principle," a key development in negligence law. If fed into an AI advice system before this principle was established, the outcome might have been different, with the system possibly failing to recognise the broader legal implications of the case. This raises the concern that AI systems, by relying heavily on data from past decisions, might overlook the unique factors in a case that could lead to the establishment of new legal principles. If AI simply provides a binary answer ("good claim, pursue" or "no claim, tough luck"), it could ignore the nuanced, creative reasoning that legal professionals bring to the table. Whichever AI system is implemented, it must be sophisticated enough to recognise and flag unique features of a case that may not align with past precedents. These features could prompt lawyers to consider how the case might develop or whether it warrants a new interpretation of the law.


Conclusion
Embracing AI has the potential to reduce delays and empower individuals with legal information and guidance. This disruption must be managed carefully with human oversight to address challenges in ensuring accuracy and preserving the flexibility of legal interpretation.
This account comes from a legal perspective, without the technical expertise to explore the underlying technology. That knowledge gap should not, however, remove lawyers from the conversation. They bring essential industry insights and knowledge that are key to the reflexive relationship between law and tech.
Ultimately, with such collaboration and safeguards, AI can bridge the justice gap, ensuring that more people can ask and answer the crucial question, “Am I going to win?”

Beth Gilmour is the winner of the SCL AI Group Junior Lawyer Article Competition. Beth is a Bar student and is currently a Judicial Assistant to High Court Judges in England and Wales.

  1. Susskind RE, Tomorrow's Lawyers: An Introduction to Your Future, Chapter 6 "Disruptive Legal Technologies" (3rd edn, Oxford University Press 2023) ↩︎
  2. Magna Carta Clause 40 ↩︎
  3. Professor Adrian Zuckerman, Zuckerman on Civil Procedure: Principles of Practice, Chapter 1, p.17 (4th edn, Sweet & Maxwell 2021) ↩︎
  4. AccessAva, available at https://www.accesscharity.org.uk/accessava (accessed December 2024) ↩︎
  5. Robin Allen KC and Dee Masters, Judges, Lawyers, and Litigation: Do They, Should They, Use AI? Paper for the Employment Law Bar Association (2024) p.17 ↩︎
  6. The Law Society: Find out what your clients need, with the results of our Legal Needs Survey, available at https://www.lawsociety.org.uk/topics/research/find-out-what-your-clients-need-with-the-results-of-our-legal-needs-survey (accessed December 2024) ↩︎
  7. Richard Susskind n1, Table 6.1 ↩︎
  8. Nikolaos Aletras, Dimitrios Tsarapatsanis, Daniel Preoţiuc-Pietro, Vasileios Lampos, Predicting judicial decisions of the European Court of Human Rights: a Natural Language Processing perspective, (2016) PeerJ Computer Science 2:e93 ↩︎
  9. Robin Allen KC and Dee Masters n5, p.13 ↩︎
  10. [1932] AC 562 ↩︎

Moving Beyond DORA Ready to DORA Now (4 March 2025, https://www.scl.org/moving-beyond-dora-ready-to-dora-now/)

Dr Paul Lambert highlights some of the key aspects of the Digital Operational Resilience Act (now in force) you should be aware of.

The Digital Operational Resilience Act, known as DORA, impacts the financial sector as well as Big (and Small) Tech firms supporting banks and other financial institutions. The go-live deadline for DORA was 17 January 2025. DORA will have significant impacts across the international financial sector, and on other types of firms beyond the core financial sector, but arguably few of these were fully compliant from day one. For example, some firms were preparing to be "DORA ready" for day one, recognising that a period of additional implementation measures will be needed throughout 2025.

Cyber Threats Background

Why are the Act and the concepts of operational resilience and digital operational resilience relevant?

Recently we had an example of not one but three major banking institutions suffering IT problems which halted services to their customers, starting with Barclays and extending to Lloyds Bank and Halifax. Last year NatWest, RBS and Ulster Bank also suffered IT issues. The internal and external IT threats and vulnerabilities facing the financial sector are expanding. AI, which is a subject in its own right, appears only to be amplifying this trend when used by bad actors.

Some of these threats were being contemplated when the policymakers began to develop DORA, alongside market issues.

According to the ECB “with the use of information technology having become a large part of daily life, and even more so during the coronavirus (COVID-19) pandemic, the potential downsides of an increasing dependence on technology have become even more apparent. Protecting critical services like hospitals, electricity supply and access to the financial system from attacks and outages is crucial.  Given the ever-increasing risks of cyber attacks, the EU is strengthening the IT security of financial entities such as banks, insurance companies and investment firms.” DORA will “make sure the financial sector in Europe is able to stay resilient through a severe operational disruption.”

Increased digitalisation – and interconnection – also “amplify ICT risk”, making society as a whole (and the financial sector in particular) more vulnerable to cyber threats and / or ICT disruptions and attacks from errant third parties.

The range of cyber threats is also increasing. They include, for example, hacking by bad actors, business email compromise, phishing, spear phishing, ransomware, viruses, Trojans, distributed denial of service (DDoS) attacks, web application attacks, mobile attacks, and more.

The threat is not just from direct attacks. There are increasing numbers of indirect attacks, where the bad actors seek to gain access via a trusted third-party service provider that the financial company uses. This is supply chain and service provider compromise.

Other risk issues include management risk and system risk, such as failing to patch known vulnerabilities.

The number, level of sophistication and complexity of attacks are all increasing.

Costs to the Sector

The recent outages at Barclays, Lloyds Bank and Halifax demonstrate that there is a direct cost to consumers. There can even be a direct financial cost when salary payments or mortgage payments are missed. The DORA policymakers were also concerned about the potential systemic effects on the wider financial system caused by IT incidents, as well as the effects on consumer trust, industry and national economies.

The cost of the above threats continues to increase. By comparison, we already see very significant fines arising under data laws such as personal data rules. For example, Meta has been fined €1.2 billion for one set of data breaches concerning data transfers, while TikTok has been fined €345 million and £14.5 million for data breaches regarding child data. These are just examples, with numerous other data fines in the billions across the globe.

Many firms have been fined as a result of ineffective security measures leading to them being hacked, thus demonstrating a lack of appropriate technical security measures and of overall digital operational resilience. Already, even before the official go-live of DORA, firms have been receiving significant fines and penalties as a result of matters which overlap with digital operational resilience.

The Need for Digital Resilience

Regulators, whether the European Central Bank (ECB), the Bank of England (BOE), or the Fed in the US, are tasked with protecting the stability of their financial systems. As part of this they need to ensure that financial firms are financially resilient and stable; some of the rules around financial stability stem from the last great recession.

But today, financial stability is not the only threat to financial entities and the wider financial system: IT, ICT, and cyber threats must also be reckoned with. An example of an IT supply chain compromise with widespread adverse effects across a range of industries, in which a tampered software update was distributed to customers, was the SolarWinds incident. Financial entities often rely on third party suppliers or even outsource some of their core activities. Firms can be adversely affected when one of these third parties is exposed to a cyberattack. Bank of America, for example, had to warn its customers after one of its suppliers (IMS) was hacked by bad actors. Financial entities using service providers such as AddComm and Cabot have also encountered problems when those suppliers were involved in cyberattacks. Christine Lagarde (President of the ECB) states that "cyberattacks could trigger a serious financial crisis." Piero Cipollone (ECB Executive Board) states that "cyber risks have become one of the main issues for global security. They have been identified as a systematic risk to the stability of the European financial system." Unfortunately, it is not limited to just the European financial system.

Now, financial institutions must also ensure that they are digitally operationally resilient and prepared for these internal and external tech threats.

Digital Operational Resilience Rules

DORA promotes rules and standards to mitigate information and communications technology (ICT) risks for financial institutions. One of the objectives of DORA is to "prevent increased fragmentation of rules applicable to ICT risk management" by establishing common rules and standards.

DORA "addresses today's most important challenges for managing ICT risks at financial institutions and critical ICT third-party service providers." These risks must be properly managed for digitalisation to "truly deliver on the many opportunities it offers for the banking and financial industry." For example, better analysis and better data management can help financial institutions become more resilient. Also, "early warning systems" and automated alerts could enhance ICT risk management and digital operational resilience.
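As a purely illustrative aside, an "automated alert" of the kind mentioned above can be as simple as a rolling threshold check over an operational metric. The sketch below is a minimal example of that idea; the metric, window size and threshold are invented assumptions and are not drawn from DORA or any regulatory technical standard.

    # Minimal sketch of an automated early-warning alert on payment failure rates.
    # Window size and threshold are illustrative assumptions, not regulatory values.
    from collections import deque

    WINDOW = 100            # look at the last 100 payment attempts
    ALERT_THRESHOLD = 0.05  # alert if more than 5% of them failed

    recent = deque(maxlen=WINDOW)

    def raise_alert(failures: int) -> None:
        # In practice this would open an incident ticket and notify the resilience team.
        print(f"EARLY WARNING: {failures} failed payments in the last {WINDOW} attempts")

    def record_payment(succeeded: bool) -> None:
        recent.append(succeeded)
        failures = recent.count(False)
        if len(recent) == WINDOW and failures / WINDOW > ALERT_THRESHOLD:
            raise_alert(failures)

    # Simulated traffic: mostly successful, then a burst of failures.
    for i in range(300):
        record_payment(succeeded=(i % 40 != 0) if i < 200 else (i % 8 != 0))

Real implementations sit on monitoring platforms rather than in application code, but the principle is the same: pre-defined thresholds feeding incident management rather than humans noticing problems after the fact.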

Key Focus Areas of DORA

DORA deals with five key pillar areas, namely:

  • ICT risk management
  • ICT-related incident management, classification and reporting
  • digital operational resilience testing (DORT)
  • ICT third-party risk management (TPRM)
  • information-sharing arrangements (ISAs).

Arguably, the rules and requirements for pillar 5 above are the least well developed and are likely to evolve during 2025 and 2026.

A very complex set of rules and requirements sits behind each of these pillars of the core DORA regulation. DORA sets out a broad array of new obligations for financial entities, outsource companies and technology companies supporting the financial sector. Some of these new rules mean new or enhanced:

  • ICT risk management and governance
  • ICT policies and procedures
  • ICT incident management and reporting
  • change management
  • digital operational resilience
  • digital operational resilience testing
  • ICT third party risk management
  • business continuity
  • cyber security
  • training
  • information sharing on threats.

Extensive Sub Rules

DORA is a legal Regulation. Being a law, it is labelled a Level 1 requirement. Unfortunately for industry, there is an expansive range of even more detailed legal and technical requirements at Level 2 below the Level 1 rules.

The array of DORA sub rules is vast. They are referred to as the Level 2 rules, with the main DORA Regulation representing Level 1. The Level 2 rules are then further separated into four types of sub rules, namely:

  • Regulatory Technical Standards or RTS
  • Implementing Technical Standards or ITS
  • Guidelines
  • (Independent) Commission Delegated Regulations.

The RTS, ITS and Guidelines were developed by the ESAs (the European Supervisory Authorities, a combination of European financial regulators). The scope of these detailed Level 2 rules has added to the already complicated nature of the technical and regulatory compliance efforts required of financial entities. They are collectively far more extensive than the DORA Level 1 rules. An additional difficulty is that the Level 2 rules have come out over different time periods. The ones developed by the ESAs generally need to be reviewed, amended and implemented by the Commission. While the ESAs had specific time deadlines, the Commission did not have to specify when it would finalise the Level 2 rules.

Therefore, the rules have come out at different times, adding extra difficulties for financial institutions. Indeed, even near the end of 2024, not all Level 2 rules were fully set out – even though the go-live date was imminent in January 2025.

In addition, we can also add two further layers of DORA regulations. There will be a certain level of national DORA direct legislation (Level 3) and national financial regulator rules (Level 4). Some of this is still in process.

Level 2 Regulatory Technical Standards

The RTS are:

  • Commission Delegated Regulation specifying ICT risk management tools, methods, processes, and policies and the simplified ICT risk management framework
  • Commission Delegated Regulation specifying the criteria for the classification of ICT-related incidents and cyber threats, setting out materiality thresholds and specifying the details of reports of major incidents
  • RTS to specify the policy on ICT services supporting critical or important functions provided by ICT third-party service providers
  • Commission Delegated Regulation specifying the detailed content of the policy regarding contractual arrangements on the use of ICT services supporting critical or important functions provided by ICT third-party service providers
  • RTS on threat-led penetration testing (TLPT)
  • RTS and ITS on content, timelines and templates for incident reporting (drafted by the ESAs, apparently awaiting Commission implementing measures)
  • RTS on oversight harmonisation
  • RTS on Joint Examination Teams (JET).

Level 2 Implementing Technical Standards

There is a single ITS, which sets out the templates for the Register of Information.

Level 2 Guidelines

There are two DORA Level 2 Guidelines on:

  • aggregated costs and losses from major incidents (adopted by ESAs)
  • oversight cooperation between the ESAs and competent authorities (adopted by ESAs).

Level 2 Delegated Regulations

There are two Commission Delegated Regulations which are independent of the ESAs, as follows:

  • Commission Delegated Regulation specifying the criteria for the designation of ICT third-party service providers as critical for financial entities
  • Commission Delegated Regulation determining the amount of the oversight fees to be charged by the Lead Overseer to critical ICT third-party service providers and the way in which those fees are to be paid.

DORA Ready to DORA Now

Some of the details of the Level 2 sub regulations were finalised very close to the go-live date, and financial institutions had difficulty in fully understanding all the rules and nuances of the new regime and, importantly, in complying with these rules, as some were not yet bedded down. The many layers of compliance requirements across multiple legal and technical instruments made this task vastly more complicated, time-consuming, and costly.

The effort needed to interpret and apply these expansive rules, compounded by the late issue of some of the official materials, has meant that financial entities and suppliers have faced significant challenges in reaching a level even approaching compliance now, and they will need to expand the maturity of that compliance over the coming years.

While it was understandable to prepare on the basis of being "DORA ready" (as much as one can be) up until now, the focus must shift to "DORA now": getting all of DORA and its sub regulations in place, alongside the measures needed to demonstrate digital operational resilience into the future.

Paul Lambert, Ph.D. Paul is the author of “DORA, Interpreting the EU’s Digital Operational Resilience Act” (published by Bloomsbury), and the editor of Gringras, The Laws of the Internet.

The Rise of AI Agents: Pizza, Parameters and Problems (7 February 2025, https://www.scl.org/the-rise-of-ai-agents-pizza-parameters-and-problems/)

JJ Shaw on the rise of the AI agent and some legal issues to watch out for.

“AI Agents” are making a case to become the new buzzword for 2025. These autonomous AI-powered tools can act on a user’s behalf, performing online tasks and making independent decisions with minimal human input.  Whilst AI agents are emerging across various sectors (OpenAI announced the launch of their AI Agent, “Operator”, only last week), the Web3 space is proving to be a popular launchpad for these initiatives. Here, AI agents are being launched as decentralised products, often requiring users to purchase a minimum quantity of a native token / cryptocurrency to use the service.

A recent example on X highlighted this trend: Jesse Pollak, CEO of "Base" (Coinbase's Ethereum Layer-2 blockchain), tagged Luna Virtuals in a post saying, "I want some pizza". This simple command was all Luna's AI agent needed to place a $50 pizza order, coordinating the transaction through "Agent BYTE" (another fully autonomous AI agent allowing users to purchase fast food with crypto).

Remarkably, the simple intent expressed in Jesse's post was enough to trigger a consumer transaction with a real-world fast-food vendor, with the order processed autonomously via two separate AI agents. The details of the order required a series of decisions to be taken by the Luna Virtuals agent (e.g. as to the type of pizza, the vendor, the quantity and the overall cost), none of which were explicitly specified in Jesse's original post.

While this technological feat is impressive, if this occurred within England and Wales, it would raise interesting legal questions under English law around: (1) whether an AI agent can legally act as “agent” to bind a human “principal” to a contract; and (2) the enforceability of any resulting transactions.

AI agents and the doctrine of agency

Under English law, an agent can bind a principal to a contract if the agent acts within the scope of its authority. In theory, there is nothing prohibiting a consumer user from delegating authority to a technology company (e.g. the provider of an AI agent tool) to act as “agent” to conclude transactions on the consumer’s behalf, provided certain legal principles and safeguards are properly observed. 

However, agency relationships typically rely on an agreement (express or implied) where the scope of the principal’s authority is made clear. For the nascent AI agents of today, the absence of any clear T&Cs governing the authority of the AI agent (in addition to the complexity of their decision-making processes) immediately thrusts the enforceability of resulting transactions into murky waters. Key considerations include:

  1. Express authority and user agreements: Robust and detailed T&Cs should exist between the user and the AI agent provider to define (at the very least):
    • The scope of the AI agent’s authority (e.g. placing orders up to a specified value).
    • Decision-making parameters (e.g. selecting third-party vendors or products based on agreed criteria and user preferences); a minimal sketch of how such parameters might be enforced is set out after this list.
    • Liability for errors or unintended transactions.
       
  2. Implied authority and reasonableness: If a user's command is vague ("I want some pizza"), does the AI agent have implied authority to fill in the gaps? English courts may examine what a reasonable person in the user's position would have expected the AI agent to do. For example, would Jesse have reasonably expected the AI agent to spend $50 without confirming the order details? What about $200? If an AI agent is deemed to have exceeded its authority (express or implied) when placing an order for goods or services, this could mean the user is not bound by any resulting transaction – creating a headache for traders who then need to deal with refunds and chargebacks.
     
  3. Ratification: If an agent does act outside of its authority, a principal may “ratify” the transaction after the fact. Ratification typically occurs when a principal confirms or adopts the actions of their agent (even if those actions initially exceeded the agent’s authority), but the principal must be fully aware of all the material facts surrounding the unauthorised act before they can ratify it. In the context of AI agents, this principle is complicated by the absence of meaningful human oversight, the speed at which transactions occur, and the lack of pre-contractual information given as part of the transaction flow (see below).
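To ground the authority question in something concrete, the sketch below shows one way an AI agent tool might enforce an agreed mandate before executing any transaction: a spending cap, a vendor allowlist, and a value above which the user must confirm. It is a hypothetical illustration only; the field names, limits and three-way outcome are invented and do not reflect how Luna Virtuals, Agent BYTE or any real product works, nor do they amount to legal compliance on their own.

    # Hypothetical sketch: checking a proposed order against the user's agreed mandate
    # before an AI agent is allowed to execute it. All values are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Mandate:
        spending_cap_usd: float    # hard limit per transaction
        confirm_above_usd: float   # ask the user first above this value
        allowed_vendors: tuple     # vendors the user has pre-approved

    @dataclass
    class ProposedOrder:
        vendor: str
        description: str
        total_usd: float

    def authorise(order: ProposedOrder, mandate: Mandate) -> str:
        """Return 'execute', 'refer_to_user' or 'reject' for a proposed order."""
        if order.total_usd > mandate.spending_cap_usd:
            return "reject"         # outside express authority: do not transact
        if order.vendor not in mandate.allowed_vendors:
            return "refer_to_user"  # no clear authority for this vendor: seek instruction
        if order.total_usd > mandate.confirm_above_usd:
            return "refer_to_user"  # within the cap but large enough to confirm first
        return "execute"

    mandate = Mandate(spending_cap_usd=100.0, confirm_above_usd=25.0,
                      allowed_vendors=("ExamplePizzaCo",))
    print(authorise(ProposedOrder("ExamplePizzaCo", "2x margherita", 18.0), mandate))    # execute
    print(authorise(ProposedOrder("ExamplePizzaCo", "party bundle", 50.0), mandate))     # refer_to_user
    print(authorise(ProposedOrder("UnknownVendor", "1x pepperoni", 12.0), mandate))      # refer_to_user
    print(authorise(ProposedOrder("ExamplePizzaCo", "catering order", 200.0), mandate))  # reject

The legally interesting path is "refer_to_user": it is the natural place both to evidence the scope of express authority and to surface the pre-contractual information that consumer law expects before an order is confirmed.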

Consumer law and AI-mediated orders

Beyond contract and agency law, transactions made through AI agents must also comply with consumer protection requirements. Under UK law, consumers enjoy specific rights when purchasing goods and services online, including:

  1. Cooling-off periods: Under the Consumer Contracts (Information, Cancellation and Additional Charges) Regulations 2013 ("CCRs"), consumers generally have a 14-day "cooling-off period" to cancel a contract for most online purchases (although this does not apply to perishable goods, such as pizza). If an AI agent places an order for goods or services to which the cooling-off period applies, then the consumer's rights should remain intact – and this again raises practical challenges around needing to unwind cancelled transactions that have been concluded through automated agents.
     
  2. Transparency and information requirements: For a consumer transaction to be valid and enforceable, not only must "fair" consumer terms be in place to govern the transaction, but traders must also present consumers with certain required information on a "durable medium" directly before the order is placed – thereby allowing the consumer to make an informed decision about the given purchase. In the context of AI Agents, this creates issues on two fronts:
    • Use of the AI Agent tool: Unless the AI agent provider can evidence that users have agreed to compliant T&Cs governing the tool’s functionality, costs and significant limitations (which need to be “fair” and jargon-free by consumer law standards), users may not be bound by any terms governing use of the tool – such as payment terms or limitations on liability for the provider – and the provider could even become exposed to regulatory sanction for breach of consumer laws.
       
    • Transactions concluded by the AI Agent: Equally, all traders (including pizza vendors) must provide certain pre-contractual information to consumers immediately before a transaction takes place (such as details of the vendor, the product and total cost of the transaction) and a failure to do so may give the consumer the right to cancel the contract and claim a refund. In the pizza example, this pre-contractual information about the order was seemingly not communicated either to Luna Virtuals (by Agent BYTE) or to Jesse (by Luna Virtuals) before the transaction was completed. Where key consumer information is not relayed back to the consumer by AI agents in this way, does this mean the resulting transaction is inherently unenforceable under English law? 

Ultimately, there appears to be a direct tension between modern day consumer law (which is designed to ensure consumers are fully informed about all relevant details of a potential transaction during the purchase flow) and this new technological breakthrough, allowing consumers to delegate navigation of the consumer purchase journey to robots for the benefit of convenience (but at the disadvantage of not being fully informed about each transaction). 

This certainly wouldn’t be the first time we have seen today’s consumer laws being outpaced by modern technological developments. The NFT craze of 2021 (which saw NFTs widely issued without much regard for consumer rights) came and went without triggering much consumer regulatory attention, and the T&Cs of many of today’s AI chatbots (which contain numerous problematic and consumer-unfriendly terms) remain largely unchallenged. Only time will tell whether regulators take a harder line on AI Agents than they have with previous disruptive consumer-facing technologies.

And of course – data protection….

AI agents that process personal data of users (such as names, addresses and bank / crypto wallet details) must comply with applicable data protection laws, including the UK GDPR. This raises a number of considerations that AI Agent providers will need to consider, including:

  1. Compliant privacy policies: Providers of AI agents must implement clear and comprehensive privacy policies that explain how user data is collected, processed, stored, and shared. These policies should also detail the lawful basis for processing and provide users with information about their data rights.
  2. Purpose limitation and minimisation: AI agents should only process the personal data necessary to fulfil the specific task they are authorised to perform (e.g. placing an order); a minimal sketch of this kind of filtering appears after this list. Over-collection, or use of data for secondary purposes without user consent, could breach GDPR principles.
  3. Security measures: Strong technical and organisational measures must be in place to protect user data from unauthorised access or breaches, especially given the real-time nature of AI agent transactions.
  4. Accountability: Providers should also ideally conduct Data Protection Impact Assessments (DPIAs) for AI agent services to identify and mitigate privacy risks. 
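As a purely illustrative sketch of the purpose limitation and minimisation point above, an agent's order pipeline can strip a user profile down to the fields strictly needed before anything is shared with a vendor. The field names and values below are invented, and the snippet is not a statement of everything UK GDPR compliance requires.

    # Hypothetical sketch of data minimisation: share only the fields a vendor
    # needs to fulfil this order, not the whole user profile.
    FIELDS_NEEDED_FOR_DELIVERY = {"name", "delivery_address", "contact_phone"}

    user_profile = {
        "name": "J. Example",
        "delivery_address": "1 Example Street, London",
        "contact_phone": "+44 7000 000000",
        "date_of_birth": "1990-01-01",    # not needed to deliver a pizza
        "wallet_address": "0x0000000000000000000000000000000000000000",  # payment handled separately
        "order_history": ["pizza", "noodles"],  # secondary-purpose data
    }

    def minimise(profile: dict, needed: set) -> dict:
        """Return only the personal data required for the stated purpose."""
        return {k: v for k, v in profile.items() if k in needed}

    vendor_payload = minimise(user_profile, FIELDS_NEEDED_FOR_DELIVERY)
    print(vendor_payload)  # name, delivery_address and contact_phone only

Logging what was shared, and why, also helps with the accountability point: it gives the provider something to point to if a DPIA or a regulator asks which data left the system for a given transaction.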

Conclusion

The rise of AI agents could usher in a paradigm shift in online transactions, challenging established principles of agency and consumer laws. English law provides a robust framework for addressing these challenges, but it will require careful adaptation to ensure that automation does not erode accountability. As we embrace this new frontier, the lesson is clear: with great (AI-driven) power comes great responsibility – for both businesses and users alike.

JJ Shaw, Managing Associate, Lewis Silkin
