Artificial Intelligence Archives - Society for Computers & Law
https://www.scl.org/category/artificial-intelligence/

This Week’s Techlaw News Round-up
https://www.scl.org/this-weeks-techlaw-news-round-up-51/
Fri, 02 May 2025

UK law
Ofcom issues guidance on mandatory age checks for pornographic content services

Last week we wrote about Ofcom’s new guidance on protecting children under the Online Safety Act 2023. Among other things, it introduces age assurance requirements for online services that allow pornographic content. From 25 July 2025, affected services must implement ‘highly effective age assurance’ measures to prevent under-18s from accessing such content. The requirements apply to services within the scope of the Online Safety Act 2023, with Ofcom sending notification letters to hundreds of services whose primary purpose is hosting pornographic material.

Ofcom launches consultation on extending Online Safety Act user controls

Ofcom is consulting on amendments to the Illegal Content Codes of Practice under the Online Safety Act. The amendments would extend blocking and muting controls, and comment-disabling features, to smaller user-to-user service providers likely to be accessed by children. The consultation ends on 22 July 2025.

Ofcom establishes Online Information Advisory Committee under Online Safety Act 2023

Ofcom has established its Online Information Advisory Committee under section 152 of the Online Safety Act 2023. Five expert members have been appointed to the Committee for three-year terms. From 1 May 2025, the Committee will advise on misinformation and disinformation matters. It will support Ofcom’s statutory duty to ensure platforms address illegal content and material harmful to children through appropriate systems, without making decisions on individual content.

CMA publishes guidance on 4Ps under the Digital Markets Competition Regime

The CMA has set out how it plans to implement the so-called 4Ps under the digital markets competition regime. Through pace, predictability, proportionality and process, it says that it will promote business trust and confidence, encourage investment and innovation, and deliver positive outcomes for UK businesses and consumers. It sets out the approach it will take, including how it will pursue deeper collaboration with stakeholders to inform its work; ensure transparency around the prioritisation of investigations and interventions; and deliver efficient and streamlined processes so that stakeholders can meaningfully engage with its work.

FCA publishes engagement paper for AI live testing

The Financial Conduct Authority has published an engagement paper on its proposal for AI Live Testing. The proposal builds on the FCA’s new five-year strategy, which sets out how it aims to support growth through a tech-positive approach. It also aims to help the FCA become a smarter regulator by embracing data and technology to be more effective and efficient. The FCA has asked for feedback on the engagement paper by 10 June 2025.

ICO issues statement following ransomware attack on British Library

In October 2023, the British Library reported a ransomware attack to the ICO, which escalated because of the lack of multi-factor authentication on an administrator account. Following the incident, the British Library published a cyber incident review in March 2024, which provided an overview of the cyber-attack and key lessons learnt to help other organisations that may experience similar incidents. Having carefully considered this particular case, the Information Commissioner has decided that, due to its current priorities, further investigation would not be the most effective use of its resources. It has provided guidance to the British Library, which has reassured the ICO of its commitment to continue to review and ensure that appropriate security measures are in place to protect people’s data.

Exploring Competition in Cloud and AI Podcast: Episode 4: The EU Data Act and Cloud Analogies
https://www.scl.org/exploring-competition-in-cloud-and-ai-podcast-episode-4-the-eu-data-act-and-cloud-analogies/
Wed, 30 Apr 2025

We have teamed up with the LIDC (International League of Competition Law) to share a series of podcasts examining some of the increasingly pressing questions around cloud computing, AI and competition law.

Over seven episodes, recorded in November 2024, Ben Evans, Shruti Hiremath and guests will look beyond the current position to identify some of the pressures the changing landscape will bring to bear.

Episode 4: The EU Data Act and Cloud Analogies

Are analogies between cloud and open banking and telecoms appropriate? A deep dive into the EU Data Act and the potential unintended consequences.

Building on the discussion in episode 3, episode 4 analyses the cloud provisions of the EU Data Act with reference to an influential and widely cited paper co-authored by Ben Evans and Sean Ennis. The panel explores the concept of ‘equivalence’ between cloud services and questions the merits of the controversial ‘functional equivalence’ requirement, which is designed to boost switching between cloud providers. This leads to a discussion of whether the analogy between cloud computing services, which exhibit a high degree of feature complexity and innovation, and banking services, which exhibit both a limited number of key features and a relatively low level of innovation, is appropriate. As articulated by the authors in an earlier SCL article, it is suggested that these two differences are critical when considering the nature and focus of future cloud regulation and may limit the value of analogies to prior experiences with portability and interoperability. Moreover, the panel considers the authors’ observation that a significant number of cloud customers already have the possibility and incentive to account ex ante, at the contract stage, for the trade-off between complexity and customisation in service functionality on the one hand and ease of portability and interoperability on the other. The discussion then turns to profound concerns that the Data Act may have the unintended consequences of disincentivising innovation, strengthening the position of incumbents, and harming smaller cloud service providers by, inter alia, effectively commoditising cloud services to the extent that competition is reduced to price competition.

Panel

Ben Evans (Chair) is a Postgraduate Researcher at the School of Law and Centre for Competition Policy, University of East Anglia. He is a member of the LIDC Scientific Committee.

Shruti Hiremath is Counsel in the Clifford Chance Antitrust Team in London.

Lauren Murphy is Founder and CEO of Friday Initiatives.

Sean Ennis is Director of the Centre for Competition Policy and a Professor of Competition Policy at Norwich Business School, University of East Anglia.

The LIDC NEX GEN Podcast Series on ‘Competition in Cloud and AI’ explores some of the most topical and hotly debated questions with a panel of leading international experts from academia, legal practice and industry.

The series was recorded on 7 November 2024, and the views and opinions expressed therein reflect the legal context and state of affairs up to that date.

You can also watch or listen via the LIDC website, YouTube and Spotify.

Exploring Competition in Cloud and AI Podcast: Episode 3 – Dissecting Cloud Competition
https://www.scl.org/exploring-competition-in-cloud-and-ai-podcast-episode-3-dissecting-cloud-competition/
Fri, 25 Apr 2025

We have teamed up with the LIDC (International League of Competition Law) to share a series of podcasts examining some of the increasingly pressing questions around cloud computing, AI and competition law.

Over seven episodes, recorded in November 2024, Ben Evans, Shruti Hiremath and guests will look beyond the current position to identify some of the pressures the changing landscape will bring to bear.

Episode 3: Dissecting Cloud Competition

The investigations of the UK CMA and an introduction to the EU Data Act.

In episode three, the panel begins exploring the five concerns raised by the UK CMA in its issues statement in relation to its cloud market investigation. First, the authority has expressed concern that potential market concentration may be limiting choice. Although a number of large firms hold substantial market share in public cloud, the existence of on-premises and hybrid cloud solutions may temper concerns. Second, the CMA is worried that data transfer fees may inhibit switching, an issue that has been addressed in the EU under the cloud provisions of the recently enacted Data Act. Third, there is a concern that impediments to portability and interoperability may create dependencies or impair customers’ ability to move assets and to integrate across providers. Although such concerns may be valid, the panel considers the reality that market-based solutions are already developing, with industry consortia and voluntary standards bodies emerging without the need for regulatory intervention. Fourth, the CMA has considered whether committed spend agreements limit customer flexibility and cause lock-in. Any intervention should be mindful of the benefits of such agreements to consumers in terms of cost savings and price stability. Finally, unfair licensing practices have come under scrutiny, and there is a legitimate question as to whether some large providers may restrict competition by, for example, requiring additional fees or adherence to restrictive terms when customers use software from rival providers.[1]

While there has been substantial regulatory interest in Japan, the Netherlands, South Korea and France, all of which have completed cloud market studies, and in Spain and the USA, which have started investigations, the UK authority has advanced arguably the most detailed research and analysis of competition in the sector. The panel observes that, despite this, the initial conclusions reached by the CMA and the referring authority Ofcom do not necessarily follow from the empirical market research that underpins their respective studies. Indeed, this is an issue that has been raised by Ben Evans and Sean Ennis in their co-authored consultation responses to the CMA and Ofcom. The evidence suggests that customers are generally still on the ‘way in’ on their cloud journey and that, rather than provider restrictions, one of the key factors leading to lock-in may be that those firms do not yet have the in-house technical capability to initiate a cost- and time-efficient switch.


[1] Since the recording of the podcast, the CMA published its Provisional Decision Report on 28 January 2025. Further details are available at: https://www.gov.uk/cma-cases/cloud-services-market-investigation#provisional-findings.

Panel

Ben Evans (Chair) is a Postgraduate Researcher at the School of Law and Centre for Competition Policy, University of East Anglia. He is a member of the LIDC Scientific Committee.

Shruti Hiremath is Counsel in the Clifford Chance Antitrust Team in London.

Lauren Murphy is Founder and CEO of Friday Initiatives.

Sean Ennis is Director of the Centre for Competition Policy and a Professor of Competition Policy at Norwich Business School, University of East Anglia.

The LIDC NEX GEN Podcast Series on ‘Competition in Cloud and AI’ explores some of the most topical and hotly debated questions with a panel of leading international experts from academia, legal practice and industry.

The series was recorded on 7 November 2024, and the views and opinions expressed therein reflect the legal context and state of affairs up to that date.

You can also watch or listen via the LIDC website, YouTube and Spotify.

Select Committee report on British film and high-end television
https://www.scl.org/select-committee-report-on-british-film-and-high-end-television/
Fri, 25 Apr 2025

The House of Commons Culture, Media and Sport Committee has published a report on British film and high-end TV which includes several conclusions and recommendations which will be of interest to tech lawyers.

The Committee has considered how the responsible use of artificial intelligence (AI) tools might transform the industry. For AI to be a positive force in film and HETV, the government must strengthen the copyright framework by requiring licensing of creative works in all cases where they are used to train AI models. The Committee also says that the government must protect our screen heritage. Screen archives face barriers to connecting the public with the UK’s filmmaking culture, and the Committee recommends that the government explore a statutory deposit scheme for the moving image, minor changes to copyright legislation and the introduction of a national screen heritage strategy to put archives on a stronger footing.

HETV

The Committee says that the success of the UK’s HETV sector relies on continuing to attract inward investment while maintaining a vibrant domestic industry underpinned by strong intellectual property rights. Yet the dynamic between independent producers and subscription video-on-demand (SVoD) platforms is not sustainable, and successful production companies are being damaged by deals that deny them the ability to fully monetise their IP. While differences in business models mean it may not be appropriate to extend the existing terms of trade for public service broadcasters to streamers as they stand, similar mechanisms must be considered. As a result, the Committee recommends that the government immediately commission research on how regulatory measures could be applied to SVoD platforms to ensure that independent production companies developing IP in the UK maintain a minimum level of ownership over those rights.

AI

Industry guidelines based around protecting human creativity in the use of generative AI are welcome, but the film and TV sectors are calling out for help to embrace the growth potential of generative AI in a way that is fair, responsible and legally compliant.

The Committee says that at the next Spending Review, the government should fund the British Film Institute’s development of an AI observatory and tech demonstrator hub to enable it to provide effective leadership around the industry’s use of AI.

The government’s AI Sector Champion for the creative industries, once appointed, should work with the industry to develop an AI certification scheme for the ethical use of generative AI in film and HETV. In setting out guidelines for the responsible use of generative AI, the scheme should consider the interests of copyright holders, creative workers and audiences. To ensure compliance and protect the industry from irresponsible use of AI tools, the government should mandate certification for UK-based broadcasters or productions claiming tax incentives and National Lottery funding.

Proposed “opt out” regime and copyright reform

Getting the balance between AI development and copyright wrong will undermine the growth of the UK’s film and HETV sectors, and the wider creative industries. The Committee says that proceeding with an ‘opt-out’ regime stands to damage the UK’s reputation among inward investors. The government should abandon its preference for a data mining exception for AI training with a rights reservation model, and instead require AI developers to license any copyrighted works before using them to train their AI models. Although the film and HETV industry may be motivated to protect performers’ interests, with its history of collective bargaining agreements equipping it to do so, that situation is not common across all the creative industries. The UK’s patchwork of copyright, intellectual property and data protection legislation is failing to protect performers from the nefarious use of generative AI technologies, such as unauthorised voice cloning and deepfakes. The government should legislate to prevent historical contract waivers from being interpreted to allow the use of recorded performances by AI tools.

Within the next six months the government should also conduct a review of the Copyright, Designs and Patents Act 1988 and the UK’s GDPR framework to consider whether further legislation is needed to prevent unlicensed use of data for AI purposes.

The Committee has also repeated its predecessor Committee’s calls for the government to implement the Beijing Treaty within the next six months, including extending unwaivable moral rights to audiovisual performances. The Government should introduce targeted copyright exemptions that allow for greater access to archive material without harming copyright holders. Those include adjusting legislation concerning ‘dedicated terminals’, broadening the definition of ‘educational establishments’, amending the ‘2039’ rule, and introducing exemptions for orphan works and commercially unavailable works.

This Week’s Techlaw News Round-up
https://www.scl.org/this-weeks-techlaw-news-round-up-50/
Fri, 25 Apr 2025

UK law
Courts and Tribunals Judiciary publishes updated AI guidance and introduces Copilot Chat for judges

The Courts and Tribunals Judiciary has published updated guidance to help judicial office holders use AI. It updates and replaces the guidance document issued in December 2023. It sets out the key risks and issues associated with using AI, along with suggestions for minimising them, and includes examples of potential uses. Any use of AI by or on behalf of the judiciary must be consistent with the judiciary’s overarching obligation to protect the integrity of the administration of justice. The guidance also introduces a private AI tool, Microsoft’s “Copilot Chat”, which is now available on judicial office holders’ devices through eJudiciary. The guidance applies to all judicial office holders under the responsibility of the Lady Chief Justice and the Senior President of Tribunals, as well as their clerks, judicial assistants, legal advisers/officers and other support staff.

Ofcom investigates misuse of telephone numbers

Ofcom is investigating whether communications provider Primo Dialler has misused numbers sub-allocated to it, including to perpetrate scams. Ofcom allocates telephone numbers, usually in large blocks, to telecoms firms, which can then transfer the numbers to individual customers or other businesses. In line with Ofcom’s consumer protection rules and industry guidance, phone companies must not misuse numbers which have been sub-allocated to them. Services must also ensure numbers are used correctly in accordance with the National Telephone Numbering Plan. Ofcom believes that the numbers sub-allocated to Primo Dialler are potentially being misused, including to facilitate scams. Its investigation will seek to establish whether Primo Dialler is complying with its obligations, specifically General Conditions B1.8, B1.9(b) and B1.9(c), and section 128(5) of the Communications Act 2003. The investigation falls under Ofcom’s enforcement programme, launched last year, looking specifically at phone and text scams. The aim of the programme is to protect customers by supporting best practice in the use of phone numbers and to ensure providers are following Ofcom’s rules. If Ofcom has reasonable grounds to suspect that rules have been broken, it may launch further investigations.

Ofcom takes action regarding “Global Titles” in mobile sector

Mobile operators use Global Titles as routing addresses for the exchange of signalling messages between 2G and 3G mobile networks and to support their provision of mobile services. Ofcom has now announced new rules banning their leasing. This is because criminals can use Global Titles to intercept and divert calls and messages, and to obtain information held by mobile networks. This could, for example, enable them to intercept security codes sent by banks to a customer via SMS message. In extreme cases, Global Titles can be exploited to track the physical location of individuals anywhere in the world. The ban on entering new leasing arrangements is effective immediately. For leasing already in place, the ban will come into force on 22 April 2026. This will give legitimate businesses that currently lease Global Titles from mobile networks time to make alternative arrangements. Alongside this, Ofcom has published new guidance for mobile operators on their responsibilities to prevent the misuse of their Global Titles.

ICO fines law firm £60,000 following cyber attack

The ICO has fined Merseyside-based DPP Law Ltd (DPP) £60,000 following a cyber attack that led to highly sensitive and confidential personal information being published on the dark web. It found that DPP failed to put appropriate measures in place to ensure the security of personal information held electronically. This failure enabled cyber hackers to gain access to DPP’s network, via an infrequently used administrator account which lacked multi-factor authentication, and to steal large volumes of data. DPP specialises in law relating to crime, military, family, fraud, sexual offences, and actions against the police. The very nature of this work means it is responsible for both highly sensitive and special category data, including legally privileged information. As the information stolen by the attackers revealed private details about identifiable individuals, the ICO highlights that DPP had a responsibility under the law to ensure it was properly protected. In June 2022, DPP suffered a cyber attack which affected access to the firm’s IT systems for over a week. A third-party consulting firm established that a brute force attack gained access to an administrator account that was used to access a legacy case management system. This enabled the attackers to move laterally across DPP’s network and exfiltrate over 32GB of data, a fact DPP only became aware of when the National Crime Agency contacted the firm to advise that information relating to its clients had been posted on the dark web. DPP did not consider that the loss of access to personal information constituted a personal data breach, so it did not report the incident to the ICO until 43 days after becoming aware of it.

ICO fines compensation company £90,000 for unlawful marketing calls

The ICO has also fined AFK Letters Co Ltd (AFK) £90,000 for making more than 95,000 unsolicited marketing calls to people registered with the Telephone Preference Service, in clear breach of electronic marketing laws. AFK writes letters seeking compensation and refunds for its customers. Between January and September 2023, AFK used data collected through its own website and a third-party telephone survey company to make 95,277 marketing calls without being able to demonstrate valid and specific consent from the people contacted. AFK claimed it could not provide evidence of consent because it deleted all customer data after three months; yet when challenged by the ICO, it was also unable to provide consent records for several calls made within a three-month timeframe. AFK’s third-party data supplier was using consent statements which did not specifically name AFK when asking the public for consent to be called. Additionally, AFK’s own privacy policy only mentioned contact by email and did not state that people would also receive phone calls. The ICO’s investigation found that AFK failed to comply with Regulation 21 of the Privacy and Electronic Communications Regulations.

EU law

European Commission consults on revision of EU Cybersecurity Act

The European Commission is consulting about revising the 2019 EU Cybersecurity Act. The consultation focuses on the European Union Agency for Cybersecurity mandate, the European Cybersecurity Certification Framework, and ICT supply chain security. It aims to simplify cybersecurity rules and streamline reporting obligations. The consultation ends on 20 June 2025.

Irish Data Protection Commission announces inquiry into X

The DPC has announced an inquiry into the processing of personal data comprised in publicly-accessible posts posted on the ‘X’ social media platform by EU/EEA users, for the purposes of training generative AI models, in particular the Grok Large Language Models (LLMs). The inquiry will examine compliance with the GDPR, including the lawfulness and transparency of the processing. Grok is the name of a group of AI models developed by xAI. They are used, among other things, to power a generative AI querying tool/chatbot, which is available on the X platform. Like other modern LLMs, the Grok LLMs have been developed and trained on a wide variety of data. The DPC’s inquiry considers a range of issues concerning the use of a subset of this data which was controlled by X, that is, personal data in publicly accessible posts posted on the X social media platform by EU/EEA users. The purpose of the inquiry is to determine if the personal data was lawfully processed to train the Grok LLMs. The DPC has notified X of its decision to conduct the inquiry under Section 110 of the Irish Data Protection Act 2018.

Coimisiún na Meán publishes Strategy Statement and Work Programme

Coimisiún na Meán has published its first three-year strategy, which sets out its vision for the media landscape in Ireland. The Strategy Statement 2025-2027 is accompanied by a 2025 Work Programme, which lists priority projects across Coimisiún na Meán’s remit of online safety, media sector development and regulation. The Strategy Statement 2025-2027 is built on key outcomes including children, democracy, trust, diversity and inclusion, and public safety. Among the priority projects outlined in Coimisiún na Meán’s 2025 Work Programme are the development of a pilot programme for children at imminent risk of harm from online content, the development of an Election Integrity Strategy across all media sources, the creation of educational materials relating to online hate, the preparation of a new Broadcasting Services Strategy and a revised Media Plurality Policy, and the continuation of the Sound & Vision and Journalism funding schemes.

Exploring Competition in Cloud and AI Podcast: Episode 2 – Alternative Visions
https://www.scl.org/exploring-competition-in-cloud-and-ai-podcast-episode-2-alternative-visions/
Fri, 18 Apr 2025

We have teamed up with the LIDC (International League of Competition Law) to share a series of podcasts examining some of the increasingly pressing questions around cloud computing, AI and competition law.

Over seven episodes, recorded in November 2024, Ben Evans, Shruti Hiremath and guests will look beyond the current position to identify some of the pressures the changing landscape will bring to bear.

Episode 2: Alternative Visions

A look at the emerging alternative visions of the AI stack around the world.

Episode 2 considers alternative visions for the AI stack. The discussion begins with the emergent ‘EuroStack’, a strategic initiative, launched in the European Parliament in 2024, to develop independent digital infrastructure across all layers of the stack and reduce reliance on non-EU technologies. At a high level, this approach represents a significant transition away from the prevailing regulatory approach, focussed on competition in certain components of the stack, towards an infrastructural approach driven by ambitious industrial policy. The panel proceeds to reflect on the approaches of different international jurisdictions, focussing in particular on the development of digital public infrastructure in emerging markets and on the issue of sovereignty. Crucially, the Indian examples of the Unified Payments Interface and the Open Network for Digital Commerce provide evidence that digital public infrastructure can promote significant competition. This prompts the panel to question whether regulatory intervention is necessary where a sufficiently developed digital public infrastructure exists. Of course, it is essential that government initiatives are not mandated to the detriment of market-based solutions and are instead offered as alternatives. Ultimately, the co-existence of digital public infrastructure and private firm offerings may lead to a healthy competitive market.

Panel

Ben Evans (Chair) is a Postgraduate Researcher at the School of Law and Centre for Competition Policy, University of East Anglia. He is a member of the LIDC Scientific Committee.

Shruti Hiremath is Counsel in the Clifford Chance Antitrust Team in London.

Lauren Murphy is Founder and CEO of Friday Initiatives.

Sean Ennis is Director of the Centre for Competition Policy and a Professor of Competition Policy at Norwich Business School, University of East Anglia.

The LIDC NEX GEN Podcast Series on ‘Competition in Cloud and AI’ explores some of the most topical and hotly debated questions with a panel of leading international experts from academia, legal practice and industry.

The series was recorded on 7 November 2024, and the views and opinions expressed therein reflect the legal context and state of affairs up to that date.

You can also watch or listen via the LIDC website, YouTube and Spotify.

The post Exploring Competition in Cloud and AI Podcast: Episode 2 – Alternative Visions appeared first on Society for Computers & Law.

]]>
IT Contracts in 2025: the new MCTs and SCCs https://www.scl.org/it-contracts-in-2025-the-new-mcts-and-sccs/ Wed, 16 Apr 2025 09:30:00 +0000 https://www.scl.org/?p=18286 Chris Kemp summarises the issues around new Model Contract Terms and Standard Contractual Clauses emerging from the shadow of the recent EU digital regulation. Introduction – the new MCTs and SCCs By now the ‘top level’ requirements of the new wave of EU tech and digital regulation are fairly well known: the AI Act’s risk-based...

Read More... from IT Contracts in 2025: the new MCTs and SCCs

The post IT Contracts in 2025: the new MCTs and SCCs appeared first on Society for Computers & Law.

]]>
Chris Kemp summarises the issues around new Model Contract Terms and Standard Contractual Clauses emerging from the shadow of the recent EU digital regulation.

Introduction – the new MCTs and SCCs

By now the ‘top level’ requirements of the new wave of EU tech and digital regulation are fairly well known: the AI Act’s risk-based approach, the key contract provisions at DORA Art. 30, the NIS 2 reporting requirements, for example. What we think will bubble to the surface over the course of the rest of 2025 are the new Model Contract Terms (“MCT”) and Standard Contractual Clauses (“SCC”) nestled in the secondary legislation made under these rules: the delegated regulations and the implementing technical and regulatory standards.

These new MCTs and SCCs will not be mandatory. In some cases they are more like templates to show SME buyers of IT services what good could look like in a market where IT contracts are often one-sided documents favouring the large tech vendor. And because they won’t be mandatory, a big question for the new MCTs and SCCs is: will anyone actually pay any attention?

Examples – MCTs and SCCs in the AI Act, NIS 2, DORA and the Data Act

To give a bit more context, we will briefly walk through three examples. This is not an exhaustive list: there are lots of requirements in the rules that will either directly or indirectly affect what will need to be included in IT contracts. These are examples which have caught our interest recently.

Example 1: MCTs for high-risk AI system providers & their suppliers (AI Act, Art. 25(4))

Under Art. 25(4) of the AI Act, the recently established European AI Office is encouraged to “develop and recommend voluntary model terms for contracts between providers of high-risk AI systems and third parties that supply tools, services, components or processes…”. The MCTs should “take into account possible contractual requirements in specific sectors or business cases.”

It is unclear what the status of these MCTs will be, but it is foreseeable that the requirement to consider “specific sectors” and “business cases” will add to their complexity and to the length of time it takes the AI Office to prepare them. When they are published, they will form part of a growing corpus of European AI model contract terms. For another example, see the March 2025 Updated EU AI model contractual clauses.[1]

Example 2: focus on supply chains (NIS 2 and DORA)

When organisations buy in IT they cede a certain amount of knowledge and control to third parties: SaaS vendors, managed service providers, IT consultants, etc. This creates supply chain vulnerabilities which NIS 2 and DORA, in particular, seek to remedy at the contract level.

As with the AI Act, many of the requirements are tucked under the primary legislation. For NIS 2, specific contract requirements can be found in the NIS 2 Implementing Regulation, para. 5.1.4 of the Annex to which requires that “relevant entities shall ensure that their contracts with… suppliers and service providers specify, where appropriate through service level agreements” contract terms like cybersecurity requirements, staff training, staff background checks, incident notification requirements and audit provisions.

For DORA, which applies in the financial services sector to in scope “Financial Entities”, the Regulatory Technical Standards on subcontracting will, when finalised, impose contract requirements where ICT services supporting critical or important functions are subcontracted.[2]

Example 3: cloud computing SCCs (Data Act, Art. 41)

The Data Act requires the European Commission to develop and recommend by 12 September 2025:

  • non-binding MCTs on data access and use, including terms on reasonable compensation and the protection of trade secrets, and
  • non-binding SCCs for cloud computing contracts to assist parties in drafting and negotiating contracts with fair, reasonable and non-discriminatory contractual rights and obligations.

The cloud computing SCCs are currently under development by a European Commission Expert Group on B2B data sharing and cloud computing contracts.[3] The SCCs will address important aspects of cloud contracts like information security and business continuity, liability and termination. The approach taken in drafts released in late 2024 suggests that the final cloud computing SCCs are likely to depart significantly from current market norms for cloud contracting.

Conclusion: prepare, but will anyone pay any attention?

The point of this article is to draw the reader’s attention to the lesser-known MCTs and SCCs squirreled away in the secondary legislation made under the new EU tech and digital rules.

This is worth doing because: (1) the MCTs and SCCs go directly to content requirements for IT contracts and (2) a number of them are likely to be finalised this year.

The MCTs and SCCs will give buyers of IT services new tools and models to negotiate better terms with their vendors. Vendors will need to think carefully about what, if anything, in these new forms of contract they are prepared to accept. Either way, we expect these new documents will start cropping up in IT contract negotiations in the course of 2025.

In almost all cases, these new MCTs and SCCs are not mandatory: contract parties can choose to incorporate them into their agreements or not. So the question remains: will anyone pay any attention?

Chris Kemp, Partner at Kemp IT Law LLP


[1] See European Commission: Updated EU AI model contractual clauses, dated 5 March 2025 <https://tinyurl.com/3nvnssb5>.

[2] See EBA, EIOPA and ESMA: Final report on Draft Regulatory Technical Standards to specify the elements which a financial entity needs to determine and assess when subcontracting ICT services supporting critical or important functions as mandated by Article 30(5) of Regulation (EU) 2022/2554, dated 26 July 2024 <https://tinyurl.com/yecuvkwj>.

[3] See European Commission webpage for the Expert Group on B2B data sharing and cloud computing contracts (E03840) here <https://tinyurl.com/bddb9vz9>.

The post IT Contracts in 2025: the new MCTs and SCCs appeared first on Society for Computers & Law.

]]>
Another Chinese court finds that AI-generated images can be protected by copyright: the Changshu People’s Court and the ‘half heart’ case https://www.scl.org/another-chinese-court-finds-that-ai-generated-images-can-be-protected-by-copyright-the-changshu-peoples-court-and-the-half-heart-case/ Tue, 15 Apr 2025 14:40:00 +0000 https://www.scl.org/?p=18139 Chinese courts take a different approach to the issue of AI generating copyright protected images, the DLA Piper team reports. On 7 March 2025, the Changshu People’s Court (in China’s Jiangsu province) announced that it had recently concluded a case on the topical issue of whether AI-generated works can be protected by copyright. In the...

Read More... from Another Chinese court finds that AI-generated images can be protected by copyright: the Changshu People’s Court and the ‘half heart’ case

The post Another Chinese court finds that AI-generated images can be protected by copyright: the Changshu People’s Court and the ‘half heart’ case appeared first on Society for Computers & Law.

]]>
Chinese courts take a different approach to the issue of AI generating copyright protected images, the DLA Piper team reports.

On 7 March 2025, the Changshu People’s Court (in China’s Jiangsu province) announced that it had recently concluded a case on the topical issue of whether AI-generated works can be protected by copyright. In the case, a plaintiff surnamed Lin used the AI tool Midjourney to create an image, and then Photoshop to further refine it. The image depicted a half-heart structure floating on the water in front of a cityscape, in which the other half of the heart was ‘completed’ by its reflection in the water. The plaintiff posted the image on social media and also obtained copyright registration for the image in China.

An inflatable model company and a real estate company posted images substantially similar to the plaintiff’s image on their social media accounts and the inflatable model company’s 1688 online store, and also created a real 3D installation based on the image at one of the real estate company’s projects.

The court found for the plaintiff, requiring that the inflatable model company publicly apologise to the plaintiff on its Xiaohongshu (RedNote) account for three consecutive days, and that the defendants compensate the plaintiff for economic losses and reasonable expenses totalling RMB 10,000. Although both the plaintiff and the defendants had rights of appeal, neither party appealed and the decision is now effective.

In reaching its decision, the court first examined the Midjourney user agreement which stipulates that the rights in outputs prompted by users belong to the user with very few exceptions. The court then examined the iterative process by which Midjourney users can modify the prompt text and other details of the output images. On this basis, the court held that the plaintiff’s crafting of their prompt and subsequent modification of the image reflected their unique choices and arrangement, making the ultimate image an original work of fine art protected by copyright. The defendants infringed the copyright in that image by disseminating it online without the plaintiff’s permission and using it without naming the plaintiff as the author. However, the court held that the copyright enjoyed by Lin was limited to the 2D image as recorded in the copyright registration certificate (rather than the idea of the 3D half-heart art installation as depicted in the image); the construction of the physical 3D installation by the defendants based on the central idea of Lin’s work (i.e. a half-heart floating on the water, an idea used by many prior works) did not infringe Lin’s copyright.

In the court’s WeChat post, some illustrative comments were shared from Hu Yue, Deputy Director of the court’s Intellectual Property Tribunal. “The premise for AI-generated content to be recognised as a work is that it should be able to reflect the original intellectual input of a human,” Hu states. He comments that “for creators, this judgement is a ‘reassurance’. It clarifies that creators who use AI tools to create have legal copyright over their works provided that the works have innovative design and expression (…) In addition, this case lawfully determined that the use of the ideas and concepts of another person’s work does not constitute infringement, which avoids overprotection of copyrights and abuse of rights, and is conducive to guiding the people on how to further innovate on the basis of using AI.”

Our comments

Cases involving generative AI and IP issues are going through courts around the world. US cases dominate, particularly on the issue of whether use of copyright works to train an AI model constitutes copyright infringement. However, courts in China have been notable for their boldness on the issue of copyright subsistence. Decisions in 2019 and 2020 from the Shenzhen City Nanshan District People’s Court, the Beijing Internet Court and the Beijing Intellectual Property Court have all found that AI-assisted text-based works could be protected by copyright. Most importantly, the Beijing Internet Court in November 2023 issued a significant decision in which it held that the plaintiff enjoyed copyright in an image generated using the AI tool Stable Diffusion. It was critical to the decision that the plaintiff had engaged in a process of “intellectual creation” by independently designing and refining the features of the image through several rounds of input prompts and parameter adjustments, and by making artistic choices regarding the final outcome. Applying similar reasoning, this latest case from the Changshu People’s Court is the second in China granting copyright protection to AI-generated images reflecting the “original intellectual input of a human”.

The relative willingness of Chinese courts to find subsistence of copyright in AI-generated works created by user prompts can be compared with the position in the United States, where the United States Copyright Office has refused protection for AI-generated visual artworks in at least four cases. Guidance issued by the Office in March 2023 and January 2025 reiterates that: copyright protects only materials that are the product of human creativity; copyright protection is not available for purely AI-generated content, but human contributions to AI-assisted works are protectable, with protection analysed on a case-by-case basis; and user prompts alone are insufficient to justify copyright protection for the output. The importance attributed to human input is shared with China; however, it is safe to say that a global consensus on this issue has yet to emerge.

In the meantime, China is becoming a world leader in both AI innovation and regulation. China’s National Intellectual Property Administration in December 2024 issued guidelines on patent applications for AI-related inventions, providing welcome guidance to firms seeking IP protection for innovations involving or assisted by AI. This follows the National Technical Committee 260 on Cybersecurity’s September 2024 release of an AI Safety Governance Framework, outlining principles for tackling AI-related risks in accordance with a “people-centered approach” and the “principle of developing AI for good.”

Edward Chatterton is a Partner at DLA Piper where he is Global Co-Chair of Trademark, Copyright and Media Group and Co-Head of IPT, Asia

Joanne Zhang is a Registered Foreign Lawyer (New York, USA) in the Intellectual Property & Technology team based in DLA Piper’s Hong Kong office. She is dually qualified in New York, USA, and China.

Liam is a Knowledge Development Lawyer in DLA Piper’s Intellectual Property and Technology group. He is based in the APAC region and focuses on trademark, copyright, media and artificial intelligence issues across the international practice.

The post Another Chinese court finds that AI-generated images can be protected by copyright: the Changshu People’s Court and the ‘half heart’ case appeared first on Society for Computers & Law.

]]>
Exploring Competition in Cloud and AI Podcast: Episode 1 – The Status Quo https://www.scl.org/exploring-competition-in-cloud-and-ai-podcast-episode-1-the-status-quo/ Fri, 11 Apr 2025 10:45:22 +0000 https://www.scl.org/?p=18129 We have teamed up with the LIDC (International League of Competition Law) to share a series of podcasts examining some of the increasingly pressing questions around cloud computing, AI and competition law. Over seven episodes, recorded in November 2024, Ben Evans, Shruti Hiremath and guests will look beyond the current position to identify some of...

Read More... from Exploring Competition in Cloud and AI Podcast: Episode 1 – The Status Quo

The post Exploring Competition in Cloud and AI Podcast: Episode 1 – The Status Quo appeared first on Society for Computers & Law.

]]>

We have teamed up with the LIDC (International League of Competition Law) to share a series of podcasts examining some of the increasingly pressing questions around cloud computing, AI and competition law.

Over seven episodes, recorded in November 2024, Ben Evans, Shruti Hiremath and guests will look beyond the current position to identify some of the pressures the changing landscape will bring to bear.

Episode 1: The Status Quo

The current state of competition law for cloud computing and what the regulators are up to now.

Episode 1 sets the listener up for a deep dive into cloud computing and AI later in the series with a high-level discussion of the key competition concerns that have been raised across the AI stack.

The AI stack broadly comprises four components: data, compute (encompassing chips and cloud computing), foundation models, and AI applications. The panel reflect on the recent media and policy focus on the compute component and the widely reported chip shortages that have led competition authorities in the EU and USA to investigate how supply is being allocated. While there may have been shortages, these shortages – and any related competition concerns – should be considered against the backdrop of a sudden surge in AI product development, which may not represent a forward-looking picture of chip supply. Indeed, the recent proliferation of new chip development from firms including AMD, Intel, Google, OpenAI and Amazon suggests that competition for the supply of chips is fierce.[1] Authorities around the world are also showing considerable interest in cloud competition, focussing in particular on potential barriers to switching and interoperability. Episodes 3 and 4 are dedicated to exploring these issues in depth.

Turning attention to foundation models, the panel introduces concerns raised in particular by the UK Competition and Markets Authority (CMA) and the French Competition Authority (FCA) that firms perceived as controlling key inputs – principally data, cloud and skills – may restrict access in order to shield themselves from competition. Further concerns raised by authorities include the risk that cloud providers could exploit their market positions to distort foundation model choice, potentially engaging in self-preferencing à la Google Shopping (Case C-48/22 P Google and Alphabet v Commission). This discussion whets the appetite for a dissection of AI competition in a later episode.

Bringing the introductory session to a close, the panel also touches on concerns being raised by competition authorities that firms may be using strategic partnerships to reinforce, expand or extend existing market power through the value chain. This thorny issue is explored in greater detail later in the podcast series in an episode focussed on mergers and acquisitions, but at the outset thought is given to the importance of protections for investors in nascent technologies, with a parallel drawn to the pharmaceutical industry.

Panel

Ben Evans (chair) is a Postgraduate Researcher at the School of Law and Centre for Competition Policy, University of East Anglia. He is a member of the LIDC Scientific Committee.

Shruti Hiremath is Counsel in the Clifford Chance Antitrust Team in London.

Lauren Murphy is Founder and CEO of Friday Initiatives.

Sean Ennis is Director of the Centre for Competition Policy and a Professor of Competition Policy at Norwich Business School, University of East Anglia.


[1] Further recent developments, such as the emergence of more efficient models like DeepSeek’s R1, have also raised questions about the continued need for a large number of chips.

The LIDC NEX GEN Podcast Series on ‘Competition in Cloud and AI’ explores some of the most topical and hotly debated questions with a panel of leading international experts from academia, legal practice and industry.

The series was recorded on 7 November 2024, and the views and opinions expressed therein reflect the legal context and state of affairs up to that date.

You can also watch or listen via the LIDC website, YouTube or Spotify.

The post Exploring Competition in Cloud and AI Podcast: Episode 1 – The Status Quo appeared first on Society for Computers & Law.

]]>
This Week’s Techlaw News Round-Up https://www.scl.org/this-weeks-techlaw-news-round-up-49/ Fri, 11 Apr 2025 08:33:54 +0000 https://www.scl.org/?p=18181 UK law Electronic Communications (Networks and Services) (Designated Vendor Directions) (Penalties) Order 2025 The Electronic Communications (Networks and Services) (Designated Vendor Directions) (Penalties) Order 2025 SI 2025/443 has been made.  It makes consequential amendments to the Electronic Communications (Networks and Services) (Penalties) (Rules for Calculation of Turnover) Order 2003, SI 2003/2712 which in summary covers...

Read More... from This Week’s Techlaw News Round-Up

The post This Week’s Techlaw News Round-Up appeared first on Society for Computers & Law.

]]>
UK law
Electronic Communications (Networks and Services) (Designated Vendor Directions) (Penalties) Order 2025

The Electronic Communications (Networks and Services) (Designated Vendor Directions) (Penalties) Order 2025, SI 2025/443, has been made. It makes consequential amendments to the Electronic Communications (Networks and Services) (Penalties) (Rules for Calculation of Turnover) Order 2003, SI 2003/2712, which in summary covers how certain penalties are calculated in relation to turnover under the Communications Act 2003. It came into force on 3 April 2025.

CAP and BCAP update advertising codes to align with Digital Markets Act 2024

CAP and BCAP have published amendments to their advertising codes, which took effect on 8 April 2025. The amendments align the Codes with the unfair commercial practices provisions in the Digital Markets, Competition and Consumers Act 2024, which came into force on 6 April. The changes include new rules on drip pricing and fake reviews. Both the CMA and the ASA will delay enforcement on fake reviews for three months. The ASA has also said that it will align its enforcement on drip pricing with the CMA’s approach.

DSIT and NCSC launch new Cyber Governance Code of Practice for boards

The Department for Science, Innovation and Technology (DSIT) and the National Cyber Security Centre (NCSC) have published a new Cyber Governance Code of Practice on 8 April 2025, following industry consultation in 2024. The Code outlines actions for boards and directors to manage cyber security risks across five areas: risk management, strategy, people, incident planning, and assurance. It forms part of a wider governance package that includes training and an implementation toolkit, primarily targeting medium and large organisations. The Code was developed in response to data showing that 74% of large businesses experienced cyber attacks in the past year.

EU law

European Commission’s Expert Group on B2B data sharing and cloud computing contracts publishes final report

The European Commission’s Expert Group on B2B data sharing and cloud computing contracts has published its final report. It contains non-binding model contractual terms on data access and use, as well as standard contractual clauses for cloud computing contracts under Article 41 of the EU Data Act.

Joint letter published on the EU’s need for AI liability rules

Several civil society organisations and BEUC have written to Executive Vice-President Virkkunen and Commissioner McGrath to share their concerns that the AI Liability Directive proposal (AILD) is being withdrawn and to urge them to begin preparatory work on new AI liability rules. They seek, at the very least, a non-fault-based liability approach that will make it easier for consumers harmed by an AI system to seek compensation.

European Commission launches AI Continent Action Plan

The European Commission has launched its AI Continent Action Plan. It revolves around five pillars: building a large-scale AI data and computing infrastructure; increasing access to large and high-quality data; developing algorithms and fostering AI adoption in strategic EU sectors; strengthening AI skills and talent; and simplifying regulation. The Commission will also launch the AI Act Service Desk to help businesses comply with the AI Act. It will serve as the central point of contact and hub for information and guidance on the AI Act. In May the Commission will consult on its Data Union Strategy.

European Commission consults on cloud and AI policies in the EU

The European Commission is consulting on the preparatory work for the Cloud and AI Development Act and the single EU-wide cloud policy for public administrations and public procurement. The Commission seeks views on the EU’s capacity in cloud and edge computing infrastructure, especially in light of increasing data volumes and demand for computing resources, both fuelled by the rise of compute-intensive AI services. The Commission also seeks views on the use of cloud services in the public sector. The consultation ends on 4 June 2025.

European Commission launches public consultation and call for evidence on the Apply AI Strategy

The Commission’s AI Office has called for evidence and is consulting on its Apply AI Strategy, planned to be published later this year. The Apply AI Strategy is part of President von der Leyen’s Political Guidelines to make Europe a global leader in AI innovation. The Strategy will serve as a blueprint for the full adoption of AI in EU strategic sectors. In particular, the Apply AI Strategy aims to foster the integration of AI technologies into strategic sectors. These sectors include advanced manufacturing; aerospace; security and defence; agri-food; energy; environment and climate; mobility and automotive; pharmaceutical; biotechnology; robotics; electronic communications; advanced material design; and cultural and creative industries. The consultation aims to identify priorities, current challenges to the uptake of AI in specific sectors as well as potential solutions and policy approaches. The consultation also includes specific questions on the challenges in the AI Act implementation process and how the Commission and member states can support stakeholders better in implementing the legislation. The consultation ends on 4 June 2025.

Commission updates guidelines on responsible use of generative AI in research

The European Commission’s Directorate-General for Research and Innovation has published the second version of its guidelines on the responsible use of generative AI in research. One of the goals of the guidelines is to ensure that the scientific community uses generative AI responsibly. They take into account key principles on research integrity, as well as existing frameworks for the use of AI in general and in research specifically. The principles include honesty, reliability, respect and accountability. The Commission is also consulting on its AI in Science Strategy; that consultation ends on 5 June 2025.

The post This Week’s Techlaw News Round-Up appeared first on Society for Computers & Law.

]]>