Society & Justice Archives - Society for Computers & Law
https://www.scl.org/category/society-justice/

SCL Podcast “Technology & Privacy Laws Around The World” – Episode 5: Australia and New Zealand
https://www.scl.org/scl-podcast-technology-privacy-laws-around-the-world-episode-5-australia-and-new-zealand/
30 April 2025

In two common law nations where regulation intersects with digital innovation, and with relatively small populations, Australia and New Zealand offer distinct yet complementary perspectives on technology regulation and privacy law.

How do their legal systems address issues of safety in the digital age, privacy rights, and the interests of Indigenous communities? And in what ways do they align with, or diverge from, international standards set by Europe and the United States?

In this episode, host Mauricio Figueroa is joined by three experts to discuss the policy and normative landscape of Australia and New Zealand. Tune in for an interesting and thought-provoking conversation about privacy and tech in these two countries. Listen to the episode here: https://bit.ly/3Yquyz8

The Panel:

Mauricio Figueroa is a legal scholar and educator. His area of expertise is Law and Digital Technologies, and he has international experience in legal research, teaching, and public policy. He is the host of the SCL podcast “Privacy and Technology Laws Around the World”.

Andelka Phillips is an academic and writer, and her research interests are broadly in the areas of Technology Law, Privacy and Data Protection, as well as Medical Law, Intellectual Property, Cyber Security, and Consumer Protection. She has taught in law schools in four countries: the United Kingdom; the Republic of Ireland; New Zealand; and Australia. She is currently an Affiliate with the Bioethics Institute Ghent, Ghent University, Belgium and an Academic Affiliate with the University of Oxford’s Centre for Health, Law and Emerging Technologies (HeLEX). She is also an Associate Editor for the Journal of the Royal Society of New Zealand (JRSNZ), the first to be appointed from the discipline of Law. www.andelkamphillips.com

John Swinson is a former partner of a major international law firm and has 30 years of law firm experience in New York and Australia, with a principal focus on technology law and intellectual property law. He is a Professor of Law at The University of Queensland, where he teaches privacy law, cybersecurity law, and Internet & IT law.

Raffaele Ciriello is Senior Lecturer in Business Information Systems at the University of Sydney, whose research focuses on compassionate digital innovation and the ethical and societal impacts of emerging technologies. His work critically examines issues of digital responsibility, decentralised governance, and public interest technology, with recent projects spanning AI companions, blockchain infrastructures, and national digital sovereignty.

About the podcast

Join host Mauricio Figueroa and guests on a tour of tech law from across the globe. Previous episodes have focused on the use of ‘robot judges’ in several jurisdictions and developments in India, the USA and Japan. Future episodes will look at South America, Africa and Europe.

Exploring Competition in Cloud and AI Podcast: Episode 1 – The Status Quo
https://www.scl.org/exploring-competition-in-cloud-and-ai-podcast-episode-1-the-status-quo/
11 April 2025

We have teamed up with the LIDC (International League of Competition Law) to share a series of podcasts examining some of the increasingly pressing questions around cloud computing, AI and competition law.

Over seven episodes, recorded in November 2024, Ben Evans, Shruti Hiremath and guests will look beyond the current position to identify some of the pressures the changing landscape will bring to bear.

Episode 1: The Status Quo

The current state of competition law for cloud computing and what the regulators are up to now.

Episode 1 sets the listener up for a deep dive into cloud computing and AI later in the series with a high-level discussion of the key competition concerns that have been raised across the AI stack.

The AI stack broadly comprises four components: data, compute (encompassing chips and cloud computing), foundation models, and AI applications. The panel reflect on the recent media and policy focus on the compute component and the widely reported chip shortages that have led competition authorities in the EU and USA to investigate how supply is being allocated. While there may have been shortages, these shortages – and any related competition concerns – should be considered against the backdrop of a sudden surge in AI product development, which may not represent a forward-looking picture of chip supply. Indeed, the recent proliferation of new chip development from firms including AMD, Intel, Google, OpenAI and Amazon suggests that competition for the supply of chips is fierce.[1] Authorities around the world are also showing considerable interest in cloud competition, focussing in particular on potential barriers to switching and interoperability. Episodes 3 and 4 are dedicated to exploring these issues in depth.

Turning attention to foundation models, the panel introduces concerns raised in particular by the UK Competition and Markets Authority (CMA) and the French Competition Authority (FCA) that firms perceived as controlling key inputs – principally data, cloud and skills – may restrict access in order to shield themselves from competition. Further concerns raised by authorities include the risk that cloud providers could exploit their market positions to distort foundation model choice, potentially engaging in self-preferencing à la Google Shopping (Case C-48/22 P Google and Alphabet v Commission). This discussion whets the appetite for a dissection of AI competition in a later episode.

Bringing the introductory session to a close, the panel also touches on concerns being raised by competition authorities that firms may be using strategic partnerships to reinforce, expand or extend existing market power through the value chain. This thorny issue is explored in greater detail later in the podcast series in an episode focussed on mergers and acquisitions, but at the outset thought is given to the importance of protections for investors in nascent technologies, with a parallel drawn to the pharmaceutical industry.

Panel

Ben Evans (chair) is a Postgraduate Researcher at the School of Law and Centre for Competition Policy, University of East Anglia. He is a member of the LIDC Scientific Committee.

Shruti Hiremath is Counsel in the Clifford Chance Antitrust Team in London.

Lauren Murphy is Founder and CEO of Friday Initiatives.

Sean Ennis is Director of the Centre for Competition Policy and a Professor of Competition Policy at Norwich Business School, University of East Anglia.


[1] Further recent developments, such as more efficient models like DeepSeek’s R1, have also raised questions about the continued need for large numbers of chips.

The LIDC NEX GEN Podcast Series on ‘Competition in Cloud and AI’ explores some of the most topical and hotly debated questions with a panel of leading international experts from academia, legal practice and industry.

The series was recorded on 7 November 2024, and the views and opinions expressed therein reflect the legal context and state of affairs up to that date.

You can also watch or listen via the LIDC website, YouTube or Spotify.

The ability of AI to increase access to justice
https://www.scl.org/the-ability-of-ai-to-increase-access-to-justice/
19 March 2025

Beth Gilmour explores the potential benefits and limitations of using AI to increase access to justice in the winning article of the SCL AI Group Junior Lawyer Article Competition.

Introducing Disruption
Imagine someone, sitting anxiously in a waiting room at a solicitor’s office they hastily found online. They clutch a notice of eviction in their hands, confused as to how their landlord can remove them from their home of ten years. Somewhere else, a woman waits on hold with a personal injury helpline after months of excruciating pain from slipping on an uneven paving stone, leaving her unable to work. Another man is on his commute home from work for the last time, frantically googling employment law, after being unexpectedly dismissed; he does not know what he can do or where next month’s rent will come from. Despite their differences, they all share one common question:


“Am I going to win?”


It is an age-old query that comes in different forms: can I stay in my home? How much will I get in damages? Ultimately, they want to know about the outcome.


When considering how AI can increase access to justice, we must start there—with the outcome. Too often, solutions to the crisis of access to justice are framed within the constraints of existing systems, which rely on the current process being supported by technology (sustaining technologies). While these efforts are valuable, artificial intelligence (“AI”) offers an opportunity to work outside of these constraints and construct new routes to the same just outcomes (disruptive technologies).1 This account will consider how access to justice has already benefitted from AI and what could be done next.

Organising Disruption: Efficiency
For AI-driven disruption to meaningfully increase access to justice, its implementation must be guided by clear principles with efficiency as the cornerstone. In this context, efficiency encompasses both expediency and accuracy in results. Timely justice has been a consistent guiding ethos, with the Magna Carta stating, “To no one will we sell, to no one deny or delay right or justice.”2 This remains pertinent today: as Zuckerman explains, the passage of time can diminish the value and enforceability of rights, making speed an essential element not just in procedure but in the dispensing of justice.3
Guided by efficiency and a willingness to move beyond traditional processes, AI can increase access to justice by providing faster, and still accurate, resolution that would otherwise take months of litigation.


Implementing Disruption
One clear example of AI systems enhancing access to justice is the rise of chatbots, such as AccessAva, which streamline legal information for those who need it most.4 AccessAva, developed by Carers UK in partnership with Access Social Care, is an online tool designed specifically for unpaid carers in the UK. It empowers users by providing easy-to-understand legal information, along with templates and resources which reduce the need for professional legal assistance. Other models include DoNotPay, which offers a similar service aimed at consumers. These chatbots are built on Large Language Models (“LLMs”), which are trained on large amounts of text data and, in turn, generate natural language responses to a wide range of inputs.5 What sets models like DoNotPay and AccessAva apart is their focus on supporting litigants directly. They disrupt by allowing computers to speak the language of lawyers, which was previously unachievable. Further, they increase efficiency by centralising the information needed by the individual and making justice more accessible to those who might otherwise struggle with traditional systems due to financial constraints or a lack of understanding. This exemplifies AI as a solution to problems highlighted by the 2023 Legal Needs Survey by the Law Society. A key finding was that, of people who faced a legal issue between 2019 and 2023, only 52% received professional help, with the rest either relying on family and friends or not seeking assistance at all.6 The survey highlights two key barriers: the cost of legal advice and a lack of understanding or confidence in engaging with the law. AI-driven platforms like AccessAva are precisely the kind of innovation that can overcome these obstacles, closing the gap and providing essential support to those who would otherwise struggle to access justice.

They can also be taken further. AI systems, particularly those using machine learning, can analyse patterns in large datasets to predict outcomes, which has the potential to take chatbots beyond the provision of legal information and into the realm of advice.7 For instance, researchers have used AI to predict the outcomes of European Court of Human Rights cases with 79% accuracy.8 A predictive capability like this has the potential to disrupt as it would allow an individual not only to understand what the next steps are but to make a well-informed decision on whether to pursue at all. Users could ask whether they have a strong case, whether pursuing it is cost-effective, or what outcomes they might expect. Accessing legal advice is a key element of access to justice, and an AI system which combines predictive outputs with user-friendly interfaces, like AccessAva and DoNotPay, has the potential to increase the number of people to whom such advice is available.
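To make the idea of pattern-based outcome prediction concrete, the sketch below shows, in simplified Python, how a statistical model can be trained on the text of past decisions and asked to estimate the likelihood of success of a new claim. The case texts, labels and library choices (scikit-learn's TfidfVectorizer and LogisticRegression) are illustrative assumptions for this article, not a description of the ECHR study or of any deployed legal-advice tool.

    # A minimal, hypothetical sketch of text-based outcome prediction.
    # All case texts and labels below are invented placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical past judgments labelled with their outcome
    # (1 = claim succeeded, 0 = claim failed).
    past_cases = [
        "tenant served invalid notice; landlord failed to protect deposit",
        "notice served correctly; tenant in arrears; possession granted",
        "employee dismissed without any procedure or prior warning",
        "dismissal followed a full disciplinary process and an appeal",
    ]
    outcomes = [1, 0, 1, 0]

    # Word-frequency features feed a simple linear classifier that learns
    # which patterns in past decisions correlate with each outcome.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
    model.fit(past_cases, outcomes)

    new_case = "landlord never protected the deposit and the notice was defective"
    probability = model.predict_proba([new_case])[0][1]
    print(f"Estimated chance of success: {probability:.0%}")

In practice such a model would be trained on many thousands of judgments and, as the article goes on to argue, its output would still need to be audited by lawyers before being relied on.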

Challenges
There are concerns which should be met head-on, the primary one being accuracy. If the architecture which underlies any predictive technology is wrong, the output will be too. Thus, any such model would have to be tightly regulated by humans (lawyers) with the knowledge of the underlying area and the ability to understand the dispute to ensure the algorithm does not result in litigants abandoning worthy claims. Legal minds will have a role at the point of data entry and in auditing the output. The changing role of the lawyer and the need for the legal sector to be reflexive with technological advancements is part of the disruption that access to justice solutions which use AI will bring about. The lawyer’s auditing role also includes identifying “hallucinations”, where a chatbot’s responses are incorrect or fabricated. The risk is lower with bespoke systems using specialised legal data than with general-purpose chatbots like ChatGPT.9 Despite the reduced risk, verification by a qualified legal mind is still necessary to ensure accuracy, and therefore efficiency. With this safeguard, chatbots can help democratise legal assistance.
Another significant concern regarding the use of AI in legal practice is that it could stunt the growth of the common law. A classic iteration of this concern comes from considering Donoghue v Stevenson,10 a seemingly simple case where a woman drank from a bottle which, unbeknownst to her, had a decomposing snail inside. While the case involved a straightforward fact pattern, it went all the way to the House of Lords and ultimately established the “neighbour principle,” a key development in negligence law. If the case had been fed into an AI advice system before this principle was established, the outcome might have been different, with the system possibly failing to recognise the broader legal implications of the case. This raises the concern that AI systems, by relying heavily on data from past decisions, might overlook the unique factors in a case that could lead to the establishment of new legal principles. If AI simply provides a binary answer – “good claim, pursue” or “no claim, tough luck” – it could ignore the nuanced, creative reasoning that legal professionals bring to the table. It must be acknowledged that whichever AI system is implemented, it has to be sophisticated enough to recognise and flag unique features of a case that may not align with past precedents. These features could prompt lawyers to consider how the case might develop or whether it warrants a new interpretation of the law.


Conclusion
Embracing AI has the potential to reduce delays and empower individuals with legal information and guidance. This disruption must be managed carefully with human oversight to address challenges in ensuring accuracy and preserving the flexibility of legal interpretation.
This account comes from a legal perspective, without the technical expertise to explore the underlying technology. That knowledge gap should not, however, remove lawyers from the conversation. They bring essential industry insights and knowledge that are key to the reflexive relationship between law and tech.
Ultimately, with such collaboration and safeguards, AI can bridge the justice gap, ensuring that more people can ask and answer the crucial question, “Am I going to win?”

Beth Gilmour is the winner of the SCL AI Group Junior Lawyer Article Competition. Beth is a Bar student and is currently a Judicial Assistant to High Court Judges in England and Wales.

  1. Richard Susskind, Tomorrow’s Lawyers: An Introduction to Your Future, Chapter 6 “Disruptive Legal Technologies” (3rd edition, Oxford University Press 2023)
  2. Magna Carta, Clause 40
  3. Adrian Zuckerman, Zuckerman on Civil Procedure: Principles of Practice, Chapter 1, p.17 (4th edition, Sweet & Maxwell 2021)
  4. AccessAva, available at https://www.accesscharity.org.uk/accessava (accessed December 2024)
  5. Robin Allen KC and Dee Master, Judges, Lawyers, and Litigation: Do They, Should They, Use AI? Paper for the Employment Law Bar Association (2024), p.17
  6. The Law Society, Find out what your clients need, with the results of our Legal Needs Survey, available at https://www.lawsociety.org.uk/topics/research/find-out-what-your-clients-need-with-the-results-of-our-legal-needs-survey (accessed December 2024)
  7. Richard Susskind (n 1), Table 6.1
  8. Nikolaos Aletras, Dimitrios Tsarapatsanis, Daniel Preoţiuc-Pietro and Vasileios Lampos, “Predicting judicial decisions of the European Court of Human Rights: a Natural Language Processing perspective” (2016) PeerJ Computer Science 2:e93
  9. Robin Allen KC and Dee Master (n 5), p.13
  10. [1932] AC 562

SCL Podcast “Technology & Privacy Laws Around The World” – Episode 3: USA
https://www.scl.org/scl-podcast-technology-privacy-laws-around-the-world-episode-3-usa/
10 March 2025

The U.S. legal system is at a crossroads in the field of technology: can it keep up with the rapid experimentation and deployment in AI? While some states push for stronger privacy and AI laws, the federal government appears to lean toward deregulation. How does this fragmented approach impact businesses, consumers, and the future of technology?

Mauricio Figueroa sits down with legal professionals Chris Mammen and Maria Angel, two experts based on opposite coasts of the U.S., bringing complementary and insightful perspectives. Together, they discuss the evolving legal challenges around privacy, intellectual property, and content moderation in the United States.

Mauricio Figueroa is a Mexican legal scholar based in the United Kingdom. His area of expertise is Law and Digital Technologies, and he has international experience in legal research, teaching, and public policy. He is the host of the SCL podcast “Privacy and Technology Laws Around the World”.

Chris Mammen is the Office Managing Partner of Womble Bond Dickinson’s San Francisco office, who has guided Silicon Valley, national, and global tech and life sciences clients in high-stakes patent, other intellectual property, and technology litigation for over 25 years. He has led both large and small trial teams in courts throughout the United States, and has also served as lead counsel on appeals before the Ninth and Federal Circuits. His clients include companies in the software, artificial intelligence, telecommunications, microelectronics, medical devices, and life sciences sectors. Before joining Womble Bond Dickinson in 2019, Chris practiced in the Bay Area offices of several nationally-known law firms. One of only a handful of practicing lawyers to have earned a doctorate in Law from Oxford University in addition to a U.S. law degree, Chris has held visiting faculty positions at UC Hastings School of Law, Berkeley Law School, Stanford Law School, and Oxford University. Drawing on his years of teaching civil procedure, evidence, e-discovery, and advanced patent law, Chris is a creative strategist and has marked wins for clients on a variety of unconventional issues. Before entering law practice, Chris clerked for Judge Robert Beezer on the U.S. Court of Appeals for the Ninth Circuit.

María P. Ángel is a Postdoctoral Resident Fellow at Yale Law School’s Information Society Project (ISP). She holds a Ph.D. from the University of Washington School of Law. Her research focuses on privacy law, law and technology, and Science and Technology Studies (STS). Using STS’s theoretical and conceptual framework, María examines legal debates on privacy regulation and algorithmic governance. Her work aims to influence and hopefully improve scholarship and public policy discussions on these matters, by surfacing the transformation of key working concepts used by legal actors, the normative commitments that underpin certain proposed regulatory reforms, and the diverse ways in which different stakeholders understand the role of technology in these conversations.

About the podcast

Join host Mauricio Figueroa and guests on a tour of tech law from across the globe. The first episode looked at the evolving use of ‘robot judges’ in several jurisdictions. Episode 2 focused on developments in India and touched on the thought-provoking issues of data protection, freedom of expression and algorithmic discrimination, with insights from local experts with experience in legal practice, academia and technology policy.
Future episodes will look at Southeast Asia, South America, Africa and Europe.

Moving Beyond DORA Ready to DORA Now
https://www.scl.org/moving-beyond-dora-ready-to-dora-now/
4 March 2025

Dr Paul Lambert highlights some of the key aspects of the Digital Operational Resilience Act (now in force) you should be aware of.

The Digital Operational Resilience Act, known as DORA, impacts the financial sector as well as Big (and Small) Tech firms supporting banks and other financial institutions. The go-live deadline for DORA was 17 January 2025. DORA will have significant impacts across the international finance sector, and on other types of firms beyond the core financial sector, but arguably few of these have been fully compliant from day one. For example, some firms were preparing to be “DORA ready” for day one, recognising that there would be a period of additional implementation measures needed throughout 2025.

Cyber Threats Background

Why are the Act and the concepts of operational resilience and digital operational resilience relevant?

Recently we had an example of not one but three major banking institutions suffering IT problems which halted services to their customers, starting with Barclays, and expanding to Lloyds Bank and Halifax. Last year NatWest, RBS and Ulster Bank also suffered IT issues. The internal and external IT threats and vulnerabilities facing the financial sector are expanding. AI, which is a subject in its own right, appears to be only enhancing this trend when used by bad actors.

Some of these threats were being contemplated when the policymakers began to develop DORA, alongside market issues.

According to the ECB, “with the use of information technology having become a large part of daily life, and even more so during the coronavirus (COVID-19) pandemic, the potential downsides of an increasing dependence on technology have become even more apparent. Protecting critical services like hospitals, electricity supply and access to the financial system from attacks and outages is crucial. Given the ever-increasing risks of cyber attacks, the EU is strengthening the IT security of financial entities such as banks, insurance companies and investment firms.” DORA will “make sure the financial sector in Europe is able to stay resilient through a severe operational disruption.”

Increased digitalisation – and interconnection – also “amplify ICT risk”, making society as a whole (and the financial sector in particular) more vulnerable to cyber threats and/or ICT disruptions and attacks from errant third parties.

The range of cyber threats is also increasing. They include, for example, hacking by bad actors, business email attacks, phishing, spear phishing, ransomware, viruses, Trojans, distributed denial of service (DDoS) attacks, web application attacks, mobile attacks, and more.

The threat is not just from direct attacks. There are increasing numbers of indirect attacks, where the bad actors seek to gain access via a trusted third-party service provider that the financial company uses. This is supply chain and service provider compromise.

Other risk issues include management risk and system risk, such as failing to patch known vulnerabilities.

The number, level of sophistication and complexity of attacks are all increasing.

Costs to the Sector

The recent outages at Barclays, Lloyds Bank and Halifax demonstrate that there is a direct cost to consumers. There can even be a direct financial cost when salary payments or mortgage payments are missed. The DORA policymakers were also concerned about the potential systemic effects of IT incidents on the wider financial system, the effect on consumer trust, industry and national economies.

The cost of the above threats continues to increase. By comparison, we already see very significant data fines arising under data laws such as personal data rules. For example, Meta has been fined €1.2 billion for one set of data breaches concerning data transfers, while TikTok has been fined €345 million and £14.5 million for data breaches regarding child data. These are just examples, with numerous other data fines in the billions across the globe.

Many firms have been fined as a result of ineffective security measures leading to them being hacked, thus demonstrating a lack of appropriate technical security measures and overall digital operational resilience. Already, even before the official go-live of DORA, firms have been receiving significant fines and penalties as a result of matters which cross over with digital operational resilience.

The Need for Digital Resilience

Regulators, whether the European Central Bank (ECB), the Bank of England (BOE), or the Fed in the US, are tasked with protecting the financial stability of their financial systems. As part of this they need to ensure that financial firms are financially resilient and stable, and some of the rules around financial stability stem from the last great recession.

But today, financial stability is not the only threat to financial entities and the wider financial system: IT, ICT, and cyber threats must also be reckoned with. An example of an IT change which, apparently due to a lack of testing prior to deployment, had widespread adverse effects across a range of industries was the SolarWinds incident. Financial entities often rely on third party suppliers or even outsource some of their core activities. Firms can be adversely affected when one of these third parties is exposed to a cyberattack. Bank of America, for example, had to warn its customers after one of its suppliers (IMS) was hacked by bad actors. Financial entities using service providers such as AddComm and Cabot have also encountered problems when these suppliers were involved in cyberattacks. Christine Lagarde (President of the ECB) states that “cyberattacks could trigger a serious financial crisis.” Piero Cipollone (ECB Executive Board) states that “cyber risks have become one of the main issues for global security. They have been identified as a systematic risk to the stability of the European financial system.” Unfortunately, it is not limited to just the European financial system.

Now, financial institutions must also ensure that they are digitally operationally resilient and prepared for these internal and external tech threats.

Digital Operational Resilience Rules

DORA promotes rules and standards to mitigate Information and Communications Technology risks for financial institutions. One of the objectives of DORA is to “prevent increased fragmentation of rules applicable to ICT risk management” by establishing common rules and standards.

DORA “addresses today’s most important challenges for managing ICT risks at financial institutions and critical ICT third-party service providers.” These risks must be properly managed for digitalisation to “truly deliver on the many opportunities it offers for the banking and financial industry.” For example, better analysis and better data management can assist financial institutions to become more resilient. Also, “early warning systems” and automated alerts could enhance ICT risk management and digital operational resilience.

Key Focus Areas of DORA

DORA deals with five key pillar areas, namely:

  • ICT risk management
  • ICT-related incident management, classification and reporting
  • digital operational resilience testing (DORT)
  • ICT third-party risk management (TPRM)
  • information-sharing arrangements (ISAs).

Arguably, the rules and requirements for pillar 5 above are the least well developed and are likely to evolve during 2025 and 2026.

A very complex set of rules and requirements sits behind each of these pillars of the core DORA regulation. DORA sets out a broad array of new obligations for financial entities, outsource companies and technology companies supporting the financial sector. Some of these new rules mean new or enhanced:

  • ICT risk management and governance
  • ICT policies and procedures
  • ICT incident management and reporting
  • change management
  • digital operational resilience
  • digital operational resilience testing
  • ICT third party risk management
  • business continuity
  • cyber security
  • training
  • information sharing on threats.

Extensive Sub Rules

DORA is a legal Regulation. Being a law, it is labelled a Level 1 requirement. Unfortunately for industry, there is an expansive range of even more detailed legal and technical requirements at Level 2 below the Level 1 rules.

The array of DORA sub rules is vast. They are referred to as the Level 2 rules, with the main DORA Regulation representing Level 1. The Level 2 rules are then further separated into four types of sub rules, namely:

  • Regulatory Technical Standards or RTS
  • Implementing Technical Standards or ITS
  • Guidelines
  • (Independent) Commission Delegated Regulations.

The RTS, ITS and Guidelines were developed by the ESAs (the European Supervisory Authorities), a combination of European financial regulators. The scope of these detailed Level 2 rules has added to the already complicated nature of the technical and regulatory compliance efforts required of financial entities. They are collectively far more extensive than the DORA Level 1 rules. An additional difficulty is that the Level 2 rules have come out over different time periods. The ones that are developed by the ESAs generally need to be reviewed, amended and implemented by the Commission. While the ESAs had specific time deadlines, the Commission did not have to specify when it would finalise the Level 2 rules.

Therefore, the rules have come out at different times, adding extra difficulties for financial institutions. Indeed, even near the end of 2024, not all Level 2 rules were fully set out – even though the go-live date was imminent in January 2025.

In addition, we can also add two further layers of DORA regulations. There will be a certain level of national DORA direct legislation (Level 3) and national financial regulator rules (Level 4). Some of this is still in process.

Level 2 Regulatory Technical Standards

The RTS are:

  • Commission Delegated Regulation specifying ICT risk management tools, methods, processes, and policies and the simplified ICT risk management framework
  • Commission Delegated Regulation specifying the criteria for the classification of ICT-related incidents and cyber threats, setting out materiality thresholds and specifying the details of reports of major incidents
  • RTS to specify the policy on ICT services supporting critical or important functions provided by ICT third party services
  • Commission Delegated Regulation specifying the detailed content of the policy regarding contractual arrangements on the use of ICT services supporting critical or important functions provided by ICT third-party service providers
  • RTS on threat-led penetration testing (TLPT)
  • RTS and ITS on content, timelines and templates for incident reporting (drafted by ESAs, apparently awaiting Commission implementing measure)
  • RTS on oversight harmonisation
  • RTS on Joint Examination Teams (JET).

Level 2 Implementing Technical Standards

There is one ITS, relating to the Register of Information.

Level 2 Guidelines

There are two DORA Level 2 Guidelines on:

  • aggregated costs and losses from major incidents (adopted by ESAs)
  • oversight cooperation between ESAs and competent authorities (adopted by ESAs).

Level 2 Delegated Regulations

There are two Commission Delegated Regulations which are independent of the ESAs, as follows:

  • Commission Delegated Regulation specifying the criteria for the designation of ICT third-party service providers as critical for financial entities
  • Commission Delegated Regulation determining the amount of the oversight fees to be charged by the Lead Overseer to critical ICT third-party service providers and the way in which those fees are to be paid.

DORA Ready to DORA Now

Some of the details of the Level 2 sub regulations were finalised very close to the go-live date, and financial institutions had difficulty in fully understanding all the rules and nuances of the new regime and, importantly, in complying with these rules as some were not yet bedded down. The many layers of compliance requirements across multiple legal and technical instruments made this task vastly more complicated, time-consuming, and costly.

The effort needed to interpret and apply these expansive rules, compounded by the late issue of some of the official materials, has meant that financial entities and suppliers have faced significant challenges to reach a level even approaching compliance now, and they will need to expand the maturity of such compliance over the coming years.

While it was understandable until now to prepare on the basis of being “DORA ready” (as much as one can be), the focus must shift to “DORA now”: getting all of DORA and the sub regulations in place, alongside the measures needed to demonstrate digital operational resilience into the future.

Dr Paul Lambert is the author of “DORA, Interpreting the EU’s Digital Operational Resilience Act” (published by Bloomsbury) and the editor of Gringras, The Laws of the Internet.

Robot Judges podcast: An interview with Tomás McInerney
https://www.scl.org/robot-judges-podcast-an-interview-with-tomas-mcinerney/
15 January 2025

SCL has recently launched a new season of podcasts surveying Technology and Privacy Law Around the World, hosted by Mauricio Figueroa. The first one in the season looks at the idea of Robot Judges, and Mauricio spoke after recording with one of the panel, Tomás McInerney, to find out more about his experience of being part of the project and his work in the area.

What got you involved with AI and judicial decision-making?

Understanding how emerging technologies challenge and redefine many traditionally human decision-making processes is an increasingly important question. There is certainly a concern amongst some that we have given away too much to new AI systems in general, especially when those systems are largely controlled and disseminated by major technology companies. The bearing that these developments have on the fundamentally human activity of judging disputes in the court is therefore fertile ground for analysis. Deploying AI in possibly the most crucial decision-making context of all – the judicial role – raises fundamental questions regarding the emotional and cathartic elements of delivering justice, respect for the delivery of individualised justice in many cases, and the absence of arbitrariness in decision-making. These questions make us consider the role of the judge in an age where efficiency, cost-saving, and access to justice are increasingly pertinent. There is certainly a role for AI in the courts, and perhaps in some ‘low-level’ decision-making contexts, but the questions of where and why AI may be appropriate here need to be answered first.

How did you feel about participating and as a listener, what themes would you like to hear more on?

It was really enjoyable: a great opportunity to explore complex ideas in an accessible format, and it is always rewarding to think about how these discussions might resonate with listeners and a wider audience. The process also gave me a chance to reflect on how to frame key debates in law and technology in ways that inspire curiosity and encourage deeper engagement.

As a listener, I would be particularly interested in themes that critically examine the global impacts of digital technologies, and particularly Generative AI, on law. Exploring how different jurisdictions are grappling with interdisciplinary challenges like data bias, data privacy, and AI benchmarking would provide some great insights. For example, comparing regulatory approaches in the EU, the US, and emerging frameworks in the Global South could highlight how cultural, legal, and political contexts shape responses to technological change.

 Ultimately, topics that unpack the human dimension of digital technologies, whether in terms of access to justice, fairness, or legal certainty, would be particularly compelling. These are the kinds of conversations that could stimulate a richer discussion for academics, legal practitioners, and a wider audience.

Where can we learn more about the projects you mentioned in the podcast?

A good starting point, in the near future, will be the monograph I am working on, which builds on my doctoral research. It examines the limits of AI in judicial decision-making, focusing on what it is about the human act of judging that cannot – and perhaps should never – be replicated by machines. This expands on several of the ideas I touched on during the podcast.

For now, I’d recommend a recent co-authored chapter of mine, available as a preprint, which addresses similar themes. For those interested in further reading, Morison and Harkens’ 2019 paper provides an excellent foundation for understanding many of the issues raised during our discussion. Additionally, Deakin and Markou’s edited collection, Is Law Computable?, is a valuable resource that brings together diverse perspectives at the intersection of AI and law.

I also have a chapter forthcoming in an edited collection on Epistemic Injustice, titled The Algorithmic Construction of Epistemic Injustice. This explores the processes of constructing Large Language Models, unpacking the problematic practices and assumptions that often embed injustice into these systems and perpetuate it in their downstream applications.

Tomás McInerney was in conversation with Mauricio Figueroa. To find out more about them, and to listen to the podcast, visit podcasts.scl.org.

About the podcast

Over the next few months, Mauricio will host a unique series of conversations on tech law from across the globe with scholars and practitioners from different jurisdictions and expert fields.

The next episode in the series, looking at developments in India, will be released on 20th January and touches on the thought-provoking issues of data protection, freedom of expression and algorithmic discrimination, with insights from local experts with experience in legal practice, academia and technology policy.

Book Review: Living with the Algorithm – Servant or Master?
https://www.scl.org/book-review-living-with-the-algorithm-servant-or-master/
9 January 2025

Darren Grayson Chng on a book making the case for greater regulation of AI

It was at a webinar on AI and ethics in July 2024 that I first heard Lord Tim Clement-Jones speak. After hearing him speak I wanted to hear more. I knew I had to get my hands on the AI regulation and policy book that he said he had published.

Living with the Algorithm – Servant or Master? opens with an exploration of the narratives around AI, and how governments have been grappling with regulating a rapidly evolving technology that can be used for good but also for harm. The author’s view is that governments should develop and implement a governance framework that encourages transparency and is designed to gain and develop stakeholder trust.

At this point in time when countries around the world are competing to be the top AI hub and are thinking about whether to regulate AI, how to do so, and what would encourage innovation and investment rather than scare it away, the author makes the pointed comment that focusing on innovation-friendly regulation can mislead regulators and hinder effective governance. Instead, regulators should focus on assessing and calibrating risk, and providing guardrails for high-impact outcomes.

Chapter 2 discusses how many governments identify and plan for AI risks. Chapter 3 examines the impact of AI on democracy and freedom of speech. It talks about how AI has contributed to disinformation, and how countries are trying to mitigate or prevent AI-specific risks to democratic values.

Chapter 4 focuses on public sector adoption of AI technologies, with sections devoted to live facial recognition and autonomous weapons systems. The author argues that even when automated decision making is not relied upon solely, the impact of such systems across an entire population can be immense in terms of potential discrimination, breach of privacy, and access to justice. As an in-house lawyer who now implements AI regulations but who used to work for the government, I found the author’s short discussion about the utility of procurement rules and contractual clauses in ensuring the quality of AI systems rather interesting.

Another enjoyable chapter was Chapter 5, which discusses the complex relationship between AI and intellectual property, and how AI challenges traditional notions of IP rights and ownership. Chapter 6 covers digital skills training and education for the future, the importance of digital literacy, and how to combat digital exclusion and data poverty.

Chapter 7 surveys the landscape of ethical AI principles before talking about legal liability and corporate governance. The author says that boards must have the right skill sets to understand what technology the company is using, and how it is using and managing it, in order to fulfil their oversight role.

He also suggests questions that boards should ask when considering the adoption of AI solutions, questions which I think are pertinent and important, and which I think companies still trying to get AI governance in place will find challenging to answer. Examples are:

  • How is ethics around technology included within board governance? How often is ethics and technology discussed by the board?
  • How does accountability between the business leadership and technology specialists fit together? Who is accountable at board level for these issues?
  • What is the risk appetite of the business for the adoption of new technologies? How is risk assessed?

Chapter 8 looks at the differing approaches to AI regulation adopted by the EU, US, and UK, and the role of international standards. I like that the author devoted space (Chapter 9) to examining geopolitical tensions with China. Finally, Chapter 10 recaps the key themes discussed throughout the book, emphasising the need for thoughtful regulation of AI.

I think that Living with the Algorithm – Servant or Master? is a jewel for policymakers and regulators dealing with AI. For readers in other professions, it will be an insightful introduction to the range of challenges in regulating AI, both challenges that governments have to grapple with as well as challenges that arise because of how governments work.

This book clearly reflects the author’s significant expertise in AI policy. I cannot help but wish that it was much longer than 160 pages with deeper discussions on various topics like mitigating AI risks, IP, and managing geopolitical tensions.

Darren Grayson Chng is a data and tech lawyer in Singapore.

About the book

Living with the Algorithm – Servant or Master? by Tim Clement-Jones

£12.39

Published March 2024

Paperback, 160 pages

ISBN: 1911397923

This week’s Techlaw news round-up
https://www.scl.org/this-weeks-techlaw-news-round-up-36/
20 December 2024

Online Safety Act 2023 (Commencement No 4) Regulations 2024 made

The Online Safety Act 2023 (Commencement No 4) Regulations 2024 (SI 2024/1333) have been made. These Regulations are the fourth commencement regulations under the Online Safety Act 2023. They bring into force on 17 January 2025 the duties about regulated provider pornographic content in section 81 and other provisions, such as Ofcom’s enforcement and information powers and the offence of failing to comply with a confirmation decision, as they relate to section 81.

CMA publishes final digital markets competition regime guidance

The CMA has published its final digital markets competition regime guidance. It provides advice and general information to businesses, their advisers and other stakeholders on the approach used by the CMA in operating the digital markets competition regime, set out in the Digital Markets, Competition and Consumers Act 2024. The guidance received approval from the Secretary of State for Business and Trade on 17 December 2024 and takes effect from 1 January 2025. The CMA has also published relevant guidance for the reporting of a merger by firms designated by the CMA as having Strategic Market Status (SMS) under the Act.

Advertising Standards Authority publishes update on online supply pathway of age-restricted ads

The ASA has published a report providing a unique insight into the online supply pathway of ads for alcohol, gambling and other age-restricted products. The ASA’s five-year strategy commits to protecting children and other vulnerable audiences and bringing greater transparency and broader accountability to its online advertising regulation. The ASA’s report presents the perspectives of advertisers, publishers and ad supply intermediaries on the relatively few cases, identified by automated monitoring, of age-restricted ads mistargeted to websites and YouTube channels disproportionately popular with children. The report highlights what can be done to reduce children’s exposure to age-restricted ads online (such as those for alcohol or gambling). The study also describes compliance processes in place, and steps taken, to target age-restricted ads away from children in line with CAP Guidance on Age-restricted Ads Online. Whilst breaches of the advertising codes are few in number, the ASA says that it remains important to examine the circumstances that lead to the ads being mistargeted to sites disproportionately popular with children. For example, the report provides specific case study evidence around mis-categorisation of age-restricted ads, which, if categorised correctly, would likely have prevented the ads from being served, and inadequacies relating to the blocklisting of publications disproportionately popular with children.

Ofcom consults on technology notices

Ofcom is consulting on two parts of the framework that underpin Ofcom’s Online Safety Technology Notice powers: its proposals for what the minimum standards of accuracy for accredited technologies could be, to inform its advice to the UK government; and its draft guidance about how it proposes to use this power. Under the Online Safety Act, Ofcom has powers to tackle terrorism and child sexual exploitation and abuse (CSEA) content. It can, where it decides that it is necessary and proportionate, make a provider use a specific technology to tackle terrorism and/or child sexual exploitation and abuse (CSEA) content, or develop technology to tackle CSEA content. Ofcom would do this by issuing a Technology Notice under section 121 of the Act. Any technology that Ofcom requires a provider to use will need to be accredited either by Ofcom, or someone Ofcom appoints, against minimum standards of accuracy set by the UK government, after advice from Ofcom. The consultation ends on 10 March 2025.

PSA publishes final annual report

The Phone-paid Services Authority (PSA) has published its final annual report before it transfers responsibility for regulating phone-paid services to Ofcom in 2025, after which it will cease operations. The organisation highlighted its agile approach to regulation, including the introduction of Code 15, which shifted focus from enforcement to prevention. The PSA also says that it has reduced consumer detriment by over 85% when regulating Information, Connection and Sign-posting Services.

FCA issues discussion paper on cryptoassets

The FCA has published a discussion paper on the future market abuse regime for cryptoassets and the cryptoasset admissions and disclosures regime. In 2023, the UK government announced plans to legislate for a future financial services regime for cryptoassets. This would bring certain cryptoasset activities into the FCA’s regulatory perimeter. The Treasury published its initial consultation and call for evidence in February 2023, followed by its response in October. In November 2024, the Labour government confirmed it will proceed with legislation to bring cryptoassets into the FCA’s regulatory perimeter. Under the government’s plans, the FCA’s regulatory remit for cryptoassets will expand from the current Money Laundering, Terrorist Financing and Transfer of Funds (Information on the Payer) Regulations 2017 and Financial Promotions regime to a more comprehensive conduct regime. This will cover cryptoasset trading, regulation of stablecoins, intermediation, custody and other core activities. The FCA has issued the discussion paper to help inform the development of a balanced regime that addresses market risks without stifling growth. It seeks views by 14 March 2025.

Revised EU Product Liability Directive enters into force

The revised Product Liability Directive ((EU) 2024/2853) applies to new products placed on the EU market from 9 December 2026. It updates the product liability framework for victims seeking compensation for damage, such as personal injury, property damage and damage to data, caused by defective products. It also aims to provide greater legal certainty for economic operators. It applies to all products, including household items as well as AI, software and product-related digital services, and to products sold online. Under the revised Product Liability Directive, the European Commission will also develop a publicly accessible EU database of court judgments on product liability cases. This aims to give more information about how the rules apply.

Ofcom publishes final version of illegal harms guidance under Online Safety Act
https://www.scl.org/ofcom-publishes-final-version-of-illegal-harms-guidance-under-online-safety-act/ (20 December 2024)

Ofcom has published its first-edition codes of practice and guidance on tackling illegal harms, such as terror, hate, fraud, child sexual abuse and assisting or encouraging suicide, under the UK’s Online Safety Act.

The Act places new safety duties on social media firms, search engines, messaging, gaming and dating apps, and pornography and file-sharing sites. It requires Ofcom to produce codes of practice and industry guidance to help firms to comply, following a period of public consultation.

Ofcom says that it has carefully considered responses to its consultation on the draft codes and guidance and has strengthened some areas of the codes as a result.

Every site and app in scope of the new laws has until 16 March 2025 to complete an assessment to understand the risks illegal content poses to children and adults on their platform.

Subject to Ofcom’s codes completing the Parliamentary process by this date, from 17 March 2025, sites and apps will then need to start implementing safety measures to mitigate those risks, and the codes set out measures they can take. Some of these measures apply to all sites and apps, and others to larger or riskier platforms. The key changes that sites and apps need to make are:

  • Senior accountability for safety. Each provider should name a senior person accountable to their most senior governance body for complying with their illegal content, reporting and complaints duties. 
  • Better moderation, easier reporting and built-in safety tests. Tech firms will need to make sure their moderation teams are appropriately resourced and trained and are set robust performance targets, so they can quickly remove illegal material, such as illegal suicide content, when they become aware of it. Reporting and complaints functions must be easier to find and use, with appropriate action taken in response. Relevant providers will also need to improve the testing of their algorithms to make illegal content harder to disseminate. 
  • Protecting children from sexual abuse and exploitation online. Ofcom’s final measures are explicitly designed to tackle pathways to online grooming. This will mean that, by default, on platforms where users connect with each other, children’s profiles and locations – as well as friends and connections – should not be visible to other users, and non-connected accounts should not be able to send them direct messages. Children should also receive information to help them make informed decisions around the risks of sharing personal information, and they should not appear in lists of people users might wish to add to their network. The codes also expect high-risk providers to use automated tools called hash-matching and URL detection to detect child sexual abuse material (CSAM); a minimal sketch of the hash-matching idea appears after this list. These tools allow platforms to identify large volumes of illegal content more quickly, and are critical in disrupting offenders and preventing the spread of this seriously harmful content. This includes smaller file hosting and file storage services, which are at particularly high risk of being used to distribute CSAM.
  • Protecting women and girls. Women and girls are disproportionately affected by online harms. Under Ofcom’s measures, users will be able to block and mute others who are harassing or stalking them. Sites and apps must also take down non-consensual intimate images (or “revenge porn”) when they become aware of them. Following feedback, Ofcom has also provided specific guidance on how providers can identify and remove posts by organised criminals who are coercing women into prostitution against their will. It has also strengthened its guidance to make it easier for platforms to identify illegal intimate image abuse and cyberflashing.
  • Identifying fraud. Sites and apps are expected to establish a dedicated reporting channel for organisations with fraud expertise, allowing them to flag known scams to platforms in real-time so that action can be taken. Ofcom has expanded the list of trusted flaggers.
  • Removal of terrorist accounts. It is very likely that posts generated, shared, or uploaded via accounts operated on behalf of terrorist organisations proscribed by the UK government will amount to an offence. Ofcom expects sites and apps to remove users and accounts that fall into this category to combat the spread of terrorist content.
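
The following is a minimal, illustrative sketch of the hash-matching idea referred to in the list above, assuming a platform holds a set of hashes of known illegal images supplied by an authorised body (the hash list and function names below are hypothetical). Deployed systems typically rely on perceptual hashing tools such as PhotoDNA, which match visually similar images rather than exact files; the plain SHA-256 lookup here only shows the basic pattern of comparing an upload against a known-hash database.

    # Illustrative hash-matching sketch (not a production CSAM scanner).
    # KNOWN_HASHES stands in for a database of hashes supplied by an
    # accredited body; the single entry below is the SHA-256 of b"foo",
    # included purely so the example runs end to end.
    import hashlib

    KNOWN_HASHES = {
        "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
    }

    def sha256_hex(data: bytes) -> str:
        # Return the hex SHA-256 digest of an uploaded file's bytes.
        return hashlib.sha256(data).hexdigest()

    def matches_known_content(upload: bytes) -> bool:
        # Flag an upload whose hash appears in the known-content list.
        return sha256_hex(upload) in KNOWN_HASHES

    if __name__ == "__main__":
        print(matches_known_content(b"foo"))    # True: hash is in the demo list
        print(matches_known_content(b"other"))  # False: no match, nothing flagged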

Enforcement powers

Ofcom says that it will offer support to providers to help them to comply with these new duties. However, it also warns that it is “gearing up to take early enforcement action against any platforms that ultimately fall short.”  Under the Act, Ofcom has the power to fine companies up to £18m or 10% of their qualifying worldwide revenue, whichever is greater, and in very serious cases it can apply for a court order to block a site in the UK.
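As a purely arithmetical illustration of the maximum penalty described above (the greater of £18m or 10% of qualifying worldwide revenue), the short sketch below uses two hypothetical revenue figures; it is not drawn from Ofcom's own materials.

    # Maximum penalty under the Act: the greater of £18m or 10% of
    # qualifying worldwide revenue. Revenue figures below are hypothetical.
    def max_penalty_gbp(qualifying_worldwide_revenue_gbp: float) -> float:
        return max(18_000_000.0, 0.10 * qualifying_worldwide_revenue_gbp)

    print(max_penalty_gbp(50_000_000))     # 18000000.0 -> the £18m floor applies
    print(max_penalty_gbp(1_000_000_000))  # 100000000.0 -> 10% of £1bn exceeds £18m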

Future developments

Ofcom will carry out a further consultation in Spring 2025 on additional measures for the codes. This will include proposals in the following areas:

  • blocking the accounts of those found to have shared CSAM;
  • using AI to tackle illegal harms, including CSAM;
  • use of hash-matching to prevent the sharing of non-consensual intimate imagery and terrorist content; and
  • crisis response protocols for emergency events (such as last summer’s riots).

As well as this, Ofcom is planning the following:

  • January 2025: final age assurance guidance for publishers of pornographic material, and children’s access assessments;
  • February 2025: draft guidance on protecting women and girls; and
  • April 2025: additional protections for children from harmful content promoting, among other things, suicide, self-harm and eating disorders, as well as from cyberbullying.

ICO responds to consultation series on generative AI
https://www.scl.org/ico-responds-to-consultation-series-on-generative-ai/ (19 December 2024)

In January 2024, the ICO launched its five-part generative AI consultation series. It has now published its consultation response. The series set out to address regulatory uncertainties about how specific aspects of the UK GDPR and the DPA 2018 apply to the development and use of generative AI. It did that by setting out the ICO’s initial analysis of these areas, along with the positions it wanted to consult on.

The ICO retained its position on purpose limitation, accuracy and controllership.

It updated its position on the legitimate interests lawful basis for web scraping to train generative AI models.

It heard that data collection methods other than web scraping exist, which could potentially support the development of generative AI. An example is where publishers collect personal data directly from people and license this data in a transparent way. It is for developers to demonstrate the necessity of web scraping to develop generative AI. The ICO will continue to engage with developers and generative AI researchers on the extent to which they can develop generative AI models without using web-scraped data.

Web scraping is a large-scale processing activity that often occurs without people being aware of it. The ICO says that this sort of invisible processing poses particular risks to people’s rights and freedoms. For example, if someone doesn’t know their data has been processed, they can’t exercise their information rights. The ICO received minimal evidence on the availability of mitigation measures to address this risk. This means that, in practice, generative AI developers may struggle to demonstrate how their processing meets the requirements of the legitimate interests balancing test. As a first step, the ICO expects generative AI developers to significantly improve their approach to transparency. For example, they could consider what measures they can provide to protect people’s rights, freedoms and interests. This could involve providing accessible and specific information that enables people and publishers to understand what personal data the developer has collected. The ICO also expects them to test and review these measures.

The ICO received evidence that some developers are using licences and terms of use to ensure deployers are using their models in a compliant way. However, to provide this assurance, developers will need to demonstrate that these documents and agreements contain effective data protection requirements, and that these requirements are met.

The ICO updated its position on engineering individual rights into generative AI models.

The ICO says that organisations acting as controllers must design and build systems that implement the data protection principles effectively and integrate necessary safeguards into the processing. This would put organisations in a better place to comply with the requirement to facilitate people’s information rights.

Article 11 of the UK GDPR (on processing which does not require identification) may have some relevance in the context of generative AI. However, organisations relying on it need to demonstrate that their reliance is appropriate and justified. For example, they must demonstrate that they are not able to identify people. They must also give people the opportunity to provide more information to enable identification.

The response also highlights areas where the ICO thinks further work is needed to develop its thinking. It also recognises that the upcoming Data (Use and Access) Bill may affect the positions set out in the paper. Once the Bill changes data protection law, the ICO will update and consult on its wider AI guidance to reflect those changes and to cover generative AI.

Its final positions will also align with its forthcoming joint statement on foundation models with the Competition and Markets Authority. This statement will touch on the interplay of data protection and competition and consumer law in this complex area.
