Webinar: AI Act & Intellectual Property
Published on August 6, 2024
By Alessandra Nicolosi


A few months ago we had the pleasure of hosting a live chat with Luca Egitto, a lawyer specializing in Intellectual Property and Information Technology, Data Protection Officer (DPO), and Partner at RP Legal, and our CEO Shalini Kurapati. During the chat, the speakers dove into the uncharted waters of the AI Act and its relation to Intellectual Property.

As we all know, the AI Act came into force on 1 August 2024, so this is an apt time to understand more about its intricacies and its impact on intellectual property rights.

Shalini Kurapati and Luca Egitto, both Certified Information Privacy Professionals/Europe (CIPP/E), discussed the current and future complexities of the regulation in an informal conversation that went beyond the surface, tailored both to industry professionals seeking a comprehensive understanding of the AI Act and to enthusiasts eager to grasp the new regulatory landscape.

Whether you're an AI enthusiast, a legal professional, or someone curious about the intersection of technology and regulation, you can gain new insights by watching the video above or reading the summary of the conversation below.

Introducing the AI Act

The European Union AI Act is a pioneering comprehensive regulation for artificial intelligence—one that stands out globally for its scope, impact, and approach. Since its proposal in 2021, the Act has made significant strides, recently receiving adoption from the European Parliament and the publication of its official text. The AI Act has been officially in force since 1 August 2024. Companies will have a two-year grace period to comply, with full enforcement slated for 2026.

The Act classifies AI systems based on their risk levels, as we outlined in a previous blog post. Here’s a quick overview:

  • Minimal-risk systems, such as spam filters, face fewer obligations.
  • Limited-risk systems, like those generating deepfakes, must adhere to specific transparency obligations.
  • High-risk systems, which include AI used in surgeries or mortgage applications, are subject to rigorous assessments and monitoring.
  • Unacceptable-risk systems, such as those involved in social scoring, are banned outright.

Recent updates also include provisions for general-purpose AI models, acknowledging their potential for systemic risks.

Intellectual Property under the AI Act

While the AI Act addresses fundamental rights, its treatment of intellectual property (IP) is somewhat limited. Companies are required to disclose copyrighted data used in AI training, but how generated content is handled remains unclear. The focus is primarily on transparency regarding data usage and model operations.

Recent lawsuits, such as those involving Getty Images and The New York Times, underscore the ongoing confusion surrounding AI-generated content and copyright. In response, companies like Shutterstock, Google, and Reddit are establishing data licensing agreements to manage AI training data, with Google’s significant deal with Reddit drawing Federal Trade Commission scrutiny.

The AI Act’s approach to IP introduces more questions than answers, reflecting its reliance on principles rather than specific rules—a challenge reminiscent of the early days of GDPR. This approach provides flexibility but also creates uncertainty, similar to the initial lack of clarity with GDPR that required further guidance and adaptation.

AI Act and Intellectual Property: expert insights

The AI Act's principle-based approach offers businesses flexibility but also introduces uncertainty, much like the early days of GDPR. Although the AI Office is expected to provide guidance, the current text primarily addresses training data rather than AI outputs.

Training data remains a contentious issue, as evidenced by the New York Times lawsuit. General-purpose AI models require vast amounts of data, often sourced from the internet, which raises significant copyright concerns among media publishers and authors who believe their rights are being violated.

The AI Act mandates transparency in data usage, allowing copyright holders to address potential infringements. This echoes the 2019 Copyright Directive, which permits text and data mining under certain conditions to foster innovation. Europe lacks a direct fair use provision like the US's, but its data-mining exemptions serve a similar innovation-promoting purpose. However, these exemptions may not cover all commercial uses, leaving open questions about compliance and rights protection.

AI providers must disclose data usage, which could help rights holders protect their works, though it also places a monitoring burden on them. Database rights, especially sui generis rights protecting data collections crucial for AI training, underscore the need for compliance and transparency to maintain competitiveness for EU-based providers. Discrepancies between EU and US regulations could potentially give non-EU providers an advantage.

In the EU, copyright protection is automatic, without the need for formal registration. This can create challenges in asserting rights, but innovative solutions are emerging. For example, the Numbers Protocol uses a decentralized network to timestamp ownership on the blockchain, providing proof of creation.
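The idea behind timestamped proof of creation can be illustrated with a minimal sketch. This is a hypothetical illustration, not the Numbers Protocol's actual API: hash the work's bytes and record the digest together with a creation time; anyone holding the original file can later recompute the digest and check it against the registered record.

```python
import hashlib
from datetime import datetime, timezone

def register_work(data: bytes) -> dict:
    """Create a timestamped record for a digital work (hypothetical registry entry)."""
    return {
        "digest": hashlib.sha256(data).hexdigest(),  # fingerprint of the work
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }

def verify_work(data: bytes, record: dict) -> bool:
    """Check that a file matches a previously registered record."""
    return hashlib.sha256(data).hexdigest() == record["digest"]

photo = b"raw image bytes..."
record = register_work(photo)
assert verify_work(photo, record)            # the original file matches the record
assert not verify_work(b"edited", record)    # any alteration breaks the proof
```

In a blockchain setting, the record itself would be anchored on a public ledger so the timestamp cannot be backdated; the verification logic stays the same.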

Adobe's Content Authenticity Initiative ensures tamper-proof metadata, aiding in tracking provenance. Such technologies could help in attributing and compensating creators, particularly artists and photographers. However, while these ideas show promise, they lack institutional backing. EU regulators tend to avoid overly specific technological solutions to prevent them from becoming obsolete quickly.

The Data Act suggests using smart contracts for transparent information exchange, an approach that could extend to watermarking and tagging digital content, though the complexity of covering the vast variety of data puts large-scale adoption at risk.

Economic compromises among stakeholders may be necessary to ensure fair remuneration and recognition. For extensive, multi-purpose AI models requiring vast datasets, achieving this balance will involve significant political and economic negotiation. Europe's role in AI development is crucial, but global collaboration is essential to address these challenges effectively.

Moral rights and research

Communities such as researchers and creators in open data and creative commons spaces place high value on moral rights, seeking proper attribution when their work is reused. Platforms like Zenodo provide citation opportunities, recognizing original creators.

Some artists are actively working to protect their data. For example, the University of Chicago has developed tools like NightShade and Glaze, which subtly alter images to disrupt AI training while preserving human perception. These efforts aim to prevent AI from mimicking their style and to protect their moral rights.

While these initiatives are commendable, the broader challenge lies in ensuring fair attribution and compensation in large-scale AI projects that require extensive datasets. Balancing moral rights with practical data use remains complex and will require ongoing discussions and innovative solutions.

AI outputs

We've extensively discussed training data, but AI-generated outputs also present significant issues. Unlike the complex background data, outputs are tangible and accessible for evaluation. Responsibility for AI outputs lies with the deployer or user of the AI system, not the AI itself, which is not a legal entity.

If a legal issue arises, courts may require full disclosure of training data to determine if it includes protected work, potentially making the output a copy or derivative. The burden of proof falls on the deployer to demonstrate that any similarities were unintentional. Even accidental resemblance to previous works can lead to legal consequences.

Ownership of AI-Generated Content

Ownership and copyright for AI-generated content, including new datasets, images, or movies, raise important questions. Because AI systems are controlled by entities, those entities can claim ownership of AI outputs. However, human intervention is essential, particularly in creatively combining or controlling the output.

Precedents in Italy suggest that copyright can be claimed by a legal or natural person, provided that significant human expertise contributes to the process. Currently, copyright law focuses on human authors, meaning a human must be involved in the creative process to claim copyright for AI-generated work. More case law is needed to clarify these issues, but for now, copyright originators must be human.

Joining efforts - Is it possible?

AI’s global nature necessitates addressing how differing copyright perspectives from the US and EU impact each other. Despite differences, there is some harmonization between US and European copyright laws, influenced by international conventions.

While the US lacks a specific database right, its case law often provides protection similar to Europe's. Conversely, Europe’s database rights are becoming less relevant globally. The real concern is whether Europe might over-regulate and stifle innovation. Balancing rights protection with fostering innovation is crucial to avoid hindering progress.

In data protection, we already anonymize datasets to comply with GDPR. We should explore methods to clear datasets to satisfy copyright owners while allowing developers to continue their research and product development. The goal is to find a balance that supports both protection and innovation.
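As a toy illustration of that kind of dataset clearing (a hypothetical pseudonymization sketch, not a GDPR-compliant anonymization pipeline; field names and the salt are invented for the example): drop direct identifiers and replace quasi-identifiers with salted hashes before the data is shared for model training.

```python
import hashlib

SALT = b"rotate-this-secret"  # hypothetical per-project salt, kept out of the shared data

def pseudonymize(value: str) -> str:
    """Replace an identifier with an irreversible salted hash."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:12]

def clear_record(record: dict) -> dict:
    """Strip direct identifiers, pseudonymize the user ID, keep the useful fields."""
    cleaned = {k: v for k, v in record.items() if k not in {"name", "email"}}
    if "user_id" in cleaned:
        cleaned["user_id"] = pseudonymize(cleaned["user_id"])
    return cleaned

row = {"name": "Ada", "email": "ada@example.com", "user_id": "u-42", "age": 36}
print(clear_record(row))  # identifiers removed, user_id replaced by a hash
```

Real anonymization requires more than hashing (re-identification risk must be assessed), but the same pattern, clearing a dataset before release, is what a copyright-oriented clearing step would resemble.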

Data reuse and sharing

Data reuse and data sharing are essential for innovation and AI projects but have been restricted by GDPR. Organizations are increasingly reluctant to share data, fearing it may be used to train AI models without proper compensation or attribution. This hesitancy may lead to more restrictive data access policies.

As data licensing and monetization opportunities emerge, organizations are already viewing data as a proprietary asset. This could make data access more restricted, as regulations like the AI Act emphasize protecting intellectual property and trade secrets. For instance, the Data Act mandates that disclosing technology details should not compromise IP rights or trade secrets, potentially leading businesses to limit data sharing.

Future guidance from the AI Office and the European Court of Justice (ECJ) will be crucial in providing clarity for entrepreneurs and technology developers. Without such guidance, we risk regulatory failure, which must be avoided.

Technical and policy perspectives

The AI Act includes both high-level principles and specific technical details, creating a disconnect between broad guidelines and technical requirements. Collaboration between lawyers and technologists is essential to develop strategies and solutions that integrate technical and policy perspectives. The challenge is to ensure effective collaboration to avoid the Act becoming a bureaucratic obstacle.

The challenge of rapid development

Regulating technology poses the risk of falling behind its rapid development. The slow legislative process in the EU may result in outdated regulations by the time they are implemented. Instead of rigid policies for technical issues, standards and protocols should be developed by relevant associations. Technical guidance on interoperability, independent reviews of datasets, and fair compensation standards should be established by industry experts rather than through legislative mandates. Digital information is not unique in the way physical works are, and compensation should reflect this, avoiding excessive demands for fleeting data use.

Finding a balance

It's crucial to approach these issues with a modern perspective, fostering innovation while protecting fundamental rights. We must focus on advancing technology and developing technical skills, alongside interpreting the AI Act’s implications. Supporting technological development while ensuring legal clarity will be key to thriving in this evolving landscape.

Tags:

blogpost
Alessandra is Digital Marketing Manager at Clearbox AI and her work revolves around every aspect of digital communication and media strategy.