Europe holds the famous, and sometimes notorious, title of the world's regulatory superpower. It is therefore not surprising that a few weeks ago it launched a proposal for regulating Artificial Intelligence, arguably the first of its kind.
This has sparked many debates in the past weeks. While activists and some academics say it does not go far enough, businesses have pushed back on the legislation, claiming that it may stifle AI innovation in Europe and threaten our competitive position. Both publicly and, oftentimes, in closed circles, companies complain that they had just about figured out how to deal with GDPR and now have to worry yet again about a looming bureaucratic nightmare. While these concerns are understandable, it is essential that businesses look closely at the proposal from a different perspective: it is not all about sticks, but offers plenty of carrots too.
That may be hard to believe at face value, but I will try to convince you. I do this not from a moral high horse but from a pragmatic vantage point: to explain why paying attention to what this proposal requires actually makes business sense. Before I do that, let me present a brief summary of the proposal.
The AI regulation proposal in a nutshell
The recent EU proposal on regulating AI largely encompasses and codifies the following principles of Trustworthy AI into law: human agency, technical robustness, privacy and data governance, transparency, non-discrimination and fairness, societal and environmental well-being, and accountability.
The proposed regulation takes “a risk-based approach and imposes regulatory burdens only when an AI system is likely to pose high risks to fundamental rights and safety.”
In fact, the proposal divides AI systems into four risk categories:
- Unacceptable risk: AI technologies used for real-time mass surveillance (with some exemptions for law enforcement), social scoring, and other AI technologies that exploit vulnerable groups and cause harm are deemed unacceptable and are therefore prohibited.
- High risk: AI systems that have a significant impact on health, safety, or economic situation, such as credit scoring, CV screening, educational evaluation and placement, critical infrastructure, and healthcare applications.
- Limited risk: Applications like chatbots and some voice assistants, which are mainly subject to transparency obligations (users must be informed they are interacting with an AI system).
- Minimal risk: Applications like AI-based spam filters, which face no additional obligations.
The proposal mainly mandates compliance requirements for prohibited and high-risk AI systems.
And what does compliance mean? After a high-risk AI system is developed, it needs to undergo a conformity assessment, either by an independent group in-house or by a third-party assessment body known as a notified body. It has to be registered in an EU database, after which a declaration of conformity is signed and the AI system is given a CE marking. After this certification process a high-risk AI system can be placed on the market. The proposal also acknowledges that a one-off certification cannot be applied to AI, due to the dynamic nature of real-world data and performance, so it requires provisions for continuous monitoring, known as post-market monitoring. More details on the technical information required for this certification process can be found in the Annexes of the proposal.
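The proposal does not spell out how post-market monitoring should be done technically, but in practice a big part of it is checking whether the data a deployed model receives still resembles the data it was assessed on. As a purely illustrative sketch (the metric, threshold and feature are my own choices, not requirements from the proposal), a basic drift check could look like this:

```python
# Illustrative only: one common drift check (Population Stability Index) that a
# post-market monitoring routine might run on a single input feature. The data,
# threshold and feature are hypothetical; the proposal does not prescribe any metric.
import numpy as np

def population_stability_index(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """PSI between the distribution seen at assessment time and the live distribution."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    live_counts, _ = np.histogram(live, bins=edges)
    eps = 1e-6  # avoid division by zero in empty bins
    ref_frac = ref_counts / ref_counts.sum() + eps
    live_frac = live_counts / live_counts.sum() + eps
    return float(np.sum((live_frac - ref_frac) * np.log(live_frac / ref_frac)))

rng = np.random.default_rng(0)
reference_scores = rng.normal(600, 50, 10_000)  # e.g. credit scores used during the conformity assessment
live_scores = rng.normal(630, 60, 1_000)        # scores observed in production this week

psi = population_stability_index(reference_scores, live_scores)
if psi > 0.2:  # 0.2 is a commonly used, but ultimately arbitrary, alert level
    print(f"Drift detected (PSI = {psi:.2f}): trigger a review of the deployed model")
```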
If private companies use prohibited AI technologies, noncompliance can lead to fines of up to EUR 30,000,000 or 6% of total worldwide annual turnover, whichever is higher. Noncompliance with the conformity assessment and monitoring obligations carries fines ranging from EUR 10,000,000 or 2% of turnover up to EUR 20,000,000 or 4% of worldwide turnover.
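To make the “whichever is higher” clause concrete, here is a tiny illustrative calculation; the company and its turnover figure are invented:

```python
def maximum_fine(worldwide_annual_turnover_eur: float,
                 fixed_cap_eur: float = 30_000_000,
                 turnover_share: float = 0.06) -> float:
    """Upper bound of the fine for prohibited AI practices: the higher of the two caps."""
    return max(fixed_cap_eur, turnover_share * worldwide_annual_turnover_eur)

# A hypothetical company with EUR 2 billion in worldwide annual turnover:
# 6% of turnover is EUR 120 million, which exceeds the EUR 30 million floor.
print(maximum_fine(2_000_000_000))  # 120000000.0
```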
Now that we have discussed the stick, let's see what the carrots are.
Why does it make business sense?
For all its legal context, this proposal touches upon many technological challenges of assessing and monitoring AI systems. Needless to say, any AI expert will tell you that this is easier said than done.
AI models, especially those based on Machine Learning, are not only powerful but exceedingly complex. As companies adopt AI more widely and expand its use, managing these models becomes increasingly challenging.
Long before this regulation came into discussion, businesses were, and still are, grappling with the ‘black box’ problem of AI: lack of explainability, unreliable performance and robustness, difficulty in identifying issues and improving the model, adversarial attacks, fairness and bias, and so on.
Interestingly, these challenges coincide with the aspects of Trustworthy AI around which the EU regulation is designed. There is an entire sub-field of AI, called MLOps, dedicated to solving them. I, for one, co-founded Clearbox AI in 2019 to offer companies a model assessment and monitoring platform known as AI Control Room.
From a managerial and governance perspective, it is good business practice to understand why decisions are being made and to have mechanisms to intervene when AI models do not perform well or behave in an undesirable manner. It is all about trust and control. Embracing Trustworthy AI will therefore not only help you with regulatory compliance in the future, but will bring immediate results in terms of harnessing the full power of AI with trust and control. It also helps your company mitigate risks ranging from bias to cyber security.
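As a simplified illustration of what “understanding why decisions are being made” can look like in code (a generic sketch on a toy dataset, not a description of any particular platform or of what the proposal requires), model-agnostic feature importance is often a first step:

```python
# Illustrative sketch: which inputs drive a model's decisions? Permutation
# importance answers this in a model-agnostic way, a basic building block for
# explanation and oversight. Dataset and model are toy stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure how much performance drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```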
One stop for responsible AI innovation
Companies that want to innovate responsibly with AI are often confronted with hundreds of frameworks and guidelines on AI ethics. Implementing these ethical guidelines in practice is a challenging task, both organisationally and technically. At best they serve as high-level ethics-washing strategies for marketing campaigns, and at worst they create a false sense of security and drive the focus away from the actual risks posed by AI. With this in mind, what the EU proposal does is codify such ethical principles into procedures for implementing Trustworthy AI in organisations, with a threefold objective: firstly, to enable responsible innovation that sustains democratic values and individual rights; secondly, to create legal certainty over AI innovation; and finally, to have a single market standard for interpreting the law.
This actually reduces a lot of work and uncertainty for companies on how to deploy AI responsibly, without drowning in the complexity of figuring out yet another fresh-off-the-press ethical manifesto. In a sense, it creates exactly what businesses and markets like and thrive upon: predictability, and the assurance that they can responsibly use AI to improve their business and market position.
For now it is still a proposal, and it will take a couple of years before it fully comes into effect. From both a legal and a business point of view, however, companies cannot and should not wait until 2024 to embrace Trustworthy AI. If they haven't started already, they should start now. If they have any questions, they can always contact yours truly!