The EU’s Artificial Intelligence proposal is here: A first look at the requirements
May 7, 2021
While powerful Artificial Intelligence (AI) tools are already present in our day-to-day lives, the technology is still in its relative infancy. According to PwC research, AI could contribute up to $15.7 trillion to the global economy by 2030, so it is no surprise that regulation is trailing the market in this rapidly evolving area. In response to growing awareness of the possible dangers this technology carries, the European Union (EU) has adopted a proposal for the first-ever comprehensive legal framework to regulate AI.
The proposed legal framework, called the Artificial Intelligence Act, is a positive step toward curtailing the potentially negative impacts of AI on individuals and society as a whole. Its scope covers some of the most exciting and controversial technologies in recent history, such as autonomous driving, facial recognition, and the algorithms powering online marketing. The Act aims to achieve this by approaching AI in a similar way to the EU's product safety regulation, which serves to shed light on the development process and increase transparency for the people impacted.
The Artificial Intelligence Act will introduce important obligations surrounding transparency, risk management and data governance, and is likely to apply both to AI providers such as FRISS and to our customers as 'users' of our AI. Like the General Data Protection Regulation (GDPR), the EU's legal framework for protecting personal information, the AI Act tiers the fines authorities can issue according to severity. However, the upper-tier fine under the AI Act surpasses the GDPR's, reaching €30 million (about $35.5 million USD) or 6% of annual worldwide turnover, whichever is higher.
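For illustration only, here is how that upper-tier cap works out in practice; the function below is our own sketch, not anything defined in the proposal:

```python
def upper_tier_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    """Upper-tier cap under the proposal: EUR 30 million or 6% of
    annual worldwide turnover, whichever is higher."""
    return max(30_000_000.0, 0.06 * annual_worldwide_turnover_eur)

# A firm with EUR 1 billion in turnover faces a cap of EUR 60 million,
# since 6% of turnover exceeds the EUR 30 million floor.
print(upper_tier_fine_eur(1_000_000_000))  # 60000000.0
```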
How does the proposal approach AI?
The Commission has adopted a broad interpretation of AI, undoubtedly an intentional choice to maximize the scope and effectiveness of the legislation. Article 3 of the proposal defines an AI system as "software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with." The approaches listed under Annex I include machine learning approaches, logic- and knowledge-based approaches, and statistical approaches.
The Commission's approach has been to separate AI, and the corresponding requirements, according to risk level. It starts with "Prohibited AI", which the Commission has classified as off-limits due to its inherent, exceptional risk. This is what many of us would consider the "too far" tier, and it includes technology like biometric facial recognition in public spaces or social scoring by authorities.
The Commission identifies certain AI as "Low-Risk" and "Minimal-Risk"; these systems only need to meet basic transparency obligations in order to comply with the proposal. Examples include chatbots, spam filters, and video games that use AI to emulate realistic human player behavior. The principal focus of the proposal, however, is "High-Risk AI", examples of which are provided under Annex III of the proposal (a brief illustrative sketch of this risk tiering follows the list):
AI related to biometric identification and categorization of natural persons;
management and operation of critical infrastructure;
education and vocational training;
employment, workers management and access to self-employment;
access to and enjoyment of essential private services and public services and benefits;
law enforcement;
migration, asylum and border control management;
and administration of justice and democratic processes.
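For orientation, here is a minimal sketch of that four-tier structure; the tier names, area strings, and lookup are our own shorthand, since the real classification is a legal determination under the Annexes, not a string match:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # e.g. social scoring by authorities
    HIGH = "high"              # Annex III areas: strict obligations
    LOW = "low"                # transparency duties, e.g. chatbots
    MINIMAL = "minimal"        # e.g. spam filters, AI in video games

# Shorthand labels for the Annex III areas listed above.
ANNEX_III_AREAS = {
    "biometric identification and categorization",
    "critical infrastructure",
    "education and vocational training",
    "employment and workers management",
    "essential private and public services",
    "law enforcement",
    "migration, asylum and border control",
    "administration of justice and democratic processes",
}

def tier_for(area: str) -> RiskTier:
    """Toy lookup: Annex III areas are High-Risk; defaulting everything
    else to minimal is a simplification no real analysis would make."""
    return RiskTier.HIGH if area in ANNEX_III_AREAS else RiskTier.MINIMAL

print(tier_for("law enforcement"))  # RiskTier.HIGH
```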
While much depends on how further guidance interprets Annex III, it is possible that Annex III 5(b) is intended to cover certain Insurtech products. It is also important to note that this list is not static: the Commission may update it to capture future or newly identified systems determined to be High-Risk, which is why FRISS will be monitoring developments regularly.
So, What Are the Requirements?
Providers of high-risk AI systems must implement a number of requirements under the proposal, such as: maintaining a risk management system, performing data governance practices, keeping technical documentation and records, and ensuring the provision of transparent information to users of their systems (Article 16). In addition to these product-centric requirements, AI providers must ensure that they have a quality management system in place, perform 'conformity assessments', and keep automatically generated logs. Providers must also affix a CE marking to high-risk AI systems (or their documentation) to indicate conformity. If non-compliance with any of the requirements is identified, providers must inform the appointed authorities and take any necessary corrective actions.
Users of high-risk AI systems are obliged to: follow the instructions that accompany the AI systems, ensure that input data is relevant to the AI's intended purpose, monitor operation of the AI and suspend the system if any risk presents itself, and retain the automatically generated logs for an appropriate period (Article 29).
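The proposal does not prescribe what those automatically generated logs must look like. As a rough sketch of what a provider might record, with invented field names and a simple file handler standing in for real infrastructure:

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative only: the proposal mandates automatically generated logs
# but does not define a schema; every field below is an assumption.
logger = logging.getLogger("ai_decision_log")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("decisions.log"))

def log_decision(model_version: str, input_ref: str, output: dict) -> None:
    """Write one model decision as a structured, retainable log entry."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # supports traceability
        "input_ref": input_ref,          # a reference, not raw personal data
        "output": output,                # e.g. risk score and reason codes
    }
    logger.info(json.dumps(entry))

log_decision("fraud-model-1.4", "claim-000123", {"risk": "high", "score": 0.91})
```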
The FRISS Approach to This New Regulation
FRISS takes regulatory standards seriously, because referring to ourselves as trusted advisors is more than a title. We welcome the use of AI because it never sleeps, works faster, makes fewer errors, and operates on the collective brain power of an entire anti-fraud department. At FRISS, AI is utilized to provide real-time, holistic views of risk at policy request, renewal, and claim, increasing efficiency for our customers. Transparency, however, is equally crucial to us.
While this proposal marks the beginning of the legislative process, further steps remain before we see the finalized version. The proposal will next be reviewed by the European Parliament and the Council of Ministers; in the meantime, FRISS' Compliance and Data teams will continue to monitor those developments closely, identifying any changes to the core requirements and aligning with them as soon as possible. From the beginning, FRISS has been committed to incorporating key principles of responsible AI like reduction of bias, transparency, and risk management, focusing on the following aspects of each:
Reduction of bias: We exclude obvious data points on gender, marital status, nationality, ethnicity, etc. from our models, and our data scientists are trained to recognize possible proxies as well.
Transparency: We apply explainable AI to all of our models, meaning end users see exactly why a certain claim was flagged as high risk (a simplified sketch follows this list).
Risk management: We've integrated Data Protection Impact Assessments (DPIAs) into our development process and taken a privacy-by-design approach, in line with the GDPR.
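To make the first two principles concrete, here is a heavily simplified sketch; it is not FRISS' actual pipeline, and every column name and data value below is invented:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Invented data and columns; FRISS' real features are not public.
SENSITIVE = ["gender", "marital_status", "nationality", "ethnicity"]

claims = pd.DataFrame({
    "gender": ["f", "m", "f", "m"],            # excluded below
    "claim_amount": [1200, 9800, 450, 15000],
    "days_since_policy_start": [400, 12, 900, 5],
    "is_fraud": [0, 1, 0, 1],
})

# Reduction of bias: drop obvious sensitive attributes before training.
# Proxy variables still require human review by trained data scientists.
X = claims.drop(columns=SENSITIVE + ["is_fraud"], errors="ignore")
y = claims["is_fraud"]
model = LogisticRegression().fit(X, y)

def explain(row: pd.Series) -> pd.Series:
    """Transparency: per-feature contribution to the fraud log-odds,
    so an end user can see which inputs drove a high-risk flag."""
    return pd.Series(model.coef_[0] * row.to_numpy(), index=X.columns)

print(explain(X.iloc[1]).sort_values(ascending=False))
```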
What We Want You to Know
The Artificial Intelligence Act is a positive step toward the ethical and responsible use of AI. We commend the EU’s enhanced focus on transparency and risk management because we also believe this should be a market standard for AI-powered products. We’re taking the time to analyze and observe developments in the proposal to make sure we understand what these requirements will mean for our products and our customers globally. Because we recognize that being compliant will be essential in maintaining “business as usual”, we’re on top of these changes. We’ll continue to monitor what regulatory requirements are on the horizon and take responsible steps toward compliance as early as possible. If you’d like to learn more about the steps that FRISS is taking to ensure the transparency of our products, schedule a demo with a member of our team, or click here to read about how we meet certain security regulations elsewhere in the world.