- AI regulation is jeopardised by unclear definitions and an overly tight timeline, which create legal uncertainty and hinder the effective application of the rules in practice.
- The application of the regulation should be postponed and its rules clarified. Traditional statistical methods should be explicitly excluded from the scope of the regulation, and overlapping and fragmented supervisory obligations should be eased.
As AI-based solutions become more widespread, the regulation governing them is also being updated. The European Commission is currently preparing a Digital Omnibus on AI, which will amend certain provisions of the AI Act.
Finance Finland takes a positive view of the objectives set for EU regulation of artificial intelligence (AI). AI requires a predictable operating environment that supports innovation. However, the tight timeline and unclear definitions of the current proposal raise concerns, and the obvious flaws in the regulation must be fixed sooner rather than later.
“Applying the requirements related to high-risk AI systems will need to be supported by technical standards, guidelines and supervisory practices that are not yet comprehensive enough. For this reason, it would be sensible to postpone the implementation so that companies have a realistic chance to adapt to the new rules”, says Aleksi Kaakinen, head of competition law at Finance Finland.
“It’s clear that the use of AI requires well-defined rules, but vague and rushed regulation will not achieve its purpose – it is more likely to stall when it matters most. We’re now amending legislation that was finalised only two years ago. Having to change regulation already during the transitional period is unsustainable from the perspective of legal certainty and predictability”, Kaakinen criticises.
High-risk AI systems and use cases are systems that may have wide-ranging impacts, for example on people’s health, safety, benefits and fundamental rights. The use of high-risk systems is permitted, provided that the system meets the requirements set for it. Derogating from the high-risk requirements is possible in a number of specific cases. In the financial sector, high-risk use cases include the assessment of a natural person’s creditworthiness, as well as the pricing and risk assessment of life and health insurance.
An overly broad definition of AI could even bring Excel within the scope of regulation
According to Kaakinen, the definition of AI systems is one of the biggest stumbling blocks of the proposal. Kaakinen points out that traditional statistical and mathematical models that have been in use for decades, such as linear and logistic regression, should be unequivocally excluded from the scope of the regulation. These models have also long been used in the financial sector without giving rise to any new risks or controversy.
“Many statistical and mathematical methods have been in use for decades and should not be treated as falling within the definition of an AI system. This applies even to Excel. Unfortunately, expanding the definition to cover these methods, statistical methods in particular, is still being debated.”
According to Kaakinen, the existing ambiguity causes legal uncertainty and may lead to inconsistent application between different countries.
“This weakens the functioning of the single market and creates an uneven playing field for companies”, Kaakinen says.
Promoting AI competence is a matter for society as a whole
The proposed regulation requires companies to promote the AI competence of their personnel. Kaakinen emphasises that while promoting AI competence at the grassroots level is extremely important, on a bigger scale it should be the responsibility of society as a whole.
“Companies obviously train their employees in the use of various systems as necessary. However, building a basic understanding of AI should primarily be the responsibility of public authorities, not individual companies, any more than literacy or other core civic skills are”, Kaakinen says.
“The financial sector naturally wants to be a responsible user of AI and evaluates its conduct in the context of European legislation, regardless of whether the responsibility for promoting AI competence is legislatively assigned to the Commission and the member states”, Kaakinen adds.
The promotion of non-discrimination and security must not clash with regulation
A key objective of the EU AI Act is to mitigate discrimination and unfair treatment in AI-based solutions. Potential applications of AI-based solutions include credit decisions, risk assessment and insurance pricing, and in such cases it is crucial to ensure that AI does not result in discrimination based on the customer’s background, for example.
Kaakinen is concerned that the regulation may, in practice, make it more difficult to take equality considerations into account.
“Preventing discrimination requires sufficiently broad personal data processing rights. Regulation must therefore allow the use of more extensive data sets to identify and address discrimination risks, also in cases where the systems involved are not classified as high-risk”, Kaakinen says.
The high-risk classification entails significantly stricter obligations for companies: systems must be tested, documented and thoroughly assessed even before they are placed on the market. Because of the additional burden imposed on companies, systems should not be classified as high-risk on light grounds.
“High-risk AI systems should be defined on the basis of their actual impact, meaning that they must have a material effect on decision-making. From the viewpoint of crime prevention, it would also be unreasonable to classify AI systems designed for cybersecurity as high-risk systems”, Kaakinen says.
According to Kaakinen, the administrative burden should be eased, for example by simplifying the registration of high-risk systems. In addition, overlapping obligations in relation to other EU-level and sector-specific legislation should be removed.
Kaakinen also warns against excessive fragmentation of supervision.
“From a company perspective, it would be unsustainable if the same project were overseen by several different authorities, each from a different angle”, Kaakinen says.