- 17.12.24
- Reading time: 6 minutes
The EU AI Act (AI Regulation): Why 2025 is approaching faster than you might think
Author: Dr Markus Hülper, lawyer, IT security expert
Fasten your seatbelts, because 2025 is going to be an exciting year! In addition to the usual New Year’s resolutions, companies in the EU are facing a whole new challenge: the EU AI Act (also known as the ‘AI Regulation’). And no, it’s not just another bureaucracy monster – even if it might feel like one at first glance.
What is the EU AI Act? And why should you take it seriously?
The AI Act is the first comprehensive piece of AI legislation worldwide. The EU has set out to regulate the use of artificial intelligence (AI) – strictly but fairly. The idea behind it: create trust, minimise risks and promote innovation at the same time.
For medium-sized companies that do not develop high-risk AI systems (think of autonomous cars or medical diagnostic systems), this sounds relaxed at first. But beware! From 2 February 2025, the first obligations will apply to many companies – even those that ‘only’ use AI-based tools or develop simple systems.
What's coming up?
The regulations of the EU AI Act follow a risk-based approach: the riskier an AI system, the stricter the rules.
For ‘normal’ medium-sized companies that do not use high-risk AI, the requirements are somewhat less intimidating – but by no means trivial.
From 2025, the EU AI Act will bring new requirements that companies must observe:
- Disclose when AI systems are used (transparency obligations).
- Ensure that the systems used do not provide discriminatory results.
- Pay attention to data protection and cyber security – and not just superficially.
- Document how AI systems are developed and used. (Yes, that means more paperwork.)
High-risk systems, such as biometric surveillance or AI-supported recruitment tools, are subject to even stricter requirements – but we will cover those in a separate article.
Why should you act now?
When it comes to new regulations, many SMEs think: ‘Oh, we’ll wait until it becomes concrete.’ That’s not a good idea this time. Why?
Reason 1: 2025 is tomorrow
The training obligation under the new EU AI Act (AI Regulation) applies from 2 February 2025 and ensures that everyone who works with AI systems knows what matters. It is about understanding risks, seizing opportunities and using this technology responsibly.
What exactly is it about?
The EU does not want anyone to work with AI systems unprepared. Regardless of whether you develop them, integrate them into a company or ‘just’ use them – a certain basic knowledge is mandatory. This includes:
- How does AI work? No detailed knowledge, but a solid understanding of the basics.
- What risks are there? For example, when AI reinforces prejudices or makes the wrong decisions.
- How do I use AI safely and sensibly? So that it becomes a tool and not a tripping hazard.
Who is affected?
The training obligation is aimed at everyone involved with AI systems. This includes:
- Developers and providers who need to ensure that their AI is not only innovative, but also secure.
- Companies and users who use AI in their everyday lives. They should understand how to use the technology correctly without unintentionally causing harm.
Why is this important?
AI systems are becoming increasingly complex and their effects are often not immediately apparent. Mandatory training ensures that everyone involved has the same knowledge base. This minimises risks and strengthens trust in this technology.
This means that companies should use AI sensibly and responsibly.
The EU AI Act (AI Regulation) takes compliance with its requirements very seriously – and this is also reflected in the sanctions and fines. Anyone who ignores the rules risks not only damage to their image, but also severe financial consequences.
Reason 2: Penalties are expensive
From 2 February 2025, the motto is: don’t be afraid of AI, but don’t be naïve either. The EU makes it clear that knowledge is the key to using AI systems safely and responsibly. Companies and employees should see the training courses as an opportunity to be optimally prepared for dealing with AI. Those who ignore the obligation risk not only problems with the technology, but also fines.
What offences are punished?
The regulation distinguishes between different types of offences, which vary in severity. For example, sanctions may be imposed for
- Disregarding the requirements for high-risk AI systems: If an AI system is used in an unsafe or discriminatory manner, it can be expensive.
- Lack of transparency: If the fact that it is AI is concealed (e.g. in the case of deepfakes), this is a clear breach of the rules.
- Breaches of the training obligation: Companies must ensure that their employees are adequately trained – no excuses.
- Insufficient risk management: Anyone who does not take measures to minimise risks is in breach of the principles of the regulation.
How high can the fines be?
The EU has set clear upper limits depending on the severity of the offence:
- Up to 35 million euros or 7% of a company’s global annual turnover – whichever is higher. This applies to the most serious offences: the use of AI practices prohibited under the regulation.
- Up to 15 million euros or 3% of global annual turnover for other significant breaches, including non-compliance with the requirements for high-risk AI systems and with the transparency obligations.
- Up to 7.5 million euros or 1% of global annual turnover for supplying incorrect, incomplete or misleading information to the authorities.
Why so strict?
The EU wants to ensure that the EU AI Act (AI Regulation) not only exists on paper, but is actually complied with. The fines are intended to act as a deterrent – because AI systems can have a major impact on society. Negligent or irresponsible use could cause considerable damage.
This means that there is a threat of serious sanctions.
The sanctions of the EU AI Act (AI Regulation) are not just a symbolic threat, but an instrument to be taken seriously. Companies should see compliance with the regulations as an investment in security and trust. After all, the consequences of breaching the rules are not only financially painful, but can also damage a company’s reputation in the long term.
Reason 3: AI is everywhere
Many SMEs are already using AI technologies – often without even realising it. AI has long since made the leap from high-tech laboratories into everyday working life and is hidden in tools and processes that we use as a matter of course.
Where is AI already being used today?
- Customer support: Chatbots that answer questions around the clock are mostly based on AI, as are automatic email analysis tools.
- Marketing: From personalised advertisements to the optimisation of social media campaigns – many of these processes are controlled by AI.
- Human resources: Applicant management systems that pre-filter CVs or employee feedback tools often use AI to support decisions.
- Logistics and production: Optimised route planning, demand forecasts or quality controls – AI ensures efficiency and cost reduction here.
- Finance: Systems for fraud detection or AI-supported accounting are standard in many companies.
Why don't many people know about this?
AI is ‘under the bonnet’ in many software solutions. This means that users are not even aware of the complex mechanisms behind it. For many, it is simply a tool that works – and is not necessarily recognised as AI.
This means that AI systems are used without everyone being aware of it.
SMEs have long been part of the AI world, even if this is not always obvious. This is precisely why it is important to find out about the technologies used. After all, the better companies understand where AI is already at work, the better they can utilise its potential – and fulfil the requirements of the new EU AI Act (AI Regulation).
But don’t panic – we’ll tackle this together
At CLARIUS.LEGAL, we specialise in everything to do with compliance, data protection and information security – and the AI Act fits right in with our strengths. Our goal is to guide you through the jungle of regulations so that you can focus on what you do best: running your business successfully.
How we can help:
- Consultancy: We check which obligations apply to you and how you can implement them efficiently.
- E-learning: Our training courses on cyber security and AI prepare your team in the best possible way – comprehensible, practical and entertaining.
- Compliance check: We help you to set up your AI systems and their use in such a way that they fulfil the legal requirements.
One last tip: knowledge is power (and saves money)
The requirements of the AI Act may be new, but they are based on principles that should sound familiar to you: privacy, transparency and security. And the best part? With the right support, these requirements are achievable – and even a real competitive advantage.
If you’re thinking: ‘That all sounds good, but I still don’t know where to start’ – don’t worry, that’s what we’re here for. Get in touch with us and together we will make sure that you are not only ready, but also optimally prepared.
PS: Why procrastinate? Our e-learning courses on AI and cyber security are launching soon – reserve your place and get the knowledge you need!