The EU Artificial Intelligence Act (AI Act) is a European Union regulation governing artificial intelligence systems, intended to ensure their safe, ethical and transparent use within the Member States. It aims to foster innovation and AI competitiveness within the EU while safeguarding fundamental rights and building public trust. Adopted in June 2024, the AI Act establishes a risk-based framework that classifies AI systems into categories of minimal, limited, high and unacceptable risk, each subject to different regulatory requirements. Systems posing an unacceptable risk are prohibited outright; an example is the use of real-time facial recognition technology in physical spaces (as opposed to systems that perform such recognition online, which are considered high risk). When an AI system qualifies as high risk, requirements apply both to the developers of the AI application and to its deployers, which may include law enforcement authorities. High-risk applications – like those used in law enforcement – must meet stringent requirements relating to accuracy, human oversight and transparency.

In the context of TRACY, the legal team has developed an AI Impact Assessment template that allows legal and technical partners to assess the risk category of an AI system. Workshops and training sessions for law enforcement authorities raise awareness of the AI Act, its undeniable relevance and its requirements, and enable both theoretical and practical knowledge to be shared.