BIP Pennsylvania News


What the EU AI Act requires for AI agent logging

Apr 19, 2026  Twila Rosenbaum

The EU AI Act, a comprehensive 144-page document, establishes important logging requirements that AI agent developers must adhere to. These requirements are detailed across four interrelated articles, which delineate the obligations for high-risk AI systems. This article summarises the essential requirements, the compliance deadlines, and the gaps that remain.

Classification of AI Agents

Interestingly, the Act does not explicitly name "AI agents"; instead, it focuses on the actions and impacts of the systems in question. If your AI agent is involved in significant decision-making processes—such as scoring credit applications, filtering resumes, determining healthcare benefits, pricing insurance, or triaging emergency calls—it is categorized as high-risk under Annex III of the Act.

Article 6(3) provides a possible exemption, stating that if the system does not materially influence decision outcomes, it may not be classified as high-risk. However, establishing this claim is challenging for an agent that autonomously makes decisions based on the results of various tools.

Furthermore, general-purpose AI models face different obligations outlined in Chapter V. While the model itself isn’t deemed high-risk, the system built upon it may be classified as such once deployed in a high-risk context. The model provider retains its obligations under Chapter V, while the integrator assumes high-risk provider responsibilities as stated in Article 25.

Essential Articles for Compliance

Article 12 mandates that high-risk AI systems must have the capability for the automatic recording of events (logs) throughout their operational lifetime. The terms "automatic" and "lifetime" are crucial here; automatic signifies that the system must generate logs independently, and lifetime indicates that logging is required from deployment through decommissioning, not just during the current operational phase.
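One way to read "automatic" is that logging must be built into the system's call path rather than left to the caller's discretion. The sketch below illustrates that idea with a Python decorator that records every decision call to an append-only file; the function name, log filename, and fields are illustrative assumptions, not anything the Act prescribes.

```python
import functools
import json
import time

def autolog(logfile="agent_events.jsonl"):
    """Decorator that records every call automatically, so the calling
    code cannot skip logging (illustrative sketch only)."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            entry = {
                "ts": time.time(),          # when the event occurred
                "event": fn.__name__,       # which decision step ran
                "args": repr(args),
                "result": repr(result),
            }
            with open(logfile, "a") as f:   # append-only: one JSON line per event
                f.write(json.dumps(entry) + "\n")
            return result
        return inner
    return wrap

@autolog()
def score_credit_application(income, debts):
    # Hypothetical high-risk decision step (Annex III-style use case)
    return "approve" if income > 2 * debts else "review"

score_credit_application(50_000, 10_000)
```

In a real deployment the write would go to a log service rather than a local file, but the principle is the same: the log entry is produced by the system itself, at every decision point, without relying on the agent's cooperation.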

Article 12(2) specifies three categories of data that logs need to encompass: scenarios where the system might pose risks or undergo significant changes, information for post-market monitoring, and data for operational oversight by deployers. However, the regulation does not dictate a specific format or require particular fields, focusing instead on these three purposes.
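Since the regulation names purposes rather than fields, one practical approach is to tag each log record with the Article 12(2) purpose it serves. The record structure below is a sketch of that idea; the field names and enum labels are assumptions (the (a)/(b)/(c) mapping follows the order in which the article lists the three purposes), not a mandated schema.

```python
from dataclasses import dataclass, asdict
from enum import Enum

class LogPurpose(Enum):
    # The three purposes listed in Article 12(2), in order
    RISK_OR_CHANGE = "risk_or_substantial_modification"  # (a)
    POST_MARKET = "post_market_monitoring"               # (b)
    DEPLOYER_OVERSIGHT = "deployer_oversight"            # (c)

@dataclass
class LogRecord:
    timestamp: str        # ISO 8601, UTC
    system_id: str        # which deployed system produced the event
    event: str            # short machine-readable event name
    purpose: LogPurpose   # which Art. 12(2) purpose the entry serves
    detail: dict          # free-form payload for the event

rec = LogRecord(
    timestamp="2026-04-19T12:00:00Z",
    system_id="agent-7",
    event="confidence_below_threshold",
    purpose=LogPurpose.RISK_OR_CHANGE,
    detail={"score": 0.41, "threshold": 0.6},
)
```

Tagging entries this way makes it straightforward to answer a regulator's question of the form "show me every event relevant to post-market monitoring" with a filter rather than a manual review.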

According to Article 13, there is a requirement to document how deployers can collect and interpret these logs. This documentation should serve as a technical integration guide for the logging layer rather than a compliance manual.

Articles 19 and 26 stipulate that logs must be retained for a minimum of six months. Financial services firms can fold AI logs into their existing regulatory record-keeping; other sectors must hold logs for at least that period, with longer retention possible under sector-specific rules.
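The retention rule is simple enough to encode as a guard before any deletion job runs. The sketch below checks whether an entry is still inside the minimum window; treating "six months" as 183 days is an assumption for illustration, and a real policy would follow whatever longer sector-specific period applies.

```python
from datetime import datetime, timedelta, timezone

# Assumption: "six months" approximated as 183 days for this sketch
RETENTION = timedelta(days=183)

def must_retain(logged_at: datetime, now: datetime) -> bool:
    """True while the entry is still inside the minimum retention window."""
    return now - logged_at < RETENTION

now = datetime(2026, 8, 2, tzinfo=timezone.utc)
recent = datetime(2026, 5, 1, tzinfo=timezone.utc)   # ~3 months old: keep
old = datetime(2025, 12, 1, tzinfo=timezone.utc)     # ~8 months old: eligible for deletion
```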

The Need for Robust Logging Mechanisms

While standard application logging can capture the actions of an AI agent—such as calling tools, delegating to sub-agents, obtaining responses from large language models (LLMs), and generating final outputs—the real challenge lies in proving the integrity of these logs after a period of time. For instance, if regulators demand evidence that logs have not been tampered with six months later, application logs stored on user-controlled infrastructure could easily be altered or replaced without detection.

Although Article 12 does not explicitly mention the need for "tamper-proof" logs, the implication is clear. If logs are susceptible to silent modifications and cannot provide verifiable evidence of their integrity, they hold no evidentiary value, especially for high-risk systems.

This gap in logging reliability has led to the exploration of cryptographic signing for agent logs. For instance, the proposed method involves signing each action taken by the agent with a key that the agent does not possess, chaining each signature to the previous one, and storing the resulting receipts in a secure location inaccessible to the agent. This approach ensures that any alteration in the logs will be immediately evident.
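The chaining idea described above can be sketched in a few lines. This is a minimal illustration using HMAC-SHA256, where the key is assumed to live in a log service outside the agent process; a production design would use asymmetric signatures and an external receipt store, and every name here is illustrative rather than taken from the Act or any specific product.

```python
import hashlib
import hmac
import json

# Assumption: this key is held by the log service, never by the agent
SIGNING_KEY = b"held-by-the-log-service-not-the-agent"

def sign_action(action: dict, prev_sig: str) -> str:
    """Sign one action, chained to the previous signature: altering any
    earlier entry invalidates every signature that follows it."""
    payload = json.dumps(action, sort_keys=True) + prev_sig
    return hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()

def verify_chain(actions, signatures) -> bool:
    """Re-derive every signature in order and compare against the receipts."""
    prev = "genesis"
    for action, sig in zip(actions, signatures):
        if not hmac.compare_digest(sign_action(action, prev), sig):
            return False
        prev = sig
    return True

# Build a chain of receipts for two agent actions
actions = [{"step": 1, "tool": "search"}, {"step": 2, "tool": "score"}]
sigs = []
prev = "genesis"
for a in actions:
    prev = sign_action(a, prev)
    sigs.append(prev)
```

Verification then reduces to replaying the chain: an untouched log verifies end to end, while editing any single entry breaks every signature from that point onward, which is exactly the property that makes the logs useful as evidence.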

Current Status of Standards

As of now, there is no finalised technical standard for logging under Article 12. Two draft standards are in progress: prEN 18229-1, which addresses AI logging and human oversight, and ISO/IEC DIS 24970, which focuses specifically on AI system logging. Neither has been completed.

Entities must prepare for regulations that outline outcomes without specifying implementation methods. Organizations that establish effective logging practices now will have a significant advantage when formal standards are finalized, while those that delay may face challenges in adapting under pressure.

Upcoming Deadlines and Potential Penalties

The obligations outlined in Annex III are set to take effect on August 2, 2026. However, the European Commission proposed a delay through the Digital Omnibus package last November, potentially extending the deadline to December 2027. The Council and Parliament adopted their negotiating positions in March 2026, and negotiations are ongoing. Until a new law is enacted, the original date remains enforceable.

Failure to comply with the regulations could result in penalties of up to 15 million euros or 3% of a company's worldwide annual turnover, whichever amount is greater. While this formula applies to all entities, Article 99 emphasizes that penalties must be proportionate and instructs national authorities to consider company size and economic viability. Consequently, startups and SMEs are likely to face lower fines than the maximum, despite the formula remaining unchanged.

Key Considerations

  • Can your system automatically generate logs at every decision point?
  • Are those logs protected against tampering?
  • Can you retain them for six months in a regulator-friendly format?

As the deadline approaches, these questions become increasingly urgent.


Source: Help Net Security News

