One call from an AI agent = one new data output with legal weight, according to the AEPD

It is not enough to know the input and the output of the system

25 March 2026, 13:44

On March 24, 2026, the Spanish Data Protection Agency (AEPD) published new guidelines on agentic artificial intelligence from a data protection perspective, warning that incorporating this type of system into a process, service or product usually entails a real alteration of the data processing and requires a compliance review.

The body focuses on cases in which an agent can plan subtasks, consult memory, invoke tools, connect with third parties or execute actions autonomously. In that scenario, it maintains that agentic AI should not be analyzed as an isolated technology, but as a way of implementing, totally or partially, a personal data processing operation.

Reviewing the processing and the full traceability of the data

The guidelines state that the use of these agents can modify the way data is accessed, how it is transformed, to whom it is communicated, or for how long it is retained. Therefore, the AEPD points out that their incorporation may require reviewing the record of activities, the categories of data processed, the recipients, international transfers, retention periods, and security measures.

The agency emphasizes that it is not enough to know the input and output of the system. It demands understanding what sources the agent consults, what memory it reuses, what tools it activates, what intermediate transformations it performs, and what data persists at the end of the process.
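As a purely illustrative sketch (not part of the AEPD text), the kind of per-step traceability described above could be recorded as a simple audit trail covering sources consulted, tools invoked and intermediate transformations, not just input and output. The names `TraceEvent` and `AgentTraceLog` are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TraceEvent:
    """One intermediate step of an agent run: what it touched and why."""
    step: str            # e.g. "source_lookup", "tool_call", "transform"
    detail: str          # which source, tool, or transformation
    personal_data: bool  # does this step involve personal data?
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class AgentTraceLog:
    """Audit trail going beyond the system's input and output."""
    events: list[TraceEvent] = field(default_factory=list)

    def record(self, step: str, detail: str, personal_data: bool) -> None:
        self.events.append(TraceEvent(step, detail, personal_data))

    def personal_data_steps(self) -> list[TraceEvent]:
        """Reconstruct where personal data has passed during the run."""
        return [e for e in self.events if e.personal_data]

log = AgentTraceLog()
log.record("source_lookup", "crm_customer_record", personal_data=True)
log.record("tool_call", "weather_api", personal_data=False)
log.record("transform", "summarize_customer_history", personal_data=True)
```

With such a trail, `personal_data_steps()` lets the organization reconstruct which sources, tools and transformations actually touched personal data in a given run, which is the visibility the guidelines ask for.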

In that vein, the document highlights that knowing the chain of reasoning makes it possible to know the data's lifecycle. If where that information has passed cannot be reconstructed in sufficient detail, it will be harder to justify principles such as minimization, proportionality, or purpose limitation.

The organization, furthermore, must be able to identify what part of the result depends on specific sources, inferences, or memories. The AEPD warns that without that visibility it is more complex to detect chained errors or unforeseen uses of personal data.

Control over external tools and third-party services

Another of the document's warnings focuses on agents' use of third-party tools or resources. The AEPD warns that this practice can introduce new processors, sub-processors, independent controllers, or even joint-controllership scenarios.

It can also activate new persistent memories, new international data flows, and new contractual or information obligations. In that context, the agency considers that a call to an external tool can de facto become a partial outflow of data from the processing, with relevance of its own.

That is why it recommends reviewing not only the contracts but also the terms and conditions, privacy policies, conditions of use and, where applicable, version or functionality changes in those services. Among the practical measures, it proposes allowlists of services, limiting the tools an agent can access, and controlling the parameters and responses of each call.
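The measures above could be sketched, under assumptions of our own, as a per-processing tool allowlist that also checks the parameters of each call. The tool names and the `call_external_tool` helper are hypothetical, and a real implementation would perform the actual service call and inspect the response as well:

```python
# Hypothetical allowlist: each approved external service maps to the set
# of call parameters the processing has authorised for it.
ALLOWED_TOOLS = {
    "translation_api": {"text", "target_lang"},
    "calendar_lookup": {"date"},
}

def call_external_tool(tool: str, params: dict) -> dict:
    """Refuse tools that are not allowlisted and parameters not authorised."""
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool '{tool}' is not on the allowlist")
    extra = set(params) - ALLOWED_TOOLS[tool]
    if extra:
        raise ValueError(f"Unauthorised parameters for '{tool}': {sorted(extra)}")
    # The actual service call would go here; a stub response stands in for it,
    # and in practice the response would also be filtered before reuse.
    return {"tool": tool, "status": "ok"}
```

The design choice is that the check happens at every call, so adding a new tool or sending a new field is impossible without first updating the allowlist, which is where the compliance review would take place.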

Internal governance and continuous evaluation

The AEPD insists on the need for cross-functional governance, involving functional managers, IT teams, quality teams, and the data protection officer. The review of the processing, it adds, must start at the design stage and be maintained throughout the system's entire life cycle, not only at initial deployment.

The document advocates continuous, evidence-based evaluation, with clear metrics, benchmark tests, and incident review. It also points out that organizations should analyze whether incorporating the agent requires performing or updating an impact assessment.

Data minimization and memory control

The guidelines reinforce the principle of minimization by warning that these agents may tend to seek efficiency through more data, more context, and more memory than strictly necessary. To avoid this, the AEPD proposes defining access policies according to each processing, cataloging the available data and its sources, and applying controls over the quality, origin, and consistency of the information used.

The text warns that poor management of repositories, metadata, or tags can lead to indiscriminately processing irrelevant personal information, reusing context outside its purpose, or accessing special categories of data without real necessity.

When it comes to memory, the agency demands specific measures. Among them it cites compartmentalization by processing operation, case, or user; separation between organizational memory and user memory; strict retention periods; and sanitization techniques for persistent memory.
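As an illustrative sketch under assumptions of our own (the class name and the 30-day period are hypothetical, not taken from the guidelines), compartmentalized memory with strict retention could look like this: entries are keyed per processing operation and per user, and expired entries are purged before any read.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # hypothetical retention period

class CompartmentalisedMemory:
    """Agent memory separated by processing operation and user, with purging."""

    def __init__(self) -> None:
        # (processing_id, user_id) -> list of (timestamp, entry)
        self._store: dict[tuple[str, str], list[tuple[datetime, str]]] = {}

    def remember(self, processing_id: str, user_id: str, entry: str) -> None:
        key = (processing_id, user_id)
        self._store.setdefault(key, []).append(
            (datetime.now(timezone.utc), entry)
        )

    def recall(self, processing_id: str, user_id: str) -> list[str]:
        """Purge expired entries, then return only this compartment's memory."""
        key = (processing_id, user_id)
        now = datetime.now(timezone.utc)
        kept = [(t, e) for t, e in self._store.get(key, []) if now - t < RETENTION]
        self._store[key] = kept
        return [e for _, e in kept]
```

Because `recall` only ever sees one `(processing_id, user_id)` compartment, context stored for one processing or user cannot leak into another, which is the separation the AEPD describes.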

The ability to locate data must also extend to prompts and other intermediate elements when they contain personal information. That traceability, the AEPD specifies, is necessary to be able to respond to requests for access, rectification, erasure, restriction, objection, or portability.

The body concludes that deploying agentic AI without first redesigning the processing, reinforcing traceability, and precisely delimiting tools, memories, autonomy, and responsibilities raises the risk of non-compliance with data protection rules.

About the author
Redacción