A study by researchers from the University of Zaragoza and the University of La Rioja warns that permanent erasure of one's digital past is not guaranteed, even when explicit deletion of information is requested on online platforms or services.
Artificial intelligence can recover deleted data
The work, published in the journal ACM Computing Surveys, warns that artificial intelligence can access data that had already been marked as deleted. The team, made up of Ignacio Marco-Pérez, Beatriz Pérez Valle, and Ángel Luis Rubio (University of La Rioja), along with María Antonia Zapata (University of Zaragoza), distinguishes between "recoverable deletion" and "non-recoverable deletion".
According to the analysis, technologies such as temporal databases, blockchain, and machine learning systems can leave behind residual copies or dispersed fragments of information. Advanced algorithms can identify and reconstruct information from these fragments, undermining the original intent of those who requested that their data be deleted.
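The study's distinction between recoverable and non-recoverable deletion can be illustrated with the common "soft delete" pattern. The sketch below is a hypothetical illustration, not code from the study: a soft delete merely flags a record as deleted, so the underlying data stays in storage and remains readable to anyone with raw access, which is exactly the residual-copy risk the researchers describe.

```python
# Hypothetical sketch: "recoverable" (soft) vs "non-recoverable" (hard) deletion.

class RecordStore:
    def __init__(self):
        self._rows = {}  # record id -> {"data": ..., "deleted": bool}

    def add(self, rid, data):
        self._rows[rid] = {"data": data, "deleted": False}

    def soft_delete(self, rid):
        # Recoverable deletion: only a flag changes; the bytes remain.
        self._rows[rid]["deleted"] = True

    def hard_delete(self, rid):
        # Non-recoverable deletion (at this layer): the entry is removed.
        del self._rows[rid]

    def visible(self, rid):
        # What an ordinary user of the service sees.
        row = self._rows.get(rid)
        return None if row is None or row["deleted"] else row["data"]

    def recover(self, rid):
        # What anyone with raw storage access can still read.
        row = self._rows.get(rid)
        return None if row is None else row["data"]


store = RecordStore()
store.add(1, "email=user@example.com")
store.soft_delete(1)
print(store.visible(1))  # None: the record appears deleted
print(store.recover(1))  # the data is still recoverable from storage
```

Even after a hard delete at the application layer, the article's point still applies: backups, replicas, and logs kept by other systems may retain further copies beyond this store's control.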
The right to be forgotten, in question
The finding calls into question the principle of the "right to be forgotten" recognized in the European Union. The existence of backups and the proliferation of devices capable of storing data, such as mobile phones, smartwatches, connected televisions, and vehicles, make effective compliance with this right difficult. The study also underlines the importance of addressing the fate of data after a person's death, as the digital footprint can last indefinitely.
Privacy risks and security recommendations
The possibility that artificial intelligence could reconstruct information from fragmented data opens the door to leaks and privacy attacks. Companies and entities that hold data are obliged to establish protocols guaranteeing that deletion is real and total, although the challenge grows with the increase in connected devices and the complexity of storage systems.
- Financial reports, strategic data, client lists, and other confidential information should not be entered into artificial intelligence systems.
- Conversations with artificial intelligence systems lack end-to-end encryption and adequate protection guarantees for critical data.
- There is a risk of exposure if a third party accesses the user account or if platforms use conversations to train their models.
- Internal security policies usually prohibit the use of external tools to manage sensitive information.
Warnings about the use of artificial intelligence in sensitive decisions
Cybersecurity experts insist that personal data, banking details, passwords, and other sensitive information should never be shared with conversational artificial intelligence systems. Furthermore, artificial intelligence does not replace the judgment of doctors, lawyers, or financial advisors: making medical, legal, or economic decisions based exclusively on AI-provided answers can have serious consequences.