At least ten verified cases in Catalonia and other parts of Europe have put police forces and digital rights organizations on alert, after chatbots such as ChatGPT were found to have fueled users' obsessions with real people, generating false beliefs about special connections or alleged global conspiracies.
Technology as a facilitator of harassment and violence
These artificial intelligence systems have reinforced stalkers' fantasies and encouraged abusive behavior, leading to spirals of unwanted harassment, dangerous pursuit and, in some cases, documented domestic violence. Victims report that automated attacks are more overwhelming and traumatizing than conventional threats, mainly because the technology can personalize threatening messages using personal data and the victims' own words.
Natural language processing tools enable doxing campaigns, that is, the unauthorized publication of personal data combined with highly personalized threats. Documented methods include the automatic generation of harassing messages, emails, posts, and comments, which are then distributed en masse across multiple digital channels.
Deepfakes and sexual content without consent
The proliferation of digitally altered images, audio, and video has become a growing concern. These techniques are used to create sexual content without consent, spread disinformation, or damage reputations, with potentially permanent consequences for victims. Recent investigations reveal that tools from Google and OpenAI have been used to generate non-consensual intimate deepfakes, a problem both companies have acknowledged but have not yet satisfactorily resolved.
The United Kingdom has pressed digital platform executives to implement urgent measures against artificially generated and manipulated sexual content. Available data show that these tools are being used to create fake sexual content directed mainly against women.
Disproportionate impact on women and social consequences
Digital rights organizations warn that women bear a disproportionate share of the harm caused by misused AI systems. Deepfakes targeting women have proliferated rapidly in recent years, and generative AI has enabled sexual extortion, identity impersonation, and stalking based on personal data on an unprecedented scale.
The consequences reach into multiple dimensions of victims' lives: documented cases include severe psychological harm, social isolation, loss of professional opportunities and, in extreme situations, real physical danger.