Canada concludes that OpenAI collected medical and ideological data without valid consent and approved the May 2026 report

Canada's privacy regulators ruled that OpenAI violated the law in developing ChatGPT: the company collected sensitive data without valid consent and lacked transparency, leaving users unable to access, correct, or delete their information.

May 7, 2026, at 11:05

Canada's privacy regulators determined that OpenAI violated federal and provincial legislation in developing ChatGPT. The resolution concludes that the company collected sensitive data without valid consent.

The report was presented on May 6, 2026, by Philippe Dufresne, Privacy Commissioner of Canada, along with his counterparts from Quebec, British Columbia, and Alberta. The authorities identified serious breaches in the management of personal information throughout the entire lifecycle of the artificial intelligence model.

The company collected medical and ideological data without authorization

The investigation revealed that OpenAI obtained large volumes of information from public sources, social networks, and forums. The system processed details about medical conditions, political ideology, and data of minors. Regulators pointed out that this practice lacked a solid legal basis.

Philippe Dufresne stated that the company launched its product "without having fully addressed" the known risks. The commissioner warned that this situation exposed citizens to potential damages such as data leaks or discrimination.

"Appropriate safeguards are the cornerstone of responsible innovation." - Philippe Dufresne, Privacy Commissioner of Canada, Office of the Privacy Commissioner of Canada

The authorities highlighted the lack of transparency and the difficulties users face in accessing, correcting, or deleting their data. Furthermore, the system generated responses containing inaccurate or fabricated personal information, which compounded the company's liability.

Silence over the Tumbler Ridge shooting accelerates legal pressure

This ruling comes weeks after Sam Altman, CEO of OpenAI, apologized to the town of Tumbler Ridge. The company did not alert authorities about a user who subsequently committed a deadly shooting on February 10.

An 18-year-old woman with mental health disorders killed five children, a teacher, and two relatives before taking her own life. Authorities confirmed that OpenAI had detected alarming interactions on the shooter's account but decided not to report them to police.

Altman acknowledged in an April letter that the company failed in its duty to inform law enforcement. This omission has led to civil legal actions against the platform in the United States.

Several families of the victims filed a lawsuit in San Francisco at the end of April, seeking up to 1 billion dollars in compensation for the damages suffered.

Following the Canadian investigation, OpenAI agreed to implement additional measures to address the Privacy Office's concerns. The company improved its tools to detect and mask personal data in training sets.

Dufresne explained that these actions will significantly limit the personal information used to train new models. OpenAI must submit quarterly reports to regulators certifying compliance with these commitments.

About the author
Redacción (Editorial Staff)