CSAM Pedophiles Identified via Dark Web Malware

Logs from information-stealing malware circulating on the dark web have been used to identify numerous people downloading and sharing child sexual abuse material (CSAM), demonstrating a new law enforcement technique.

Those logs allowed Insikt Group, a Recorded Future company, to identify 3,324 accounts that visited portals distributing child sexual abuse material. Analysts used the stolen data to trace these identities across multiple platforms, obtaining usernames, IP addresses, and system characteristics, as reported by BleepingComputer.

Law enforcement uses this information to identify perpetrators and make arrests. Logs produced by infostealers such as RedLine, Raccoon, and Vidar contain sensitive data, including passwords, browsing history, cryptocurrency wallet details, and more. Dark web marketplaces aggregate and sell these records, facilitating further criminal activity.

Working with logs captured between February 2021 and February 2024, Insikt identified the culprits by cross-referencing stolen credentials against known CSAM domains, then removing duplicates to isolate unique username-password pairs.
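
As a rough illustration of that workflow, the Python sketch below filters credential records against a flagged-domain list and deduplicates the matches. The file layout, column names, and domain value are assumptions made for the example, not details from Insikt's report:

```python
import csv

# Placeholder value; the actual list of flagged domains used by
# investigators is not public.
FLAGGED_DOMAINS = {"flagged-example.onion"}

def unique_flagged_credentials(log_path):
    """Return deduplicated (username, password) pairs saved for flagged domains."""
    matches = set()  # a set drops duplicate credential pairs automatically
    with open(log_path, newline="", encoding="utf-8") as f:
        # Assumed columns: domain, username, password. Real infostealer
        # logs vary by malware family and are rarely clean CSV files.
        for record in csv.DictReader(f):
            if record["domain"] in FLAGGED_DOMAINS:
                matches.add((record["username"], record["password"]))
    return matches
```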

CSAM site users exposed

Researchers can use data harvested by information-stealing malware to link CSAM account holders to their email, banking, and social media accounts. Cryptocurrency transactions, browsing history, and browser autofill data provide additional leads.
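
That linking step can be pictured as a simple grouping operation: a credential pair that appears both on a flagged domain and on an email or banking site within the same victim's log ties the pseudonymous account to a real-world identity. The record format below is again an assumption for illustration:

```python
from collections import defaultdict

def link_accounts(records):
    """Map each (username, password) pair to every domain it was saved for."""
    linked = defaultdict(set)
    for r in records:  # assumed keys: domain, username, password
        linked[(r["username"], r["password"])].add(r["domain"])
    return linked

# Example: the shared credential connects the flagged account to a mail account.
records = [
    {"domain": "flagged-example.onion", "username": "user1", "password": "pw1"},
    {"domain": "mail.example.com", "username": "user1", "password": "pw1"},
]
print(link_accounts(records))
```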

Insikt's pioneering use of infostealer data demonstrates its potential to improve the tracking and conviction of offenders in child sexual exploitation cases.

The development comes as child predators increasingly use artificial intelligence (AI) to create sexually explicit photographs of children, hampering law enforcement's attempts to prevent online sexual exploitation.

Stanford University's Internet Observatory found that AI-powered tools have allowed criminals to create fake images and videos based on real photos of children, driving an increase in child sexual abuse content, CNA reported.

Bryce Westlake, an associate professor in the Department of Legal Studies at San Jose State University, said online criminals can create images of “anything they can imagine,” underscoring concerns among law enforcement. He added that child sexual abuse material is now prevalent on social media and private websites.

In 2023, the National Center for Missing and Exploited Children's CyberTipline recorded more than 36 million suspected incidents of child sexual abuse, highlighting the pervasive impact on victims and their families.

The Kids Online Safety Act in the United States and the Online Harms Act in Canada aim to hold social media companies accountable for harmful content generated by AI.


(Photo: PAUL J. RICHARDS/AFP via Getty Images) A computer terminal in the newly expanded facility of the Department of Homeland Security's ICE Cybercrime Center in Fairfax, Virginia, July 22, 2015. The forensic lab combats cybercrime cases involving underground online marketplaces, child exploitation, intellectual property theft, and other computer and online crimes.


Artificial intelligence hampers US law enforcement efforts against online child abuse

However, according to a Guardian investigation, social media companies' reliance on AI for content moderation is making it harder to detect and report child sexual abuse, potentially allowing offenders to evade prosecution.

U.S. law requires social media companies to submit child sexual abuse content to the National Center for Missing and Exploited Children. In 2022, NCMEC received more than 32 million reports of suspected child sexual exploitation, including 88 million photographs, videos, and related documents, from private and public sources.

Meta, which owns Facebook, Instagram, and WhatsApp, generates 84% of those reports, or more than 27 million. The sites' AI algorithms identify questionable content, which human moderators review before reporting it to NCMEC and law enforcement. For reports generated by AI without human review, law enforcement can only access the content after obtaining a search warrant, a process that can take days or weeks.

NCMEC's vice president of analytical services, Staca Shehan, pointed out the legal constraints: "If the company has not indicated that it has viewed the file before reporting it to NCMEC, law enforcement cannot open it or review it without due process."

More than a decade ago, judges ruled that NCMEC's investigations constitute government action, triggering Fourth Amendment protections against unreasonable searches and seizures.

Child safety experts and legal professionals warn that such delays can compromise investigations, lead to lost evidence, and put children at risk. An anonymous assistant U.S. attorney noted that the delays pose a high risk to community safety because they allow offenders to "continue their activities undetected, putting all children at risk."

Despite these challenges, AI is also helping police and child-protection groups combat online child exploitation.

National Center on Sexual Exploitation senior vice president Dr. Marcel van der Watt called the role of AI "augmented intelligence," with chatbots interacting with pedophiles online to help law enforcement identify and prosecute offenders.
