AICybersecurityTechnology

Data Poisoning Emerges as Critical Threat to AI Model Integrity

Security researchers have demonstrated that just a few hundred poisoned documents can create hidden backdoors in AI models. Financial institutions report growing concerns as regulators establish new oversight frameworks to address these emerging threats.

The Stealth Threat to AI Systems

Recent analysis from artificial intelligence researchers reveals that surprisingly small amounts of manipulated data can compromise large language models, according to reports. Sources indicate that approximately 250 poisoned documents inserted into a training dataset can create hidden “backdoors” that trigger abnormal model behavior when activated by specific phrases.

CybersecuritySecurityTechnology

Russian Coldriver Hackers Launch Sophisticated ‘NoRobot’ Malware Campaign

Russian intelligence-linked hackers have shifted to a new malware family called NoRobot after their previous LostKeys malware was exposed. The sophisticated attack chain uses fake CAPTCHA pages to trick targets into downloading malicious files. Security analysts report this represents a significant escalation in the group’s operational tempo.

Russian Hackers Deploy New Malware Suite

The Russian-affiliated hacking collective Coldriver has been observed deploying a sophisticated new malware set, according to researchers at the Google Threat Intelligence Group. The report states this new malware family, tracked as NoRobot, appears to have replaced the group’s previous primary malware LostKeys since it was publicly disclosed in May 2025.