Artificial Intelligence (AI) is one of the fastest-growing areas of the tech industry. Whether for professional or personal use, AI is a part of almost everyone's life, from Google searches to work applications. As AI capabilities expand and more use cases emerge, the risk of exploitation also increases. While AI is a tool that IT professionals can utilize, it can also be leveraged by cybercriminals to enhance their attacks.
As with legitimate uses, AI's potential in an attack capacity is vast. AI can automate tasks and, in some cases, operate with limited supervision (a mode known as agentic AI), making large-scale attacks more efficient. As generative AI has advanced and become multimodal, social engineering content can be made far more convincing, whether visual or auditory. Emails can be crafted with minimal mistakes, and believable images are easy to produce. Data scraping is also made easier, eliminating much of the time that manual attacks typically require.
Anthropic recently reported an attack that abused its AI system, Claude, believed to have been carried out by a Chinese-backed criminal group, which targeted multiple companies across various industries. Using a range of tools and Claude's advanced capabilities, including its agentic features, the attackers broke the operation into smaller steps and limited the context given at each one, ultimately leveraging the model's far-reaching capabilities to perform tasks with minimal human intervention. They used Claude to probe the targets' systems and produce write-ups detailing their security posture and vulnerabilities, including any harvested information, such as credentials and data. The AI performed at least 80% of the attack, significantly reducing the time it would have taken if people were doing all the work.
Another growing concern in the threat landscape is the rise of closed-source, offensive AI. As AI developers impose tighter restrictions and limit the capabilities of publicly available models, threat actors are turning to privately trained, closed systems that operate without the oversight applied to mainstream AI. Open-source models, though capable and broadly accessible, tend to lack the power and fine-tuning options that closed-source models possess; they are also more regulated and cannot be customized as needed for advanced attacks. Closed-source models can be tailored and trained for specific attack purposes, making them increasingly popular with sophisticated threat groups and nation-backed actors.
From a defensive standpoint, specific actions can be taken to prepare for the rise in AI-powered attacks. The most obvious is implementing AI-powered solutions of your own: just as AI can be harnessed offensively, it can be used defensively. AI-powered tools enable faster data and trend analysis, automate monitoring tasks, and can alert teams to high-risk activity based on past trends, allowing them to prioritize accordingly. Investing in tools that identify AI-generated material, such as deepfakes or synthetic text, is also worthwhile. Routine security assessments should be performed to establish activity baselines and identify potential vulnerabilities. Having an incident response plan developed and readily available to employees is crucial for mitigating the impact of a successful attack. A proper response plan should cover true-positive detection; remediation, including containment and eradication across affected systems; and any recovery steps that can be taken. Post-mortems should also be used to analyze attacks after the fact and inform future mitigation.
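The baseline-and-alerting idea above can be illustrated with a minimal sketch. This is not any particular vendor's product, just a toy example of flagging event counts that deviate sharply from a learned baseline; the event names, counts, and the 3-sigma threshold are all hypothetical:

```python
import statistics

def flag_anomalies(history, current, threshold=3.0):
    """Flag event types whose latest count deviates more than
    `threshold` standard deviations from the historical baseline.

    history: dict mapping event type -> list of past hourly counts
    current: dict mapping event type -> latest hourly count
    Returns the set of event types worth alerting on.
    """
    alerts = set()
    for event, counts in history.items():
        mean = statistics.mean(counts)
        stdev = statistics.pstdev(counts)
        latest = current.get(event, 0)
        if stdev == 0:
            # Flat baseline: treat any change at all as anomalous.
            if latest != mean:
                alerts.add(event)
        elif abs(latest - mean) / stdev > threshold:
            alerts.add(event)
    return alerts

# Hypothetical telemetry: failed logins spike far beyond the baseline,
# while file downloads stay within normal variation.
history = {
    "failed_login": [4, 5, 3, 6, 4, 5],
    "file_download": [20, 22, 19, 21, 20, 23],
}
current = {"failed_login": 40, "file_download": 21}
print(flag_anomalies(history, current))  # {'failed_login'}
```

Real AI-powered monitoring tools learn far richer baselines (per user, per host, per time of day), but the underlying design choice is the same: model normal activity first, then surface and prioritize only the deviations.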
Keeping your organization and employees up to date on current trends and the risks in the threat landscape is one of the most critical defenses available. Being able to recognize attacks for what they are, and knowing how to mitigate them, is invaluable. Attacks may be AI-powered, but they are human-driven, and security starts with people. Maintaining awareness, enforcing proper security measures, and keeping tools and information current all help reduce the risk of compromise.