<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>AI Security Brief</title>
    <link>https://aithreatbrief.com</link>
    <description>AI-assisted security briefings on AI-powered threats, privacy defence strategies, and security tooling for technology professionals.</description>
    <language>en-au</language>
    <atom:link href="https://aithreatbrief.com/feed.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>The Best LLM Firewalls Evaluated (2026 Guide)</title>
      <link>https://aithreatbrief.com/blog/best-llm-firewall-2026</link>
      <guid isPermaLink="true">https://aithreatbrief.com/blog/best-llm-firewall-2026</guid>
      <description>A technical evaluation of the top LLM firewalls for filtering prompts, preventing injection attacks, and securing large language model deployments in production.</description>
      <pubDate>Fri, 03 Apr 2026 00:00:00 GMT</pubDate>
      <category>Tools</category>
    </item>
    <item>
      <title>The Zero-BS Guide to Preventing Prompt Injection Attacks</title>
      <link>https://aithreatbrief.com/blog/preventing-prompt-injection-attacks</link>
      <guid isPermaLink="true">https://aithreatbrief.com/blog/preventing-prompt-injection-attacks</guid>
      <description>A tactical guide to mitigating prompt injection attacks in production. Moving beyond fragile regex filters to semantic validation and dual-LLM architectural defences.</description>
      <pubDate>Fri, 03 Apr 2026 00:00:00 GMT</pubDate>
      <category>Security Engineering</category>
    </item>
    <item>
      <title>Claude Extension Flaw Enabled Zero-Click XSS Prompt Injection via Any Website</title>
      <link>https://aithreatbrief.com/blog/claude-extension-flaw-enabled-zero-click-xss-prompt-injection-via-any-website</link>
      <guid isPermaLink="true">https://aithreatbrief.com/blog/claude-extension-flaw-enabled-zero-click-xss-prompt-injection-via-any-website</guid>
      <description>A critical vulnerability in Anthropic&apos;s Claude Chrome Extension exposed users to zero-click XSS prompt injection attacks, allowing malicious actors to execute commands without user interaction. This article examines the technical details, risks, and defensive strategies for organisations and individuals.</description>
      <pubDate>Mon, 30 Mar 2026 00:00:00 GMT</pubDate>
      <category>AI Threats</category>
    </item>
    <item>
      <title>Security Researchers Sound the Alarm on Vulnerabilities in AI-Generated Code</title>
      <link>https://aithreatbrief.com/blog/security-researchers-sound-the-alarm-on-vulnerabilities-in-ai-generated-code</link>
      <guid isPermaLink="true">https://aithreatbrief.com/blog/security-researchers-sound-the-alarm-on-vulnerabilities-in-ai-generated-code</guid>
      <description>A surge in vulnerabilities linked to AI-generated code has prompted security researchers to call for urgent improvements in code review and development practices. This article explores the risks, trends, and defensive strategies for organizations leveraging AI in software development.</description>
      <pubDate>Mon, 30 Mar 2026 00:00:00 GMT</pubDate>
      <category>AI Threats</category>
    </item>
    <item>
      <title>Best Password Managers for Security Teams 2026</title>
      <link>https://aithreatbrief.com/blog/best-password-managers-for-security-teams-2026</link>
      <guid isPermaLink="true">https://aithreatbrief.com/blog/best-password-managers-for-security-teams-2026</guid>
      <description>A security-focused comparison of enterprise password managers — evaluating zero-knowledge architecture, audit history, SSO integration, and secrets management for IT teams.</description>
      <pubDate>Thu, 26 Mar 2026 00:00:00 GMT</pubDate>
      <category>Privacy</category>
    </item>
    <item>
      <title>Best VPNs for Cybersecurity Professionals 2026</title>
      <link>https://aithreatbrief.com/blog/best-vpns-for-cybersecurity-professionals-2026</link>
      <guid isPermaLink="true">https://aithreatbrief.com/blog/best-vpns-for-cybersecurity-professionals-2026</guid>
      <description>A data-driven comparison of the best VPNs for security analysts, threat researchers, and IT teams — evaluated on encryption, jurisdiction, audit history, and operational security.</description>
      <pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate>
      <category>Privacy</category>
    </item>
    <item>
      <title>NordVPN vs ProtonVPN: A Security Professional&apos;s Comparison</title>
      <link>https://aithreatbrief.com/blog/nordvpn-vs-protonvpn-security-comparison</link>
      <guid isPermaLink="true">https://aithreatbrief.com/blog/nordvpn-vs-protonvpn-security-comparison</guid>
      <description>A head-to-head comparison of NordVPN and ProtonVPN for cybersecurity work — covering encryption, jurisdiction, open-source status, audit history, and pricing.</description>
      <pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate>
      <category>Privacy</category>
    </item>
    <item>
      <title>AI Flaws in Amazon Bedrock, LangSmith, and SGLang Enable Data Exfiltration and RCE</title>
      <link>https://aithreatbrief.com/blog/ai-flaws-in-amazon-bedrock-langsmith-and-sglang-enable-data-exfiltration-and-rce</link>
      <guid isPermaLink="true">https://aithreatbrief.com/blog/ai-flaws-in-amazon-bedrock-langsmith-and-sglang-enable-data-exfiltration-and-rce</guid>
      <description>Three independent research teams disclosed critical vulnerabilities across Amazon Bedrock AgentCore, LangSmith, and SGLang in March 2026 — collectively enabling DNS-based data exfiltration, account takeover, and unauthenticated remote code execution against AI infrastructure. One of these flaws remains unpatched, and CERT/CC has issued a public advisory.</description>
      <pubDate>Thu, 19 Mar 2026 00:00:00 GMT</pubDate>
      <category>AI Threats</category>
    </item>
    <item>
      <title>CursorJack: How MCP Deeplinks Turn Cursor IDE Into a Code Execution Vector</title>
      <link>https://aithreatbrief.com/blog/cursorjack-attack-path-exposes-code-execution-risk-in-ai-development-environment</link>
      <guid isPermaLink="true">https://aithreatbrief.com/blog/cursorjack-attack-path-exposes-code-execution-risk-in-ai-development-environment</guid>
      <description>Proofpoint Threat Research has demonstrated that a single crafted deeplink can weaponise Cursor IDE&apos;s MCP installation flow to execute arbitrary commands under a developer&apos;s full privileges — exposing the structural security gap at the heart of the Model Context Protocol ecosystem.</description>
      <pubDate>Thu, 19 Mar 2026 00:00:00 GMT</pubDate>
      <category>AI Threats</category>
    </item>
    <item>
      <title>LLM Guardrails Are Failing: What the 2025–2026 Research Actually Shows</title>
      <link>https://aithreatbrief.com/blog/researchers-discover-major-security-gaps-in-llm-guardrails</link>
      <guid isPermaLink="true">https://aithreatbrief.com/blog/researchers-discover-major-security-gaps-in-llm-guardrails</guid>
      <description>Palo Alto Networks&apos; Unit 42, Oxford researchers, and a Nature Communications study converge on the same finding: the safety layers enterprises rely on to govern generative AI can be bypassed at scale, by automated tools, in under 60 seconds. The attack success rates are not marginal — they are, in several documented cases, approaching 100%.</description>
      <pubDate>Tue, 17 Mar 2026 00:00:00 GMT</pubDate>
      <category>AI Threats</category>
    </item>
    <item>
      <title>Hive0163 Deploys AI-Assisted Slopoly Malware for Persistent Access in Ransomware Attacks</title>
      <link>https://aithreatbrief.com/blog/hive0163-uses-ai-assisted-slopoly-malware-for-persistent-access-in-ransomware-attacks</link>
      <guid isPermaLink="true">https://aithreatbrief.com/blog/hive0163-uses-ai-assisted-slopoly-malware-for-persistent-access-in-ransomware-attacks</guid>
      <description>IBM X-Force discovered Slopoly, an LLM-generated PowerShell backdoor deployed by Hive0163 that maintained persistent access for 7+ days during a live ransomware engagement. The code self-describes as a &quot;Polymorphic C2 Persistence Client&quot; — but isn&apos;t actually polymorphic. That gap between what AI claims to build and what it actually builds is the story.</description>
      <pubDate>Mon, 16 Mar 2026 00:00:00 GMT</pubDate>
      <category>AI Threats</category>
    </item>
    <item>
      <title>OpenClaw AI Agent Flaws: CVEs, Prompt Injection, and a Government Warning</title>
      <link>https://aithreatbrief.com/blog/openclaw-ai-agent-flaws-could-enable-prompt-injection-and-data-exfiltration</link>
      <guid isPermaLink="true">https://aithreatbrief.com/blog/openclaw-ai-agent-flaws-could-enable-prompt-injection-and-data-exfiltration</guid>
      <description>China&apos;s CNCERT issued a formal warning on March 14 about OpenClaw&apos;s inherently weak default configurations. With 97+ CVEs tracked, 135,000+ exposed instances, and zero-click data exfiltration demonstrated in the wild, the open-source AI agent has become a case study in what happens when autonomous systems ship without a security baseline.</description>
      <pubDate>Mon, 16 Mar 2026 00:00:00 GMT</pubDate>
      <category>AI Threats</category>
    </item>
    <item>
      <title>Australia&apos;s Privacy Act Reforms 2026: What You Need to Know</title>
      <link>https://aithreatbrief.com/blog/australias-privacy-act-reforms-2026</link>
      <guid isPermaLink="true">https://aithreatbrief.com/blog/australias-privacy-act-reforms-2026</guid>
      <description>Australia&apos;s Privacy Act has undergone its most significant overhaul since 1988 — with new penalties reaching $50 million, mandatory disclosure of AI decision-making systems, and a new statutory tort for serious privacy invasions. Here&apos;s what changed and what it means for your organisation.</description>
      <pubDate>Sat, 14 Mar 2026 00:00:00 GMT</pubDate>
      <category>Privacy</category>
    </item>
    <item>
      <title>Agentic AI Security Risks: What Every Developer Must Know</title>
      <link>https://aithreatbrief.com/blog/agentic-ai-security-risks</link>
      <guid isPermaLink="true">https://aithreatbrief.com/blog/agentic-ai-security-risks</guid>
      <description>Autonomous AI agents are now embedded in enterprise workflows with privileged access to databases, APIs, and critical systems — but the security infrastructure governing them hasn&apos;t kept pace. Here&apos;s what developers and security teams need to understand right now.</description>
      <pubDate>Thu, 12 Mar 2026 00:00:00 GMT</pubDate>
      <category>AI Threats</category>
    </item>
    <item>
      <title>How AI Is Being Used to Launch Cyberattacks in 2026</title>
      <link>https://aithreatbrief.com/blog/how-ai-is-being-used-to-launch-cyberattacks-in-2026</link>
      <guid isPermaLink="true">https://aithreatbrief.com/blog/how-ai-is-being-used-to-launch-cyberattacks-in-2026</guid>
      <description>AI has crossed a threshold in 2026 — adversaries are no longer just using it to write phishing emails. Autonomous attack agents, AI-generated malware, and deepfake social engineering are redefining the threat landscape at machine speed.</description>
      <pubDate>Tue, 10 Mar 2026 00:00:00 GMT</pubDate>
      <category>AI Threats</category>
    </item>
    <item>
      <title>AI Model Prompt Injection Attacks Explained</title>
      <link>https://aithreatbrief.com/blog/ai-model-prompt-injection-attacks-explained</link>
      <guid isPermaLink="true">https://aithreatbrief.com/blog/ai-model-prompt-injection-attacks-explained</guid>
      <description>Prompt injection is the #1 vulnerability in deployed AI systems — and unlike traditional software flaws, it cannot be patched. Here&apos;s how direct and indirect attacks work, real-world exploits from 2024–2026, and the defence strategies that actually reduce risk.</description>
      <pubDate>Sun, 08 Mar 2026 00:00:00 GMT</pubDate>
      <category>AI Threats</category>
    </item>
  </channel>
</rss>