
TOP STORY
Sep 3, 2025
The Director’s Cut: Balancing Speed and Security in AI Agent Deployments
AI agents are being heralded as the future of enterprise applications, with Gartner projecting their integration into 40% of applications by 2026, up from less than 5% in 2025. These task-specific agents promise to automate operations, enhance productivity, and enable real-time collaboration, driving significant business value. However, Gartner’s urgency around agent adoption (“CIOs Have Three to Six Months to Set Their Agentic AI Strategy and Investments”) can open the door to serious risks, particularly around security and governance.
Rushing deployments without sufficient focus on security is dangerous. Immature AI agents can introduce vulnerabilities that lead to unauthorized access, data breaches, or exploitation by threat actors (more on that below). Additionally, AI agents’ ability to autonomously connect with multiple systems heightens the risk of cascading failures if security is inadequate.
Prioritizing speed over security jeopardizes operational continuity and exposes organizations to reputational and regulatory risks. Boards must insist on methodical planning that incorporates robust cybersecurity protocols, rigorous testing, and oversight frameworks to ensure AI deployments are secure, ethical, and sustainable in the long term.
Questions directors should ask management:
- What specific measures are in place to address risks like unauthorized access, system vulnerabilities, and data breaches as we deploy AI agents?
- What steps are we taking to evaluate and mitigate risks before, during, and after AI agent deployment, rather than yielding to pressure for quick wins?
- Is our AI adoption strategy aligned with a clear governance framework that prioritizes security, compliance, and ethical considerations?
On the Radar:
The Fallout of SaaS Exploits: Lessons from the Farmers Insurance Breach
The recent data breach at Farmers Insurance that exposed sensitive information for 1.1 million customers is the latest in a series of attacks targeting the Salesforce platform and third-party vendors. Allianz, Tiffany & Co., Workday, and Google are among the attackers’ other victims. The incident demonstrates how attackers are bypassing traditional infrastructure defenses by exploiting employee trust and vendor vulnerabilities.
Attackers impersonated Salesforce support staff to gain unauthorized access and exfiltrate sensitive data. While no misuse of the stolen data has been reported yet, the exposure of driver’s license and Social Security numbers creates a years-long risk of identity theft and fraud for customers, along with lasting reputational damage for the company.
- How are we enhancing third-party vendor monitoring and SaaS governance to mitigate risks posed by social engineering campaigns and data supply chain vulnerabilities?
Threat Actors Use AI to Scale Cyberattacks: A New Frontier
Attackers are now leveraging AI with the same efficiency-driven goals as businesses: boosting productivity while reducing effort and resources. Anthropic’s latest threat intelligence report highlights an operation in which cybercriminals abused its Claude Code tool to automate and scale data theft and extortion campaigns. This marks a turning point: AI is no longer just assisting with attacks, but actively executing them.
Claude Code enabled attackers to automate malware creation, streamline intrusions, and scale data extortion efforts with unprecedented efficiency. Seventeen organizations were victimized concurrently, a volume of simultaneous attacks that human-led operations would struggle to match and that demonstrates the disruptive potential of AI-driven cybercrime. The campaign is also further evidence that extortion attackers are increasingly focused on stealing data rather than encrypting it.
- What security measures are we implementing to detect and prevent AI-driven cyberattacks, including those leveraging generative tools for large-scale automation?
Personal Liability and Security: Growing Risks for CISOs
According to Dark Reading, the evolving role of chief information security officers brings heightened accountability and exposure to both legal liability and personal security threats. High-profile cases, such as the charges against the SolarWinds CISO and the conviction of Uber’s former CISO, have triggered widespread concern over inadequate liability protections. Many CISOs bear extensive accountability without proportional authority, increasing the risk of being penalized for breaches or responses beyond their control.
While some companies have addressed liability concerns through policies or insurance, such measures often sidestep the root issue: building a strong security culture and robust operational protections. Directors must prioritize both security improvements and risk mitigation strategies to support CISOs while enhancing organizational resilience.
- How are we balancing liability protections for CISOs with measurable investments in security culture, operational safeguards, and proactive strategies to minimize personal and organizational risk?
*****
Zscaler is a proud partner of NACD’s Northern California and Research Triangle chapters. We are here as a resource for directors to answer questions about cybersecurity or AI risks, and are happy to arrange dedicated board briefings.
Please email Rob Sloan, VP, Cybersecurity Advocacy at Zscaler, to learn more.