
Written by Ron Williams
Article · 12 min
Over the last decade, artificial intelligence (AI) has advanced so quickly that many businesses are struggling to keep pace. The very systems designed to maintain security, compliance, and productivity are being sidestepped by the unregulated use of AI tools, a situation experts have dubbed "the great AI headache."
AI has made advanced capabilities accessible to everyone, not just specialists. With more than a million free AI models just a click away, companies are now facing a pressing challenge: how to use AI securely and effectively without jeopardizing their enterprise systems.
Why Traditional Security Models Are Failing
Traditional security models often fall short when it comes to handling threats driven by AI advancements. These models were built for an era when attacks were predictable and human-driven. Today's AI threats are faster, more scalable, and more adaptive, making enterprises more vulnerable than ever. Here are a few reasons why:
BYOD on Steroids
As mentioned above, employees can now access over a million AI models with a single click, circumventing years of carefully built security and compliance frameworks. This "shadow AI" phenomenon is an evolution of the BYOD (bring your own device) problem, which previously involved external hardware and mobile devices. Now, with AI tools freely accessible, sensitive data and processes are regularly exposed to external, third-party AI models run by individuals, companies, and state actors all over the world. The risk gets worse when these AI model providers train their models on proprietary data, teaching them how and why specific data holds value and effectively leaking key operational insights and valuable intellectual property.
Hacker’s Playground
The dramatic reduction in AI costs, a 99% drop over 18 months, has fundamentally shifted the balance of power toward cybercriminals. For just $250, a hacker can generate 5 million personalized phishing emails or 100,000 lines of malicious code, all tailored to specific targets using contextual data. Open-source hacking co-pilots, trained on billions of lines of code and threat intelligence from around the world, democratize access to cybercriminal tools that previously required specialized expertise. The result: cyberattacks that are more frequent, more targeted, and more scalable.
The GPU Compute Gap
AI compute, powered by Nvidia GPUs, has seen a billion-fold improvement in operations per dollar over the last 20 years, yet most enterprises still run their workloads on legacy Intel-based CPU systems. These CPU-powered systems lack the efficiency and speed needed to run AI, so companies that have not moved key workloads to GPUs are competing on outdated infrastructure, and security teams not leveraging GPUs are fighting with one hand tied behind their back. Enterprises that fail to migrate to infrastructure optimized for AI workloads are not only wasting resources but also leaving large gaps in their defenses. Without that compute adaptability, they cannot handle the speed and complexity of newer threats, making them easy targets for AI-native cyberattacks and ill-equipped to survive the AI wave as their competition migrates to new infrastructure.
What Is Agentic Security?
Agentic Security is an approach to cybersecurity (and other fields) built on AI agents: autonomous systems designed to perform tasks, learn, and adapt across a variety of settings.
AI agents serve as digital coworkers capable of analyzing data, making decisions, and carrying out actions based on predefined or learned criteria. They work autonomously or in collaboration with human teams, monitoring processes and adapting as conditions change.
Unlike traditional software programs that adhere to static rules, AI agents can respond quickly and dynamically to changing inputs by leveraging machine learning models, natural language processing, and broad contextual awareness.
At their core, AI agents embody three main components:
1) Sensing: The ability to monitor and collect data from multiple sources, including sensors, databases, and live network feeds.
2) Reasoning: Algorithms and machine learning enable agents to process and interpret data, identify patterns, predict outcomes, and determine optimal responses.
3) Acting: Based on their reasoning, agents execute tasks or suggest actions, whether deploying a security patch, optimizing a cloud resource, or escalating a service ticket.
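The sense-reason-act loop above can be sketched in a few lines of Python. This is a toy illustration with made-up class and method names, not a real agent framework or Kindo API:

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str
    severity: int  # 1 (informational) .. 5 (critical)

class SecurityAgent:
    """Toy agent illustrating the sense -> reason -> act loop."""

    def __init__(self, escalate_threshold: int = 4):
        self.escalate_threshold = escalate_threshold
        self.actions: list[str] = []

    def sense(self, feed: list[Event]) -> list[Event]:
        # Sensing: collect raw events from monitored sources
        # (in practice: sensors, databases, live network feeds).
        return feed

    def reason(self, events: list[Event]) -> list[Event]:
        # Reasoning: prioritize events; a real agent would apply
        # ML models and threat intelligence here, not a simple sort.
        return sorted(events, key=lambda e: e.severity, reverse=True)

    def act(self, events: list[Event]) -> list[str]:
        # Acting: escalate high-severity events, log the rest.
        for e in events:
            if e.severity >= self.escalate_threshold:
                self.actions.append(f"escalate:{e.source}")
            else:
                self.actions.append(f"log:{e.source}")
        return self.actions

agent = SecurityAgent()
events = agent.sense([Event("vpn", 2), Event("db", 5)])
result = agent.act(agent.reason(events))
```

The point of the structure is the separation of concerns: each stage can be swapped out (a better model in `reason`, a new data source in `sense`) without rewriting the others.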
Three Detailed AI Security Use Cases
AI agents are changing the way businesses handle security, IT operations, and development by automating complex workflows. Because they operate across domains, they not only protect systems but also improve efficiency and support continuous improvement. The following are three key areas where AI agents are having an impact:
1. Security Incident Response
Traditionally, security analysts perform an initial triage process that includes event enrichment, extracting indicators of compromise (IOCs), and uploading them to well-known platforms like VirusTotal, Pulsedive, and Shodan for further analysis. This manual process is susceptible to delays and human error, as analysts must manually correlate information and ensure key data points are not overlooked. AI agents automate this entire workflow by collecting threat intelligence, enriching event data, and extracting relevant IOCs with speed and precision. For example, they can query multiple threat intelligence platforms and cross-reference data, ensuring no information is missed during the analysis phase.
This automation means that by the time the analyst steps in, they are already presented with an enriched and prioritized security incident, saving time. However, AI agents can go even further. Using the enriched data, they can determine the likelihood and severity of the threat by looking at patterns across multiple platforms. For example, an AI agent could detect correlations between anomalous IP addresses flagged by Shodan and known malware hashes from VirusTotal, assessing the overall threat level. Based on this assessment, the agent could autonomously initiate actions like blocking an IP, isolating a system, or generating an alert for further investigation, all without human intervention unless it is needed.
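A minimal sketch of that triage pipeline is shown below. To keep it self-contained, hypothetical local sets stand in for VirusTotal and Shodan lookups (a real agent would call those platforms' APIs), and the IP address and hash are placeholder values:

```python
import re

# Hypothetical local threat-intel caches standing in for
# VirusTotal / Shodan queries; values are placeholders.
KNOWN_BAD_IPS = {"203.0.113.7"}
KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}

IOC_PATTERNS = {
    "ip": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "md5": re.compile(r"\b[a-f0-9]{32}\b"),
}

def extract_iocs(event_text: str) -> dict[str, set[str]]:
    """Pull candidate indicators of compromise out of raw event text."""
    return {name: set(rx.findall(event_text)) for name, rx in IOC_PATTERNS.items()}

def enrich(iocs: dict[str, set[str]]) -> dict:
    """Cross-reference extracted IOCs against intel sources and score the event."""
    ip_hits = iocs["ip"] & KNOWN_BAD_IPS
    hash_hits = iocs["md5"] & KNOWN_BAD_HASHES
    # Correlation across sources raises the score, mirroring the
    # Shodan-flags-IP plus VirusTotal-knows-hash scenario above.
    score = len(ip_hits) + len(hash_hits) + (1 if ip_hits and hash_hits else 0)
    return {"ip_hits": ip_hits, "hash_hits": hash_hits, "score": score}

event = "Outbound beacon to 203.0.113.7, dropped file 44d88612fea8a8f36de82e1278abb02f"
report = enrich(extract_iocs(event))
```

An agent would then act on `score`: above a threshold, block the IP or isolate the host; below it, queue the enriched report for an analyst.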
2. DevOps Incident Response
AI-powered agents can assist DevOps teams in identifying and resolving outages faster. By analyzing observability tools and monitoring code deployments, they automate root cause analysis, pinpointing problems like IAM misconfigurations, failed deployments, or Kubernetes infrastructure errors. For example, think of a scenario where a production outage occurs due to a failed Kubernetes deployment. The AI agent quickly analyzes error logs, configuration files, and recent changes in the environment.
It identifies that an improperly configured access control policy caused the failure and recommends immediate corrective actions, like reverting to a previous configuration or applying a permissions fix. This automation not only reduces mean time to recovery (MTTR) but also prevents similar incidents by learning from the event and updating the system’s response protocols for future occurrences, ensuring continuous system availability.
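The root-cause analysis step in that scenario can be sketched as matching error-log signatures against recent changes. This is a deliberately simplified stand-in: the signatures, field names, and change records are invented for illustration, where a real agent would query observability and deployment tools:

```python
def diagnose(error_log: list[str], recent_changes: list[dict]) -> dict:
    """Toy root-cause analysis: match error signatures to recent changes."""
    # Illustrative mapping from Kubernetes error signatures to change areas.
    SIGNATURES = {
        "forbidden": "access-control policy",
        "imagepullbackoff": "container image reference",
        "crashloopbackoff": "application configuration",
    }
    log_text = " ".join(error_log).lower()
    for signature, cause in SIGNATURES.items():
        if signature in log_text:
            # Prefer the most recent change touching the suspected area.
            for change in reversed(recent_changes):
                if change["area"] == cause:
                    return {
                        "root_cause": cause,
                        "suspect_change": change["id"],
                        "recommendation": f"revert {change['id']}",
                    }
            return {"root_cause": cause, "suspect_change": None,
                    "recommendation": "inspect manually"}
    return {"root_cause": "unknown", "suspect_change": None,
            "recommendation": "escalate"}

result = diagnose(
    ['deploy failed: pods "api-7f" is forbidden: missing RBAC rule'],
    [{"id": "chg-101", "area": "container image reference"},
     {"id": "chg-102", "area": "access-control policy"}],
)
```

Here the "forbidden" signature points at the access-control area, and the most recent change in that area becomes the revert candidate, which is exactly the misconfigured-policy diagnosis described above.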
3. IT Compliance Automation
With built-in support for audit trails, access control policies, and regulatory requirements, AI agents monitor enterprise systems for compliance violations. For example, in a large IT environment, an AI agent can track system changes, analyze access logs, and assess configuration files against frameworks like SOC 2, ISO 27001, and GDPR. Instead of requiring time-consuming, manual, periodic reviews by teams, the AI agent constantly checks for anomalies or violations in real time, flagging issues and recommending corrective actions before risks escalate.
An AI agent might detect that a cloud service has drifted from approved security settings, like allowing overly permissive access to sensitive resources. It could then automatically alert IT teams or correct the configuration directly, ensuring the system stays compliant. This monitoring means that enterprises aren’t limited to after-the-fact audits. Instead, they can maintain ongoing compliance through proactive, real-time adjustments driven by the AI’s ability to learn from the environment and apply regulatory knowledge dynamically.
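The drift-detection-and-remediation cycle described above boils down to comparing live configuration against an approved baseline. A minimal sketch, with invented setting names (a real agent would map each finding to specific SOC 2 / ISO 27001 / GDPR controls):

```python
# Hypothetical approved security baseline; keys are illustrative.
APPROVED_BASELINE = {
    "storage_public_access": False,
    "encryption_at_rest": True,
    "mfa_required": True,
}

def detect_drift(live_config: dict) -> list[str]:
    """Flag settings that drifted from the approved security baseline."""
    return sorted(
        key for key, approved in APPROVED_BASELINE.items()
        if live_config.get(key) != approved
    )

def remediate(live_config: dict) -> tuple[dict, list[str]]:
    """Auto-correct drifted settings and report what was fixed."""
    drifted = detect_drift(live_config)
    fixed = {**live_config, **{k: APPROVED_BASELINE[k] for k in drifted}}
    return fixed, drifted

# A cloud service drifted: storage was made publicly accessible.
live = {"storage_public_access": True, "encryption_at_rest": True, "mfa_required": True}
fixed, drifted = remediate(live)
```

Running this continuously, rather than at audit time, is what turns after-the-fact audits into the ongoing compliance posture the article describes.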
The Kindo and WhiteRabbitNeo Blueprint
For us, the backbone of agentic security lies in advanced orchestration and AI models working together to ensure security, operational efficiency, and development optimization.
Kindo acts as the central hub for orchestrating AI agents, managing their deployment, and integrating them securely across workflows without disrupting existing systems. It automates tasks, enforces governance, and tracks agent activity for full auditability.
1) Kindo allows enterprises to mix and match AI models, enabling flexibility across workflows with different cost, scaling, and security requirements. For example, OpenAI GPT-4o could analyze data for accuracy and context, while Anthropic Claude 3.5 generates the final reports, leveraging its strengths in writing and summarization. Other companies, because they care which AI sees their data and who gains access to their valuable business insights, will need state-of-the-art open-source models that are self-managed or run on-premises; Kindo provides that flexibility too. This task-specific, step-by-step allocation ensures optimized performance and security across a broad set of workflows.
2) Kindo also connects natively with tools like ServiceNow, GitHub, and Splunk, ensuring minimal disruption during integration. For example, security alerts from Splunk can automatically trigger a remediation task in ServiceNow, while GitHub tracks and documents patches applied to affected systems, creating an end-to-end response workflow.
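The per-step model allocation described above can be sketched as a routing policy. The model identifiers, policy keys, and `route` helper here are hypothetical illustrations, not Kindo's actual configuration format:

```python
# Hypothetical routing policy: each workflow step maps to a model,
# with a flag recording whether data leaves the enterprise network.
ROUTING_POLICY = {
    "analysis": {"model": "gpt-4o", "data_leaves_network": True},
    "report": {"model": "claude-3.5-sonnet", "data_leaves_network": True},
    "sensitive_triage": {"model": "whiterabbitneo-onprem", "data_leaves_network": False},
}

def route(step: str, contains_secrets: bool) -> str:
    """Pick a model for a workflow step, forcing on-prem handling
    whenever the payload contains sensitive data."""
    if contains_secrets:
        # Policy: sensitive data never leaves the network, regardless of step.
        return ROUTING_POLICY["sensitive_triage"]["model"]
    return ROUTING_POLICY[step]["model"]
```

The key design choice is that data sensitivity, not just task type, drives the routing, which is how a hosted frontier model and a self-managed open-source model can coexist in one workflow.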
The AI model WhiteRabbitNeo is our DevSecOps offering designed to support use cases in offensive and defensive cybersecurity, secure infrastructure design, secure code generation, and more. It generally excels in:
• Investigating, launching, and remediating attacks using reconnaissance data.
• Understanding 80+ programming and scripting languages, including enterprise APIs.
• Recognizing vulnerabilities through knowledge of CVE/NVD data and global threat actors.
• Automating threat detection and remediation using infrastructure and IAM expertise.
Peter Clay, CISO of Aireon, the global aviation navigation data leader, highlighted the impact of WhiteRabbitNeo recently:
"We used WhiteRabbitNeo to accelerate and simplify our threat-hunting processes, which enabled us to increase the value of our existing SIEM infrastructure and identify issues before they became real problems. The savings in time and potential impact have resulted in current cost savings of over $2 million per year and growing."
Together, Kindo and WhiteRabbitNeo create a unique end-to-end solution that enables enterprises to deploy AI agents capable of automating security incident response, optimizing IT operations, and improving development pipelines, making agentic security a practical reality for organizations around the world.
The Agentic Imperative
The future of enterprise security isn't human vs. AI; it's humans empowered by AI agents. As Nvidia CEO Jensen Huang puts it, "we have over 100,000 agents running at Nvidia now" and "we can't run our cyber security without agents now." He also predicts that "100 million AI agents will soon run at Nvidia." Enterprises that delay adopting agentic security will face relentless attacks from AI-armed adversaries. Those that act now will turn AI from a headache into their greatest ally.
The era of agentic security is here, and it’s time for enterprises to seize the advantage. To find out more, get in touch with our team today.