Introducing Self-Managed Kindo

Written by Andy Manoske

Agentic Security and Infrastructure Management for Self-Managed Environments

Kindo’s mission is to realize a future of autonomous infrastructure: one where professionals charged with the protection and management of enterprise infrastructure (cloud, on-prem, hybrid, etc.) define strategy and command a fleet of generative AI agents in natural language to implement that strategy in their systems for them.  

We believe that autonomous infrastructure is critical for the next generation of enterprise security and infrastructure. 

For security, the rise of professional and state-sponsored adversaries utilizing generative AI to rapidly scale attacks means that defenders unable to scale with them will be outflanked and ultimately compromised due to the defender’s dilemma: defenders need to be successful all the time, but their adversaries only need to be successful once. 

To defend against modern and future adversaries commanding armies of constantly-probing genAI agents, enterprise security teams must command legions of their own. 

For infrastructure, the rise of multi-cloud and diverse multi-environment systems has enabled greater flexibility and cost optimization to satisfy the digital migration of the world. But it has also significantly complicated the work of teams such as Site Reliability Engineering (SRE) and DevOps who are charged with managing these increasingly complex environments. 

Our goal is to empower human professionals in security and infrastructure with GenAI. Rather than being forced to maintain a library of scripts or be technical experts across a vast universe of tools and technologies, SecOps and DevOps/Platform Engineering professionals command an army of agents that respond to incidents, perform change management, and enforce compliance and vulnerability assessment/remediation on their behalf. 

This allows human professionals to focus on making strategic decisions about their infrastructure while agents take over the tactical burden of implementing those decisions. 

Kindo’s GenAI agents determine what data or technology is necessary for their work and can communicate directly with infrastructure and API endpoints in code. But they communicate with human beings and read human-written documents such as runbooks or playbooks in natural language, ensuring that SecOps, DevOps/Platform Engineering, and ITOps professionals can comfortably instruct and work with them as if they were another human on their team. 

To accomplish this mission, Kindo’s agents must access critical parts of an enterprise’s infrastructure. Kindo’s agents may frequently interact with sensitive systems such as Identity and Access Management (IAM) infrastructure or sensitive data protected by Governance, Risk, and Compliance (GRC) requirements. 

Due to the sensitivity and criticality of such systems and data, enterprises commonly require that they and any software interacting with them operate within protected self-managed environments.

Thus Kindo’s agents often run into an increasingly common challenge for most generative AI agents in the enterprise: how does an autonomous genAI agent exist and execute its operations within a self-managed infrastructure environment?

What is Self-Managed Infrastructure?

Self-Managed infrastructure is any technology infrastructure that is fully managed by an enterprise and its internal teams. It can be located anywhere: on-prem in a dedicated data center, within a self-managed cloud environment provided by a cloud provider like Amazon AWS or Microsoft Azure, locally on a laptop or endpoint system, etc. 

Enterprises deploy self-managed infrastructure for a variety of reasons that ultimately sum to one: control. By controlling how their infrastructure holds, processes, and transforms sensitive data, enterprises can directly manage their exposure against breaches of compliance, adversaries, reliability issues, and other threats to their organization’s legal and financial risk profile.

Agentic Architecture Challenges in Self-Managed Infrastructure

Above: An example of a common challenge faced by GenAI agents with Self-Managed Infrastructure.

Without deploying GenAI agents, their management framework, and their model providers into self-managed infrastructure, those agents will be unable to access critical systems or stores of essential data to complete their tasks. 

Most modern GenAI agents are not well-instrumented to run in self-managed environments. 

Agents require constant access to a GenAI model (typically a Large Language Model or LLM) for their intelligence. The relationship between an agent and its LLM is a client-server one: the LLM serves a REST API, and applications (including agents) make client requests against that API. Whoever operates an LLM is thus charged with providing a highly complex and resource-intensive server. 
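To make that client-server relationship concrete, here is a minimal sketch of how an agent assembles a request against a model server’s REST API. The endpoint shape follows the widely used OpenAI-compatible convention; the base URL, API key, and model name below are hypothetical placeholders, not Kindo’s actual configuration:

```python
import json

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Assemble an HTTP request for an OpenAI-compatible chat endpoint.

    The agent is the client: it serializes its instruction into JSON
    and POSTs it to the model server's REST API.
    """
    url = f"{base_url}/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

url, headers, body = build_chat_request(
    "https://llm.internal.example", "sk-demo", "example-model",
    "Summarize last night's failed login attempts.")
print(url)  # https://llm.internal.example/v1/chat/completions
```

Every round trip an agent makes, for every step of every workflow, is a request of this shape, which is why the operator of the model server bears the heavy infrastructure burden.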

The most common LLM providers exist as SaaS platforms. This makes sense for two reasons:

  1. Difficulty: LLMs are complicated to set up and reliably serve to users and agents. By providing them via a SaaS platform, users and agents can rapidly find value while offloading their management burden to their provider.

  2. Cost: LLMs are very computationally intensive. They often require a large amount of dedicated GPU resources to operate, meaning that LLM providers incur a significant set of fixed and variable costs. When these costs are amortized across a variety of customers, LLM providers can turn a profit and their customers can leverage genAI without making a significant up-front investment. 

SaaS offerings are definitionally managed on their provider’s infrastructure. For LLM providers, this means that their clients must be able to directly communicate with the model provider’s infrastructure. 

For agents that must operate in self-managed environments, this may be difficult or impossible. Network boundaries that define the borders of self-managed infrastructure, such as Virtual Private Clouds (VPCs), may make it impossible for agents hosted within them to communicate with an external model provider. 

Even if an agent can navigate that border, providing data from systems within a self-managed environment may violate the control that self-managed environments are meant to provide. Techniques such as Retrieval Augmented Generation (RAG), which ensure LLMs respond accurately and utilize up-to-date information, require well-managed, secure access to sensitive data. If that data is stored in a self-managed environment, exfiltrating it outside of that environment may violate security or GRC policies.
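The core of the RAG pattern can be sketched in a few lines. The keyword-overlap retrieval below is a deliberately naive stand-in for a vector-store similarity search, and the runbook contents are invented for illustration; the point is that the retrieved data becomes part of the prompt, which is why it must never silently cross the self-managed boundary:

```python
def retrieve(query, documents, k=2):
    """Rank documents by keyword overlap with the query.

    A stand-in for a vector-store similarity search: a real RAG
    pipeline would query an embedding index, but either way the
    document store holds sensitive, environment-local data.
    """
    terms = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: -len(terms & set(d.lower().split())))
    return scored[:k]

def augment(query, documents):
    """Prepend retrieved context so the LLM answers from current data."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical runbook entries an agent might retrieve from
runbook = [
    "Restart the auth service if failed logins exceed 100 per minute.",
    "Rotate TLS certificates every 90 days.",
]
print(augment("failed logins spike", runbook))
```

Because the augmented prompt embeds the retrieved documents verbatim, sending it to an externally hosted model is equivalent to exporting that data out of the environment.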

How Kindo Secures Agentic Architecture

To run an agent that manages critical systems in infrastructure (both self-managed and not) you need to provide strong audit and security controls to protect how that agent utilizes sensitive data. 

Kindo provides a variety of enterprise security and privacy controls to govern how its agents secure and manage its users’ infrastructure. 

These include:

  • Enterprise Single Sign On (SSO) Support: Kindo extends existing SSO systems, ensuring organizations can utilize their existing Identity Providers (IDPs) to access Kindo and its models. 

  • Groups and Role-Based Access Control (RBAC): Kindo manages how individuals and systems within an enterprise can access GenAI models. Access can be managed at an individual level or (as is more common in enterprises) at a group level to preserve least privilege.

  • Audit Logging: Kindo maintains a robust JSON-formatted audit log of all activity hosted across its infrastructure between humans, agents, and model providers.

  • Data Loss Prevention (DLP) Filters: Administrators within Kindo can filter sensitive data contained within prompts instructing agents to perform workflows. This ensures that data such as sensitive IP, Personally Identifiable Information (PII), and anything else a Kindo administrator deems sensitive is tokenized out of instructions provided to an LLM.

  • Secrets Manager Support: Some data, such as credentials used in REST API calls, is too sensitive to ever be handled directly by users. Kindo can work with enterprise secrets managers - including cloud-based systems such as AWS KMS and dedicated enterprise secrets managers such as HashiCorp Vault - to store and retrieve this sacrosanct data.
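As a sketch of the tokenization idea behind DLP filtering (the patterns and token format here are illustrative and are not Kindo’s implementation), sensitive values are swapped for opaque tokens before a prompt leaves the trusted boundary, while the token-to-value mapping stays inside it:

```python
import re

# Illustrative patterns; a real DLP policy would cover far more data types
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def tokenize(prompt):
    """Replace sensitive values with opaque tokens before the prompt
    is sent to an LLM; the mapping never leaves the trusted boundary."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        def repl(match, label=label):
            token = f"<{label}_{len(mapping)}>"
            mapping[token] = match.group(0)
            return token
        prompt = pattern.sub(repl, prompt)
    return prompt, mapping

def detokenize(text, mapping):
    """Restore original values in the LLM's response, if needed."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

redacted, mapping = tokenize("Page alice@example.com about SSN 123-45-6789")
print(redacted)  # Page <EMAIL_0> about SSN <SSN_1>
```

The LLM only ever sees the tokens; detokenization, when permitted at all, happens back inside the environment that owns the mapping.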
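And as a sketch of secrets-manager integration, the function below reads a credential from HashiCorp Vault’s KV v2 HTTP API (assuming a KV v2 engine mounted at `secret/`). The Vault address and secret path are hypothetical, and the injectable `opener` parameter exists only so the function can be exercised without a live Vault:

```python
import json
import urllib.request

def read_vault_secret(vault_addr, token, path, opener=urllib.request.urlopen):
    """Read a secret from HashiCorp Vault's KV v2 HTTP API.

    Assumes a KV v2 engine mounted at secret/. The opener parameter is
    injectable so the function can be tested without a live Vault.
    """
    req = urllib.request.Request(
        f"{vault_addr}/v1/secret/data/{path}",
        headers={"X-Vault-Token": token},
    )
    with opener(req) as resp:
        payload = json.load(resp)
    # KV v2 responses nest the stored key/value pairs under data.data
    return payload["data"]["data"]
```

The agent framework fetches the credential just in time for a REST call and never surfaces it to users or embeds it in prompts.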

Operating Kindo’s features - including and especially its security features - requires infrastructure. 

We’ve historically managed that infrastructure and provided Kindo as a SaaS offering. But owing to the challenges of having SaaS infrastructure interact with agents inside of self-managed infrastructure, we’ve been working on enabling Kindo’s enterprise users to fully operate their own self-managed versions of Kindo within their self-managed infrastructure. 

We’re excited to reveal the result of that effort: Self-Managed Kindo. 

Introducing Self-Managed Kindo 


Above: Self-Managed Kindo is a new version of Kindo capable of wholly operating within self-managed environments: cloud, on-prem, hybrid, and more.

Self-Managed Kindo is a new enterprise offering of Kindo designed to operate within a user’s self-managed environment. 

With Self-Managed Kindo, Kindo administrators can fully control every aspect of Kindo’s operation and deploy it as a highly secure/resilient service operating wholly within their self-managed environment: on-prem, in the cloud, or even across hybrid cloud and multi-cloud infrastructures.

Self-Managed Kindo is deployed as a service on Kubernetes via a Helm chart. It’s designed to operate within cloud-based managed Kubernetes services such as Amazon EKS or Azure Kubernetes Service, as well as on multi-environment platforms such as Red Hat OpenShift and SUSE Rancher. 

Our goal is to eventually support Kindo even in austere, bare-metal Kubernetes environments that support Helm, targeting the deployment of agents (and the infrastructure to power them) in extreme security environments such as air-gapped and extreme edge infrastructure. 

Self-Managed Kindo will be able to operate in such extreme environments because it uses fully user-managed infrastructure. Resources such as the databases it uses for managing state, its secrets manager, and the vector stores it uses for Retrieval Augmented Generation (RAG) to ensure genAI models respond accurately with up-to-date information are all user-controlled and configurable. As long as a user has network-addressable access to these resources, Kindo can use them to power its agents. 
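In a Helm-based deployment, user-managed backing services like these are typically wired in through the chart’s values file. The sketch below is purely illustrative (these key names are hypothetical and are not Kindo’s actual chart schema); it shows only the shape of such wiring:

```yaml
# Hypothetical values.yaml sketch; key names are illustrative only
stateDatabase:
  host: postgres.internal.example        # user-managed database for state
  port: 5432
secretsManager:
  provider: vault                        # e.g., HashiCorp Vault
  address: https://vault.internal.example:8200
vectorStore:
  url: https://vectors.internal.example  # user-managed RAG vector store
```

Because every endpoint is user-supplied, the same chart can target cloud, on-prem, or air-gapped infrastructure without code changes.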

Self-Managed Kindo is currently in a closed-alpha stage. But we’re excited to launch it to the world soon, and if you’re at RSA Conference 2025 in San Francisco we’d love to show it to you! Sign up for a meeting and come join us just outside Moscone.

Unlock the Power of Agentic Security with Kindo

Request a personalized demo with our team.


Upgrade your workflow

Upgrade your workflow

© 2024 Usable Machines, Inc. (dba Kindo)
