
This guide will help you install kguardian and generate your first security policy.

Prerequisites

Before you begin, ensure you have:
  • Kubernetes v1.19 or later
  • Linux nodes with kernel 6.2+ (for eBPF support)
  • kubectl configured and connected to your cluster
  • Admin access to install the controller (DaemonSet, RBAC, etc.)
  • Permission to create resources in your target namespaces
The kguardian controller needs the privileged Pod Security Admission level because it loads eBPF programs. If your cluster enforces Pod Security Admission, label the install namespace before running helm install:
kubectl create namespace kguardian
kubectl label namespace kguardian pod-security.kubernetes.io/enforce=privileged --overwrite
If you skip this on a PSA-enforced cluster, the controller pods will be rejected at admission with a violates PodSecurity "restricted:..." error.
Kernel Version Check: kguardian requires Linux kernel 6.2+ for eBPF functionality. Run uname -r on your nodes to verify.
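
To check every node at once, you can also read the kernel version from the node status rather than running uname -r on each host:
kubectl get nodes -o custom-columns='NODE:.metadata.name,KERNEL:.status.nodeInfo.kernelVersion'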

Step 1: Install the Controller

The Controller runs as a DaemonSet and uses eBPF to observe your workloads.
helm install kguardian oci://ghcr.io/kguardian-dev/charts/kguardian \
  --version 1.9.1 \
  --namespace kguardian \
  --create-namespace \
  --wait
For specific versions, custom Helm values, Kind-based local development, and external PostgreSQL setups, see the Installation Guide — that page is the canonical install reference.

Verify Installation

Check that all components are running:
kubectl get pods -n kguardian

# Expected output:
# NAME                              READY   STATUS    RESTARTS   AGE
# kguardian-controller-xxxxx        1/1     Running   0          2m
# kguardian-broker-xxxxxxxxxx-xxxxx 1/1     Running   0          2m
# kguardian-ui-xxxxxxxxxx-xxxxx     1/1     Running   0          2m
All pods should show Running status. If not, see Troubleshooting.
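
If you prefer to block until everything is Ready (for example in CI), kubectl wait does the polling for you:
kubectl wait --for=condition=Ready pods --all -n kguardian --timeout=120s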

Step 2: Install the CLI Plugin

The kguardian CLI is a kubectl plugin for generating policies.
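
A minimal install sketch, assuming the plugin is distributed via Krew or as a standalone release binary (the plugin name and release artifact here are assumptions; the Installation Guide is the canonical reference):
# Option A: via Krew, if a kguardian plugin manifest is published
kubectl krew install kguardian

# Option B: download the release binary, rename it kubectl-kguardian, and put it on your PATH
chmod +x kubectl-kguardian && sudo mv kubectl-kguardian /usr/local/bin/

# Verify kubectl can discover the plugin
kubectl plugin list | grep kguardian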

Step 3: Let Your Workloads Run

kguardian learns from actual runtime behavior, so let your applications run normally for 5-15 minutes to collect meaningful data.

1. Deploy Test Workload (Optional)

If you don’t have existing workloads, deploy a small test app:
kubectl create deployment nginx --image=nginx:latest
kubectl expose deployment nginx --port=80
kubectl run curl-pod --image=curlimages/curl --command -- sleep 3600

# Generate some traffic
kubectl exec curl-pod -- curl nginx
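
A single request is enough to register a flow, but a short loop gives the controller more to observe (a sketch; tune the count and interval to your 5-15 minute window):
# Send a request every 10 seconds for roughly 5 minutes
kubectl exec curl-pod -- sh -c 'for i in $(seq 1 30); do curl -s nginx > /dev/null; sleep 10; done'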

2. Monitor Data Collection

Check that the broker is receiving data:
# Port-forward to the broker
kubectl port-forward -n kguardian svc/kguardian-broker 9090:9090 &

# Check for traffic data (replace 'nginx' with your pod name)
curl http://localhost:9090/pod/traffic/name/default/nginx
The longer you let workloads run, the more traffic and syscall variety the controller will capture, and the closer the generated policy will be to your real runtime profile.

Step 4: Generate Your First Network Policy

Now generate a network policy based on observed traffic. --dry-run=true is the default — the CLI writes YAML to --output-dir and does not apply anything to the cluster:
# Generate policy for a specific pod (dry-run is on by default)
kubectl kguardian gen networkpolicy nginx -n default \
  --output-dir ./policies

# View the generated policy
cat ./policies/default-nginx-networkpolicy.yaml
To apply directly during generation instead of writing files for review, pass --dry-run=false. See gen networkpolicy for the full flag reference. The generated policy will look something like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nginx
  namespace: default
  labels:
    kguardian.dev/managed-by: kguardian
    kguardian.dev/version: v1.0.0
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              run: curl-pod
          namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: default
      ports:
        - protocol: TCP
          port: 80
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
Success! kguardian automatically discovered that your nginx pod receives traffic from the curl-pod on port 80 and makes DNS queries.

Apply the Policy

The default --dry-run=true only writes the YAML file. To put the policy on the cluster, either re-run the CLI with --dry-run=false, or kubectl apply the saved file:
# Option A: let the CLI apply directly
kubectl kguardian gen networkpolicy nginx -n default \
  --output-dir ./policies --dry-run=false

# Option B: apply the saved YAML yourself
kubectl apply --dry-run=client -f ./policies/default-nginx-networkpolicy.yaml  # validate
kubectl apply -f ./policies/default-nginx-networkpolicy.yaml                   # apply

# Verify it's created
kubectl get networkpolicies -n default
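
To confirm the policy actually enforces what was generated, test one allowed and one denied path. This assumes your CNI enforces NetworkPolicy; plugins without enforcement accept the object but silently ignore it:
# Allowed: curl-pod matches the ingress rule, so this should succeed
kubectl exec curl-pod -- curl -s -m 5 nginx

# Denied: a pod with different labels should time out (curl exits with code 28)
kubectl run deny-test --rm -it --restart=Never --image=curlimages/curl -- curl -s -m 5 nginx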

Step 5: Generate a Seccomp Profile

Generate a seccomp profile to restrict syscalls:
# Generate seccomp profile for nginx
kubectl kguardian gen seccomp nginx -n default \
  --output-dir ./seccomp

# View the generated profile
cat ./seccomp/default-nginx-seccomp.json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "open", "close", "stat"],
      "action": "SCMP_ACT_ALLOW"
    },
    {
      "names": ["socket", "connect", "accept", "bind", "listen"],
      "action": "SCMP_ACT_ALLOW"
    },
    {
      "names": ["mmap", "munmap", "brk", "mprotect"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}

Apply the Seccomp Profile

kguardian generates the profile JSON. Distributing the profile to each node’s /var/lib/kubelet/seccomp/ directory is the user’s responsibility — kguardian does not push profiles to nodes today. Recommended distribution options:
  • Security Profiles Operator (SPO) — Wrap the generated JSON in a SeccompProfile CRD; SPO’s DaemonSet writes it to the kubelet seccomp directory on every node. See the SPO docs for the CRD schema and DaemonSet behavior, and the sketch after this list.
  • A custom hostPath DaemonSet — Mount /var/lib/kubelet/seccomp/ and copy the profile in. Suitable if you do not want a separate operator.
  • Baked-in profiles — Ship the profile in a ConfigMap or container image through your existing deployment pipeline.
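
For the SPO route, a minimal sketch of wrapping the generated JSON in a SeccompProfile resource (field names follow SPO's v1beta1 API; verify the schema against the SPO version you run):
apiVersion: security-profiles-operator.x-k8s.io/v1beta1
kind: SeccompProfile
metadata:
  name: nginx-profile
  namespace: default
spec:
  defaultAction: SCMP_ACT_ERRNO
  architectures:
    - SCMP_ARCH_X86_64
  syscalls:
    - action: SCMP_ACT_ALLOW
      names:
        - read
        - write
        - open
        - close
        - stat
Note that SPO writes profiles into an operator-managed subdirectory of the kubelet seccomp path, so the localhostProfile value in the pod spec below will differ from the hand-copied layout; see the SPO docs for the exact path.
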
Once the profile file lives at /var/lib/kubelet/seccomp/nginx-profile.json on every node that may run the pod, reference it from your workload:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: nginx-profile.json
  containers:
  - name: nginx
    image: nginx:latest
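
Once the pod is running with the profile attached, you can confirm that seccomp filtering is active (Seccomp: 2 in /proc/1/status means filter mode):
kubectl exec nginx -- grep Seccomp /proc/1/status
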
Automated seccomp profile distribution from kguardian is on the roadmap. Until then, pair kguardian with SPO or a DaemonSet of your own.

Next Steps

  • Architecture — See how the Controller, Broker, and UI fit together
  • Generate Cilium Policies — Create enhanced L7-aware policies
  • Batch Generation — Generate policies for all pods at once
  • Policy Gallery — Worked examples for nginx, Postgres, kube-dns, Prometheus, Istio, Go

Common Issues

Pod has no traffic data

Solution: Ensure your pods have been running and generating traffic for at least 5 minutes. Check the broker logs:
kubectl logs -n kguardian deployment/kguardian-broker

Controller pods fail to start

Solution: Verify the kernel version (6.2+) and that the nodes support eBPF:
kubectl describe pod -n kguardian kguardian-controller-xxxxx

CLI cannot reach the broker

Solution: The CLI auto-discovers the broker via port-forwarding. Ensure you have permission to create port-forwards:
kubectl auth can-i create pods/portforward -n kguardian
For more troubleshooting, see the Troubleshooting Guide.
