This guide will help you install kguardian and generate your first security policy.

Prerequisites

Before you begin, ensure you have:
  • Kubernetes v1.19 or later
  • Linux nodes with kernel 6.2+ (for eBPF support)
  • kubectl configured and connected to your cluster
  • Admin access to install the controller (DaemonSet, RBAC, etc.)
  • Permission to create resources in your target namespaces
Kernel Version Check: kguardian requires Linux kernel 6.2+ for eBPF functionality. Run uname -r on your nodes to verify.
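You can also check kernel versions across the whole cluster without logging into each node, since the kubelet reports the kernel in node status:
# List each node's kernel version as reported by the kubelet
kubectl get nodes -o custom-columns=NAME:.metadata.name,KERNEL:.status.nodeInfo.kernelVersion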

Step 1: Install the Controller

The kguardian controller runs as a DaemonSet and uses eBPF to observe your workloads.
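How you install the controller depends on how kguardian is distributed; the commands below are a hedged sketch that assumes the project publishes a Helm chart, and the repository URL and chart name are placeholders, so check the project's install docs for the real values:
# Hypothetical Helm install; repo URL and chart name are placeholders
helm repo add kguardian https://example.com/kguardian/charts
helm repo update
helm install kguardian kguardian/kguardian \
  --namespace kguardian --create-namespace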

Verify Installation

Check that all components are running:
kubectl get pods -n kguardian

# Expected output:
# NAME                              READY   STATUS    RESTARTS   AGE
# kguardian-controller-xxxxx        1/1     Running   0          2m
# kguardian-broker-xxxxxxxxxx-xxxxx 1/1     Running   0          2m
# kguardian-ui-xxxxxxxxxx-xxxxx     1/1     Running   0          2m
All pods should show Running status. If not, see Troubleshooting.
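If you are scripting the install (for example in CI), you can block until the components are ready instead of polling manually:
# Wait up to two minutes for all kguardian pods to become Ready
kubectl wait --for=condition=Ready pod --all -n kguardian --timeout=120s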

Step 2: Install the CLI Plugin

The kguardian CLI is a kubectl plugin for generating policies.
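How the plugin is distributed is project-specific, but kubectl discovers any executable named kubectl-kguardian on your PATH, so a manual install could look like the sketch below (the download URL is a placeholder):
# Download the release binary (placeholder URL) and install it as kubectl-kguardian
curl -Lo kubectl-kguardian https://example.com/kguardian/releases/latest/kubectl-kguardian
chmod +x kubectl-kguardian
sudo mv kubectl-kguardian /usr/local/bin/

# Confirm kubectl can see the plugin
kubectl plugin list | grep kguardian
kubectl kguardian --help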

Step 3: Let Your Workloads Run

kguardian learns from actual runtime behavior, so let your applications run normally for 5-15 minutes to collect meaningful data.

Deploy Test Workload (Optional)

If you don’t have existing workloads, deploy a simple app:
kubectl create deployment nginx --image=nginx:latest
kubectl expose deployment nginx --port=80
kubectl run curl-pod --image=curlimages/curl --command -- sleep 3600

# Generate some traffic
kubectl exec curl-pod -- curl nginx
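To keep traffic flowing while kguardian observes, you can run the request in a loop for a few minutes; the count and interval below are arbitrary:
# Send a request every 10 seconds for roughly five minutes
for i in $(seq 1 30); do
  kubectl exec curl-pod -- curl -s nginx > /dev/null
  sleep 10
done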

Monitor Data Collection

Check that the broker is receiving data:
# Port-forward to the broker
kubectl port-forward -n kguardian svc/kguardian-broker 9090:9090 &

# Check for traffic data (replace 'nginx' with your pod name)
curl http://localhost:9090/pod/traffic/name/default/nginx
The longer you let workloads run, the more comprehensive your policies will be.

Step 4: Generate Your First Network Policy

Now generate a network policy based on observed traffic:
# Generate policy for a specific pod
kubectl kguardian gen networkpolicy nginx -n default \
  --output-dir ./policies

# View the generated policy
cat ./policies/default-nginx-networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nginx
  namespace: default
  labels:
    kguardian.dev/managed-by: kguardian
    kguardian.dev/version: v1.0.0
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              run: curl-pod
          namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: default
      ports:
        - protocol: TCP
          port: 80
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
Success! kguardian automatically discovered that your nginx pod receives traffic from the curl-pod on port 80 and makes DNS queries.

Apply the Policy

Review the generated policy and apply it:
# Review first (always recommended)
kubectl apply --dry-run=client -f ./policies/default-nginx-networkpolicy.yaml

# Apply to cluster
kubectl apply -f ./policies/default-nginx-networkpolicy.yaml

# Verify it's created
kubectl get networkpolicies -n default
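To confirm the policy is actually enforced (this requires a CNI that implements NetworkPolicy, such as Calico or Cilium), check that curl-pod still reaches nginx while a pod that does not match the ingress rule times out; blocked-pod below is just a throwaway name:
# Allowed: curl-pod matches the generated ingress rule
kubectl exec curl-pod -- curl -s -m 5 nginx

# Blocked: a non-matching pod should time out after 5 seconds
kubectl run blocked-pod --rm -it --restart=Never --image=curlimages/curl -- curl -s -m 5 nginx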

Step 5: Generate a Seccomp Profile

Generate a seccomp profile to restrict syscalls:
# Generate seccomp profile for nginx
kubectl kguardian gen seccomp nginx -n default \
  --output-dir ./seccomp

# View the generated profile
cat ./seccomp/default-nginx-seccomp.json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "open", "close", "stat"],
      "action": "SCMP_ACT_ALLOW"
    },
    {
      "names": ["socket", "connect", "accept", "bind", "listen"],
      "action": "SCMP_ACT_ALLOW"
    },
    {
      "names": ["mmap", "munmap", "brk", "mprotect"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}

Apply the Seccomp Profile

To use the profile, you need to:
  1. Copy to nodes (for local profiles); this copies into one controller pod, see the loop sketch after this list to cover every node:
    kubectl cp ./seccomp/default-nginx-seccomp.json \
      kguardian-controller-xxxxx:/var/lib/kubelet/seccomp/nginx-profile.json \
      -n kguardian
    
  2. Update your deployment to reference it:
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
    spec:
      securityContext:
        seccompProfile:
          type: Localhost
          localhostProfile: nginx-profile.json
      containers:
      - name: nginx
        image: nginx:latest
    
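Because the controller runs as a DaemonSet (one pod per node) and Localhost seccomp profiles must be present on every node where the workload can schedule, you will usually need to copy the profile to each node. A minimal sketch, assuming the controller pods carry the label app=kguardian-controller and mount the host's /var/lib/kubelet/seccomp directory (both are assumptions, adjust to your install):
# Copy the profile into every controller pod (one per node); the label selector is an assumption
for pod in $(kubectl get pods -n kguardian -l app=kguardian-controller -o name); do
  kubectl cp ./seccomp/default-nginx-seccomp.json \
    "${pod#pod/}:/var/lib/kubelet/seccomp/nginx-profile.json" -n kguardian
done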
Future versions will support automatic seccomp profile management via the Security Profiles Operator.

Next Steps

  • Learn more about kguardian's architecture and how the components work together.

Common Issues

No traffic data is collected
Solution: Ensure your pods have been running and generating traffic for at least 5 minutes. Check broker logs:
kubectl logs -n kguardian deployment/kguardian-broker

Controller pods are not running
Solution: Verify the kernel version (6.2+) and that the nodes support eBPF:
kubectl describe pod -n kguardian kguardian-controller-xxxxx

The CLI cannot reach the broker
Solution: The CLI auto-discovers the broker via port-forwarding. Ensure you have permission to port-forward:
kubectl auth can-i create pods/portforward -n kguardian
For more troubleshooting, see the Troubleshooting Guide.
