
Kubeaudit


What is Kubeaudit?

Kubeaudit finds security misconfigurations in your Kubernetes resources and gives tips on how to resolve them.

Kubeaudit comes with a large list of "auditors", each of which tests a specific aspect of a resource, such as the SecurityContext of pods. You can find the complete list of auditors here.

To learn more about kubeaudit itself, visit the kubeaudit GitHub repository.
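
For example, several auditors inspect the SecurityContext of workloads. As a minimal, illustrative sketch (the pod name and image are hypothetical, not part of this chart), a pod spec like the following would typically produce kubeaudit findings:

apiVersion: v1
kind: Pod
metadata:
  name: insecure-example   # hypothetical name
spec:
  containers:
    - name: app
      image: nginx         # illustrative image
      securityContext:
        runAsNonRoot: false              # flagged: container may run as root
        allowPrivilegeEscalation: true   # flagged: privilege escalation allowed
        readOnlyRootFilesystem: false    # flagged: root filesystem is writable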

Deployment

The kubeaudit chart can be deployed via helm:

# Install HelmChart (use -n to configure another namespace)
helm upgrade --install kubeaudit secureCodeBox/kubeaudit

Scanner Configuration

The scanner runs kubeaudit against the resources in your cluster. Command-line options for kubeaudit are passed via the parameters list of the Scan; please take a look at the kubeaudit documentation for the complete list of options.

  • To limit the audit to a single namespace, pass the -n option, e.g. -n default (see the sketch below and the full scan.yaml under Examples).
  • To audit resources across all namespaces, install the chart with kubeauditScope set to "cluster" so that kubeaudit gets a ClusterRole instead of a namespaced Role (see the Values section below).
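
As a sketch, a Scan that limits kubeaudit to a single namespace could look like this (the scan name and target namespace are illustrative; the complete example from this page appears under Examples):

apiVersion: "execution.securecodebox.io/v1"
kind: Scan
metadata:
  name: "kubeaudit-example"   # hypothetical name
spec:
  scanType: "kubeaudit"
  parameters:
    - "-n"             # audit only the given namespace
    - "my-namespace"   # illustrative target namespace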

Requirements

Kubernetes: >=v1.11.0-0

Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| cascadingRules.enabled | bool | `false` | Enables or disables the installation of the default cascading rules for this scanner |
| imagePullSecrets | list | `[]` | Define imagePullSecrets when a private registry is used (see: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) |
| kubeauditScope | string | `"namespace"` | Automatically sets up RBAC roles for kubeaudit to access the resources it scans. Can be either `"cluster"` (ClusterRole) or `"namespace"` (Role) |
| parser.affinity | object | `{}` | Optional affinity settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) |
| parser.env | list | `[]` | Optional environment variables mapped into each parseJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) |
| parser.image.pullPolicy | string | `"IfNotPresent"` | Image pull policy. One of `Always`, `Never`, `IfNotPresent`. Defaults to `Always` if the `:latest` tag is specified, or `IfNotPresent` otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images |
| parser.image.repository | string | `"docker.io/securecodebox/parser-kubeaudit"` | Parser image repository |
| parser.image.tag | string | defaults to the chart's version | Parser image tag |
| parser.nodeSelector | object | `{}` | Optional nodeSelector settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) |
| parser.resources | object | `{ requests: { cpu: "200m", memory: "100Mi" }, limits: { cpu: "400m", memory: "200Mi" } }` | Optional resource limits and requests for the parser container (see: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) |
| parser.scopeLimiterAliases | object | `{}` | Optional finding aliases to be used in the scopeLimiter |
| parser.tolerations | list | `[]` | Optional tolerations settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) |
| parser.ttlSecondsAfterFinished | string | `nil` | Seconds after which the Kubernetes job for the parser will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ |
| scanner.activeDeadlineSeconds | string | `nil` | There are situations where you want to fail a scan job after some amount of time. To do so, set activeDeadlineSeconds to define an active deadline (in seconds) after which the scan job is considered failed (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup) |
| scanner.affinity | object | `{}` | Optional affinity settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) |
| scanner.backoffLimit | int | `3` | There are situations where you want to fail a scan job after some number of retries, e.g. due to a logical error in the configuration. To do so, set backoffLimit to the number of retries before considering the scan job as failed (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy) |
| scanner.env | list | `[]` | Optional environment variables mapped into each scanJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) |
| scanner.extraContainers | list | `[]` | Optional additional containers started with each scanJob (see: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) |
| scanner.extraVolumeMounts | list | `[]` | Optional VolumeMounts mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) |
| scanner.extraVolumes | list | `[]` | Optional Volumes mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) |
| scanner.image.pullPolicy | string | `"IfNotPresent"` | Image pull policy. One of `Always`, `Never`, `IfNotPresent`. Defaults to `Always` if the `:latest` tag is specified, or `IfNotPresent` otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images |
| scanner.image.repository | string | `"docker.io/securecodebox/scanner-kubeaudit"` | Container image to run the scan |
| scanner.image.tag | string | `nil` | Defaults to the chart's appVersion |
| scanner.nameAppend | string | `nil` | Append a string to the default scantype name |
| scanner.nodeSelector | object | `{}` | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) |
| scanner.podSecurityContext | object | `{}` | Optional securityContext set on the scanner pod (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) |
| scanner.resources | object | `{}` | CPU/memory resource requests/limits (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/, https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/) |
| scanner.securityContext | object | `{"allowPrivilegeEscalation":false,"capabilities":{"drop":["all"]},"privileged":false,"readOnlyRootFilesystem":true,"runAsNonRoot":true}` | Optional securityContext set on the scanner container (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) |
| scanner.securityContext.allowPrivilegeEscalation | bool | `false` | Ensures that user privileges cannot be escalated |
| scanner.securityContext.capabilities.drop[0] | string | `"all"` | Drops all Linux privileges from the container |
| scanner.securityContext.privileged | bool | `false` | Ensures that the scanner container is not run in privileged mode |
| scanner.securityContext.readOnlyRootFilesystem | bool | `true` | Prevents write access to the container's file system |
| scanner.securityContext.runAsNonRoot | bool | `true` | Enforces that the scanner image is run as a non-root user |
| scanner.suspend | bool | `false` | If set to true, the scan job will be suspended after creation. You can then resume the job using kubectl resume <jobname> or a job scheduler like kueue |
| scanner.tolerations | list | `[]` | Optional tolerations settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) |
| scanner.ttlSecondsAfterFinished | string | `nil` | Seconds after which the Kubernetes job for the scanner will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ |
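
As a sketch of how these values are overridden, you could put a few of them into a custom values file and pass it to helm with -f (the file name, the chosen values, and the environment variable are illustrative, not chart defaults):

# my-values.yaml — illustrative overrides
kubeauditScope: "cluster"          # ClusterRole instead of a namespaced Role
scanner:
  backoffLimit: 1                  # fail the scan job after a single retry
  ttlSecondsAfterFinished: "300"   # delete the finished scanner job after 5 minutes
parser:
  env:
    - name: EXAMPLE_VAR            # hypothetical variable
      value: "example"

Install the chart with helm upgrade --install kubeaudit secureCodeBox/kubeaudit -f my-values.yaml to apply the overrides.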

License

Code of secureCodeBox is licensed under the Apache License 2.0.

CPU architectures

The scanner is currently supported on these CPU architectures:

  • linux/amd64

Examples

juice-shop

In this example we execute a kubeaudit scan against the intentionally vulnerable juice-shop.

Initialize juice-shop in cluster

Before executing the scan, make sure to set up juice-shop:

helm upgrade --install juice-shop secureCodeBox/juice-shop --wait

After that you can execute the scan in this directory:

kubectl apply -f scan.yaml

Troubleshooting:

Make sure to install juice-shop in the same namespace as the scanner! If your juice-shop runs in, e.g., the kubeaudit-tests namespace, install the chart and run the scan there too:

# Install HelmChart in kubeaudit-tests namespace
helm upgrade --install kubeaudit secureCodeBox/kubeaudit -n kubeaudit-tests
# Run scan in kubeaudit-tests namespace
kubectl apply -f scan.yaml -n kubeaudit-tests

Also, you must adjust the namespace passed to kubeaudit in scan.yaml via its -n parameter.
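
For the kubeaudit-tests example, only the namespace value in the parameters list of scan.yaml changes (a sketch; compare the full scan.yaml below):

spec:
  scanType: "kubeaudit"
  parameters:
    - "-n"
    - "kubeaudit-tests"   # the namespace the scanner and juice-shop run in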

Alternatively, you can set the scope of kubeaudit to cluster:

helm upgrade --install kubeaudit secureCodeBox/kubeaudit -n kubeaudit-tests --set="kubeauditScope=cluster"
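
With kubeauditScope set to cluster, the chart creates a ClusterRole, so the scanner is no longer limited to its own namespace. A sketch of a cluster-scoped scan (the scan name is hypothetical, and the assumption that omitting -n lets kubeaudit audit all namespaces it can access is ours, not stated by this chart):

apiVersion: "execution.securecodebox.io/v1"
kind: Scan
metadata:
  name: "kubeaudit-cluster-wide"   # hypothetical name
spec:
  scanType: "kubeaudit"
  parameters: []   # assumption: no "-n" means all accessible namespaces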
The scan.yaml used in this example:

# SPDX-FileCopyrightText: the secureCodeBox authors
#
# SPDX-License-Identifier: Apache-2.0

apiVersion: "execution.securecodebox.io/v1"
kind: Scan
metadata:
  name: "kubeaudit-juiceshop"
spec:
  scanType: "kubeaudit"
  parameters:
    - "-n"
    - "default"