Ping Identity DevOps

Deploy a local Kubernetes cluster

If you do not have access to a managed Kubernetes cluster, you can deploy one on your local machine or a virtual machine (VM). This document describes deploying a cluster with either kind or minikube. Refer to the documentation of each product for additional information.

The instructions in this document are for testing and learning, and not intended for use in production.
The processes outlined on this page will create either a Kubernetes in Docker (kind) or a minikube cluster. In both cases, the cluster you get is very similar in functionality to the Docker Desktop implementation of Kubernetes. However, a distinct advantage of both offerings is portability (not requiring Docker Desktop). As with the Deploy an Example Stack procedure, the files provided will enable and deploy an ingress controller for communicating with the services in the cluster from your local environment.
To use either of the examples below, you will need to ensure the Kubernetes feature of Docker Desktop is turned off, as it will conflict with the local cluster.
This note applies only if using Docker as the backing runtime for either solution. kind uses Docker by default, and it is also an option for minikube. Docker on Linux is typically installed with root privileges and thus has access to the full resources of the machine. Docker Desktop for Mac and Windows provides a way to set the resources allocated to Docker. For this documentation, a MacBook Pro with the Apple silicon chipset was configured to use 6 CPUs and 12 GB of memory. You can adjust these values as necessary for your needs.
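
If you are using Docker Desktop, you can confirm the CPU and memory available to the Docker engine with a quick docker info check. The values reported reflect whatever you have configured in Docker Desktop:

docker info --format 'CPUs: {{.NCPU}}, Memory: {{.MemTotal}} bytes'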

Kind cluster

This section will cover the kind installation process. See the Minikube cluster section for minikube instructions.

Prerequisites

  • docker

  • kubectl

  • ports 80 and 443 available on the machine (optional, but recommended for standard URLs)
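
To check whether anything on your machine is already listening on ports 80 and 443, you can use lsof on macOS or Linux. No output means the ports are free:

sudo lsof -nP -iTCP:80 -iTCP:443 -sTCP:LISTEN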

For this guide, Kubernetes 1.35.0 is used. It is deployed using version 0.31.0 of kind.
At the time of the writing of this guide, Docker Desktop was version 4.46.0 (204649), running Docker Engine 28.4.0.
We install Traefik as the ingress controller to align with the Docker Desktop example.

Install and confirm the cluster

  1. Install kind on your platform.

  2. Use the provided sample kind.yaml file to create a kind cluster named ping. The config maps host ports 80/443 into the kind control-plane node and forwards them to Traefik NodePorts (30080/30443) so Traefik can serve standard URLs (a sketch of this configuration appears after these steps). From the root of your copy of the repository code, run the wrapper script:

./20-kubernetes/create-kind-cluster.sh
If the cluster already exists, the script deletes and recreates it.

Output:

Creating cluster "ping" ...
 ✓ Ensuring node image (kindest/node:v1.35.0) đŸ–ŧ
 ✓ Preparing nodes đŸ“Ļ
 ✓ Writing configuration 📜
 ✓ Starting control-plane đŸ•šī¸
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-ping"
You can now use your cluster with:

kubectl cluster-info --context kind-ping

Have a nice day! 👋
  3. Test cluster health by running the following commands:

    kubectl cluster-info
    
    # Output - port will vary
    Kubernetes control plane is running at https://127.0.0.1:64129
    CoreDNS is running at https://127.0.0.1:64129/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    
    ------------------
    
    kubectl version
    
    < output clipped >
    Server Version: v1.35.0
    
    ------------------
    
    kubectl get nodes
    
    NAME                 STATUS   ROLES           AGE     VERSION
    ping-control-plane   Ready    control-plane   55m     v1.35.0
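
For reference, the host-port mapping described in step 2 corresponds to a kind configuration roughly like the sketch below. This is illustrative only, using a hypothetical temporary file name; the kind.yaml file in the repository is the authoritative version, and the wrapper script above already applies it for you:

# Sketch only: write a minimal kind config to a temporary file and
# create the cluster from it by hand.
cat <<'EOF' > /tmp/kind-ping-example.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30080   # forwarded to the Traefik HTTP NodePort
        hostPort: 80
        protocol: TCP
      - containerPort: 30443   # forwarded to the Traefik HTTPS NodePort
        hostPort: 443
        protocol: TCP
EOF
kind create cluster --name ping --config /tmp/kind-ping-example.yaml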

Enable ingress with Traefik

  1. Install Traefik to handle Ingress resources:

    helm repo add traefik https://helm.traefik.io/traefik
    helm repo update
    helm upgrade --install traefik traefik/traefik \
      --namespace traefik --create-namespace \
      -f ./30-helm/ingress-traefik-values-kind.yaml

    The values file used to configure Traefik is set to skip backend TLS verification so the Ping product consoles with self-signed certificates work via Traefik. Do not use this setting in production.

  2. Wait for Traefik to be ready:

    kubectl get pods --namespace traefik
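
Rather than polling kubectl get pods, you can block until the Traefik pod reports ready. This convenience sketch assumes the chart's default app.kubernetes.io/name=traefik label:

kubectl wait --namespace traefik \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/name=traefik \
  --timeout=120s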

Our examples will use the Helm release name myping and DNS domain suffix pingdemo.example for accessing applications. After the Ingress resources exist, add all expected hosts to /etc/hosts. When using host ports 80/443 (recommended), map the hosts to 127.0.0.1:

echo "127.0.0.1 myping-pingaccess-admin.pingdemo.example myping-pingaccess-engine.pingdemo.example myping-pingauthorize.pingdemo.example myping-pingauthorizepap.pingdemo.example myping-pingdataconsole.pingdemo.example myping-pingdelegator.pingdemo.example myping-pingdirectory.pingdemo.example myping-pingfederate-admin.pingdemo.example myping-pingfederate-engine.pingdemo.example myping-pingcentral.pingdemo.example" | sudo tee -a /etc/hosts > /dev/null
If ports 80/443 are not available, remove the extraPortMappings from kind.yaml and use the Traefik NodePorts (30080/30443) directly.
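
If you take the NodePort route, you can confirm which ports the Traefik Service exposes. The service name matches the Helm release name traefik used above:

kubectl get service traefik --namespace traefik
# Check the PORT(S) column, for example 80:30080/TCP,443:30443/TCP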

Setup is complete. This local Kubernetes environment should be ready to deploy our Helm examples.

Deploy the Example Stack

  1. Create a namespace for running the stack in your Kubernetes cluster:

    # Create the namespace
    kubectl create ns pinghelm
    
    # Set the kubectl context to the namespace
    kubectl config set-context --current --namespace=pinghelm
    
    # Confirm
    kubectl config view --minify | grep namespace:
  2. Create a secret in the namespace you will be using to run the example (pinghelm). The secret carries the Ping DevOps credentials and EULA acceptance from your environment (the variables configured with the pingctl utility) and is used to obtain an evaluation license:

    kubectl create secret generic devops-secret \
      --from-literal=PING_IDENTITY_DEVOPS_USER="$PING_IDENTITY_DEVOPS_USER" \
      --from-literal=PING_IDENTITY_DEVOPS_KEY="$PING_IDENTITY_DEVOPS_KEY" \
      --from-literal=PING_IDENTITY_ACCEPT_EULA="$PING_IDENTITY_ACCEPT_EULA" \
      --type=Opaque \
      --dry-run=client -o yaml | kubectl apply -f -
  3. To install the chart, go to your local "${PING_IDENTITY_DEVOPS_HOME}"/pingidentity-devops-getting-started/30-helm directory and run the command shown here. In this example, the release (deployment into Kubernetes by Helm) is called myping, forming the prefix for all objects created. The ingress-demo-kind.yaml file configures the ingresses to use the Traefik ingress class:

    helm upgrade --install myping pingidentity/ping-devops -f everything.yaml -f ingress-demo-kind.yaml
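
After the release is installed, you can watch the pods come up in the pinghelm namespace before moving on. Press Ctrl+C to stop watching:

kubectl get pods --namespace pinghelm --watch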

At this point, the flow is the same as in the Getting Started Example after the products are deployed using Helm. The URLs will be prefixed with myping rather than demo.

Stop the cluster

When you are finished, you can remove the cluster completely by running the following command. You will need to recreate the cluster to use kind again.

kind delete cluster --name ping

Minikube cluster

In this section, a minikube installation with ingress is created. Minikube is generally simpler to configure than kind, but it requires one additional step: a tunnel to the cluster that must be started and kept running. For this guide, the Docker driver is used. As with kind above, Kubernetes in Docker Desktop must be disabled.

Prerequisites

At the time of the writing of this guide, minikube was version 1.38.0, which installs Kubernetes version 1.35.0.
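
You can confirm the versions installed on your machine before proceeding:

minikube version
kubectl version --client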

Install and configure minikube

  1. Install minikube for your platform. See the product Get Started! page for details.

  2. Configure the minikube resources and virtualization driver. For example, the following options were used on an Apple Macbook Pro with Docker as the backing platform:

    minikube config set cpus 6
    minikube config set driver docker
    minikube config set memory 12g
    See the documentation for more details on configuring minikube.
  3. Start the cluster. Optionally, you can include a profile flag (--profile <name>); naming the cluster enables you to run multiple minikube clusters simultaneously. If you use a profile name, you will need to include it on other minikube commands (see the example after these steps).

    minikube start --kubernetes-version=v1.35.0

    Output:

😄  minikube v1.38.0 on Darwin 26.2 (arm64)
✨  Using the docker driver based on user configuration
❗  Starting v1.39.0, minikube will default to "containerd" container runtime. See #21973 for more info.
📌  Using Docker Desktop driver with root privileges
👍  Starting "minikube" primary control-plane node in "minikube" cluster
🚜  Pulling base image v0.0.49 ...
đŸ”Ĩ  Creating docker container (CPUs=6, Memory=12288MB) ...
đŸŗ  Preparing Kubernetes v1.35.0 on Docker 29.2.0 ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    â–Ē Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
  4. Test cluster health by running the following commands:

    kubectl cluster-info
    
    # Output - Port will vary
    Kubernetes control plane is running at https://127.0.0.1:51042
    CoreDNS is running at https://127.0.0.1:51042/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    
    ------------------
    
    kubectl version
    
    < output clipped >
    Server Version: v1.35.0
    
    ------------------
    
    kubectl get nodes
    
    NAME       STATUS   ROLES           AGE    VERSION
    minikube   Ready    control-plane   6m2s   v1.35.0
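
As noted in step 3, if you start the cluster with a profile name, every subsequent minikube command must reference the same profile. A sketch using a hypothetical profile named ping:

# Create a named cluster
minikube start --kubernetes-version=v1.35.0 --profile ping

# Subsequent commands must reference the same profile
minikube status --profile ping
minikube tunnel --profile ping
minikube stop --profile ping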

Enable ingress with Traefik

  1. Install Traefik to handle Ingress resources (if you have not already added the Traefik Helm repository, add it as shown in the kind section):

    helm upgrade --install traefik traefik/traefik \
        --namespace traefik --create-namespace \
        -f 30-helm/ingress-traefik-values.yaml
    
    Release "traefik" does not exist. Installing it now.
    NAME: traefik
    LAST DEPLOYED: Thu Feb 12 11:24:51 2026
    NAMESPACE: traefik
    STATUS: deployed
    REVISION: 1
    DESCRIPTION: Install complete
    TEST SUITE: None
    NOTES:
    traefik with docker.io/traefik:v3.6.7 has been deployed successfully on traefik namespace!
  2. Confirm ingress is operational:

    kubectl get po -n traefik
    
    NAME                       READY   STATUS    RESTARTS   AGE
    traefik-79b96fcb9b-qp99r   1/1     Running   0          2m31s
  3. Start a tunnel. This command will tie up the terminal:

    minikube tunnel
    
    ✅  Tunnel successfully started
    
    📌  NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...
    
    ❗  The service/ingress traefik requires privileged ports to be exposed: [80 443]
    🔑  sudo permission will be asked for it.
    🔗  Starting tunnel for service traefik.

Our examples will use the Helm release name myping and DNS domain suffix pingdemo.example for accessing applications. You can add all expected hosts to /etc/hosts:

echo '127.0.0.1 myping-pingaccess-admin.pingdemo.example myping-pingaccess-engine.pingdemo.example myping-pingauthorize.pingdemo.example myping-pingauthorizepap.pingdemo.example myping-pingdataconsole.pingdemo.example myping-pingdelegator.pingdemo.example myping-pingdirectory.pingdemo.example myping-pingfederate-admin.pingdemo.example myping-pingfederate-engine.pingdemo.example myping-pingcentral.pingdemo.example' | sudo tee -a /etc/hosts > /dev/null
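
With the tunnel running and the hosts entries in place, a quick request to one of the hostnames confirms that traffic reaches Traefik. Until the example stack is deployed, Traefik has no matching route, so even a 404 status indicates the path from your machine into the cluster works:

curl -sk -o /dev/null -w '%{http_code}\n' https://myping-pingdataconsole.pingdemo.example/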

Setup is complete.

Deploy the Example Stack

This local Kubernetes environment should be ready to deploy the stack as you did above for kind. The only difference is to use the other ingress-demo values file for minikube:

helm upgrade --install myping pingidentity/ping-devops -f everything.yaml -f ingress-demo.yaml

Optional features

Dashboard

Minikube provides add-ons that enhance your experience when working with your cluster. One such add-on is the Dashboard. Enabling the metrics-server add-on first allows the Dashboard to display metrics:

minikube addons enable metrics-server
minikube dashboard

Multiple nodes

If you have enough system resources, you can create a multi-node cluster.

For example, to start a 3-node cluster:

minikube start --nodes 3

Keep in mind that each node will receive the RAM/CPU/Disk configured for minikube. Using the example configuration provided above, a 3-node cluster would need 36GB of RAM and 18 CPUs.

Stop the cluster

When you are finished, you can stop the cluster by running the following command. Stopping retains the configuration and state of the cluster (namespaces, deployments, and so on), which will be restored when you start the cluster again.

minikube stop

You can also pause and unpause the cluster:

minikube pause
minikube unpause

Alternatively, you can delete the minikube environment, which resets everything; the cluster is recreated from scratch the next time you start minikube.

minikube delete