Scaling Kubernetes Workloads with KEDA, RabbitMQ, and Cron

February 16, 2025

Introduction

Let's explore KEDA, or Kubernetes Event-Driven Autoscaler, and how it's transforming the way we scale applications in Kubernetes. KEDA brings a new level of efficiency by dynamically adjusting resources based on real-time events, rather than just relying on traditional metrics like CPU or memory usage.

We'll delve into the mechanics of KEDA and demonstrate how it integrates with RabbitMQ, a powerful message broker, to enable dynamic scaling of message consumers. This means your application can seamlessly adapt to fluctuating workloads, ensuring optimal performance and resource utilization.

Furthermore, we'll uncover how KEDA can be leveraged with a Cron-based trigger to automate scaling for specific time periods. This capability is particularly valuable for tasks like scheduled batch jobs or pre-processing operations, where you need resources available at predefined intervals.

What is KEDA?

KEDA (Kubernetes Event-Driven Autoscaler) is an open-source solution that enables dynamic scaling of Kubernetes workloads based on external events. Unlike traditional autoscalers that depend on CPU or memory usage, KEDA responds to real-time triggers from sources such as message queues, databases, and cloud services, ensuring efficient and responsive scaling.

Why Use KEDA in Kubernetes?

  • Efficient Resource Utilization: Pods only scale up when there's actual demand, helping to minimize idle resource usage.
  • Event-Driven Scaling: Automatically scales based on real-time metrics from external event sources.
  • Native Kubernetes Integration: Works seamlessly with the Kubernetes Horizontal Pod Autoscaler (HPA).
  • Supports Multiple Event Sources: RabbitMQ, Kafka, AWS SQS, Azure Queues, Prometheus, and more.
  • Cron-Based Scaling: Allows pods to scale at specific times based on a schedule, ensuring efficient resource utilization without requiring them to run continuously.

Integrating KEDA with RabbitMQ and Cron

KEDA is a powerful solution for demand-based scaling, and when integrated with RabbitMQ, it enables consumers to automatically scale based on queue length. Additionally, KEDA’s Cron trigger allows scheduled scaling at specific times, ensuring efficient resource management and automation within defined time windows.

Home Lab Hands-on

Prerequisites

  • Docker installed on your machine
  • kubectl configured
  • Kind (Kubernetes In Docker) installed: Kind Installation Guide
  • Helm installed
  • KEDA installed
  • A RabbitMQ instance, if you plan to scale based on queue length (we deploy one in Step 3 below).

Step 1: Create a Kind Cluster

First, create a Kind cluster with a configuration suitable for this KEDA demo.

# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker

Create the cluster:

kind create cluster --config=kind-config.yaml

Verify the cluster is up and running:

kubectl cluster-info

Step 2: Install KEDA

helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda --namespace keda --create-namespace

Step 3: Deploy RabbitMQ

Deploy RabbitMQ using the Bitnami Helm chart:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install rabbitmq bitnami/rabbitmq --set auth.username=admin --set auth.password=admin

Step 4: Deploy a RabbitMQ Consumer with the file name "rabbitmq-consumer.yaml"

This deployment uses the image eopires/node-rabbitmq-service:1.0.0, which I created to simplify message production and consumption via environment variables. With this structure, testing different behaviors becomes easy; the project's details are available in the repository linked at the end of this post.

Create a simple deployment that consumes messages from RabbitMQ:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-consumer
  labels:
    app: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app-consumer
        image: eopires/node-rabbitmq-service:1.0.0
        imagePullPolicy: Always
        resources:
          requests:
            cpu: "100m"
            memory: "128Mi"
          limits:
            cpu: "1000m"
            memory: "256Mi"
        env:
          - name: CONSUMER_DISABLE
            value: "false"
          - name: CONSUMER_PREFETCH
            value: "1" # consume one message at a time
          - name: CONSUMER_DELAY_MS
            value: "5000" # 5-second processing delay per message

          - name: PRODUCER_DISABLE
            value: "true"

          - name: RABBITMQ_URL
            value: "amqp://admin:admin@rabbitmq.default.svc.cluster.local:5672"
          - name: RABBITMQ_QUEUE
            value: "queue-teste"
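
The environment variables above are the whole interface to the image. The image itself is a Node.js service, but as a rough illustration of how such a service might read this configuration (the variable names come from the manifest; the defaults and structure are my assumptions, not the image's actual internals), here is the same parsing in Python:

```python
import os

def load_consumer_config(env=None):
    """Parse the consumer settings used in the manifest above.

    Variable names are taken from the Deployment; the defaults are
    assumptions for illustration, not the image's documented behavior.
    """
    env = os.environ if env is None else env
    return {
        "disabled": env.get("CONSUMER_DISABLE", "false").lower() == "true",
        "prefetch": int(env.get("CONSUMER_PREFETCH", "1")),  # messages in flight per replica
        "delay_ms": int(env.get("CONSUMER_DELAY_MS", "0")),  # simulated work per message
        "rabbitmq_url": env.get("RABBITMQ_URL", "amqp://localhost:5672"),
        "queue": env.get("RABBITMQ_QUEUE", "queue-teste"),
    }

# The values from rabbitmq-consumer.yaml: one message in flight at a time,
# each taking ~5 seconds, which is what lets the queue back up under load.
cfg = load_consumer_config({"CONSUMER_PREFETCH": "1", "CONSUMER_DELAY_MS": "5000"})
print(cfg["prefetch"], cfg["delay_ms"])  # 1 5000
```

A prefetch of 1 combined with a 5-second delay means each replica drains at most about 12 messages per minute, so a sustained producer quickly grows the queue and gives KEDA something to react to.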

Step 5: Deploy a RabbitMQ Producer with the file name "rabbitmq-producer.yaml"

As with the consumer, this deployment uses the image eopires/node-rabbitmq-service:1.0.0, this time configured to act as a producer.

Create a simple deployment that produces messages to RabbitMQ:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-producer
  labels:
    app: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app-producer
        image: eopires/node-rabbitmq-service:1.0.0
        resources:
          requests:
            cpu: "100m"
            memory: "128Mi"
          limits:
            cpu: "1000m"
            memory: "256Mi"
        env:
          - name: CONSUMER_DISABLE
            value: "true"

          - name: PRODUCER_DISABLE
            value: "false"
          - name: PRODUCER_MESSAGE
            value: "test"
          - name: PRODUCER_MESSAGE_QTD
            value: "5" # publish 5 messages per cycle
          - name: PRODUCER_DELAY_MS
            value: "5000" # one cycle every 5 seconds

          - name: RABBITMQ_URL
            value: "amqp://admin:admin@rabbitmq.default.svc.cluster.local:5672"
          - name: RABBITMQ_QUEUE
            value: "queue-teste"

Step 6: Create a KEDA ScaledObject for RabbitMQ with the file name "rabbitmq-scaledobject.yaml"

Define a ScaledObject that scales the consumer based on the RabbitMQ queue length.

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: app-consumer-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    name: app-consumer
  minReplicaCount: 1  # Minimum number of replicas
  maxReplicaCount: 7 # Maximum number of replicas
  pollingInterval: 30  # Override default polling interval (in seconds)
  cooldownPeriod: 300  # Override default cooldown period (in seconds)
  triggers:
    - type: rabbitmq
      metadata:
        queueName: queue-teste
        host: amqp://admin:admin@rabbitmq.default.svc.cluster.local:5672
        queueLength: "20" # Target number of messages per replica
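
With queueLength: "20", KEDA feeds the queue depth to the HPA as an average-value metric: roughly one replica per 20 queued messages, clamped between minReplicaCount and maxReplicaCount. A sketch of that arithmetic (a simplification — the real controller also applies readiness checks and stabilization windows):

```python
import math

def desired_replicas(queue_depth, target_per_replica=20,
                     min_replicas=1, max_replicas=7):
    """Approximate the scaling decision for the rabbitmq trigger above:
    one replica per `target_per_replica` queued messages, clamped to bounds."""
    raw = math.ceil(queue_depth / target_per_replica)
    return max(min_replicas, min(max_replicas, raw))

print(desired_replicas(0))    # 1 (never below minReplicaCount)
print(desired_replicas(90))   # 5 (ceil(90 / 20))
print(desired_replicas(500))  # 7 (capped at maxReplicaCount)
```

This is why the consumer's slow, prefetch-1 processing matters for the demo: it lets the queue depth climb past multiples of 20 so you can watch replicas ramp up toward 7.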

Step 7: Create a KEDA Cron with the file name "rabbitmq-scaledobject-cron.yaml"

Define a ScaledObject that scales the consumer based on a time window.

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: app-consumer-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    name: app-consumer
  minReplicaCount: 1  # Minimum number of replicas
  maxReplicaCount: 7 # Maximum number of replicas
  triggers:
    - type: cron
      metadata:
        timezone: "America/Sao_Paulo" # Optional, depends on your requirement
        start: 0 19 * * 1-5           # At 19:00 (Monday to Friday)
        end: 0 21 * * 1-5             # At 21:00 (Monday to Friday)
        desiredReplicas: "5"          # Desired replicas during this time
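
The cron trigger above holds the deployment at 5 replicas between 19:00 and 21:00, Monday to Friday, São Paulo time; outside that window the other settings (here, minReplicaCount) apply. The window membership can be sketched as follows (illustrative only — KEDA evaluates the start/end cron expressions itself):

```python
from datetime import datetime

def in_scaling_window(now):
    """True when the cron trigger above is active: Mon-Fri, 19:00-21:00.
    `now` is assumed to already be in America/Sao_Paulo local time."""
    return now.weekday() < 5 and 19 <= now.hour < 21

print(in_scaling_window(datetime(2025, 2, 17, 19, 30)))  # True  (Monday evening)
print(in_scaling_window(datetime(2025, 2, 17, 22, 0)))   # False (after the window)
print(in_scaling_window(datetime(2025, 2, 16, 20, 0)))   # False (Sunday)
```

This pattern suits workloads with a known nightly batch: capacity is pre-warmed before the jobs start and released as soon as the window closes.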

Step 8: Apply the Configuration

# Producer/Consumer
kubectl apply -f rabbitmq-consumer.yaml
kubectl apply -f rabbitmq-producer.yaml

# Scaled Objects
kubectl apply -f rabbitmq-scaledobject.yaml
# or
kubectl apply -f rabbitmq-scaledobject-cron.yaml

Step 9: Check environment behaviors

# Port forward to access the RabbitMQ management UI
kubectl port-forward service/rabbitmq 15672:15672

# Access the URL below and check the message flow in the RabbitMQ interface
http://localhost:15672

# Check that the pods are running
kubectl get pods -l app=app

# Check application logs 
kubectl logs -f <pod-name>

# Check Scaled Object
kubectl get scaledobjects -n default

# Info Scaled Object
kubectl describe scaledobjects app-consumer-scaledobject -n default

# Check HPA object created
kubectl get hpa -n default

Step 10: Uninstall the Configuration

# Producer/Consumer
kubectl delete -f rabbitmq-consumer.yaml
kubectl delete -f rabbitmq-producer.yaml

# Scaled Objects
kubectl delete -f rabbitmq-scaledobject.yaml
# or
kubectl delete -f rabbitmq-scaledobject-cron.yaml

# RabbitMQ
helm uninstall rabbitmq -n default

# KEDA
helm uninstall keda -n keda

# Cluster
kind delete cluster

Project example

You can access my GitHub project, which includes the defined structure and a detailed step-by-step guide to implementing these scenarios.

Conclusion

To conclude, KEDA simplifies autoscaling in Kubernetes with event-driven scaling. When integrated with RabbitMQ, it dynamically adjusts resources based on queue length, optimizing performance while preventing over-provisioning. Additionally, KEDA’s Cron trigger enables scheduled scaling at specific times, ensuring efficient task execution and resource management within Kubernetes.

Further Reading