
Migrating From the Community‑Based to F5‑Based Wallarm Ingress Controller

This topic explains why and how to migrate from the Wallarm Ingress Controller based on the Community Ingress NGINX to the new controller based on F5 NGINX Ingress Controller.

Why the migration is required

Previously, Wallarm provided an Ingress Controller based on the Community Ingress NGINX.

In November 2025, the Kubernetes community announced the retirement of this project due to growing maintenance challenges and unresolved technical issues.

Wallarm will fully support this controller (including new feature releases) until March 2026. After that date, the controller will remain functional but will no longer receive updates, bug fixes, or security patches.

Continuing to use it after March 2026 may expose your environment to unresolved defects and security vulnerabilities.

To ensure ongoing support and security, we strongly recommend migrating to a supported deployment option, such as the Wallarm Ingress Controller based on the F5 NGINX Ingress Controller. The sections below describe the migration steps and their benefits.

About the new Ingress Controller

The new Wallarm Ingress Controller is based on the F5 NGINX Ingress Controller and is the recommended replacement for the Community NGINX-based deployment.

It provides long-term stability, vendor-backed support, regular updates and security patches, and advanced traffic management.

For a detailed overview of the changes and new features, see the What's New guide.

NGINX Plus is not supported

The Wallarm Ingress Controller uses the open-source edition of the F5 NGINX Ingress Controller. NGINX Plus is not included and is not supported.

Choosing your migration strategy

You can migrate to the new Wallarm Ingress Controller using one of four strategies. The appropriate option depends on your infrastructure, IP requirements, and tolerance for downtime.

Review the summary table below to determine which approach best fits your environment. Detailed descriptions of each strategy follow.

| Strategy | Downtime | IP changes | Complexity | Best for | Est. time |
| --- | --- | --- | --- | --- | --- |
| Load balancer | None | No | High | Environments with an external load balancer | 4–8 hours (includes staged rollout and monitoring) |
| DNS switch | None (DNS propagation applies) | Yes | Low | Environments where IP changes are acceptable | 3–4 hours plus DNS propagation time (depends on your TTL setting) |
| Selector swap | None | No | Medium | Production environments with strict IP requirements | 4–6 hours |
| Direct replacement | 5–15 minutes | Yes | Low | Development and staging environments | 2–3 hours (including the downtime window) |

Recommendation

If unsure, use selector swap for production environments and direct replacement for development or staging.

Migration - part 1 (strategy independent)

Purpose: Prepare the new Ingress Controller and validate your converted Ingress configuration without changing production. You will deploy the new controller alongside the existing one, convert Ingress manifests on copies, and test them in a separate namespace. Part 1 is the same for every migration strategy.

Requirements

Before starting the migration, ensure the following requirements are met:

  • Kubernetes platform version 1.28-1.35

  • Helm version 3.10+

  • Ability to create, modify, and delete resources in the target Kubernetes namespace

  • Compatibility of your services with the Wallarm Ingress Controller based on the F5 NGINX Ingress Controller version 5.4.0

  • Access to the account with the Administrator role in Wallarm Console for the US Cloud or EU Cloud

  • Access to https://us1.api.wallarm.com for working with US Wallarm Cloud or to https://api.wallarm.com for working with EU Wallarm Cloud

  • Access to https://charts.wallarm.com to add the Wallarm Helm charts. Ensure the access is not blocked by a firewall

  • Access to the Wallarm repositories on Docker Hub https://hub.docker.com/r/wallarm. Make sure the access is not blocked by a firewall

  • Access to the IP addresses and their corresponding hostnames (if any) listed below. This is needed for downloading updates to attack detection rules and API specifications, as well as retrieving precise IPs for your allowlisted, denylisted, or graylisted countries, regions, or data centers

    node-data0.us1.wallarm.com - 34.96.64.17
    node-data1.us1.wallarm.com - 34.110.183.149
    us1.api.wallarm.com - 35.235.66.155
    34.102.90.100
    34.94.156.115
    35.235.115.105
    
    node-data1.eu1.wallarm.com - 34.160.38.183
    node-data0.eu1.wallarm.com - 34.144.227.90
    api.wallarm.com - 34.90.110.226
    

Depending on the chosen migration strategy, additional access may be required:

  • Load balancer access (required for strategy A) – Access to your external load balancer configuration.

  • DNS management access (required for strategies B and D) – Ability to create/update A/CNAME records.

Step 0. Collect current Ingress deployment details and validate environment

Before starting the migration, gather the following information from your existing Ingress Controller deployment and complete basic environment validations.

  1. Gather deployment information and save it - you will need it throughout the migration:

    # 1. Identify the namespace of the current Ingress Controller
    kubectl get pods --all-namespaces | grep ingress
    # Note the namespace name (usually 'ingress-nginx')
    
    # 2. Record the current LoadBalancer external IP
    kubectl get svc -n <ingress-namespace> -o wide
    # Note the value in the EXTERNAL-IP column
    
    # 3. List all domains and hostnames handled by Ingress
    kubectl get ingress --all-namespaces -o jsonpath='{.items[*].spec.rules[*].host}' | tr ' ' '\n' | sort -u
    # Save this list
    
    # 4. Identify the Wallarm API endpoint in use
    kubectl get configmap -n <ingress-namespace> -o yaml | grep -i wallarm
    # Typical values: us1.api.wallarm.com (US Cloud) or api.wallarm.com (EU Cloud)
    
    # 5. Determine the current Helm release name
    helm list -A | grep ingress
    # Note the release name (usually 'ingress-nginx')
    
  2. Perform pre-flight validations.

    Complete these checks to verify the cluster and environment are ready for migration:

    • Back up all Ingress resources:
    kubectl get ingress --all-namespaces -o yaml > backup-ingresses-$(date +%Y%m%d).yaml
    echo "Backup saved to: backup-ingresses-$(date +%Y%m%d).yaml"
    
    • Export current Helm configuration:
    helm list -A | grep ingress  # Find your release name
    helm get values <release-name> -n <namespace> > backup-helm-values-$(date +%Y%m%d).yaml
    
    • Document current load balancer IP (critical for rollback):
    kubectl get svc -n <ingress-namespace> -o jsonpath='{.items[?(@.spec.type=="LoadBalancer")].status.loadBalancer.ingress[0].ip}'
    
    • Verify cluster resources:
    kubectl top nodes
    # Check: CPU < 70%, Memory < 80% on all nodes
    
    • Reduce DNS TTL (critical for strategies B and D) - Lower TTL 24-48 hours before migration:
    # Check current TTL
    dig your-domain.com | grep -A 1 "ANSWER SECTION"
    # Look for the number before IN A (that's your TTL in seconds)
    
    # Recommended: Set TTL to 300 seconds (5 minutes) before migration
    # This allows faster DNS propagation during the migration
    # Update in your DNS provider (Route53, Cloudflare, etc.)
    
    # After migration is stable (48h+), you can increase TTL back to normal (3600 or higher)
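
    The TTL check above can be scripted. The sketch below (a hypothetical helper; assumes `dig` is installed and the record has an A answer) prints just the TTL value:

    ```shell
    # Sketch: print the current TTL (in seconds) of a domain's A record.
    # The TTL is the second column of dig's answer line.
    check_ttl() {
      dig +noall +answer "$1" A | awk '{print $2; exit}'
    }
    ```

    For example, `check_ttl your-domain.com` printing `3600` means you should lower the TTL well before the migration window.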
    
  3. Identify a maintenance window:

    • Production: Prefer low-traffic period (e.g., weekends or off-hours)
    • Development and staging environments: Flexible, anytime is acceptable
  4. Notify stakeholders about the following:

    • Migration schedule
    • Expected duration
    • Potential risks
    • Rollback plan

Step 1: Review the new Ingress Controller documentation

  1. Read the comparison guide to understand the differences between the previous and new Ingress Controller implementations.

  2. Read the new Ingress Controller deployment guide and the configuration parameters.

    Key configuration areas include:

    • Wallarm API credentials (config.wallarm.api.host, config.wallarm.api.token)
    • API Firewall configuration (optional)
    • Resource limits and scaling
    • Metrics and monitoring endpoints

Step 2: Deploy the new Controller

  1. Deploy the new controller.

    Deploy the new Ingress Controller in your cluster using the provided values.yaml file:

    # Add the Wallarm Helm repository
    helm repo add wallarm https://charts.wallarm.com/
    helm repo update
    
    # Deploy the new Ingress Controller
    helm install wallarm-ingress-new wallarm/wallarm-ingress \
      --version 7.0.0 \
      -n wallarm-ingress-new \
      --create-namespace \
      -f values.yaml
    

    IngressClass name

    Use a different IngressClass name (e.g., nginx-new) to run the new controller alongside the old one during migration.

    An example of the values.yaml file with the minimum configuration for the US and EU Wallarm Clouds is shown below. See more configuration parameters.

    # US Cloud
    config:
      wallarm:
        enabled: true
        api:
          host: "us1.api.wallarm.com"
          token: "<NODE_TOKEN>"
          # nodeGroup: defaultIngressGroup
    
    # EU Cloud
    config:
      wallarm:
        enabled: true
        api:
          host: "api.wallarm.com"
          token: "<NODE_TOKEN>"
          # nodeGroup: defaultIngressGroup
    

    <NODE_TOKEN> is the API token generated for Wallarm Node deployment. You can reuse the existing API token with the Node deployment/Deployment usage type from your current NGINX Ingress Controller deployment or generate a new one.

  2. Verify the Ingress Controller deployment in Kubernetes:

    # Check controller pods
    kubectl get pods -n wallarm-ingress-new
    
    # Check Wallarm WCLI logs for cloud connectivity and errors
    kubectl logs -n wallarm-ingress-new -l app.kubernetes.io/component=controller \
      -c wallarm-wcli --tail=50 | grep -i "sync\|connect\|error"
    
    # Check Postanalytics logs
    kubectl logs -n wallarm-ingress-new -l app.kubernetes.io/component=wallarm-postanalytics --tail=50
    
  3. Verify the new Ingress Controller in the Wallarm Console.

    To do so, go to Wallarm Console → Settings → Nodes and check if the new Ingress Controller node appears. It should show up within 2–3 minutes of deployment.

Step 3. Prepare your Ingress resources

Collect all Ingress resources that use the old Ingress Controller:

  • Simple method (no jq required):

    # List all Ingress resources across all namespaces
    kubectl get ingress --all-namespaces
    
    # Export all Ingress resources to a backup file (for reference and rollback purposes)
    kubectl get ingress --all-namespaces -o yaml > old-ingresses-backup.yaml
    echo "Backup saved to: old-ingresses-backup.yaml"
    
    # Count the total number of Ingress resources
    kubectl get ingress --all-namespaces --no-headers | wc -l
    
  • Advanced method (requires jq – a JSON processor):

    # Filter only Ingress resources using the old controller
    # Breakdown:
    #   kubectl get ingress --all-namespaces -o json  → Get all Ingress as JSON
    #   jq '.items[]'                                  → Loop through each Ingress
    #   select(...)                                    → Filter by IngressClass
    kubectl get ingress --all-namespaces \
      -o json | jq '.items[] | select(
        .metadata.annotations["kubernetes.io/ingress.class"] == "nginx" or
        .spec.ingressClassName == "nginx" or
        (.metadata.annotations["kubernetes.io/ingress.class"] // "" | length == 0)
      )' > old-ingresses.json
    

Default Ingress Controller

Ingress resources without an explicit IngressClass may be using the default ingress controller. Verify which controller is set as default: kubectl get ingressclass -o yaml.

Use the exported files (e.g. old-ingresses-backup.yaml) as the source for creating working copies to convert in Step 4. Do not modify the live Ingress resources in the cluster until you apply validated changes in Part 2.
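
As a sketch of the copy workflow (the helper name is illustrative), a small function can save one live Ingress as an editable file while leaving the in-cluster resource untouched:

```shell
# Sketch: save one live Ingress as a working copy for conversion in Step 4.
# The in-cluster resource is only read, never modified.
export_ingress_copy() {
  local ns="$1" name="$2"
  kubectl get ingress "$name" -n "$ns" -o yaml > "${name}-copy.yaml"
  echo "Working copy saved to: ${name}-copy.yaml"
}
```

For example, `export_ingress_copy my-namespace my-ingress` produces `my-ingress-copy.yaml`, which you then edit in Step 4.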

Step 4. Convert annotations

Work with copies, not production

Do not modify existing production Ingress resources directly. Changing annotations or the IngressClass will immediately change which controller serves them, causing traffic disruption. Instead, work with copies of your Ingress manifests: export or copy them, apply the conversions below to the copies, and test in a separate namespace (Step 5). In Part 2, you will apply these converted copies as new Ingress resources alongside the originals (for strategies A, B) or replace the originals (for strategies C, D).

Update your copied Ingress manifests to ensure compatibility with the new Ingress Controller, as follows.

  1. Change the IngressClass to match the new controller. Also rename the Ingress resource (e.g., add a -new suffix) so it can coexist with the original during migration (required for strategies A and B):

    # Old
    metadata:
      name: my-ingress
      annotations:
        kubernetes.io/ingress.class: nginx
    
    # New (renamed + new IngressClass)
    metadata:
      name: my-ingress-new
      annotations:
        kubernetes.io/ingress.class: nginx-new
    
  2. Update controller-specific annotations by replacing the old NGINX annotation prefix with the new one:

    # Old
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /$2
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
    
    # New
    annotations:
      nginx.org/rewrites: "serviceName=myservice rewrite=/$2"
      nginx.org/redirect-to-https: "true"
    
  3. Wallarm-related annotations (e.g., wallarm-mode, wallarm-application) keep their names and behavior with the new controller; only the prefix changes from nginx.ingress.kubernetes.io/ to nginx.org/.

    annotations:
      nginx.org/wallarm-mode: "block"
      nginx.org/wallarm-application: "1"
      nginx.org/wallarm-parse-response: "on"
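
    The mechanical part of these renames can be scripted. The sketch below handles only the two one-to-one substitutions shown above (the IngressClass rename and the Wallarm annotation prefix); annotations such as rewrite-target have no direct nginx.org equivalent and must still be mapped by hand, and renaming the resource itself is also left manual:

    ```shell
    # Sketch: apply the mechanical renames to a copied manifest and print the
    # result to stdout. Review the output manually before using it.
    convert_ingress_copy() {
      sed \
        -e 's|kubernetes.io/ingress.class: nginx$|kubernetes.io/ingress.class: nginx-new|' \
        -e 's|nginx\.ingress\.kubernetes\.io/wallarm-|nginx.org/wallarm-|g' \
        "$1"
    }
    ```

    For example, `convert_ingress_copy my-ingress-copy.yaml > my-ingress-new.yaml`.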
    

Step 5. Test converted Ingress resources

Before migrating production traffic, verify your converted Ingress resources in a separate test namespace. Production Ingress resources remain unchanged at this stage. You are only validating the converted copies.

Example values

The domain names (e.g. test.example.com), resource names (e.g. test-ingress), and command outputs shown in Step 5 are for illustration only. Use the host defined in your own test Ingress manifest (the value under spec.rules[].host) in the verification commands, and expect your actual resource names and outputs to differ.

  1. Apply the manifest file containing the converted Ingress resource(s) from Step 4 — the test version with the new IngressClass and nginx.org/* annotations. You can use any filename (e.g., test-ingress-new.yaml). The name is for your reference only.

    kubectl apply -f test-ingress-new.yaml -n test-namespace  
    

    If this step was successful, you will see output similar to the following:

    ingress.networking.k8s.io/test-ingress created
    

    or configured if you are updating an existing Ingress. Your actual resource name and namespace will match your manifest.

  2. Check the NGINX configuration generated by the new controller. Use the host from your test Ingress in the grep pattern (the example below uses a placeholder host test.example.com):

    kubectl exec -n wallarm-ingress-new \
      $(kubectl get pod -n wallarm-ingress-new -l app.kubernetes.io/component=controller -o name | head -1) \
      -- nginx -T | grep -A 20 "server_name test.example.com"
    

    The presence of your host in the server_name directive in the output confirms the domain is configured correctly.

  3. Get the new load balancer IP:

    NEW_LB_IP=$(kubectl get svc -n wallarm-ingress-new wallarm-ingress-new-controller \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    
    if [ -z "$NEW_LB_IP" ]; then
      echo "ERROR: LoadBalancer IP not assigned yet. Wait and retry."
      exit 1
    fi
    
    echo "New LoadBalancer IP: $NEW_LB_IP"
    
  4. Test HTTP or HTTPS connectivity. Use the host from your test Ingress in the Host header (example below uses a placeholder):

    # HTTP
    curl -H "Host: test.example.com" http://$NEW_LB_IP/
    # HTTPS
    curl -H "Host: test.example.com" https://$NEW_LB_IP/
    

    Expected outcome:

    • The HTTP response status is 200 OK.
    • Your application responds correctly, e.g.:
      • The homepage HTML is returned as expected.
      • API endpoints return the expected JSON or other responses.
  5. Test Wallarm protection.

    Simulate a malicious request using the host from your test Ingress in the Host header (example uses a placeholder):

    curl -H "Host: test.example.com" "http://$NEW_LB_IP/?id=1' OR '1'='1"
    

    Wait 2–3 minutes and verify in Wallarm Console → Events → Attacks that the request is detected/blocked.

  6. Test other core Wallarm functionality.

Congratulations! You have completed the strategy-independent portion of the migration. So far you have worked only with copies of Ingress manifests and tested them in a test namespace. Production Ingress resources have not been changed yet.

Migration - part 2 (strategy dependent)

Purpose: Shift traffic from the old controller to the new one and apply the validated Ingress changes to your production Ingress resources. You choose one of four strategies (load balancer, DNS switch, selector swap, or direct replacement) depending on your environment.

How IngressClass controls which controller serves your traffic

A Kubernetes Ingress Controller only processes Ingress resources that match its IngressClass. In each strategy below, you will update your production Ingress resources to use the new controller's IngressClass (nginx-new). This is what tells the new controller to pick them up. The old controller stops serving an Ingress as soon as its IngressClass no longer matches. The infrastructure-level switch (DNS, load balancer, or service selector) determines which controller receives network traffic, while the IngressClass update determines which controller owns the routing configuration. Both must happen for the migration to be complete.
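
For illustration, a minimal Ingress that the new controller would claim (all names are placeholders):

```yaml
# Claimed by the new controller because ingressClassName matches nginx-new;
# the old controller (watching class nginx) ignores it.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress-new
spec:
  ingressClassName: nginx-new
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
```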

Strategy A - load balancer (traffic splitting)

This method uses an external load balancer (F5, HAProxy, cloud ALB, etc.) to gradually shift traffic from the old controller to the new one.

Migration steps:

  1. Apply converted Ingress resources as new resources alongside the existing ones. Do not modify or delete the original Ingress resources — the old controller must continue serving traffic while you shift it gradually. Apply the validated manifests from Part 1 with a new IngressClass (nginx-new) and different resource names (e.g., add a -new suffix):

    kubectl apply -f <validated-manifests>.yaml
    

    At this point, both controllers are serving traffic: the old controller handles Ingress resources with IngressClass: nginx, and the new controller handles those with IngressClass: nginx-new.

  2. Configure your external load balancer to split traffic:

    # Example: NGINX upstream configuration
    upstream wallarm_ingress {
        server <old-lb-ip>:443 weight=100;  # Start: 100% to old
        server <new-lb-ip>:443 weight=0;    # Start: 0% to new
    }
    
  3. Gradually adjust traffic weights:

    Phase 1: 90/10 (old/new) → Monitor for 2-4 hours
    Phase 2: 75/25           → Monitor for 2-4 hours
    Phase 3: 50/50           → Monitor for 4-8 hours
    Phase 4: 25/75           → Monitor for 2-4 hours
    Phase 5: 0/100           → Complete migration
    
  4. Monitor during each phase:

    # Watch resource usage on both controllers
    kubectl top pods -n ingress-nginx
    kubectl top pods -n wallarm-ingress-new
    
    # Check for HTTP errors
    kubectl logs -n wallarm-ingress-new deployment/wallarm-ingress-new-controller \
      | grep -E "HTTP/[0-9.]+ (4|5)[0-9]{2}"
    

Strategy B - DNS switch

This method deploys the new controller with a new load balancer IP and updates DNS to point to it.

Migration steps:

  1. Get the new load balancer IP:

    NEW_LB_IP=$(kubectl get svc -n wallarm-ingress-new wallarm-ingress-new-controller \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    
    if [ -z "$NEW_LB_IP" ]; then
      echo "ERROR: LoadBalancer IP not assigned yet. Wait and retry."
      exit 1
    fi
    
    echo "New LoadBalancer IP: $NEW_LB_IP"
    
  2. Apply converted Ingress resources as new resources alongside the existing ones. Do not modify or delete the original Ingress resources yet — the old controller must continue serving traffic via the old DNS until propagation completes. Apply the validated manifests from Part 1 with a new IngressClass (nginx-new) and different resource names:

    kubectl apply -f <validated-manifests>.yaml
    

    Both controllers are now serving traffic: old via the old IP, new via the new IP.

  3. Test the new setup directly against the new IP:

    # Test HTTP connectivity directly against the new IP
    curl -H "Host: your-domain.com" http://$NEW_LB_IP/
    
    # Test with attack simulation
    curl -H "Host: your-domain.com" "http://$NEW_LB_IP/test?id=1' OR '1'='1"
    
  4. Verify in Wallarm Console → Events → Attacks that the attack was detected successfully.

  5. Update DNS records to point to the new IP:

    # Example using AWS Route53:
    aws route53 change-resource-record-sets \
      --hosted-zone-id <zone-id> \
      --change-batch "{
        \"Changes\": [{
          \"Action\": \"UPSERT\",
          \"ResourceRecordSet\": {
            \"Name\": \"your-domain.com\",
            \"Type\": \"A\",
            \"TTL\": 300,
            \"ResourceRecords\": [{\"Value\": \"$NEW_LB_IP\"}]
          }
        }]
      }"
    
  6. Monitor DNS propagation and traffic:

    # Check DNS resolution
    dig +short your-domain.com
    nslookup your-domain.com
    
    # Monitor logs and traffic on the new controller
    kubectl logs -n wallarm-ingress-new deployment/wallarm-ingress-new-controller -f
    
  7. Wait for DNS TTL to expire while monitoring the old controller for declining traffic.
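
The wait in step 7 can be scripted. The sketch below (a hypothetical helper; assumes `dig` is available) polls DNS once a minute until the domain resolves to the new IP; adjust the attempt budget to your TTL:

```shell
# Sketch: block until DNS for the domain returns the expected new IP,
# polling every 60 s for up to 30 attempts.
wait_for_dns_cutover() {
  local domain="$1" new_ip="$2" attempt
  for attempt in $(seq 1 30); do
    if dig +short "$domain" | grep -qx "$new_ip"; then
      echo "DNS for $domain now resolves to $new_ip"
      return 0
    fi
    echo "Attempt $attempt: $domain has not cut over yet"
    sleep 60
  done
  echo "DNS did not cut over within the polling window" >&2
  return 1
}
```

For example, `wait_for_dns_cutover your-domain.com "$NEW_LB_IP"` returns once propagation reaches your resolver; remote resolvers may lag behind.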

Strategy C - selector swap

This method preserves the existing load balancer IP by switching the Kubernetes service selector from the old controller pods to the new ones.

Recommended timing

Perform the migration during a low-traffic window (e.g., Saturday morning).

Migration steps:

  1. Get the current load balancer IP:

    OLD_LB_IP=$(kubectl get svc -n <CURRENT_IC_NAMESPACE> <OLD_IC_SERVICE_NAME> \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    
    if [ -z "$OLD_LB_IP" ]; then
      echo "ERROR: Could not retrieve current LoadBalancer IP"
      exit 1
    fi
    
    echo "Current LoadBalancer IP (must preserve): $OLD_LB_IP"
    
  2. Update your values file (e.g., values-same-namespace.yaml) with the following configuration for the new Ingress Controller:

    controller:
      podLabels:
        app.kubernetes.io/name: nginx-ingress-new
        app.kubernetes.io/instance: wallarm-new
        app.kubernetes.io/component: controller-new
    
      service:
        create: false  # Critical: Do not create a new LoadBalancer Service
    
      ingressClass:
        name: nginx-new
    
    config:
      wallarm:
        enabled: true
        api:
          host: "api.wallarm.com"  # or us1.api.wallarm.com
          token: "<NODE_TOKEN>"
    
  3. Deploy the new Ingress Controller in the same namespace as the old controller using the updated values file:

    helm install wallarm-ingress-new wallarm/wallarm-ingress \
      --version 7.0.0 \
      -n <CURRENT_IC_NAMESPACE> \
      -f values-same-namespace.yaml
    

    Where <CURRENT_IC_NAMESPACE> is the namespace where your existing Ingress Controller is deployed (e.g., ingress-nginx).

  4. Verify no new LoadBalancer service was created:

    kubectl get svc -n <CURRENT_IC_NAMESPACE>
    # Only the OLD LoadBalancer Service should be visible
    
  5. Verify new pods are running:

    kubectl get pods -n <CURRENT_IC_NAMESPACE> -l app.kubernetes.io/instance=wallarm-new
    
  6. Test new controller pods directly:

    NEW_POD=$(kubectl get pod -n <CURRENT_IC_NAMESPACE> \
      -l app.kubernetes.io/instance=wallarm-new \
      -o jsonpath='{.items[0].metadata.name}')
    
    kubectl port-forward -n <CURRENT_IC_NAMESPACE> $NEW_POD 8080:80 &
    PF_PID=$!
    
    sleep 2  # Wait for port-forward to establish
    curl -H "Host: your-domain.com" http://localhost:8080/
    
    # Stop port-forward
    kill $PF_PID
    
  7. Update your production Ingress resources to use the new controller (IngressClass and nginx.org/* annotations as validated in Part 1). This ensures the new controller will serve them once the service selector is switched. For each production Ingress (or in bulk per namespace):

    kubectl patch ingress <ingress-name> -n <namespace> \
      -p '{"metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx-new"}}}'
    

    Apply the full set of annotation changes from your validated manifests if you use more than the IngressClass annotation (e.g., nginx.org/wallarm-mode, nginx.org/rewrites). Repeat for all production Ingress resources that will be served by the new controller.

    Brief traffic disruption

    Between updating the IngressClass (this step) and switching the service selector (next steps), traffic arriving at the old controller pods will not match any Ingress configuration. Execute the next steps immediately after this one to minimize the gap to a few seconds. If zero downtime is critical, consider Strategy A with dual Ingress resources instead.
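
    To keep this gap short, the per-Ingress patch can be applied to a whole namespace in one pass. A sketch with a hypothetical helper name (exclude any Ingress that must stay on the old controller before running it):

    ```shell
    # Sketch: point every Ingress in the namespace at the new controller class.
    patch_namespace_ingress_class() {
      local ns="$1" ing
      for ing in $(kubectl get ingress -n "$ns" -o name); do
        kubectl patch "$ing" -n "$ns" \
          -p '{"metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx-new"}}}'
      done
    }
    ```

    For example, `patch_namespace_ingress_class my-namespace` updates every Ingress in that namespace, so the selector switch in the next steps can follow within seconds.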

  8. Check the labels of the new controller pods. You will need them later to update the service selector:

    kubectl get pods -n <CURRENT_IC_NAMESPACE> \
      -l app.kubernetes.io/instance=wallarm-new \
      --show-labels
    

    Example output:

    # NAME                           LABELS
    # wallarm-new-abc123             app.kubernetes.io/name=nginx-ingress-new,app.kubernetes.io/instance=wallarm-new,...
    
  9. Update the LoadBalancer service to point to the new pods:

    kubectl patch svc <OLD_IC_SERVICE_NAME> -n <CURRENT_IC_NAMESPACE> -p '{
      "spec": {
        "selector": {
          "app.kubernetes.io/name": "nginx-ingress-new",
          "app.kubernetes.io/instance": "wallarm-new",
          "app.kubernetes.io/component": "controller-new"
        }
      }
    }'
    

    Where:

    • kubectl patch - Updates an existing resource without replacing it entirely.
    • svc <OLD_IC_SERVICE_NAME> - The Kubernetes Service name of your old Ingress Controller (e.g., ingress-nginx-controller). Find it with kubectl get svc -n <CURRENT_IC_NAMESPACE> -l app.kubernetes.io/component=controller.
    • -n <CURRENT_IC_NAMESPACE> - The namespace where the service exists.
    • -p '{ "spec": { "selector": {...} } }' - JSON patch to update the selector.
    • The selector labels MUST match your new controller pod labels exactly.

    Expected outcome:

    service/<OLD_IC_SERVICE_NAME> patched
    

    This confirms that the service selector was updated successfully and traffic will now be routed to the new controller pods.

  10. Verify IP preservation:

    NEW_LB_IP=$(kubectl get svc -n <CURRENT_IC_NAMESPACE> <OLD_IC_SERVICE_NAME> \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    echo "LoadBalancer IP after switch: $NEW_LB_IP"
    
    if [ "$OLD_LB_IP" == "$NEW_LB_IP" ]; then
      echo "SUCCESS: IP address preserved - $NEW_LB_IP"
    else
      echo "CRITICAL: IP address changed from $OLD_LB_IP to $NEW_LB_IP - INITIATE ROLLBACK IMMEDIATELY"
      exit 1
    fi
    
  11. Monitor traffic on the new controller:

    # Watch logs on the new controller
    kubectl logs -n <CURRENT_IC_NAMESPACE> -l app.kubernetes.io/instance=wallarm-new -f
    
    # Check metrics (find the correct deployment name first)
    DEPLOYMENT_NAME=$(kubectl get deployment -n <CURRENT_IC_NAMESPACE> \
      -l app.kubernetes.io/instance=wallarm-new \
      -o jsonpath='{.items[0].metadata.name}')
    
    kubectl exec -n <CURRENT_IC_NAMESPACE> \
      deployment/$DEPLOYMENT_NAME \
      -c wallarm-wcli -- wcli metric
    
  12. After validation period (24–48 hours), scale down the old controller:

    # Scale to zero (keeps configuration for potential rollback)
    kubectl scale deployment -n <CURRENT_IC_NAMESPACE> \
      <OLD_CONTROLLER_DEPLOYMENT> --replicas=0
    
  13. After an additional 24 hours of stable traffic, delete the old controller:

    helm uninstall <OLD_RELEASE_NAME> -n <CURRENT_IC_NAMESPACE>
    

    Recommended cleanup

    After the migration has been stable for 30+ days, schedule a maintenance window to properly clean up the setup (see the steps below). This ensures the service is owned and managed by the new Helm chart. It also simplifies future upgrades and prevents operational confusion.

  14. Create a new LoadBalancer Service managed by the new Helm chart:

    The command below reuses the IP address captured in step 1 ($OLD_LB_IP).

    helm upgrade wallarm-ingress-new wallarm/wallarm-ingress \
      -n <CURRENT_IC_NAMESPACE> \
      --set controller.service.create=true \
      --set controller.service.loadBalancerIP="$OLD_LB_IP" \
      -f values-same-namespace.yaml
    
  15. Wait for the new load balancer to be assigned the same IP (cloud provider dependent; typically takes 1–5 minutes).

    # Monitor the service status
    kubectl get svc -n <CURRENT_IC_NAMESPACE> -w
    
  16. Verify the new service has the same IP, then delete the old service:

    # Verify new service has the correct IP
    NEW_SERVICE_IP=$(kubectl get svc -n <CURRENT_IC_NAMESPACE> wallarm-ingress-new-controller \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    
    if [ "$NEW_SERVICE_IP" == "$OLD_LB_IP" ]; then
      echo "SUCCESS: New service has correct IP - safe to delete old service"
      kubectl delete svc <OLD_IC_SERVICE_NAME> -n <CURRENT_IC_NAMESPACE>
    else
      echo "WARNING: IP mismatch. Review before deleting old service."
    fi
    
  17. Verify the configuration:

    kubectl get svc -n <CURRENT_IC_NAMESPACE>
    

Strategy D - direct replacement

This method removes the old controller and deploys the new one in its place.

Migration steps:

Recommendation

For major cloud providers, we strongly recommend reserving the load balancer IP before migration to prevent IP changes. See steps 1 and 2 in the procedure below.

  1. Allocate or reserve a static public IP, depending on your cloud provider:

    # AWS: allocate an Elastic IP
    EIP_ALLOC=$(aws ec2 allocate-address --domain vpc --query 'AllocationId' --output text)
    EIP=$(aws ec2 describe-addresses --allocation-ids $EIP_ALLOC --query 'Addresses[0].PublicIp' --output text)
    echo "Reserved EIP: $EIP (Allocation: $EIP_ALLOC)"
    
    # GCP: reserve a regional static address
    gcloud compute addresses create wallarm-ingress-ip \
      --region <your-region>
    
    STATIC_IP=$(gcloud compute addresses describe wallarm-ingress-ip \
      --region <your-region> --format="value(address)")
    echo "Reserved IP: $STATIC_IP"
    
    # Azure: create a standard static public IP
    az network public-ip create \
      --resource-group <resource-group> \
      --name wallarm-ingress-ip \
      --sku Standard \
      --allocation-method Static
    
    STATIC_IP=$(az network public-ip show \
      --resource-group <resource-group> \
      --name wallarm-ingress-ip \
      --query ipAddress \
      --output tsv)
    echo "Reserved IP: $STATIC_IP"
    
  2. Deploy the new controller with the reserved IP:

    # In your values.yaml (AWS):
    controller:
      service:
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
          service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "<EIP_ALLOC>"
    
    # In your values.yaml (GCP):
    controller:
      service:
        loadBalancerIP: "<STATIC_IP>"
        annotations:
          cloud.google.com/load-balancer-type: "External"
    
    # In your values.yaml (Azure):
    controller:
      service:
        loadBalancerIP: "<STATIC_IP>"
        annotations:
          service.beta.kubernetes.io/azure-load-balancer-resource-group: "<resource-group>"
    
  3. Back up the current configuration:

    # Export all Ingress resources
    kubectl get ingress --all-namespaces -o yaml > ingress-backup.yaml
    
    # Export current controller Helm values
    helm get values ingress-nginx -n ingress-nginx > old-values-backup.yaml
    
  4. Prepare converted Ingress resources:

    • Update all IngressClass annotations as described in Step 4.
    • Update annotation prefixes.
    • Save the updated resources to new-ingresses.yaml.
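The class and prefix rewrites above can be scripted against the backup exported in step 3. A minimal sketch, assuming the old class is named `nginx`, the new class `new-nginx`, and that annotations move to the `nginx.org/` prefix; substitute the values you established in Part 1:

```shell
# Sketch only: bulk-convert exported Ingress manifests with sed.
# "nginx", "new-nginx", and the nginx.org/ prefix are assumptions -
# use the class name and annotation prefix from Part 1 of this guide.
convert_ingresses() {
  sed -e 's|ingressClassName: nginx$|ingressClassName: new-nginx|' \
      -e 's|nginx\.ingress\.kubernetes\.io/|nginx.org/|g' "$@"
}
# Usage: convert_ingresses ingress-backup.yaml > new-ingresses.yaml
```

Review the resulting file manually before applying it: not every community annotation has a one-to-one equivalent, so a pure text substitution is a starting point, not a complete conversion.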
  5. Schedule a maintenance window and notify users about the expected downtime.

  6. Delete the old controller:

    # Downtime begins here
    helm uninstall ingress-nginx -n ingress-nginx
    
    # Verify resources are removed
    kubectl get pods -n ingress-nginx
    
  7. Install the new controller immediately:

    helm install wallarm-ingress wallarm/wallarm-ingress \
      --version 7.0.0 \
      -n wallarm-ingress \
      --create-namespace \
      -f new-values.yaml
    
    # Wait for pods to become ready
    kubectl wait --for=condition=ready pod \
      -l app.kubernetes.io/name=nginx-ingress \
      -n wallarm-ingress \
      --timeout=300s
    
  8. Apply the converted Ingress resources (the validated manifests from Part 1, saved as new-ingresses.yaml) to your production namespaces. This is the point where production Ingress resources actually change; downtime is already in effect and the new controller is running.

    kubectl apply -f new-ingresses.yaml
    

    If your Ingress resources span multiple namespaces, kubectl apply -f new-ingresses.yaml applies each manifest to the namespace recorded in its metadata; pass -n <namespace> only for manifests that do not set one.

  9. Verify that services are accessible:

    NEW_LB_IP=$(kubectl get svc -n wallarm-ingress wallarm-ingress-controller \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    
    if [ -z "$NEW_LB_IP" ]; then
      echo "ERROR: LoadBalancer IP not assigned yet. Wait and retry."
      exit 1
    fi
    
    echo "New LoadBalancer IP: $NEW_LB_IP"
    
    # Test each domain
    curl -H "Host: app1.example.com" http://$NEW_LB_IP/
    curl -H "Host: app2.example.com" http://$NEW_LB_IP/
    
  10. Update DNS records to point to the new load balancer IP (if not pre-reserved).
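A quick way to confirm DNS propagation is to compare the resolved address with the new load balancer IP. A hedged sketch, assuming `dig` is available; `app1.example.com` is a placeholder hostname:

```shell
# Assumes dig is installed; app1.example.com is a placeholder hostname.
check_dns() {  # usage: check_dns <hostname> <expected_ip>
  resolved=$(dig +short "$1" | tail -n 1)
  if [ "$resolved" = "$2" ]; then
    echo "DNS updated"
  else
    echo "DNS still resolves to ${resolved:-nothing}; propagation may take time"
  fi
}
# check_dns app1.example.com "$NEW_LB_IP"
```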

  11. Test attack detection:

    # Test attack detection with a URL-encoded SQL injection payload
    # (raw spaces and quotes in the query string would break the HTTP request line)
    curl -H "Host: app1.example.com" "http://$NEW_LB_IP/?id=1%27%20OR%20%271%27%3D%271"
    
  12. Verify in Wallarm Console that attacks are detected.

Removing the old controller

After the migration is complete and traffic is fully served by the new controller, remove the old Ingress Controller to avoid resource waste and configuration conflicts.

We recommend keeping the new controller running stably for at least 24-48 hours before removing the old one.

Steps:

  1. Scale down the old controller first (allows quick rollback if needed):

    # Find the old controller deployment
    kubectl get deployments -n <OLD_IC_NAMESPACE> | grep ingress
    
    # Scale to zero
    kubectl scale deployment <OLD_CONTROLLER_DEPLOYMENT> -n <OLD_IC_NAMESPACE> --replicas=0
    
  2. Monitor for 24 hours to ensure nothing breaks.
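A small helper like the following can be run periodically during the monitoring window. This is a hypothetical sketch (the `check_hosts` function and hostnames are not part of the product); it prints the HTTP status each hostname returns through the new load balancer:

```shell
# Hypothetical spot check for the monitoring window.
check_hosts() {  # usage: check_hosts <lb_ip> <host> [<host>...]
  ip=$1; shift
  for h in "$@"; do
    # Print only the HTTP status code for each hostname
    code=$(curl -s -o /dev/null -w '%{http_code}' -H "Host: $h" "http://$ip/")
    echo "$h -> HTTP $code"
  done
}
# check_hosts "$NEW_LB_IP" app1.example.com app2.example.com
```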

  3. Uninstall the old Helm release:

    # Find the old release name
    helm list -A | grep ingress
    
    # Uninstall
    helm uninstall <OLD_RELEASE_NAME> -n <OLD_IC_NAMESPACE>
    
  4. Clean up orphaned resources (if any):

    # Check for leftover resources
    kubectl get all -n <OLD_IC_NAMESPACE> -l app.kubernetes.io/name=ingress-nginx
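The namespaced check above does not catch cluster-scoped leftovers. A hedged sketch, assuming the community chart's default resource names (an `nginx` IngressClass and an `ingress-nginx-admission` webhook); verify ownership before deleting anything:

```shell
# List cluster-scoped resources the old chart may have left behind.
# Resource names below are the community chart's defaults and may differ.
kubectl get ingressclass 2>/dev/null | grep -i nginx || true
kubectl get validatingwebhookconfigurations 2>/dev/null | grep -i ingress-nginx || true
# Delete only after confirming these belong to the old controller, e.g.:
# kubectl delete validatingwebhookconfiguration ingress-nginx-admission
```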
    

Getting help with migration

If you encounter issues during migration, contact Wallarm support.

To speed up troubleshooting, collect and share the following data:

  1. Your architecture overview and the chosen migration strategy.

  2. Helm release info and values (remove tokens before sharing):

    helm list -A | grep wallarm
    helm get values <RELEASE_NAME> -n <NAMESPACE> > wallarm-values.yaml
    
  3. Pod status:

    kubectl get pods -n <NAMESPACE> -l app.kubernetes.io/name=wallarm-ingress -o wide
    
  4. Controller logs:

    kubectl logs -n <NAMESPACE> -l app.kubernetes.io/component=controller --all-containers --tail=200 > controller-logs.txt
    
  5. Postanalytics logs:

    kubectl logs -n <NAMESPACE> -l app.kubernetes.io/component=wallarm-postanalytics --all-containers --tail=200 > postanalytics-logs.txt
    
  6. NGINX configuration dump:

    kubectl exec -n <NAMESPACE> $(kubectl get pod -n <NAMESPACE> -l app.kubernetes.io/component=controller -o name | head -1) -- nginx -T > nginx-config.txt
    
  7. Kubernetes events:

    kubectl get events -n <NAMESPACE> --sort-by='.lastTimestamp' > k8s-events.txt
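The individual files collected above can be bundled into a single archive for the support ticket. A minimal sketch, assuming the file names produced by the commands in this section:

```shell
# Archive whichever diagnostic files exist in the current directory.
# File names match the commands above; adjust if you used different ones.
files="wallarm-values.yaml controller-logs.txt postanalytics-logs.txt nginx-config.txt k8s-events.txt"
existing=$(ls $files 2>/dev/null)
if [ -n "$existing" ]; then
  tar czf wallarm-support-bundle.tar.gz $existing
  echo "Created wallarm-support-bundle.tar.gz"
else
  echo "No diagnostic files found in the current directory"
fi
```

Remember to strip API tokens and other secrets from wallarm-values.yaml before attaching the archive.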