Migrating From the Community‑Based to F5‑Based Wallarm Ingress Controller¶
This topic explains why and how to migrate from the Wallarm Ingress Controller based on the Community Ingress NGINX to the new controller based on F5 NGINX Ingress Controller.
Why the migration is required¶
Previously, Wallarm provided an Ingress Controller based on the Community Ingress NGINX.
In November 2025, the Kubernetes community announced the retirement of this project due to growing maintenance challenges and unresolved technical issues.
Wallarm will fully support this controller (including new feature releases) until March 2026. After that date, the controller will remain functional but will no longer receive updates, bug fixes, or security patches.
Continuing to use it after March 2026 may expose your environment to unresolved defects and security vulnerabilities.
To ensure ongoing support and security, we strongly recommend migrating to a supported deployment option, such as the Wallarm Ingress Controller based on the F5 NGINX Ingress Controller. The sections below describe the migration steps and their benefits.
About the new Ingress Controller¶
The new Wallarm Ingress Controller is based on the F5 NGINX Ingress Controller and is the recommended replacement for the Community NGINX-based deployment.
It provides long-term stability, vendor-backed support, regular updates and security patches, and advanced traffic management.
For a detailed overview of the changes and new features, see the What's New guide.
NGINX Plus is not supported
The Wallarm Ingress Controller uses the open-source edition of the F5 NGINX Ingress Controller. NGINX Plus is not included and is not supported.
Choosing your migration strategy¶
You can migrate to the new Wallarm Ingress Controller using one of four strategies. The appropriate option depends on your infrastructure, IP requirements, and tolerance for downtime.
Review the summary table below to determine which approach best fits your environment. Detailed descriptions of each strategy follow.
| Strategy | Downtime | IP changes | Complexity | Best for | Est. time |
|---|---|---|---|---|---|
| Load balancer | None | No | High | Environments with an external load balancer | 4–8 hours (includes staged rollout and monitoring) |
| DNS switch | None (DNS propagation applies) | Yes | Low | Environments where IP changes are acceptable | 3–4 hours plus DNS propagation time (depends on your TTL setting) |
| Selector swap | None | No | Medium | Production environments with strict IP requirements | 4–6 hours |
| Direct replacement | 5–15 minutes | Yes | Low | Development and staging environments | 2–3 hours (including the downtime window) |
Recommendation
If unsure, use selector swap for production environments and direct replacement for development or staging.
Migration - part 1 (strategy independent)¶
Purpose: Prepare the new Ingress Controller and validate your converted Ingress configuration without changing production. You will deploy the new controller alongside the existing one, convert Ingress manifests on copies, and test them in a separate namespace. Part 1 is the same for every migration strategy.
Requirements¶
Before starting the migration, ensure the following requirements are met:
- Kubernetes platform version 1.28–1.35
- Helm version 3.10+
- Ability to create, modify, and delete resources in the target Kubernetes namespace
- Compatibility of your services with the Wallarm Ingress Controller based on the F5 NGINX Ingress Controller version 5.4.0
- Access to the account with the Administrator role in Wallarm Console for the US Cloud or EU Cloud
- Access to `https://us1.api.wallarm.com` for working with US Wallarm Cloud or to `https://api.wallarm.com` for working with EU Wallarm Cloud
- Access to `https://charts.wallarm.com` to add the Wallarm Helm charts. Ensure the access is not blocked by a firewall
- Access to the Wallarm repositories on Docker Hub `https://hub.docker.com/r/wallarm`. Make sure the access is not blocked by a firewall
- Access to the IP addresses and their corresponding hostnames (if any) listed below. This is needed for downloading updates to attack detection rules and API specifications, as well as retrieving precise IPs for your allowlisted, denylisted, or graylisted countries, regions, or data centers
Depending on the chosen migration strategy, additional access may be required:
- Load balancer access (required for strategy A) – Access to your external load balancer configuration.
- DNS management access (required for strategies B and D) – Ability to create/update A/CNAME records.
Step 0. Collect current Ingress deployment details and validate environment¶
Before starting the migration, gather the following information from your existing Ingress Controller deployment and complete basic environment validations.
- Gather deployment information and save it – you will need it throughout the migration:

    ```bash
    # 1. Identify the namespace of the current Ingress Controller
    kubectl get pods --all-namespaces | grep ingress
    # Note the namespace name (usually 'ingress-nginx')

    # 2. Record the current LoadBalancer external IP
    kubectl get svc -n <ingress-namespace> -o wide
    # Note the value in the EXTERNAL-IP column

    # 3. List all domains and hostnames handled by Ingress
    kubectl get ingress --all-namespaces -o jsonpath='{.items[*].spec.rules[*].host}' | tr ' ' '\n' | sort -u
    # Save this list

    # 4. Identify the Wallarm API endpoint in use
    kubectl get configmap -n <ingress-namespace> -o yaml | grep -i wallarm
    # Typical values: us1.api.wallarm.com (US Cloud) or api.wallarm.com (EU Cloud)

    # 5. Determine the current Helm release name
    helm list -A | grep ingress
    # Note the release name (usually 'ingress-nginx')
    ```

- Perform pre-flight validations.
Complete these checks to verify the cluster and environment are ready for migration:
- Back up all Ingress resources:

    ```bash
    kubectl get ingress --all-namespaces -o yaml > backup-ingresses-$(date +%Y%m%d).yaml
    echo "Backup saved to: backup-ingresses-$(date +%Y%m%d).yaml"
    ```

- Export current Helm configuration:

    ```bash
    helm list -A | grep ingress  # Find your release name
    helm get values <release-name> -n <namespace> > backup-helm-values-$(date +%Y%m%d).yaml
    ```

- Document the current load balancer IP (critical for rollback):

    ```bash
    kubectl get svc -n <ingress-namespace> -o jsonpath='{.items[?(@.spec.type=="LoadBalancer")].status.loadBalancer.ingress[0].ip}'
    ```

- Check the DNS TTL of your domains:

    ```bash
    # Check current TTL
    dig your-domain.com | grep -A 1 "ANSWER SECTION"
    # Look for the number before IN A (that's your TTL in seconds)

    # Recommended: Set TTL to 300 seconds (5 minutes) before migration
    # This allows faster DNS propagation during the migration
    # Update in your DNS provider (Route53, Cloudflare, etc.)
    # After migration is stable (48h+), you can increase TTL back to normal (3600 or higher)
    ```

- Identify a maintenance window:
    - Production: prefer a low-traffic period (e.g., weekends or off-hours)
    - Development and staging environments: flexible; anytime is acceptable
- Notify stakeholders about the following:
- Migration schedule
- Expected duration
- Potential risks
- Rollback plan
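The backup files created during the pre-flight checks are only useful if the exports actually captured data. As a small sketch (the `require_nonempty_backup` helper below is hypothetical, not part of any Wallarm tooling), you can guard the rest of the migration against an empty export:

```shell
#!/bin/sh
# Hypothetical guard: abort unless a backup file exists and is non-empty.
require_nonempty_backup() {
  f="$1"
  if [ ! -s "$f" ]; then
    echo "ERROR: backup file '$f' is missing or empty - do not proceed" >&2
    return 1
  fi
  echo "OK: $f ($(wc -c < "$f") bytes)"
}

# Example usage after the export commands above:
# require_nonempty_backup "backup-ingresses-$(date +%Y%m%d).yaml" || exit 1
```

Running the check right after each export keeps a silent `kubectl` failure from leaving you without a rollback artifact.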
Step 1. Review the new Ingress Controller documentation¶
- Read the comparison guide to understand the differences between the previous and new Ingress Controller implementations.
- Read the new Ingress Controller deployment guide and the configuration parameters.
Key configuration areas include:
- Wallarm API credentials (`config.wallarm.api.host`, `config.wallarm.api.token`)
- API Firewall configuration (optional)
- Resource limits and scaling
- Metrics and monitoring endpoints
Step 2. Deploy the new Controller¶
- Deploy the new Ingress Controller in your cluster using the provided `values.yaml` file:

    ```bash
    # Add the Wallarm Helm repository
    helm repo add wallarm https://charts.wallarm.com/
    helm repo update

    # Deploy the new Ingress Controller
    helm install wallarm-ingress-new wallarm/wallarm-ingress \
      --version 7.0.0 \
      -n wallarm-ingress-new \
      --create-namespace \
      -f values.yaml
    ```

    IngressClass name

    Use a different `IngressClass` name (e.g., `nginx-new`) to run the new controller alongside the old one during migration.

    An example of the `values.yaml` file with the minimum configuration is below. See more configuration parameters. `<NODE_TOKEN>` is the API token generated for Wallarm Node deployment. You can reuse the existing API token with the `Node deployment/Deployment` usage type from your current NGINX Ingress Controller deployment or generate a new one.
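As a rough sketch of such a minimal `values.yaml` (this assumes the token-based setup shown later in Strategy C; the exact key layout may differ between chart versions, so check the chart's parameter reference):

```yaml
ingressClass:
  name: nginx-new                 # run alongside the old controller
config:
  wallarm:
    enabled: true
    api:
      host: "api.wallarm.com"     # EU Cloud; use us1.api.wallarm.com for US
      token: "<NODE_TOKEN>"
```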
- Verify the Ingress Controller deployment in Kubernetes:

    ```bash
    # Check controller pods
    kubectl get pods -n wallarm-ingress-new

    # Check Wallarm WCLI logs for cloud connectivity and errors
    kubectl logs -n wallarm-ingress-new -l app.kubernetes.io/component=controller \
      -c wallarm-wcli --tail=50 | grep -i "sync\|connect\|error"

    # Check Postanalytics logs
    kubectl logs -n wallarm-ingress-new -l app.kubernetes.io/component=wallarm-postanalytics --tail=50
    ```

- Verify the new Ingress Controller in the Wallarm Console.

    To do so, go to Wallarm Console → Settings → Nodes and check that the new Ingress Controller node appears. It should show up within 2–3 minutes of deployment.
Step 3. Prepare your Ingress resources¶
Collect all Ingress resources that use the old Ingress Controller:
- Simple method (no jq required):

    ```bash
    # List all Ingress resources across all namespaces
    kubectl get ingress --all-namespaces

    # Export all Ingress resources to a backup file (for reference and rollback purposes)
    kubectl get ingress --all-namespaces -o yaml > old-ingresses-backup.yaml
    echo "Backup saved to: old-ingresses-backup.yaml"

    # Count the total number of Ingress resources
    kubectl get ingress --all-namespaces --no-headers | wc -l
    ```

- Advanced method (requires jq – a JSON processor):

    ```bash
    # Filter only Ingress resources using the old controller
    # Breakdown:
    #   kubectl get ingress --all-namespaces -o json → Get all Ingress as JSON
    #   jq '.items[]'                                → Loop through each Ingress
    #   select(...)                                  → Filter by IngressClass
    kubectl get ingress --all-namespaces -o json | jq '.items[] | select(
      .metadata.annotations["kubernetes.io/ingress.class"] == "nginx"
      or .spec.ingressClassName == "nginx"
      or (.metadata.annotations["kubernetes.io/ingress.class"] // "" | length == 0)
    )' > old-ingresses.json
    ```
Default Ingress Controller
Ingress resources without an explicit IngressClass may be using the default Ingress Controller. Verify which controller is set as default: `kubectl get ingressclass -o yaml`.

Use the exported files (e.g., `old-ingresses-backup.yaml`) as the source for creating working copies to convert in Step 4. Do not modify the live Ingress resources in the cluster until you apply validated changes in Part 2.
Step 4. Convert annotations¶
Work with copies, not production
Do not modify existing production Ingress resources directly. Changing annotations or the IngressClass will immediately change which controller serves them, causing traffic disruption. Instead, work with copies of your Ingress manifests: export or copy them, apply the conversions below to the copies, and test in a separate namespace (Step 5). In Part 2, you will apply these converted copies as new Ingress resources alongside the originals (for strategies A, B) or replace the originals (for strategies C, D).
Update your copied Ingress manifests to ensure compatibility with the new Ingress Controller, as follows.
- Change the `IngressClass` to match the new controller. Also rename the Ingress resource (e.g., add a `-new` suffix) so it can coexist with the original during migration (required for strategies A and B).

- Update controller-specific annotations by replacing the old NGINX annotation prefix with the new one.

- Wallarm-related annotations (e.g., `wallarm-mode`, `wallarm-application`) keep their names and continue to work with the new controller. Only the prefix changes from `nginx.ingress.kubernetes.io/` to `nginx.org/`.
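If you have many copied manifests, the prefix and class changes can be scripted. Below is a sketch (the `convert_manifest` helper is illustrative; the `sed` patterns assume the standard `nginx.ingress.kubernetes.io/` prefix and an `ingressClassName: nginx` field — extend them if you also use the legacy `kubernetes.io/ingress.class` annotation). Always review the resulting diff before applying anything:

```shell
#!/bin/sh
# Rewrite a copied Ingress manifest for the new controller:
#  - annotation prefix: nginx.ingress.kubernetes.io/  ->  nginx.org/
#  - IngressClass:      ingressClassName: nginx       ->  ingressClassName: nginx-new
convert_manifest() {
  in="$1"
  out="$2"
  sed -e 's|nginx\.ingress\.kubernetes\.io/|nginx.org/|g' \
      -e 's|ingressClassName: nginx$|ingressClassName: nginx-new|' \
      "$in" > "$out"
}

# Example: convert_manifest app-ingress.yaml app-ingress-new.yaml
```

Comparing the two files with `diff app-ingress.yaml app-ingress-new.yaml` is a quick sanity check that only the intended lines changed.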
Step 5. Test converted Ingress resources¶
Before migrating production traffic, verify your converted Ingress resources in a separate test namespace. Production Ingress resources remain unchanged at this stage. You are only validating the converted copies.
Example values
The domain names (e.g. test.example.com), resource names (e.g. test-ingress), and command outputs shown in Step 5 are for illustration only. Use the host defined in your own test Ingress manifest (the value under spec.rules[].host) in the verification commands, and expect your actual resource names and outputs to differ.
- Apply the manifest file containing the converted Ingress resource(s) from Step 4 — the test version with the new `IngressClass` and `nginx.org/*` annotations. You can use any filename (e.g., `test-ingress-new.yaml`); the name is for your reference only.

    If this step was successful, you will see output similar to `ingress.networking.k8s.io/<name> created`, or `configured` if you are updating an existing Ingress. Your actual resource name and namespace will match your manifest.

- Check the NGINX configuration generated by the new controller. Use the host from your test Ingress in the `grep` pattern (the example below uses the placeholder host `test.example.com`):

    ```bash
    kubectl exec -n wallarm-ingress-new \
      $(kubectl get pod -n wallarm-ingress-new -l app.kubernetes.io/component=controller -o name | head -1) \
      -- nginx -T | grep -A 20 "server_name test.example.com"
    ```

    The presence of your host in the `server_name` directive in the output confirms the domain is configured correctly.

- Get the new load balancer IP:

- Test HTTP or HTTPS connectivity. Use the host from your test Ingress in the `Host` header (the example below uses a placeholder):

    ```bash
    # HTTP
    curl -H "Host: test.example.com" http://$NEW_LB_IP/

    # HTTPS
    curl -H "Host: test.example.com" https://$NEW_LB_IP/
    ```

    Expected outcome:

    - The HTTP response status is 200 OK.
    - Your application responds correctly, e.g.:
        - The homepage HTML is returned as expected.
        - API endpoints return the expected JSON or other responses.

- Test Wallarm protection.

    Simulate a malicious request using the host from your test Ingress in the `Host` header (the example uses a placeholder). Wait 2–3 minutes and verify in Wallarm Console → Events → Attacks that the request is detected/blocked.
Congratulations! You have completed the strategy-independent portion of the migration. So far you have worked only with copies of Ingress manifests and tested them in a test namespace. Production Ingress resources have not been changed yet.
Migration - part 2 (strategy dependent)¶
Purpose: Shift traffic from the old controller to the new one and apply the validated Ingress changes to your production Ingress resources. You choose one of four strategies (load balancer, DNS switch, selector swap, or direct replacement) depending on your environment.
How IngressClass controls which controller serves your traffic
A Kubernetes Ingress Controller only processes Ingress resources that match its IngressClass. In each strategy below, you will update your production Ingress resources to use the new controller's IngressClass (nginx-new). This is what tells the new controller to pick them up. The old controller stops serving an Ingress as soon as its IngressClass no longer matches. The infrastructure-level switch (DNS, load balancer, or service selector) determines which controller receives network traffic, while the IngressClass update determines which controller owns the routing configuration. Both must happen for the migration to be complete.
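For illustration, a minimal Ingress bound to the new controller could look like this (hypothetical resource names and host — yours will differ; the `nginx.org/wallarm-mode` annotation follows the prefix rule from Step 4):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-new                      # renamed copy, coexists with the original
  annotations:
    nginx.org/wallarm-mode: block    # Wallarm annotation with the new prefix
spec:
  ingressClassName: nginx-new        # served by the new controller only
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
```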
Strategy A - load balancer (traffic splitting)¶
This method uses an external load balancer (F5, HAProxy, cloud ALB, etc.) to gradually shift traffic from the old controller to the new one.
Migration steps:
- Apply converted Ingress resources as new resources alongside the existing ones. Do not modify or delete the original Ingress resources — the old controller must continue serving traffic while you shift it gradually. Apply the validated manifests from Part 1 with a new `IngressClass` (`nginx-new`) and different resource names (e.g., add a `-new` suffix).

    At this point, both controllers are serving traffic: the old controller handles Ingress resources with `IngressClass: nginx`, and the new controller handles those with `IngressClass: nginx-new`.

- Configure your external load balancer to split traffic:

- Gradually adjust traffic weights:

- Monitor during each phase:
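As an illustration of the weight-based split (assuming HAProxy as the external load balancer; backend names and IPs are hypothetical), a 90/10 split between the old and new controllers could look like this, with the weights shifted toward the new controller at each phase:

```
backend ingress_controllers
    balance roundrobin
    # 90% of traffic to the old controller, 10% to the new one
    server old-ingress 203.0.113.10:443 weight 90 check
    server new-ingress 203.0.113.20:443 weight 10 check
```

Equivalent weighted routing is available in most cloud load balancers (e.g., weighted target groups) — the principle is the same.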
Strategy B - DNS switch¶
This method deploys the new controller with a new load balancer IP and updates DNS to point to it.
Migration steps:
- Get the new load balancer IP:

- Apply converted Ingress resources as new resources alongside the existing ones. Do not modify or delete the original Ingress resources yet — the old controller must continue serving traffic via the old DNS until propagation completes. Apply the validated manifests from Part 1 with a new `IngressClass` (`nginx-new`) and different resource names.

    Both controllers are now serving traffic: old via the old IP, new via the new IP.
- Test the new setup directly against the new IP:

- Verify in Wallarm Console → Events → Attacks that the attack was detected successfully.

- Update DNS records to point to the new IP:
    ```bash
    # Example using AWS Route53:
    aws route53 change-resource-record-sets \
      --hosted-zone-id <zone-id> \
      --change-batch "{
        \"Changes\": [{
          \"Action\": \"UPSERT\",
          \"ResourceRecordSet\": {
            \"Name\": \"your-domain.com\",
            \"Type\": \"A\",
            \"TTL\": 300,
            \"ResourceRecords\": [{\"Value\": \"$NEW_LB_IP\"}]
          }
        }]
      }"
    ```
Monitor DNS propagation and traffic:
-
Wait for DNS TTL to expire while monitoring the old controller for declining traffic.
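To avoid polling DNS by hand, the propagation wait can be scripted. A sketch (the `wait_for_dns` helper is hypothetical and assumes `dig` is installed; the retry count and interval are arbitrary defaults):

```shell
#!/bin/sh
# Hypothetical helper: poll DNS until the domain resolves to the expected IP.
wait_for_dns() {
  domain="$1"; expected="$2"
  tries="${3:-60}"; interval="${4:-10}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    current=$(dig +short "$domain" | head -1)
    if [ "$current" = "$expected" ]; then
      echo "DNS propagated: $domain -> $current"
      return 0
    fi
    i=$((i + 1))
    sleep "$interval"
  done
  echo "Timed out waiting for $domain to resolve to $expected" >&2
  return 1
}

# Example: wait_for_dns your-domain.com "$NEW_LB_IP" 60 10
```

Note that resolvers cache per their own TTL, so individual clients may switch over at different times even after your authoritative record has changed.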
Strategy C - selector swap¶
This method preserves the existing load balancer IP by switching the Kubernetes service selector from the old controller pods to the new ones.
Recommended timing
Perform the migration during a low-traffic window (e.g., Saturday morning).
Migration steps:
- Get the current load balancer IP:

- Update your values file (e.g., `values-same-namespace.yaml`) with the following configuration for the new Ingress Controller:

    ```yaml
    controller:
      podLabels:
        app.kubernetes.io/name: nginx-ingress-new
        app.kubernetes.io/instance: wallarm-new
        app.kubernetes.io/component: controller-new
      service:
        create: false  # Critical: Do not create a new LoadBalancer Service
    ingressClass:
      name: nginx-new
    config:
      wallarm:
        enabled: true
        api:
          host: "api.wallarm.com"  # or us1.api.wallarm.com
          token: "<NODE_TOKEN>"
    ```
- Deploy the new Ingress Controller in the same namespace as the old controller using the updated values file:

    ```bash
    helm install wallarm-ingress-new wallarm/wallarm-ingress \
      --version 7.0.0 \
      -n <CURRENT_IC_NAMESPACE> \
      -f values-same-namespace.yaml
    ```

    Where `<CURRENT_IC_NAMESPACE>` is the namespace where your existing Ingress Controller is deployed (e.g., `ingress-nginx`).
- Verify no new `LoadBalancer` service was created:

- Verify new pods are running:

- Test new controller pods directly:

    ```bash
    NEW_POD=$(kubectl get pod -n <CURRENT_IC_NAMESPACE> \
      -l app.kubernetes.io/instance=wallarm-new \
      -o jsonpath='{.items[0].metadata.name}')
    kubectl port-forward -n <CURRENT_IC_NAMESPACE> $NEW_POD 8080:80 &
    PF_PID=$!
    sleep 2  # Wait for port-forward to establish
    curl -H "Host: your-domain.com" http://localhost:8080/

    # Stop port-forward
    kill $PF_PID
    ```
- Update your production Ingress resources to use the new controller (`IngressClass` and `nginx.org/*` annotations as validated in Part 1). This ensures the new controller will serve them once the service selector is switched. For each production Ingress (or in bulk per namespace):

    ```bash
    kubectl patch ingress <ingress-name> -n <namespace> \
      -p '{"metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx-new"}}}'
    ```

    Apply the full set of annotation changes from your validated manifests if you use more than the `IngressClass` (e.g., `nginx.org/wallarm-mode`, `nginx.org/rewrites`). Repeat for all production Ingress resources that will be served by the new controller.

    Brief traffic disruption

    Between updating the `IngressClass` (this step) and switching the service selector (next steps), traffic arriving at the old controller pods will not match any Ingress configuration. Execute the next steps immediately after this one to minimize the gap to a few seconds. If zero downtime is critical, consider Strategy A with dual Ingress resources instead.
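Because the gap between the two patches determines the disruption window, you can run them back-to-back from a single script. A sketch (the `swap_to_new_controller` function is illustrative; it simply chains the two `kubectl patch` commands used in this strategy):

```shell
#!/bin/sh
# Illustrative helper: update the Ingress class and swap the service
# selector in immediate succession to keep the gap to a few seconds.
swap_to_new_controller() {
  ns="$1"; ingress="$2"; svc="$3"
  kubectl patch ingress "$ingress" -n "$ns" \
    -p '{"metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx-new"}}}' || return 1
  kubectl patch svc "$svc" -n "$ns" -p '{
    "spec": { "selector": {
      "app.kubernetes.io/name": "nginx-ingress-new",
      "app.kubernetes.io/instance": "wallarm-new",
      "app.kubernetes.io/component": "controller-new"
    } } }' || return 1
}

# Example: swap_to_new_controller ingress-nginx my-app ingress-nginx-controller
```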
- Check the labels of the new controller pods. You will need them later to update the service selector:

    ```bash
    kubectl get pods -n <CURRENT_IC_NAMESPACE> \
      -l app.kubernetes.io/instance=wallarm-new \
      --show-labels
    ```

    Example output:
- Update the `LoadBalancer` service to point to the new pods:

    ```bash
    kubectl patch svc <OLD_IC_SERVICE_NAME> -n <CURRENT_IC_NAMESPACE> -p '{
      "spec": {
        "selector": {
          "app.kubernetes.io/name": "nginx-ingress-new",
          "app.kubernetes.io/instance": "wallarm-new",
          "app.kubernetes.io/component": "controller-new"
        }
      }
    }'
    ```

    Where:

    - `kubectl patch` – Updates an existing resource without replacing it entirely.
    - `svc <OLD_IC_SERVICE_NAME>` – The Kubernetes Service name of your old Ingress Controller (e.g., `ingress-nginx-controller`). Find it with `kubectl get svc -n <CURRENT_IC_NAMESPACE> -l app.kubernetes.io/component=controller`.
    - `-n <CURRENT_IC_NAMESPACE>` – The namespace where the service exists.
    - `-p '{ "spec": { "selector": {...} } }'` – JSON patch to update the selector.
    - The selector labels MUST match your new controller pod labels exactly.
Expected outcome:
This confirms that the service selector was updated successfully and traffic will now be routed to the new controller pods.
- Verify IP preservation:

    ```bash
    NEW_LB_IP=$(kubectl get svc -n <CURRENT_IC_NAMESPACE> <OLD_IC_SERVICE_NAME> \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    echo "LoadBalancer IP after switch: $NEW_LB_IP"

    if [ "$OLD_LB_IP" == "$NEW_LB_IP" ]; then
      echo "SUCCESS: IP address preserved - $NEW_LB_IP"
    else
      echo "CRITICAL: IP address changed from $OLD_LB_IP to $NEW_LB_IP - INITIATE ROLLBACK IMMEDIATELY"
      exit 1
    fi
    ```
- Monitor traffic on the new controller:

    ```bash
    # Watch logs on the new controller
    kubectl logs -n <CURRENT_IC_NAMESPACE> -l app.kubernetes.io/instance=wallarm-new -f

    # Check metrics (find the correct deployment name first)
    DEPLOYMENT_NAME=$(kubectl get deployment -n <CURRENT_IC_NAMESPACE> \
      -l app.kubernetes.io/instance=wallarm-new \
      -o jsonpath='{.items[0].metadata.name}')
    kubectl exec -n <CURRENT_IC_NAMESPACE> \
      deployment/$DEPLOYMENT_NAME \
      -c wallarm-wcli -- wcli metric
    ```
- After a validation period (24–48 hours), scale down the old controller:

- After an additional 24 hours of stable traffic, delete the old controller:
Recommended cleanup
After the migration has been stable for 30+ days, schedule a maintenance window to properly clean up the setup (see the steps below). This ensures the service is owned and managed by the new Helm chart. It also simplifies future upgrades and prevents operational confusion.
- Create a new `LoadBalancer` Service managed by the new Helm chart:

    Replace `<YOUR_CURRENT_IP>` with the IP address from step 1 (`$OLD_LB_IP`).
- Wait for the new load balancer to be assigned the same IP (cloud-provider dependent; typically takes 1–5 minutes).

- Verify the new service has the same IP, then delete the old service:

    ```bash
    # Verify new service has the correct IP
    NEW_SERVICE_IP=$(kubectl get svc -n <CURRENT_IC_NAMESPACE> wallarm-ingress-new-controller \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

    if [ "$NEW_SERVICE_IP" == "$OLD_LB_IP" ]; then
      echo "SUCCESS: New service has correct IP - safe to delete old service"
      kubectl delete svc <OLD_IC_SERVICE_NAME> -n <CURRENT_IC_NAMESPACE>
    else
      echo "WARNING: IP mismatch. Review before deleting old service."
    fi
    ```
- Verify the configuration:
Strategy D - direct replacement¶
This method removes the old controller and deploys the new one in its place.
Migration steps:
Recommendation
For major cloud providers, we strongly recommend reserving the load balancer IP before migration to prevent IP changes. See steps 1 and 2 in the procedure below.
- Allocate or reserve a static public IP, depending on your cloud provider:

    ```bash
    # Example for Azure:
    az network public-ip create \
      --resource-group <resource-group> \
      --name wallarm-ingress-ip \
      --sku Standard \
      --allocation-method Static

    STATIC_IP=$(az network public-ip show \
      --resource-group <resource-group> \
      --name wallarm-ingress-ip \
      --query ipAddress \
      --output tsv)
    echo "Reserved IP: $STATIC_IP"
    ```
- Deploy the new controller with the reserved IP:

- Back up the current configuration:

- Prepare converted Ingress resources:

    - Update all `IngressClass` annotations as described in Step 4.
    - Update annotation prefixes.
    - Save the updated resources to `new-ingresses.yaml`.
- Schedule a maintenance window and notify users about the expected downtime.

- Delete the old controller:

- Install the new controller immediately:
- Apply the converted Ingress resources (the validated manifests from Part 1, saved as `new-ingresses.yaml`) to your production namespaces. This is the step where production Ingress resources are updated; downtime is already in effect and the new controller is running.

    If your Ingress resources span multiple namespaces, apply the corresponding manifest to each namespace (or use `kubectl apply -f new-ingresses.yaml -n <namespace>` as needed).
- Verify that services are accessible:

    ```bash
    NEW_LB_IP=$(kubectl get svc -n wallarm-ingress wallarm-ingress-controller \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

    if [ -z "$NEW_LB_IP" ]; then
      echo "ERROR: LoadBalancer IP not assigned yet. Wait and retry."
      exit 1
    fi
    echo "New LoadBalancer IP: $NEW_LB_IP"

    # Test each domain
    curl -H "Host: app1.example.com" http://$NEW_LB_IP/
    curl -H "Host: app2.example.com" http://$NEW_LB_IP/
    ```
- Update DNS records to point to the new load balancer IP (if not pre-reserved).

- Test attack detection:

- Verify in Wallarm Console that attacks are detected.
Removing the old controller¶
After the migration is complete and traffic is fully served by the new controller, remove the old Ingress Controller to avoid resource waste and configuration conflicts.
Let the new controller run stably for at least 24–48 hours before removing the old one.
Steps:
- Scale down the old controller first (allows quick rollback if needed):

- Monitor for 24 hours to ensure nothing breaks.

- Uninstall the old Helm release:

- Clean up orphaned resources (if any):
Getting help with migration¶
If you encounter issues during migration, contact Wallarm support.
To speed up troubleshooting, collect and share the following data:
- Your architecture overview and the chosen migration strategy.
- Helm release info and values (remove tokens before sharing):
- Pod status:
- Controller logs:
- Postanalytics logs:
- NGINX configuration dump:
- Kubernetes events: