Upgrading NGINX Ingress controller with integrated Wallarm modules

These instructions describe the steps to upgrade deployed Wallarm Ingress Controller 3.x to the new version with Wallarm node 4.0.

If you upgrade the node from version 3.4 or 3.2

If you upgrade the node from version 3.4 or 3.2, please note that the version of Community Ingress NGINX Controller the Wallarm Ingress controller is based on has been upgraded from 0.26.2 to 1.2.1.

Since the operation of Community Ingress NGINX Controller 1.2.1 has been significantly changed, its configuration has to be adjusted to these changes during the Wallarm Ingress controller upgrade.

These instructions list the Community Ingress NGINX Controller settings you will probably have to change. Nevertheless, please draw up an individual plan for the configuration migration based on the Community Ingress NGINX Controller release notes.

To upgrade node 2.18 or lower, please use the separate instructions.

Requirements

  • Access to the account with the Administrator role in Wallarm Console in the EU Cloud or US Cloud

  • Access to https://api.wallarm.com if working with EU Wallarm Cloud or to https://us1.api.wallarm.com if working with US Wallarm Cloud. Please ensure the access is not blocked by a firewall

Step 1: Update API port

Starting with version 4.0, the filtering node uploads data to the Cloud using the api.wallarm.com:443 (EU Cloud) and us1.api.wallarm.com:443 (US Cloud) API endpoints instead of api.wallarm.com:444 and us1.api.wallarm.com:444.

If your server with the deployed node has limited access to external resources and access is granted to each resource separately, synchronization between the filtering node and the Cloud will stop after the upgrade to version 4.0.

To restore the synchronization, change port 444 to 443 for each Wallarm API endpoint in your configuration.
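
A hypothetical illustration: if the allowed egress endpoints are tracked in a plain-text allowlist file (the file name and format below are assumptions, not part of Wallarm's tooling), the port can be switched with sed:

```shell
# Hypothetical allowlist with the pre-4.0 Wallarm API endpoints (port 444)
printf 'api.wallarm.com:444\nus1.api.wallarm.com:444\n' > /tmp/wallarm-egress.txt

# Switch the Wallarm API endpoints from port 444 to 443 (GNU sed in-place edit)
sed -i 's/\(api\.wallarm\.com\):444/\1:443/' /tmp/wallarm-egress.txt

cat /tmp/wallarm-egress.txt
# api.wallarm.com:443
# us1.api.wallarm.com:443
```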

Step 2: Update the Wallarm Helm chart repository

Add the Wallarm Helm repository containing all chart versions by using the commands below, and use this repository for further work with the Wallarm Ingress controller:

helm repo add wallarm https://charts.wallarm.com
helm repo update wallarm

Step 3: Update the values.yaml configuration

If migrating from Wallarm node 3.6 to 4.0

If you migrate from Wallarm node 3.6 to the latest version, skip this step as the Wallarm Ingress Controller is already based on the newer Community Ingress NGINX Controller version.

To migrate from Wallarm Ingress controller 3.4 or 3.2 to the latest version, update the following configuration specified in the values.yaml file:

  • Standard configuration of Community Ingress NGINX Controller

  • Wallarm module configuration

Standard configuration of Community Ingress NGINX Controller

  1. Check out the release notes on Community Ingress NGINX Controller 0.27.0 and higher and define the settings to be changed in the values.yaml file.

  2. Update the defined settings in the values.yaml file.

The following settings are likely to require changes:

  • Proper reporting of end user public IP address if requests are passed through a load balancer before being sent to the Wallarm Ingress controller.

    controller:
      config:
    -    use-forwarded-headers: "true"
    +    enable-real-ip: "true"
    +    forwarded-for-header: "X-Forwarded-For"
    
  • IngressClasses configuration. The version of used Kubernetes API has been upgraded in the new Ingress controller requiring IngressClasses to be configured via the .controller.ingressClass, .controller.ingressClassResource and .controller.watchIngressWithoutClass parameters.

    controller:
    +  ingressClass: waf-ingress
    +  ingressClassResource:
    +    name: waf-ingress
    +    default: true
    +  watchIngressWithoutClass: true
    
  • ConfigMap (.controller.config) parameter set, e.g.:

    controller:
      config:
    +   allow-backend-server-header: "false"
        enable-brotli: "true"
        gzip-level: "3"
        hide-headers: Server
        server-snippet: |
          proxy_request_buffering on;
          wallarm_enable_libdetection on;
    
  • Validation of Ingress syntax via "admission webhook" is now enabled by default.

    controller:
    +  admissionWebhooks:
    +    enabled: true
    

    Disabling the Ingress syntax validation

    It is recommended to disable the Ingress syntax validation only if it destabilizes the operation of Ingress objects.
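
    If you do decide to disable the validation, it can be switched off via the same chart parameter (a minimal sketch of the values.yaml fragment):

    ```yaml
    controller:
      admissionWebhooks:
        enabled: false
    ```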

  • Label format. If the values.yaml file sets pod affinity rules, change the label format in these rules, e.g.:

    controller:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
    -           - key: app
    +           - key: app.kubernetes.io/name
                  operator: In
                  values:
                  - waf-ingress
    -           - key: component
    +           - key: app.kubernetes.io/component
                  operator: In
                  values:
    -             - waf-ingress
    +             - controller
    -           - key: release
    +           - key: app.kubernetes.io/instance
                  operator: In
                  values:
                  - waf-ingress-ingress
              topologyKey: kubernetes.io/hostname
            weight: 100
    

Wallarm module configuration

Change the Wallarm module configuration set in the values.yaml file as follows:

  • Remove the explicit monitoring service configuration. In the new Wallarm Ingress controller version, the monitoring service is enabled by default and does not require any additional configuration.

    controller:
      wallarm:
        enabled: true
        tarantool:
          resources: {}
    -   metrics:
    -     enabled: true
    -     service:
    -       annotations: {}
    
  • If the page /usr/share/nginx/html/wallarm_blocked.html configured via ConfigMap is returned to blocked requests, adjust its configuration to the released changes.

    In the new node version, the Wallarm sample blocking page has an updated UI with no logo and no support email specified by default.

  • If you have customized the overlimit_res attack detection via the wallarm_process_time_limit and wallarm_process_time_limit_block NGINX directives, please transfer these settings to the rule and delete them from the values.yaml file.

Step 4: Transfer the overlimit_res attack detection configuration from directives to the rule

Starting from version 3.6, you can fine-tune the overlimit_res attack detection using the rule in Wallarm Console.

Earlier, the wallarm_process_time_limit and wallarm_process_time_limit_block NGINX directives were used for this purpose. With the new rule release, these directives are considered deprecated and will be removed in future releases.

If the overlimit_res attack detection settings are customized via the listed directives, it is recommended to transfer them to the rule as follows:

  1. Open Wallarm Console → Rules and proceed to the Fine-tune the overlimit_res attack detection rule setup.

  2. Configure the rule as done via the NGINX directives:

    • The rule condition should match the NGINX configuration block with the wallarm_process_time_limit and wallarm_process_time_limit_block directives specified.
    • The time limit for the node to process a single request (milliseconds): the value of wallarm_process_time_limit.
    • Request processing: the Stop processing option is recommended.

      Risk of running out of system memory

      A high time limit and/or continuation of request processing after the limit is exceeded can trigger memory exhaustion or long request processing.

    • Register the overlimit_res attack: the Register and display in the events option is recommended.

      If the wallarm_process_time_limit_block or process_time_limit_block value is off, choose the Do not create attack event option.

    • The rule does not have an explicit equivalent of the wallarm_process_time_limit_block directive. If the rule sets Register and display in the events, the node will either block or pass the overlimit_res attack depending on the node filtration mode:

      • In the monitoring mode, the node forwards the original request to the application address. The application is at risk of being exploited by attacks included in both the processed and unprocessed parts of the request.
      • In the safe blocking mode, the node blocks the request if it originates from a greylisted IP address. Otherwise, the node forwards the original request to the application address, and the application is at risk of being exploited by attacks included in both the processed and unprocessed parts of the request.
      • In the block mode, the node blocks the request.
  3. Delete the wallarm_process_time_limit and wallarm_process_time_limit_block NGINX directives from the values.yaml configuration file.

    If the overlimit_res attack detection is fine-tuned using both the directives and the rule, the node will process requests as the rule sets.
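
    For reference, if the directives were previously set through a server-snippet in values.yaml, their removal could look like the diff below (the snippet contents are illustrative, not taken from your configuration):

    ```
    controller:
      config:
        server-snippet: |
    -     wallarm_process_time_limit 2000;
    -     wallarm_process_time_limit_block attack;
          proxy_request_buffering on;
    ```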

Step 5: Check out all coming K8s manifest changes

To avoid unexpectedly changed Ingress controller behavior, check out all coming K8s manifest changes using Helm Diff Plugin. This plugin outputs the difference between the K8s manifests of the deployed Ingress controller version and of the new one.

To install and run the plugin:

  1. Install the plugin:

    helm plugin install https://github.com/databus23/helm-diff
    
  2. Run the plugin:

    helm diff upgrade <RELEASE_NAME> -n <NAMESPACE> wallarm/wallarm-ingress --version 4.0.4 -f <PATH_TO_VALUES>
    
    • <RELEASE_NAME>: the name of the release with the deployed Ingress controller
    • <NAMESPACE>: the namespace the Ingress controller is deployed to
    • <PATH_TO_VALUES>: the path to the values.yaml file defining the Ingress controller 4.0 settings
  3. Make sure that no changes can affect the stability of the running services and carefully examine the errors from stdout.

    If stdout is empty, make sure that the values.yaml file is valid.

If you migrate from Wallarm node 3.4 or 3.2 to the latest version, please note the changes of the following configuration:

  • Immutable fields, e.g. the Deployment and/or StatefulSet selectors.

  • Pod labels. The changes can result in the NetworkPolicy operation termination, e.g.:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    spec:
      egress:
      - to:
        - namespaceSelector:
            matchExpressions:
            - key: name
              operator: In
              values:
              - kube-system # ${NAMESPACE}
          podSelector:
            matchLabels: # RELEASE_NAME=waf-ingress
    -         app: waf-ingress
    +         app.kubernetes.io/component: "controller"
    +         app.kubernetes.io/instance: "waf-ingress"
    +         app.kubernetes.io/name: "waf-ingress"
    -         component: waf-ingress
    
  • Configuration of Prometheus with new labels, e.g.:

     - job_name: 'kubernetes-ingress'
       kubernetes_sd_configs:
       - role: pod
         namespaces:
           names:
             - kube-system # ${NAMESPACE}
       relabel_configs: # RELEASE_NAME=waf-ingress
         # Selectors
    -    - source_labels: [__meta_kubernetes_pod_label_app]
    +    - source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
           action: keep
           regex: waf-ingress
    -    - source_labels: [__meta_kubernetes_pod_label_release]
    +    - source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_instance]
           action: keep
           regex: waf-ingress
    -    - source_labels: [__meta_kubernetes_pod_label_component]
    +    - source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_component]
           action: keep
    -      regex: waf-ingress
    +      regex: controller
         - source_labels: [__meta_kubernetes_pod_container_port_number]
           action: keep
           regex: "10254|18080"
           # Replacers
         - action: replace
           target_label: __metrics_path__
           regex: /metrics
         - action: labelmap
           regex: __meta_kubernetes_pod_label_(.+)
         - source_labels: [__meta_kubernetes_namespace]
           action: replace
           target_label: kubernetes_namespace
         - source_labels: [__meta_kubernetes_pod_name]
           action: replace
           target_label: kubernetes_pod_name
         - source_labels: [__meta_kubernetes_pod_name]
           regex: (.*)
           action: replace
           target_label: instance
           replacement: "$1"
    
  • Analyze all other changes.

Step 6: Upgrade the Ingress controller

There are three ways of upgrading the Wallarm Ingress controller. Depending on whether there is a load balancer deployed to your environment, select the upgrade method:

  • Deployment of the temporary Ingress controller

  • Regular re-creation of the Ingress controller release

  • Ingress controller release re-creation without affecting the load balancer

Using the staging environment or minikube

If the Wallarm Ingress controller is deployed to your staging environment, it is recommended to upgrade it first. With all services operating correctly in the staging environment, you can proceed to the upgrade procedure in the production environment.

Otherwise, it is recommended to first deploy the Wallarm Ingress controller 4.0 with the updated configuration using minikube or another service. Make sure that all services operate as expected and then upgrade the Ingress controller in the production environment.

This approach helps to avoid downtime of the services in the production environment.

Method 1: Deployment of the temporary Ingress controller

This method enables you to deploy Ingress Controller 4.0 as an additional entity in your environment and switch the traffic to it gradually. It helps to avoid even temporary downtime of services and ensures safe migration.

  1. Copy the IngressClass configuration from the values.yaml file of the previous version to the values.yaml file for the Ingress controller 4.0.

    With this configuration, the Ingress controller will identify the Ingress objects but will not process their traffic.

  2. Deploy the Ingress controller 4.0:

    helm install <RELEASE_NAME> -n <NAMESPACE> wallarm/wallarm-ingress --version 4.0.4 -f <PATH_TO_VALUES>
    
    • <RELEASE_NAME>: the name for the Ingress controller release
    • <NAMESPACE>: the namespace to deploy the Ingress controller to
    • <PATH_TO_VALUES>: the path to the values.yaml file defining the Ingress controller 4.0 settings
  3. Make sure that all services operate correctly.

  4. Switch the load to the new Ingress controller gradually.

Method 2: Regular re-creation of the Ingress controller release

If the load balancer and Ingress controller are NOT described in the same Helm chart, you can simply re-create the Helm release. Re-creation takes several minutes, and the Ingress controller will be unavailable during this time.

If the Helm chart sets the configuration of a load balancer

If the Helm chart sets the configuration of a load balancer along with the Ingress controller, release re-creation can result in long load balancer downtime (depending on the cloud provider). The load balancer IP address can change after the upgrade unless a constant address is assigned.

Please analyze all possible risks if using this method.

To re-create the Ingress controller release:

  1. Delete the previous release:

    helm delete <RELEASE_NAME> -n <NAMESPACE>
    
    • <RELEASE_NAME>: the name of the release with the deployed Ingress controller

    • <NAMESPACE>: the namespace the Ingress controller is deployed to

    Please do not use the --wait option when executing the command since it can increase the upgrade time.

  2. Create a new release with Ingress controller 4.0:

    helm install <RELEASE_NAME> -n <NAMESPACE> wallarm/wallarm-ingress --version 4.0.4 -f <PATH_TO_VALUES>
    
    • <RELEASE_NAME>: the name for the Ingress controller release

    • <NAMESPACE>: the namespace to deploy the Ingress controller to

    • <PATH_TO_VALUES>: the path to the values.yaml file defining the Ingress controller 4.0 settings

If the Helm release is managed via Terraform, re-create it as follows:

  1. Set the wait = false option in the Terraform configuration to decrease the upgrade time:

    resource "helm_release" "release" {
      ...
    
    + wait = false
    
      ...
    }
    
  2. Delete the previous release:

    terraform taint helm_release.release
    
  3. Create the new release with the Ingress controller 4.0:

    terraform apply -target=helm_release.release
    

Method 3: Ingress controller release re-creation without affecting the load balancer

When using the load balancer configured by the cloud provider, it is recommended to upgrade the Ingress controller with this method because it does not affect the load balancer.

Release re-creation takes several minutes, and the Ingress controller will be unavailable during this time.

  1. Get objects to be deleted (except for the load balancer):

    helm get manifest <RELEASE_NAME> -n <NAMESPACE> | yq -r '. | select(.spec.type != "LoadBalancer") | .kind + "/" + .metadata.name' | tr 'A-Z' 'a-z' > objects-to-remove.txt
    

    To install the utility yq, please use the instructions.

    Objects to be deleted will be output to the objects-to-remove.txt file.
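
    The file contains one lowercase kind/name pair per line, for example (names are illustrative):

    ```
    deployment/waf-ingress-controller
    service/waf-ingress-controller-metrics
    configmap/waf-ingress-controller
    ```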

  2. Delete the listed objects and re-create the release:

    cat objects-to-remove.txt | xargs kubectl delete --wait=false -n <NAMESPACE> && \
    helm upgrade <RELEASE_NAME> -n <NAMESPACE> wallarm/wallarm-ingress --version 4.0.4 -f <PATH_TO_VALUES>
    

    To minimize service downtime, do not execute these commands separately.

  3. Make sure that all objects are created:

    helm get manifest <RELEASE_NAME> -n <NAMESPACE> | kubectl create -f -
    

    The output should say that all objects already exist.

The following parameters are passed in the commands:

  • <RELEASE_NAME>: the name of the release with the deployed Ingress controller

  • <NAMESPACE>: the namespace the Ingress controller is deployed to

  • <PATH_TO_VALUES>: the path to the values.yaml file defining the Ingress controller 4.0 settings

Step 7: Test the upgraded Ingress controller

  1. Make sure the version of the Helm chart was upgraded:

    helm ls
    

    The chart version should correspond to wallarm-ingress-4.0.4.

  2. Get the list of pods specifying the name of the Wallarm Ingress controller in <INGRESS_CONTROLLER_NAME>:

    kubectl get pods -l release=<INGRESS_CONTROLLER_NAME>
    

    Each pod status should be STATUS: Running or READY: N/N. For example:

    NAME                                                              READY     STATUS    RESTARTS   AGE
    ingress-controller-nginx-ingress-controller-675c68d46d-cfck8      4/4       Running   0          5m
    ingress-controller-nginx-ingress-controller-wallarm-tarantljj8g   4/4       Running   0          5m
    
  3. Send a request with test SQLi and XSS attacks to the Wallarm Ingress controller address:

    curl http://<INGRESS_CONTROLLER_IP>/?id='or+1=1--a-<script>prompt(1)</script>'
    

    If the filtering node is working in the block mode, the code 403 Forbidden will be returned in response to the request and attacks will be displayed in Wallarm Console → Nodes.

Step 8: Customize the Ingress annotations according to the released changes

If you upgrade the node from version 3.4 or 3.2, adjust the following Ingress annotations to the changes released in the newer Ingress controller:

  1. If the Ingress is annotated with nginx.ingress.kubernetes.io/wallarm-instance, rename this annotation to nginx.ingress.kubernetes.io/wallarm-application.

    Only the annotation name has changed; its logic remains the same. The annotation with the former name will be deprecated soon, so it is recommended to rename it in advance.
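
    For example, renaming the annotation in an Ingress object could look like this (the value "1" is illustrative):

    ```
    metadata:
      annotations:
    -   nginx.ingress.kubernetes.io/wallarm-instance: "1"
    +   nginx.ingress.kubernetes.io/wallarm-application: "1"
    ```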

  2. If the page /usr/share/nginx/html/wallarm_blocked.html configured via Ingress annotations is returned to blocked requests, adjust its configuration to the released changes.

    In the new node version, the Wallarm blocking page has an updated UI with no logo and no support email specified by default.
