Deploying F5 NGINX Ingress Controller with Integrated Wallarm Services

These instructions provide you with the steps to deploy the Wallarm NGINX-based Ingress controller to your K8s cluster. The solution is deployed from the Wallarm Helm chart.

The solution is based on the F5 NGINX Ingress Controller with integrated Wallarm services. It uses the NGINX Ingress Controller image version 5.4.0. The Wallarm controller image is built on NGINX stable 1.29.x and uses Alpine Linux 3.23 as the base image.

Migrating from Community-based solution

If you currently have the Wallarm NGINX Ingress Controller based on the Community NGINX Ingress Controller, refer to the migration guide for instructions on migrating to this F5-based solution.

Traffic flow

Traffic flow with the Wallarm Ingress Controller:

(Diagram: solution architecture)

Use cases

Among all supported Wallarm deployment options, this solution is the recommended one for the following use cases:

  • There is no existing Ingress controller or security layer routing traffic to your Ingress resources, and the resources are compatible with the F5 NGINX Ingress Controller

  • You are currently using the F5 NGINX Ingress Controller and are looking for a security solution that offers both the standard controller functionality and enhanced security features. In this case, you can switch to the Wallarm NGINX Ingress Controller detailed in these instructions: simply migrate your existing configuration to a new deployment to complete the replacement.

    For simultaneous use of both the existing Ingress controller and the Wallarm controller, refer to the Ingress Controller chaining guide for configuration details.

  • You are currently using the Community Ingress NGINX controller (with or without Wallarm) and want to ensure continued support and integrated security capabilities. Since the upstream Community Ingress NGINX project has been retired, we recommend migrating to the F5-based Wallarm Ingress Controller.

    See the migration guide for detailed instructions.

Requirements

  • Kubernetes platform version 1.28-1.35

  • Helm version 3.10+

  • Ability to create, modify, and delete resources in the target Kubernetes namespace

  • Compatibility of your services with the Wallarm Ingress Controller based on the F5 NGINX Ingress Controller version 5.4.0

  • Access to an account with the Administrator role in Wallarm Console for the US Cloud or EU Cloud

  • Access to https://us1.api.wallarm.com for working with US Wallarm Cloud or to https://api.wallarm.com for working with EU Wallarm Cloud

  • Access to https://charts.wallarm.com to add the Wallarm Helm charts. Ensure the access is not blocked by a firewall

  • Access to the Wallarm repositories on Docker Hub https://hub.docker.com/r/wallarm. Make sure the access is not blocked by a firewall

  • Access to the IP addresses and their corresponding hostnames (if any) listed below. This is needed for downloading updates to attack detection rules and API specifications, as well as retrieving precise IPs for your allowlisted, denylisted, or graylisted countries, regions, or data centers. A quick connectivity check is sketched after this list

    US Cloud:

    node-data0.us1.wallarm.com - 34.96.64.17
    node-data1.us1.wallarm.com - 34.110.183.149
    us1.api.wallarm.com - 35.235.66.155
    34.102.90.100
    34.94.156.115
    35.235.115.105

    EU Cloud:

    node-data1.eu1.wallarm.com - 34.160.38.183
    node-data0.eu1.wallarm.com - 34.144.227.90
    api.wallarm.com - 34.90.110.226
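To verify this connectivity before deploying, you can probe the endpoints with plain curl. A minimal sketch for the US Cloud (swap in api.wallarm.com and the *.eu1.wallarm.com hosts for the EU Cloud); the URLs are reachability probes only, not documented API calls, so any HTTP status code in the output means the TCP/TLS path is open:

# Prints the hostname and the HTTP status code returned by each endpoint.
# A connection or TLS error instead of a status code indicates blocked access.
curl -sS -o /dev/null -w "us1.api.wallarm.com: %{http_code}\n" https://us1.api.wallarm.com
curl -sS -o /dev/null -w "charts.wallarm.com: %{http_code}\n" https://charts.wallarm.com
curl -sS -o /dev/null -w "hub.docker.com: %{http_code}\n" https://hub.docker.com/r/wallarm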

Known restrictions

  • Operation without the Postanalytics module is not supported.

  • Scaling down the Postanalytics module may result in a partial loss of attack data.

Deployment

Step 1: Generate a filtering node token

Generate a Node API token:

  1. Open Wallarm Console → Settings → API tokens in the US Cloud or EU Cloud.

  2. Find or create an API token with the Node deployment or Deployment usage type.

  3. Copy this token.

(Screenshot: creation of a Wallarm node)

Step 2: Install the Wallarm Ingress Controller

  1. Create a Kubernetes namespace to deploy the Helm chart with the Wallarm Ingress Controller:

    kubectl create namespace <KUBERNETES_NAMESPACE>
    
  2. Add the Wallarm chart repository:

    helm repo add wallarm https://charts.wallarm.com
    helm repo update wallarm
    
  3. Create the values.yaml file with the Wallarm configuration. An example of the file with the minimal configuration for each Cloud is below.

    When using an API token, specify a node group name in the nodeGroup parameter. Your node will be assigned to this group, shown in the Wallarm Console's Nodes section. The default group name is defaultIngressGroup.

    US Cloud:

    config:
      wallarm:
        enabled: true
        api:
          host: "us1.api.wallarm.com"
          token: "<NODE_TOKEN>"
          # nodeGroup: defaultIngressGroup

    EU Cloud:

    config:
      wallarm:
        enabled: true
        api:
          host: "api.wallarm.com"
          token: "<NODE_TOKEN>"
          # nodeGroup: defaultIngressGroup

    <NODE_TOKEN> is the API token generated for Wallarm Node deployment.

    You can also store the Wallarm node token in Kubernetes secrets and pull it to the Helm chart.
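    For example, you can create such a secret with standard kubectl before installing the chart. A minimal sketch: the secret and key names below are arbitrary, and the Helm chart parameter that references an existing secret should be taken from the chart's documented values:

    # The secret name and key here are hypothetical; align them with what
    # your chart version expects before wiring the secret into values.yaml.
    kubectl create secret generic wallarm-api-token \
      -n <KUBERNETES_NAMESPACE> \
      --from-literal=token=<NODE_TOKEN>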

    Deployment from your own registries

    You can overwrite elements of the values.yaml file to install the Wallarm Ingress Controller from the images stored in your own registries.

  4. Install the Wallarm packages:

    helm install --version 7.0.0 <RELEASE_NAME> wallarm/wallarm-ingress -n <KUBERNETES_NAMESPACE> -f <PATH_TO_VALUES>
    
    • <RELEASE_NAME> is the name for the Helm release of the Ingress controller chart
    • <KUBERNETES_NAMESPACE> is the Kubernetes namespace you have created for the Helm chart with the Wallarm Ingress Controller
    • <PATH_TO_VALUES> is the path to the values.yaml file
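
To confirm the release installed successfully, you can inspect it with standard Helm commands:

helm status <RELEASE_NAME> -n <KUBERNETES_NAMESPACE>
helm list -n <KUBERNETES_NAMESPACE>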

Step 3: Enable traffic analysis for your Ingress

To enable Wallarm traffic analysis, annotate your existing Ingress resources (a declarative alternative is shown after the parameter list below):

kubectl annotate ingress <YOUR_INGRESS_NAME> -n <YOUR_INGRESS_NAMESPACE> nginx.org/wallarm-mode=monitoring
kubectl annotate ingress <YOUR_INGRESS_NAME> -n <YOUR_INGRESS_NAMESPACE> nginx.org/wallarm-application="<APPLICATION_ID>"
  • <YOUR_INGRESS_NAME> is the name of your Ingress

  • <YOUR_INGRESS_NAMESPACE> is the namespace of your Ingress

  • <APPLICATION_ID> is a positive number that is unique to each of your applications or application groups. This will allow you to obtain separate statistics and to distinguish between attacks aimed at the corresponding applications
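
Alternatively, you can set the same annotations declaratively in the Ingress manifest. A minimal sketch: the host and Service below are hypothetical, and only the two nginx.org/wallarm-* annotations come from this guide:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <YOUR_INGRESS_NAME>
  namespace: <YOUR_INGRESS_NAMESPACE>
  annotations:
    nginx.org/wallarm-mode: monitoring
    nginx.org/wallarm-application: "<APPLICATION_ID>"
spec:
  ingressClassName: nginx          # adjust to the class served by your controller
  rules:
    - host: example.com            # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app  # hypothetical Service
                port:
                  number: 80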

Step 4: Test the Wallarm Ingress Controller operation

  1. Verify that the Wallarm Ingress Controller pods are running:

    kubectl get pods -n <NAMESPACE> -l app.kubernetes.io/name=wallarm-ingress
    

    The Wallarm pod status should be STATUS: Running and READY: N/N:

    NAME                                                                  READY   STATUS    RESTARTS   AGE
    <RELEASE_NAME>-wallarm-ingress-controller-<POD_SUFFIX>             3/3     Running   0          8m7s
    <RELEASE_NAME>-wallarm-ingress-wallarm-postanalytics-<POD_SUFFIX>  3/3     Running   0          8m7s
    
  2. Send the test Path Traversal attack to the Ingress Controller Service:

    curl http://<INGRESS_CONTROLLER_IP>/etc/passwd
    

    If the filtering node is working in block mode, the 403 Forbidden code is returned in the response and the attack is displayed in Wallarm Console → Attacks.

    (Screenshot: attacks in the Wallarm Console interface)
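
The annotation set in step 3 runs the node in monitoring mode, which registers attacks without blocking them. To get the blocking behavior described above, you can switch the Ingress to block mode; a sketch assuming block is the accepted annotation value (verify the supported filtration modes for your version):

kubectl annotate ingress <YOUR_INGRESS_NAME> -n <YOUR_INGRESS_NAMESPACE> \
  --overwrite nginx.org/wallarm-mode=block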

ARM64 deployment

The F5 NGINX Ingress Controller supports ARM64 processors. Because the solution was originally designed for x86 architectures, deploying it on ARM64 nodes requires updating the Helm chart parameters.

In ARM64 environments, Kubernetes nodes typically carry an arm64 label. To help the Kubernetes scheduler place the Wallarm workload on the appropriate node type, reference this label using nodeSelector, tolerations, or affinity rules in the Wallarm Helm chart configuration (an affinity sketch follows the examples below).

Below is the Wallarm Helm chart example for Google Kubernetes Engine (GKE), which uses the kubernetes.io/arch: arm64 label for relevant nodes. This template is modifiable for compatibility with other cloud setups, respecting their ARM64 labeling conventions.

# Set `nodeSelector` for both the controller and Postanalytics components:
controller:
  nodeSelector:
    kubernetes.io/arch: arm64

postanalytics:
  nodeSelector:
    kubernetes.io/arch: arm64

# Set `tolerations` for both the controller and Postanalytics components:
controller:
  tolerations:
    - key: kubernetes.io/arch
      operator: Equal
      value: arm64
      effect: NoSchedule

postanalytics:
  tolerations:
    - key: kubernetes.io/arch
      operator: Equal
      value: arm64
      effect: NoSchedule
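
Node affinity can express the same placement preference. A sketch assuming the chart accepts an affinity key under the same controller and postanalytics components (common for Helm charts, but verify against the chart's values):

# Hypothetical `affinity` keys mirroring the `nodeSelector` example above.
controller:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                  - arm64

postanalytics:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                  - arm64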

Deployment from your own registries

If you cannot pull Docker images from the Wallarm public repository (e.g., due to company security policies restricting external resources), you can instead:

  1. Clone these images to your private registry.

  2. Install Wallarm NGINX-based Ingress controller using them.

The Helm chart pulls the Wallarm controller and helper Docker images for the NGINX-based Ingress Controller deployment; the exact image names and tags are referenced in the values of the chart version you deploy.
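
Cloning the images is a standard pull/tag/push sequence. A sketch with placeholder image names; take the real names and tags from the values of the chart version you deploy:

# <WALLARM_IMAGE> and <IMAGE_TAG> are placeholders for the image names and
# tags referenced by your chart version.
docker pull wallarm/<WALLARM_IMAGE>:<IMAGE_TAG>
docker tag wallarm/<WALLARM_IMAGE>:<IMAGE_TAG> <YOUR_REGISTRY>/<WALLARM_IMAGE>:<IMAGE_TAG>
docker push <YOUR_REGISTRY>/<WALLARM_IMAGE>:<IMAGE_TAG>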

To install Wallarm NGINX-based Ingress controller using images stored in your registry, overwrite the values.yaml file of Wallarm Ingress Controller Helm chart:

config:
  images:
    controller:
      repository: <YOUR_REGISTRY>
      tag: <IMAGE_TAG>
      pullPolicy: IfNotPresent
    helper:
      repository: <YOUR_REGISTRY>
      tag: <IMAGE_TAG>
      pullPolicy: IfNotPresent

Then run the installation using your modified values.yaml, as shown below.
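
This is the same command as in step 4 of the deployment procedure, now pointing at your modified file:

helm install --version 7.0.0 <RELEASE_NAME> wallarm/wallarm-ingress -n <KUBERNETES_NAMESPACE> -f <PATH_TO_VALUES>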

Security Context Constraints (SCC) in OpenShift

When deploying the F5 NGINX Ingress Controller on OpenShift, it is necessary to define a custom Security Context Constraint (SCC) to suit the security requirements of the platform. The default constraints may be insufficient for the Wallarm solution, potentially leading to errors.

Below is the recommended custom SCC for the Wallarm NGINX Ingress Controller.

Important

Apply the SCC before deploying the controller.

  1. Create the wallarm-scc.yaml file with the following SCC:

    ---
    allowHostDirVolumePlugin: false
    allowHostIPC: false
    allowHostNetwork: false
    allowHostPID: false
    allowHostPorts: false
    allowPrivilegeEscalation: false
    allowPrivilegedContainer: false
    allowedCapabilities:
      - NET_BIND_SERVICE
    apiVersion: security.openshift.io/v1
    defaultAddCapabilities: null
    fsGroup:
      type: MustRunAs
    groups: []
    kind: SecurityContextConstraints
    metadata:
      name: wallarm-ingress-controller
      annotations:
        kubernetes.io/description: wallarm-ingress-controller provides features similar to restricted-v2 SCC but pins user id to 101 and is a little more restrictive for volumes
    priority: null
    readOnlyRootFilesystem: false
    requiredDropCapabilities:
      - ALL
    runAsUser:
      type: MustRunAs
      uid: 101
    seLinuxContext:
      type: MustRunAs
    seccompProfiles:
      - runtime/default
    supplementalGroups:
      type: RunAsAny
    users: []
    volumes:
      - configMap
      - secret
      - emptyDir
      - projected
    
  2. Apply this policy to a cluster:

    kubectl apply -f wallarm-scc.yaml
    
  3. Create a Kubernetes namespace where the NGINX Ingress controller will be deployed:

    kubectl create namespace <KUBERNETES_NAMESPACE>
    
  4. Deploy the Wallarm Ingress Controller Helm chart into the <KUBERNETES_NAMESPACE> namespace created in the previous step.

  5. Determine the ServiceAccount name used by the controller workloads:

    • If the controller is deployed as a Deployment:
    kubectl -n <KUBERNETES_NAMESPACE> get deployment -l app.kubernetes.io/component=controller \
      -o jsonpath='{.items[0].spec.template.spec.serviceAccountName}{"\n"}'
    
    • If the controller is deployed as a DaemonSet:
    kubectl -n <KUBERNETES_NAMESPACE> get daemonset -l app.kubernetes.io/component=controller \
      -o jsonpath='{.items[0].spec.template.spec.serviceAccountName}{"\n"}'
    
  6. Grant the SCC to that ServiceAccount, e.g.:

    oc adm policy add-scc-to-user wallarm-ingress-controller \
      -z <SERVICE_ACCOUNT_NAME> -n <KUBERNETES_NAMESPACE>
    
  7. Verify the SCC is applied by checking the SCC annotation on a controller pod:

    POD=$(kubectl -n <KUBERNETES_NAMESPACE> get pods -l app.kubernetes.io/component=controller -o name | head -n 1 | cut -d/ -f2)
    kubectl -n <KUBERNETES_NAMESPACE> get pod "${POD}" -o jsonpath='{.metadata.annotations.openshift\.io/scc}{"\n"}'
    

The expected output is wallarm-ingress-controller.