Fine-Tuning the Wallarm Ingress Controller (F5 NGINX IC-Based)¶
This page describes the Helm chart configuration options for the Wallarm Ingress Controller based on F5 NGINX Ingress Controller.
Official documentation for F5 NGINX Ingress Controller
Fine-tuning the Wallarm Ingress Controller is similar to fine-tuning the original F5 NGINX Ingress Controller described in the official documentation. All configuration options of the original F5 NGINX Ingress Controller remain available when working with Wallarm.
Wallarm-specific configuration in values.yaml¶
The settings are defined in the values.yaml file. You can view its default state in the GitHub repository.
Below are the configuration parameters that are most likely to be changed:
Show YAML configuration
```yaml
config:
  wallarm:
    enabled: false
    api:
      host: "api.wallarm.com"
      port: 443
      ssl: true
      token: ""
      nodeGroup: "defaultIngressGroup"
      existingSecret:
        enabled: false
    fallback: "on"
    wstoreMaxConns: 2
  wcliPostanalytics:
    logLevel: "WARN"
    commands:
      blkexp:
        logLevel: INFO
      botexp:
        logLevel: WARN
      cntexp:
        logLevel: ERROR
      cntsync:
        logLevel: INFO
      credstuff:
        logLevel: INFO
      envexp:
        logLevel: INFO
      jwtexp:
        logLevel: INFO
      mrksync:
        logLevel: INFO
      register:
        logLevel: INFO
      reqexp:
        logLevel: INFO
  wcliController:
    logLevel: "WARN"
    commands:
      apispec:
        logLevel: INFO
      envexp:
        logLevel: INFO
      ipfeed:
        logLevel: INFO
      iplist:
        logLevel: INFO
      metricsexp:
        logLevel: INFO
      register:
        logLevel: INFO
      syncnode:
        logLevel: INFO
  apiFirewall:
    enabled: true
    readBufferSize: 8192
    writeBufferSize: 8192
    maxRequestBodySize: 4194304
    disableKeepalive: false
    maxConnectionsPerIp: 0
    maxRequestsPerConnection: 0
    maxErrorsInResponse: 3
    config:
      mainPort: 18081
      healthPort: 18082
      specificationUpdatePeriod: "1m"
      unknownParametersDetection: true
      logLevel: "INFO"
      logFormat: "TEXT"
controller:
  name: controller
  kind: deployment
  selectorLabels: {}
  annotations: {}
  nginxReloadTimeout: 60000
  enableConfigSafety: false
  wallarm:
    metrics:
      enabled: false
      port: 18080
      portName: wallarm-metrics
      endpointPath: "/wallarm-metrics"
      service:
        annotations: {}
        labels: {}
        externalIPs: []
        loadBalancerSourceRanges: []
        servicePort: 18080
        type: ClusterIP
      serviceMonitor:
        enabled: false
        additionalLabels: {}
        annotations: {}
        namespace: ""
        namespaceSelector: {}
        scrapeInterval: 30s
        targetLabels: []
        relabelings: []
        metricRelabelings: []
    initContainer:
      resources: {}
      securityContext: {}
      extraEnvs: []
    wcli:
      resources: {}
      securityContext: {}
      metrics:
        enabled: true
        port: 9004
        portName: wcli-ctrl-mtrc
        endpointPath: ""
        host: ":9004"
        service:
          annotations: {}
          labels: {}
          externalIPs: []
          loadBalancerSourceRanges: []
          servicePort: 9004
          type: ClusterIP
        serviceMonitor:
          enabled: false
          additionalLabels: {}
          annotations: {}
          namespace: ""
          namespaceSelector: {}
          scrapeInterval: 30s
          targetLabels: []
          relabelings: []
          metricRelabelings: []
      extraEnvs: []
    apiFirewall:
      resources: {}
      securityContext: {}
      livenessProbeEnabled: false
      readinessProbeEnabled: false
      extraEnvs: []
      extraVolumes: []
      extraVolumeMounts: []
  hostNetwork: false
  hostPort:
    enable: false
    http: 80
    https: 443
  containerPort:
    http: 80
    https: 443
  dnsPolicy: ClusterFirst
  nginxDebug: false
  shareProcessNamespace: false
  logLevel: info
  logFormat: glog
  directiveAutoAdjust: false
  customPorts: []
  lifecycle: {}
  customConfigMap: ""
  config:
    annotations: {}
    entries: {}
  defaultTLS:
    cert: ""
    key: ""
    secret: ""
  wildcardTLS:
    cert: ""
    key: ""
    secret: ""
  terminationGracePeriodSeconds: 30
  autoscaling:
    enabled: false
    create: true
    annotations: {}
    minReplicas: 1
    maxReplicas: 3
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
    behavior: {}
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
  podSecurityContext:
    seccompProfile:
      type: RuntimeDefault
  securityContext: {}
  initContainerSecurityContext: {}
  initContainerResources:
    requests:
      cpu: 100m
      memory: 128Mi
  tolerations: []
  affinity: {}
  env: []
  volumes: []
  volumeMounts: []
  initContainers: []
  minReadySeconds: 0
  podDisruptionBudget:
    enabled: false
    annotations: {}
  networkPolicy:
    enabled: false
    annotations: {}
  strategy: {}
  extraContainers: []
  replicaCount: 1
  ingressClass:
    name: nginx
    create: true
    setAsDefaultIngress: false
  watchNamespace: ""
  watchNamespaceLabel: ""
  watchSecretNamespace: ""
  enableCustomResources: true
  enableTLSPassthrough: false
  tlsPassthroughPort: 443
  enableCertManager: false
  enableExternalDNS: false
  globalConfiguration:
    create: false
    customName: ""
    spec: {}
  enableSnippets: false
  healthStatus: false
  healthStatusURI: "/nginx-health"
  nginxStatus:
    enable: true
    port: 10246
    allowCidrs: "127.0.0.1"
  service:
    create: true
    type: LoadBalancer
    externalTrafficPolicy: Local
    annotations: {}
    extraLabels: {}
    loadBalancerIP: ""
    clusterIP: ""
    externalIPs: []
    loadBalancerSourceRanges: []
    httpPort:
      enable: true
      port: 80
      targetPort: 80
      name: "http"
    httpsPort:
      enable: true
      port: 443
      targetPort: 443
      name: "https"
    customPorts: []
    sessionAffinity:
      enable: false
      type: ClientIP
      timeoutSeconds: 3600
  serviceAccount:
    annotations: {}
    imagePullSecretName: ""
    imagePullSecretsNames: []
  reportIngressStatus:
    enable: true
    ingressLink: ""
    enableLeaderElection: true
    leaderElectionLockName: ""
    annotations: {}
  pod:
    annotations: {}
    extraLabels: {}
  readyStatus:
    enable: true
    port: 8081
    initialDelaySeconds: 0
  startupStatus:
    enable: false
  enableLatencyMetrics: false
  disableIPV6: false
  defaultHTTPListenerPort: 80
  defaultHTTPSListenerPort: 443
  readOnlyRootFilesystem: false
  enableSSLDynamicReload: true
postanalytics:
  kind: "Deployment"
  replicaCount: 1
  imagePullSecrets: []
  arena: "2.0"
  serviceAddress: "0.0.0.0:3313"
  serviceProtocol: "tcp4"
  resources: {}
  securityContext: {}
  livenessProbe:
    failureThreshold: 3
    initialDelaySeconds: 5
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
  readinessProbe:
    failureThreshold: 3
    initialDelaySeconds: 5
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
  extraEnvs: []
  extraVolumes: []
  extraVolumeMounts: []
  service:
    annotations: {}
    labels: {}
  podAnnotations: {}
  podLabels: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  topologySpreadConstraints: []
  annotations: {}
  networkPolicy:
    enabled: false
    annotations: {}
  podDisruptionBudget:
    enabled: false
    annotations: {}
  terminationGracePeriodSeconds: 30
  tls:
    enabled: false
  metrics:
    listenAddress: "127.0.0.1:9001"
    protocol: "tcp4"
  initContainer:
    resources: {}
    securityContext: {}
    extraEnvs: []
  wcli:
    resources: {}
    securityContext: {}
    metrics:
      enabled: true
      port: 9003
      portName: wcli-post-mtrc
      endpointPath: ""
      host: ":9003"
      service:
        annotations: {}
        labels: {}
        externalIPs: []
        loadBalancerSourceRanges: []
        servicePort: 9003
        type: ClusterIP
      serviceMonitor:
        enabled: false
        additionalLabels: {}
        annotations: {}
        namespace: ""
        namespaceSelector: {}
        scrapeInterval: 30s
        targetLabels: []
        relabelings: []
        metricRelabelings: []
    extraEnvs: []
  appstructure:
    resources: {}
    securityContext: {}
    extraEnvs: []
clusterrole:
  create: true
prometheus:
  create: true
  port: 9113
  secret: ""
  scheme: http
  service:
    create: false
    labels:
      service: "nginx-ingress-prometheus-service"
  serviceMonitor:
    create: false
    labels: {}
    selectorMatchLabels:
      service: "nginx-ingress-prometheus-service"
    endpoints:
      - port: prometheus
prometheusExtended:
  enabled: false
  port: 10113
  portName: prom-ext
  endpointPath: "/vts-status"
  # detailedCodes: ""
  # shmSize: ""
  service:
    create: false
    annotations: {}
    labels: {}
    externalIPs: []
    loadBalancerSourceRanges: []
    servicePort: 10113
    type: ClusterIP
  serviceMonitor:
    enabled: false
    additionalLabels: {}
    annotations: {}
    namespace: ""
    namespaceSelector: {}
    scrapeInterval: 30s
    targetLabels: []
    relabelings: []
    metricRelabelings: []
```
To change the settings, we recommend using the --set option of helm install (when installing the Ingress Controller) or helm upgrade (when updating the parameters of an installed Ingress Controller). For example:
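A sketch of such a command (the release name, namespace, and chosen parameter are placeholders - adjust them to your deployment):

```shell
helm upgrade <INGRESS_CONTROLLER_RELEASE_NAME> wallarm/wallarm-ingress \
  -n <KUBERNETES_NAMESPACE> \
  --set controller.wallarm.metrics.enabled=true
```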
A description of the main parameters you can set up is provided below. Other parameters come with default values and rarely need to be changed.
Wallarm configuration parameters¶
config.wallarm.enabled¶
Enables or disables the Wallarm module in the Ingress Controller.
Default value: false
config.wallarm.api.host¶
Wallarm API endpoint. Can be:

- `us1.api.wallarm.com` for the US Cloud
- `api.wallarm.com` for the EU Cloud

Default value: api.wallarm.com
config.wallarm.api.port¶
Wallarm API endpoint port.
Default value: 443
config.wallarm.api.ssl¶
Enables TLS when communicating with the Wallarm API.
Default value: true
config.wallarm.api.token¶
The Node token value. It is required to access the Wallarm API.
The token can be one of these types:

- **API token (recommended)** - ideal if you need to dynamically add/remove node groups for UI organization or if you want to control the token lifecycle for added security. To generate an API token:

    1. Go to Wallarm Console → Settings → API tokens in either the US Cloud or EU Cloud.
    1. Create an API token with the Node deployment/Deployment usage type.
    1. During node deployment, use the generated token and specify the group name using the `config.wallarm.api.nodeGroup` parameter. You can add multiple nodes to one group using different API tokens.

- **Node token** - suitable when you already know the node groups that will be used. To generate a node token:

    1. Go to Wallarm Console → Nodes in either the US Cloud or EU Cloud.
    1. Create a node and copy the generated token.

The parameter is ignored if config.wallarm.api.existingSecret.enabled: true.

Default value: not specified
config.wallarm.api.nodeGroup¶
The name of the node group to which the newly deployed Node will be added.
This parameter is required when the Node is registered using an API token with the Node deployment / Deployment usage type (provided via the config.wallarm.api.token parameter), which is the only token type that supports node grouping.
Default value: defaultIngressGroup
config.wallarm.api.existingSecret.enabled¶
Configures the Ingress Controller to use a Wallarm node token from an existing Kubernetes Secret, instead of setting config.wallarm.api.token directly. It is useful for environments with external secret management (e.g., when using an external secrets operator).
If true, you need to set:
- `config.wallarm.api.existingSecret.secretName` - the name of the secret that contains the token
- `config.wallarm.api.existingSecret.secretKey` - the key within the secret that contains the token
To store the node token in Kubernetes Secrets and pull it to the Helm chart:

1. Create a Kubernetes secret with the Wallarm node token:

    ```shell
    kubectl -n <KUBERNETES_NAMESPACE> create secret generic wallarm-api-token --from-literal=token=<WALLARM_NODE_TOKEN>
    ```

    - `<KUBERNETES_NAMESPACE>` is the Kubernetes namespace you have created for the Helm release with the Wallarm Ingress Controller
    - `wallarm-api-token` is the Kubernetes secret name
    - `<WALLARM_NODE_TOKEN>` is the Wallarm node token value copied from the Wallarm Console UI

    If using an external secret operator, follow its documentation.

1. Set the following configuration in `values.yaml`:
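A minimal sketch of that configuration, matching the secret created in step 1 (name `wallarm-api-token`, key `token`):

```yaml
config:
  wallarm:
    api:
      existingSecret:
        enabled: true
        secretName: "wallarm-api-token"
        secretKey: "token"
```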
Default value: false. With the default value, the Helm chart takes the Wallarm node token from config.wallarm.api.token.
config.wallarm.fallback¶
Controls the fallback behavior when Wallarm data (for example, proton.db or a custom rule set) cannot be downloaded.
Default value: "on"
config.wallarm.wstoreMaxConns¶
Maximum number of simultaneous connections to the wstore upstream. Do not change unless advised by Wallarm support.
Default value: 2
Wallarm wcli parameters¶
config.wcliPostanalytics.logLevel¶
Log level for the wcli Postanalytics module, which runs in the postanalytics pod.
Possible values: DEBUG, INFO, WARN, ERROR, FATAL.
Default value: "WARN"
config.wcliPostanalytics.commands*¶
Per-command log level configuration for the wcli Postanalytics module, which runs in the postanalytics pod.
You can set log levels individually for each command: blkexp, botexp, cntexp, cntsync, credstuff, envexp, jwtexp, mrksync, register, reqexp.
Possible values for each command: DEBUG, INFO, WARN, ERROR, FATAL.
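For example, to raise the log level of a single command while keeping the rest at the module default:

```yaml
config:
  wcliPostanalytics:
    logLevel: "WARN"
    commands:
      credstuff:
        logLevel: DEBUG
```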
config.wcliController.logLevel¶
Log level for the wcli Controller module, which runs in the controller pod.
Possible values: DEBUG, INFO, WARN, ERROR, FATAL.
Default value: "WARN"
config.wcliController.commands*¶
Per-command log level configuration for the wcli Controller, which runs in the controller pod.
You can set log levels individually for each command: apispec, envexp, ipfeed, iplist, metricsexp, register, syncnode.
Possible values for each command: DEBUG, INFO, WARN, ERROR, FATAL.
API Specification Enforcement parameters¶
Controls the configuration of API Specification Enforcement.
By default, it is enabled and configured as shown below. If you are using this feature, it is recommended to keep these values unchanged.
```yaml
config:
  apiFirewall:
    ### Enable or disable API Firewall functionality (true|false)
    ###
    enabled: true
    ### Per-connection buffer size (in bytes) for requests' reading. This also limits the maximum header size.
    ### Increase this buffer if your clients send multi-KB RequestURIs and/or multi-KB headers (for example, BIG cookies)
    readBufferSize: 8192
    ### Per-connection buffer size (in bytes) for responses' writing.
    ###
    writeBufferSize: 8192
    ### Maximum request body size (in bytes). The server rejects requests with bodies exceeding this limit.
    ###
    maxRequestBodySize: 4194304
    ### Whether to disable keep-alive connections. The server will close all the incoming connections after sending
    ### the first response to the client if this option is set to 'true'
    ###
    disableKeepalive: false
    ### Maximum number of concurrent client connections allowed per IP. '0' means unlimited
    ###
    maxConnectionsPerIp: 0
    ### Maximum number of requests served per connection. The server closes the connection after the last request.
    ### The 'Connection: close' header is added to the last response. '0' means unlimited
    ###
    maxRequestsPerConnection: 0
    ### Maximum number of errors limiting apiFirewall response size
    ### to prevent it from exceeding the configured subrequest threshold.
    ###
    maxErrorsInResponse: 3
    ### API Firewall configuration
    config:
      mainPort: 18081
      healthPort: 18082
      specificationUpdatePeriod: "1m"
      unknownParametersDetection: true
      ### Log level TRACE|DEBUG|INFO|WARNING|ERROR
      logLevel: "INFO"
      ### Log format TEXT|JSON
      logFormat: "TEXT"
  ...
```
The table below describes the API Specification Enforcement parameters:
| Setting | Description |
|---|---|
| `readBufferSize` | Per-connection buffer size for request reading. This also limits the maximum header size. Increase this buffer if your clients send multi-KB RequestURIs and/or multi-KB headers (for example, BIG cookies). |
| `writeBufferSize` | Per-connection buffer size for response writing. |
| `maxRequestBodySize` | Maximum request body size. The server rejects requests with bodies exceeding this limit. |
| `disableKeepalive` | Disables keep-alive connections. The server will close all incoming connections after sending the first response to the client if this option is set to true. |
| `maxConnectionsPerIp` | Maximum number of concurrent client connections allowed per IP. 0 = unlimited. |
| `maxRequestsPerConnection` | Maximum number of requests served per connection. The server closes the connection after the last request. The Connection: close header is added to the last response. 0 = unlimited. |
| `maxErrorsInResponse` | Maximum number of errors included in an API Specification Enforcement response. |
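For example, to raise the request body limit to 8 MiB (8388608 bytes) while keeping the other defaults:

```yaml
config:
  apiFirewall:
    maxRequestBodySize: 8388608
```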
Wallarm container metrics parameters¶
controller.wallarm.metrics.enabled¶
Toggles Wallarm metrics collection. If Prometheus is installed in the Kubernetes cluster, no additional configuration is required.
Default value: false
controller.wallarm.metrics.port¶
Port on which the Wallarm metrics endpoint listens. This is separate from NGINX metrics.
Default value: 18080
controller.wallarm.metrics.portName¶
The name assigned to the metrics endpoint port.
Default value: "wallarm-metrics"
controller.wallarm.metrics.endpointPath¶
HTTP path at which the Wallarm metrics endpoint is exposed.
Default value: "/wallarm-metrics"
controller.wallarm.metrics.service.annotations¶
Annotations to attach to the metrics service.
Default value: {}
controller.wallarm.metrics.service.labels¶
Custom labels to attach to the metrics service.
Default value: {}
controller.wallarm.metrics.service.externalIPs¶
List of external IP addresses that can access the metrics service.
Default value: []
controller.wallarm.metrics.service.loadBalancerSourceRanges¶
List of CIDRs allowed to access the metrics service when using a load balancer.
Default value: []
controller.wallarm.metrics.service.servicePort¶
Port exposed by the metrics service.
Default value: 18080
controller.wallarm.metrics.service.type¶
Type of Kubernetes service for the metrics endpoint.
Possible values: ClusterIP, NodePort, LoadBalancer.
Default value: ClusterIP
controller.wallarm.metrics.serviceMonitor*¶
If you are using the Prometheus Operator (e.g., as part of the kube-prometheus-stack), you can configure the chart to automatically create a ServiceMonitor resource for scraping Wallarm container metrics.
Configuration options with default values:
```yaml
controller:
  wallarm:
    metrics:
      ...
      serviceMonitor:
        enabled: false
        additionalLabels: {}
        annotations: {}
        namespace: ""
        namespaceSelector: {}
        scrapeInterval: 30s
        # honorLabels: true
        targetLabels: []
        relabelings: []
        metricRelabelings: []
```
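For example, to create a ServiceMonitor that a kube-prometheus-stack release named prometheus would pick up (the release label is an assumption - match the label selector of your Prometheus Operator installation):

```yaml
controller:
  wallarm:
    metrics:
      enabled: true
      serviceMonitor:
        enabled: true
        additionalLabels:
          release: prometheus
```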
Wallarm CLI in the controller pod¶
controller.wallarm.wcli.resources¶
Kubernetes resource requests and limits for the Wallarm CLI controller container running in the controller pod.
Default value: {}
controller.wallarm.wcli.securityContext¶
Kubernetes security context for the Wallarm CLI controller container.
Default value: {}
controller.wallarm.wcli.metrics.enabled¶
Enables or disables metrics collection for the Wallarm CLI controller.
Default value: true
controller.wallarm.wcli.metrics.port¶
Port on which the Wallarm CLI controller metrics endpoint listens.
Default value: 9004
controller.wallarm.wcli.metrics.portName¶
Name assigned to the metrics endpoint port.
Default value: "wcli-ctrl-mtrc"
controller.wallarm.wcli.metrics.endpointPath¶
HTTP path at which the metrics endpoint is exposed.
If empty, the default path defined by the Wallarm CLI controller is used.
Default value: ""
controller.wallarm.wcli.metrics.host¶
IP address and/or port on which the metrics endpoint binds.
Default value: ":9004"
controller.wallarm.wcli.metrics.service.annotations¶
Annotations to attach to the metrics service.
Default value: {}
controller.wallarm.wcli.metrics.service.labels¶
Custom labels to attach to the metrics service.
Default value: {}
controller.wallarm.wcli.metrics.service.externalIPs¶
List of external IP addresses that can access the metrics service.
Default value: []
controller.wallarm.wcli.metrics.service.loadBalancerSourceRanges¶
List of CIDR ranges allowed to access the metrics service when using a load balancer.
Default value: []
controller.wallarm.wcli.metrics.service.servicePort¶
Port exposed by the metrics service.
Default value: 9004
controller.wallarm.wcli.metrics.service.type¶
Type of Kubernetes service for the metrics endpoint.
Possible values: ClusterIP, NodePort, LoadBalancer.
Default value: ClusterIP
controller.wallarm.wcli.metrics.serviceMonitor¶
If you are using the Prometheus Operator (e.g., as part of the kube-prometheus-stack), you can configure the chart to automatically create a ServiceMonitor resource for scraping the Wallarm CLI controller metrics.
Configuration options with default values:
```yaml
controller:
  wallarm:
    wcli:
      metrics:
        ...
        serviceMonitor:
          enabled: false
          additionalLabels: {}
          annotations: {}
          namespace: ""
          namespaceSelector: {}
          scrapeInterval: 30s
          # honorLabels: true
          targetLabels: []
          relabelings: []
          metricRelabelings: []
```
API Specification Enforcement metrics parameters¶
The API Specification Enforcement module can expose Prometheus-compatible metrics.
When enabled, metrics are available by default at http://<host>:9010/metrics.
| Setting | Description |
|---|---|
| `enabled` | Enables Prometheus metrics for the API Specification Enforcement module. By default: false (disabled). |
| `port` | Defines the port on which the API Specification Enforcement module exposes metrics. If you change this value, also update controller.wallarm.apiFirewall.metrics.service.servicePort. Default: 9010. |
| `portName` | Name assigned to the metrics port. By default: apifw-metrics. |
| `endpointPath` | Defines the HTTP path of the API Specification Enforcement metrics endpoint. By default: /metrics. |
| `host` | IP address and port for binding the metrics server. By default: :9010 (all interfaces on port 9010). |
Configuration options with default values:
```yaml
controller:
  wallarm:
    apiFirewall:
      metrics:
        ## Enable metrics collection
        enabled: false
        ## Port for metrics endpoint
        port: 9010
        ## Port name for metrics endpoint
        portName: apifw-metrics
        ## Path at which the metrics endpoint is exposed
        endpointPath: "/metrics"
        ## IP address and/or port for the metrics endpoint
        host: ":9010"
        service:
          annotations: {}
          # prometheus.io/scrape: "true"
          # prometheus.io/port: "9010"
          labels: {}
          # clusterIP: ""
          externalIPs: []
          # loadBalancerIP: ""
          loadBalancerSourceRanges: []
          servicePort: 9010
          type: ClusterIP
          # externalTrafficPolicy: ""
          # nodePort: ""
```
controller.wallarm.apiFirewall.metrics.serviceMonitor*¶
If you are using the Prometheus Operator (e.g., as part of the kube-prometheus-stack), you can configure the chart to automatically create a ServiceMonitor resource for scraping the API Specification Enforcement metrics.
Configuration options with default values:
```yaml
controller:
  wallarm:
    apiFirewall:
      metrics:
        ...
        serviceMonitor:
          enabled: false
          additionalLabels: {}
          annotations: {}
          namespace: ""
          namespaceSelector: {}
          scrapeInterval: 30s
          # honorLabels: true
          targetLabels: []
          relabelings: []
          metricRelabelings: []
```
postanalytics (wstore)¶
postanalytics.kind¶
Type of Postanalytics (wstore) installation.
Possible values: Deployment or DaemonSet.
Default value: "Deployment"
postanalytics.replicaCount¶
Number of Postanalytics replicas to run.
Applies only when postanalytics.kind is set to Deployment.
Default value: 1
postanalytics.imagePullSecrets¶
List of Kubernetes image pull secrets used to pull Postanalytics container images.
The secrets must already exist in the same namespace as the Helm release.
Default value: []
postanalytics.arena¶
Specifies the amount of memory in GB allocated for the postanalytics service. It is recommended to set a value sufficient to store request data for the last 5-15 minutes.
Default value: "2.0"
postanalytics.serviceAddress¶
Specifies the address and port on which wstore accepts incoming connections.
Default value: "0.0.0.0:3313"
postanalytics.serviceProtocol¶
Specifies the protocol family that wstore uses for incoming connections.
Possible values:

- `tcp` - dual-stack mode (listens on both IPv4 and IPv6)
- `tcp4` - IPv4 only
- `tcp6` - IPv6 only
Default value: "tcp4".
postanalytics.tls*¶
Configures TLS and mutual TLS (mTLS) settings to allow secure connection to the Postanalytics module (optional).
Configuration options with default values:
```yaml
postanalytics:
  tls:
    enabled: false
    # certFile: "/root/test-tls-certs/server.crt"
    # keyFile: "/root/test-tls-certs/server.key"
    # caCertFile: "/root/test-tls-certs/ca.crt"
    # mutualTLS:
    #   enabled: false
    #   clientCACertFile: "/root/test-tls-certs/ca.crt"
```
| Parameter | Description | Required? |
|---|---|---|
| `enabled` | Enables or disables SSL/TLS for the connection to the postanalytics module. By default, false (disabled). | Yes |
| `certFile` | Specifies the path to the client certificate used by the Filtering Node to authenticate itself when establishing an SSL/TLS connection to the postanalytics module. | Yes if `mutualTLS.enabled` is true |
| `keyFile` | Specifies the path to the private key corresponding to the client certificate provided via `certFile`. | Yes if `mutualTLS.enabled` is true |
| `caCertFile` | Specifies the path to a trusted Certificate Authority (CA) certificate used to validate the TLS certificate presented by the postanalytics module. | Yes if using a custom CA |
| `mutualTLS.enabled` | Enables mutual TLS (mTLS), where both the Filtering Node and the postanalytics module verify each other's identity via certificates. By default, false (disabled). | No |
| `mutualTLS.clientCACertFile` | Specifies the path to a trusted Certificate Authority (CA) certificate used to validate the TLS certificate presented by the Filtering Node. | Yes if using a custom CA |
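A sketch of an mTLS-enabled configuration, following the commented defaults above (the certificate paths are placeholders - point them at certificates you mount into the pods):

```yaml
postanalytics:
  tls:
    enabled: true
    certFile: "/certs/server.crt"
    keyFile: "/certs/server.key"
    caCertFile: "/certs/ca.crt"
    mutualTLS:
      enabled: true
      clientCACertFile: "/certs/ca.crt"
```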
Controller parameters¶
controller.nginxReloadTimeout¶
Timeout in milliseconds that the Ingress Controller waits for a successful NGINX reload after a configuration change or at initial start. Increase this value if you have a large number of Ingress resources that cause slow reloads.
Default value: 60000
controller.enableConfigSafety¶
Enables NGINX configuration validation before applying a reload. When enabled, the controller verifies the generated config with nginx -t prior to reloading, preventing broken configurations from being applied.
Default value: false
API Specification Enforcement container parameters¶
controller.wallarm.apiFirewall.resources¶
Kubernetes resource requests and limits for the API Specification Enforcement container running in the controller pod. Set these to ensure proper resource allocation in production.
Default value: {}
controller.wallarm.apiFirewall.securityContext¶
Kubernetes security context for the API Specification Enforcement container.
Default value: {}
controller.wallarm.apiFirewall.livenessProbeEnabled¶
Enables the liveness probe for the API Specification Enforcement container. When enabled, Kubernetes periodically checks the health endpoint and restarts the container if it becomes unresponsive.
Default value: false
controller.wallarm.apiFirewall.readinessProbeEnabled¶
Enables the readiness probe for the API Specification Enforcement container. When enabled, the container is only marked as ready after passing the readiness check.
Default value: false
Extra environment variables for containers¶
You can pass additional environment variables to any Wallarm container. This is useful for configuring proxy settings, custom logging, or injecting secrets.
The following containers support the extraEnvs parameter:
| Parameter | Container |
|---|---|
| `controller.wallarm.initContainer.extraEnvs` | Init container (node registration) in the controller pod |
| `controller.wallarm.wcli.extraEnvs` | Wallarm CLI container in the controller pod |
| `controller.wallarm.apiFirewall.extraEnvs` | API Specification Enforcement container in the controller pod |
| `postanalytics.extraEnvs` | wstore container in the postanalytics pod |
| `postanalytics.initContainer.extraEnvs` | Init container in the postanalytics pod |
| `postanalytics.wcli.extraEnvs` | Wallarm CLI container in the postanalytics pod |
| `postanalytics.appstructure.extraEnvs` | Appstructure container in the postanalytics pod |
Example — passing proxy settings to the init container:
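A sketch using the standard Kubernetes environment variable list format (the proxy address and no_proxy list are placeholders):

```yaml
controller:
  wallarm:
    initContainer:
      extraEnvs:
        - name: https_proxy
          value: "http://proxy.example.local:3128"
        - name: no_proxy
          value: "10.0.0.0/8,.cluster.local"
```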
Postanalytics security context¶
postanalytics.securityContext¶
Kubernetes security context for the wstore container. Use this to configure security constraints in restricted environments (e.g., OpenShift, Pod Security Standards).
Default value: {}
Extended Prometheus metrics parameters¶
prometheusExtended.shmSize¶
Shared memory zone size for the VTS (Virtual Host Traffic Status) module. Controls how much data can be stored for extended metrics collection. Increase this if you have a large number of virtual hosts or upstreams and see VTS-related errors in NGINX logs.
Examples: "1m", "10m", "32m".
Default value: "10m" (if not set)
prometheusExtended.detailedCodes¶
Specifies which HTTP status codes to track in detail. By default, VTS aggregates response codes into classes (2xx, 3xx, etc.). This parameter allows tracking individual status codes.
Examples: "all", "200 301 302 400 403 404 500 502 503".
Default value: not set (codes are aggregated by class)
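For example, to enable extended metrics with per-code tracking and a larger VTS shared memory zone:

```yaml
prometheusExtended:
  enabled: true
  shmSize: "32m"
  detailedCodes: "all"
```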
Annotation validation¶
NGINX Ingress Controller validates annotations by itself. If an Ingress has invalid annotation values, the controller rejects/ignores that Ingress configuration and reports it via Kubernetes Events (for example, a Rejected event). See "Advanced configuration with Annotations".
controller.enableSnippets¶
Controls whether custom snippets are allowed in Ingress/VirtualServer resources.
When enabled, it allows using snippet-style annotations such as nginx.org/server-snippets and nginx.org/location-snippets (and related snippet mechanisms supported by the NGINX Ingress Controller).
Default value: false
Security note
Snippet support can widen the attack surface in multi-tenant clusters. Keep it disabled unless you fully trust who can create/update Ingress resources.
Global Controller settings¶
Global controller settings are implemented via the ConfigMap. Besides the standard keys, additional Wallarm-specific parameters are supported; you can set them via the Helm value controller.config.entries.
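Regardless of the specific keys, the mechanism looks like this (proxy-connect-timeout is a standard NGINX Ingress Controller ConfigMap key, shown here only to illustrate the pattern - Wallarm-specific keys are set the same way):

```yaml
controller:
  config:
    entries:
      proxy-connect-timeout: "30s"
```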
Supported Wallarm Ingress annotations¶
In this section, you can see the Wallarm-specific Ingress annotations supported by the Wallarm Ingress Controller based on the F5 NGINX Ingress Controller.
Besides the Wallarm-specific annotations described below, standard NGINX Ingress Controller annotations are also supported.
Annotation prefix
In the F5-based controller, annotations use the nginx.org/* prefix instead of nginx.ingress.kubernetes.io/*. This applies to both general NGINX annotations and Wallarm-specific annotations. See more details.
| Annotation | Description |
|---|---|
| `nginx.org/wallarm-mode` | Traffic filtration mode: monitoring (default), safe_blocking, block, or off. |
| `nginx.org/wallarm-mode-allow-override` | Manages the ability to override the wallarm_mode values via settings in the Cloud: on (default), off, or strict. |
| `nginx.org/wallarm-fallback` | Wallarm fallback mode: on (default) or off. |
| `nginx.org/wallarm-application` | Wallarm application ID. |
| `nginx.org/wallarm-block-page` | Blocking page and error code to return for blocked requests. |
| `nginx.org/wallarm-unpack-response` | Whether to decompress compressed data returned in the application response: on (default) or off. |
| `nginx.org/wallarm-parse-response` | Whether to analyze application responses for attacks: on (default) or off. Response analysis is required for vulnerability detection during passive detection and threat replay testing. |
| `nginx.org/wallarm-parse-websocket` | Wallarm has full WebSocket support. By default, WebSocket messages are not analyzed for attacks. To enable the feature, activate the API Security subscription plan and use this annotation: on or off (default). |
| `nginx.org/wallarm-parser-disable` | Disables specified parsers. The value is the name of the parser to disable, e.g. json. Multiple parsers can be specified, separated by semicolons, e.g. json;base64. |
| `nginx.org/wallarm-partner-client-uuid` | Partner client UUID for multi-tenant setups. |
Applying annotation to the Ingress resource¶
These annotations are applied to Kubernetes Ingress resources processed by the controller.
To set or update an annotation, use:
```shell
kubectl annotate --overwrite ingress <YOUR_INGRESS_NAME> -n <YOUR_INGRESS_NAMESPACE> <ANNOTATION_NAME>=<VALUE>
```
- `<YOUR_INGRESS_NAME>` is the name of your Ingress
- `<YOUR_INGRESS_NAMESPACE>` is the namespace of your Ingress
- `<ANNOTATION_NAME>` is the name of the annotation from the list above
- `<VALUE>` is the value of the annotation from the list above
Annotation examples¶
Configuring the blocking page and error code¶
The annotation nginx.org/wallarm-block-page is used to configure the blocking page and error code returned in response to requests blocked for any of the following reasons:
- The request contains malicious payloads of the following types: input validation attacks, vpatch attacks, or attacks detected based on regular expressions.
- The request contains malicious payloads from the list above, originates from a graylisted IP address, and the node filters requests in the safe blocking mode.
- The request originates from a denylisted IP address.
For example, to return the default Wallarm blocking page and the error code 445 in the response to any blocked request:
```shell
kubectl annotate ingress <YOUR_INGRESS_NAME> -n <YOUR_INGRESS_NAMESPACE> nginx.org/wallarm-block-page="&/usr/share/nginx/html/wallarm_blocked.html response_code=445 type=attack,acl_ip,acl_source"
```
More details on the blocking page and error code configuration methods →
Managing libdetection mode¶
You can control the libdetection mode by passing the wallarm_enable_libdetection directive into the generated NGINX configuration:
- Per-Ingress annotation (requires `controller.enableSnippets: true`):

    ```shell
    kubectl annotate --overwrite ingress <YOUR_INGRESS_NAME> -n <YOUR_INGRESS_NAMESPACE> \
      nginx.org/server-snippets="wallarm_enable_libdetection off;"
    ```

- Cluster-wide, via the controller `ConfigMap` (the `controller.config.entries` Helm value), applying the setting globally to the Ingress Controller:

    ```shell
    helm upgrade --reuse-values <INGRESS_CONTROLLER_RELEASE_NAME> wallarm/wallarm-ingress -n <KUBERNETES_NAMESPACE> \
      --set-string controller.config.entries.server-snippets="wallarm_enable_libdetection off;"
    ```
Libdetection values
Available values of wallarm_enable_libdetection are on/off.
Wallarm policy custom resource definition (CRD)¶
The F5-based controller supports Custom Resource Definitions as an alternative to standard Ingress resources for advanced routing (canary deployments, traffic splitting, header-based routing). All standard F5 NGINX Ingress Controller CRDs are available.
When using CRDs, Wallarm settings are configured via the Policy resource instead of annotations. Wallarm patches the upstream Policy CRD to add an optional spec.wallarm block — an alternative to Wallarm annotations that provides the same set of settings through a dedicated resource. The Policy is then referenced from VirtualServer or VirtualServerRoute routes.
Wallarm-provided CRDs
If you plan to use the Wallarm Policy CRD (spec.wallarm), apply the Wallarm-provided CRDs instead of the upstream F5 CRDs. The Wallarm-provided CRDs include the patched Policy schema with the wallarm block.
Policy fields:
| Field | Description | Values | Default |
|---|---|---|---|
| `mode` | Wallarm filtration mode. | `off`, `monitoring`, `safe_blocking`, `block` | — |
| `modeAllowOverride` | Whether Wallarm Cloud settings can override the local mode. | `on`, `off`, `strict` | `on` |
| `fallback` | Behavior when proton.db or a custom ruleset cannot be loaded. | `on`, `off` | `on` |
| `application` | Application ID used to separate traffic in Wallarm Cloud. | Positive integer | — |
| `blockPage` | Custom block page (file path, named location, URL, or variable). | String | — |
| `parseResponse` | Analyze responses from the application. | `on`, `off` | `on` |
| `unpackResponse` | Decompress responses before analysis. | `on`, `off` | `on` |
| `parseWebsocket` | Analyze WebSocket messages. | `on`, `off` | `off` |
| `parserDisable` | Parsers to disable. | List: `cookie`, `zlib`, `htmljs`, `json`, `multipart`, `base64`, `percent`, `urlenc`, `xml`, `jwt` | — |
| `partnerClientUUID` | Partner client UUID for multi-tenant setups. | UUID | — |
Example — two policies with different modes referenced by routes:
```yaml
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: wallarm-block
spec:
  wallarm:
    mode: block
    application: 42
    fallback: "on"
---
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: wallarm-monitoring
spec:
  wallarm:
    mode: monitoring
---
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: my-app
spec:
  host: my-app.example.com
  upstreams:
    - name: backend
      service: backend-svc
      port: 80
  routes:
    - path: /api
      policies:
        - name: wallarm-block
      action:
        pass: backend
    - path: /internal
      policies:
        - name: wallarm-monitoring
      action:
        pass: backend
```
In this example, /api traffic is processed in block mode while /internal traffic is in monitoring mode — each route references a different Wallarm Policy.