Deploying the Native Node with Helm Chart¶
The Wallarm Native Node, which operates independently of NGINX, is designed for deployment with certain Wallarm connectors. You can run the Native Node as a separate service or as a load balancer in your Kubernetes cluster using the Helm chart.
Use cases¶
Deploy the Native Node with the Helm chart in the following cases:

- When you deploy a Wallarm connector and require the node to be self-hosted. This is ideal if you are already using Kubernetes management platforms like OpenShift, Amazon EKS, Azure AKS, or Google GKE. The node is set up as a load balancer with a public IP for easy traffic routing. Use the Node in `connector-server` mode.
- When you need a gRPC-based external processing filter for APIs managed by Istio. The node is set up either as a load balancer with a public IP or as a service inside your Kubernetes cluster. Use the Node in `envoy-external-filter` mode.
- When you deploy a Wallarm connector for Kong API Gateway. The node is deployed with the `ClusterIP` service type for internal traffic, without exposing a public IP. Use the Node in `connector-server` mode.
Requirements¶
The Kubernetes cluster for deploying the Native Node with the Helm chart must meet the following criteria:
- Helm v3 package manager installed.
- Inbound access from your API gateway or CDN where your APIs are running.
- Outbound access to:
    - `https://charts.wallarm.com` to download the Wallarm Helm chart
    - `https://hub.docker.com/r/wallarm` to download the Docker images required for the deployment
    - `https://us1.api.wallarm.com` or `https://api.wallarm.com` for the US or EU Wallarm Cloud
    - IP addresses and their corresponding hostnames (if any) listed below. This is needed for downloading updates to attack detection rules and API specifications, as well as retrieving precise IPs for your allowlisted, denylisted, or graylisted countries, regions, or data centers.
- A domain and a trusted SSL/TLS certificate for the Native Node.
- In addition to the above, you should have the Administrator role assigned in Wallarm Console.
Limitations¶
- A trusted SSL/TLS certificate is required for the Node instance domain. Self-signed certificates are not yet supported.
- Custom blocking page and blocking code configurations are not yet supported.
- Rate limiting by the Wallarm rule is not supported.
Deployment¶
1. Prepare Wallarm token¶
To install the node, you need a token for registering it in the Wallarm Cloud. To prepare a token:
- Open Wallarm Console → Settings → API tokens in the US Cloud or EU Cloud.
- Find or create an API token with the `Node deployment/Deployment` usage type.
- Copy this token.
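This page does not show where the token is consumed later, so where you keep it is up to you; one common approach is to hold it in an environment variable instead of pasting it into files (the variable name below is illustrative, not a chart requirement):

```shell
# Illustrative variable name; replace the placeholder with the token copied above
export WALLARM_API_TOKEN='<YOUR_NODE_TOKEN>'
```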
2. Add the Wallarm Helm chart repository¶
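The repository URL is given in the requirements above; a typical sequence uses the standard Helm commands (the repository alias `wallarm` is a conventional choice, not mandated):

```bash
# Add the Wallarm chart repository and refresh the local chart index
helm repo add wallarm https://charts.wallarm.com
helm repo update
```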
3. Prepare the configuration file¶
Deploying the native Wallarm node as a LoadBalancer with a public IP allows you to route traffic from MuleSoft, Cloudflare, Amazon CloudFront, Broadcom Layer7 API Gateway, or Fastly to this IP for security analysis and filtration.
- Register a domain for the load balancer.
- Obtain a trusted SSL/TLS certificate.
- Create the `values.yaml` configuration file with the following minimal configuration. Choose the tab for your preferred method of applying the certificate.

    If you use `cert-manager` in your cluster, you can generate the SSL/TLS certificate with it:

    ```yaml
    config:
      connector:
        mode: connector-server
      certificate:
        enabled: true
        certManager:
          enabled: true
          issuerRef:
            # The name of the cert-manager Issuer or ClusterIssuer
            name: letsencrypt-prod
            # Whether it is an Issuer (namespace-scoped) or a ClusterIssuer (cluster-wide)
            kind: ClusterIssuer
    processing:
      service:
        type: LoadBalancer
    ```

    Alternatively, you can pull the SSL/TLS certificate from an existing Kubernetes secret in the same namespace. The `customSecret` configuration allows you to define a certificate directly as base64-encoded values.
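The `customSecret` values mentioned above must be base64-encoded. A runnable sketch of producing such values, using dummy stand-ins for the real PEM files (the exact chart field names for these values are not shown on this page):

```shell
# Stand-in files; in practice these are your real PEM certificate and private key
printf '%s' 'dummy-cert' > /tmp/tls.crt
printf '%s' 'dummy-key'  > /tmp/tls.key

# -w0 (GNU coreutils) disables line wrapping so each value stays on one line
crt_b64=$(base64 -w0 < /tmp/tls.crt)
key_b64=$(base64 -w0 < /tmp/tls.key)

echo "$crt_b64"   # ZHVtbXktY2VydA==
echo "$key_b64"   # ZHVtbXkta2V5
```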
When deploying Wallarm as a connector for Kong API Gateway, you deploy the Native Node with the `ClusterIP` service type for internal traffic, without exposing a public IP.
Create the `values.yaml` configuration file with the following minimal configuration:
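The configuration snippet for this step did not survive in this copy of the page; a minimal sketch, assuming the same value layout as the other examples on this page (`connector-server` mode with an internal `ClusterIP` service, as described above):

```yaml
# Hedged sketch: field layout assumed to match the other examples on this page
config:
  connector:
    mode: connector-server
processing:
  service:
    type: ClusterIP
```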
Deploying the native Wallarm node as a LoadBalancer with a public IP allows you to route traffic from Istio Ingress to this IP for security analysis and filtration.
- Register a domain for the load balancer.
- Obtain a trusted SSL/TLS certificate.
- Create the `values.yaml` configuration file with the following minimal configuration. Choose the tab for your preferred method of applying the certificate.

    If you use `cert-manager` in your cluster, you can generate the SSL/TLS certificate with it:

    ```yaml
    config:
      connector:
        mode: envoy-external-filter
      certificate:
        enabled: true
        certManager:
          enabled: true
          issuerRef:
            # The name of the cert-manager Issuer or ClusterIssuer
            name: letsencrypt-prod
            # Whether it is an Issuer (namespace-scoped) or a ClusterIssuer (cluster-wide)
            kind: ClusterIssuer
    processing:
      service:
        type: LoadBalancer
    ```

    Alternatively, you can pull the SSL/TLS certificate from an existing Kubernetes secret in the same namespace. The `customSecret` configuration allows you to define a certificate directly as base64-encoded values.
When deploying Wallarm as an Istio connector service inside your Kubernetes cluster, the Native Node runs as an internal component (ClusterIP service type) without exposing a public IP.
- Define a DNS name that resolves to the Wallarm Node service inside your cluster.
- Obtain a trusted SSL/TLS certificate for that domain.
- Create the `values.yaml` configuration file with the following minimal configuration. Choose the tab for your preferred method of applying the certificate.

    If you use `cert-manager` in your cluster, you can generate the SSL/TLS certificate with it:

    ```yaml
    config:
      connector:
        mode: envoy-external-filter
      certificate:
        enabled: true
        certManager:
          enabled: true
          issuerRef:
            # The name of the cert-manager Issuer or ClusterIssuer
            name: letsencrypt-prod
            # Whether it is an Issuer (namespace-scoped) or a ClusterIssuer (cluster-wide)
            kind: ClusterIssuer
    processing:
      service:
        type: ClusterIP
    ```

    Alternatively, you can pull the SSL/TLS certificate from an existing Kubernetes secret in the same namespace. The `customSecret` configuration allows you to define a certificate directly as base64-encoded values.
4. Deploy the Wallarm service¶
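The exact install command did not survive in this copy of the page; a hedged sketch under the assumption that the release is installed into a `wallarm` namespace. The chart and release names below are placeholders — run `helm search repo wallarm` to find the exact chart name:

```bash
# List charts available in the repository added earlier
helm search repo wallarm

# <CHART_NAME> and the release name "native-node" are placeholders, not confirmed by this page
helm install native-node wallarm/<CHART_NAME> \
  -n wallarm --create-namespace \
  -f values.yaml
```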
5. Configure DNS access to the Wallarm Node¶
- Get the external IP for the Wallarm load balancer: find the external IP of the `native-processing` service.
- Create an A record in your DNS provider, pointing your domain to the external IP.

After the DNS propagates, you can access the service via the domain name.
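The external IP lookup in the steps above is typically done with kubectl; a sketch assuming the node is installed in the `wallarm` namespace:

```bash
# Show the load balancer service; the EXTERNAL-IP column holds the address
# to use for your DNS A record
kubectl get svc -n wallarm native-processing
```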
The Wallarm Node does not have a public IP, so it must be accessible internally through a DNS rewrite.
Create a CoreDNS rewrite rule to map your public domain (used in the certificate) to the Node's internal service address:
```bash
# This assumes you installed the Native Node in the default namespace: wallarm
# Replace <DOMAIN_NAME> with the domain name used in your certificate
# Example: wallarm-node.corp.com -> native-processing.wallarm.svc.cluster.local
kubectl patch configmap coredns -n kube-system --patch='
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        rewrite name <DOMAIN_NAME> native-processing.wallarm.svc.cluster.local
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
'
```
This configuration ensures that all in-cluster requests to <DOMAIN_NAME> resolve to the Wallarm Node's internal ClusterIP service while keeping traffic entirely within the cluster.
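To check that the rewrite is active, you can resolve the domain from inside the cluster; a sketch using a throwaway pod (the pod name and image are illustrative):

```bash
# Run a one-off pod that resolves the domain, then is removed
kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup <DOMAIN_NAME>
# The answer should point at the native-processing ClusterIP service
```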
6. Apply Wallarm code to an API management service¶
After deploying the node, the next step is to apply the Wallarm code to your API management platform or service in order to route traffic to the deployed node.
- Contact sales@wallarm.com to obtain the Wallarm code bundle for your connector.
- Follow the platform-specific instructions to apply the bundle on your API management platform.
Upgrade¶
To upgrade the node, follow the instructions.