# Kubernetes deployment based on Helm charts

These instructions provide you with the steps to deploy Wallarm as a K8s sidecar container in a Helm chart-based K8s environment.
## Prerequisites

- Local or cloud (EKS, GKE, AKS, etc.) cluster running any version of Kubernetes
- Application packaged as a Helm chart
- Pod exposed to the public Internet or other potential sources of malicious web and API attacks
- Kubernetes Ingress controller or external load balancer (like AWS ELB or ALB) to add the X-Forwarded-For HTTP request header, which contains the real public IP address of the connecting client
- Username and password of a user with the Deploy role added to your company's Wallarm account. To add a new user, please follow these instructions
## Installation

1. Create the Wallarm ConfigMap.
2. Update the definition of the Deployment object in Kubernetes.
3. Update the definition of the Service object in Kubernetes.
4. Update the Helm chart configuration file.
5. Test the Wallarm sidecar container.
**If you deploy several Wallarm nodes**

All Wallarm nodes deployed to your environment must be of the same version. The postanalytics modules installed on separate servers must be of the same version as well.

Before installing an additional node, please ensure its version matches the version of the already deployed modules. If the deployed module version is deprecated or will be deprecated soon (4.0 or lower), upgrade all modules to the latest version.

The version of the deployed Wallarm filtering node image is specified in the Helm chart configuration file → wallarm.image.tag.
## Step 1: Creating Wallarm ConfigMap

Go to the Helm chart directory → the templates folder and create a wallarm-sidecar-configmap.yaml template with the following content:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: wallarm-sidecar-nginx-conf
data:
  default: |
    geo $remote_addr $wallarm_mode_real {
      default {{ .Values.wallarm.mode | quote }};
      # IP addresses and rules for US cloud scanners
      104.237.151.202 off;104.237.155.105 off;172.104.208.113 off;172.104.21.210 off;172.104.22.150 off;173.255.193.92 off;192.155.82.205 off;192.155.92.134 off;192.81.135.28 off;23.239.30.236 off;23.239.4.41 off;23.92.30.204 off;34.94.218.5 off;35.236.1.4 off;35.236.118.146 off;35.236.20.89 off;45.33.15.249 off;45.33.16.32 off;45.33.43.225 off;45.33.65.37 off;45.33.79.18 off;45.33.86.254 off;45.56.113.41 off;45.56.114.24 off;45.56.122.184 off;45.56.71.221 off;45.56.72.191 off;45.79.10.15 off;45.79.115.178 off;45.79.143.18 off;45.79.186.159 off;45.79.194.128 off;45.79.216.187 off;45.79.75.59 off;45.79.75.91 off;45.79.93.164 off;50.116.11.251 off;50.116.42.181 off;66.175.222.237 off;69.164.202.55 off;72.14.184.100 off;74.207.237.202 off;96.126.124.141 off;96.126.127.23 off;88.80.188.20 off;198.58.123.222 off;192.155.84.216 off;23.92.18.193 off;23.92.18.191 off;170.187.207.244 off;198.58.123.201 off;192.155.84.159 off;88.80.188.16 off;170.187.207.246 off;
      # IP addresses and rules for European cloud scanners
      104.200.29.36 off;104.237.151.23 off;139.162.130.123 off;139.162.130.66 off;139.162.132.87 off;139.162.144.202 off;139.162.145.238 off;139.162.146.245 off;139.162.151.10 off;139.162.151.155 off;139.162.153.16 off;139.162.156.102 off;139.162.157.131 off;139.162.158.79 off;139.162.159.137 off;139.162.159.244 off;139.162.162.71 off;139.162.163.61 off;139.162.164.41 off;139.162.166.202 off;139.162.167.19 off;139.162.167.48 off;139.162.167.51 off;139.162.168.17 off;139.162.170.84 off;139.162.171.141 off;139.162.171.208 off;139.162.172.35 off;139.162.174.220 off;139.162.174.26 off;139.162.175.71 off;139.162.176.169 off;139.162.177.83 off;139.162.178.148 off;139.162.179.214 off;139.162.180.37 off;139.162.182.156 off;139.162.182.20 off;139.162.184.225 off;139.162.184.33 off;139.162.185.243 off;139.162.186.129 off;139.162.186.136 off;139.162.187.138 off;139.162.188.246 off;139.162.190.165 off;139.162.190.22 off;139.162.190.86 off;139.162.191.89 off;172.104.128.103 off;172.104.128.67 off;172.104.139.37 off;172.104.146.90 off;172.104.150.243 off;172.104.151.59 off;172.104.152.244 off;172.104.152.28 off;172.104.152.96 off;172.104.154.128 off;172.104.157.26 off;172.104.229.59 off;172.104.233.100 off;172.104.240.115 off;172.104.241.162 off;172.104.250.27 off;172.104.252.112 off;172.105.64.135 off;172.105.65.182 off;173.230.130.253 off;173.230.138.206 off;173.230.156.200 off;173.230.158.207 off;173.255.192.83 off;173.255.200.80 off;173.255.214.180 off;23.239.11.21 off;23.92.18.13 off;34.90.114.30 off;34.91.133.93 off;34.91.54.247 off;35.204.60.30 off;45.33.105.35 off;45.33.115.7 off;45.33.33.19 off;45.33.41.31 off;45.33.64.71 off;45.33.72.81 off;45.33.73.43 off;45.33.80.65 off;45.33.81.109 off;45.33.88.42 off;45.33.97.86 off;45.33.98.89 off;45.56.102.9 off;45.56.104.7 off;45.56.119.39 off;45.56.69.211 off;45.79.16.240 off;50.116.23.110 off;50.116.35.43 off;50.116.43.110 off;66.228.58.101 off;72.14.181.105 off;72.14.191.76 off;85.90.246.120 off;85.90.246.49 off;
    }
    server {
      listen 80 default_server;
      listen [::]:80 default_server ipv6only=on;
      server_name localhost;
      root /usr/share/nginx/html;
      index index.html index.htm;
      wallarm_mode $wallarm_mode_real;
      # wallarm_instance 1;
      {{ if eq .Values.wallarm.enable_ip_blocking "true" }}
      wallarm_acl default;
      {{ end }}
      set_real_ip_from 0.0.0.0/0;
      real_ip_header X-Forwarded-For;
      location / {
        proxy_pass http://localhost:{{ .Values.wallarm.app_container_port }};
        include proxy_params;
      }
    }
```
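For reference, a sketch of what Helm renders from the template above when values.yaml sets `mode: "block"` (the `quote` function wraps the value in double quotes); the scanner IP lists are omitted here for brevity:

```
geo $remote_addr $wallarm_mode_real {
  default "block";
  # ...scanner IP addresses, each mapped to "off", follow here
}
```

Requests from the listed scanner addresses therefore get `wallarm_mode off`, while all other clients get the mode configured in values.yaml.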
## Step 2: Updating the Deployment object in Kubernetes

1. Return to the Helm chart directory → the templates folder and open the template defining the Deployment object for the application. A complex application can have several Deployment objects for different components of the application - please find the object that defines the pods actually exposed to the Internet. For example:

    ```yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            # Definition of your main app container
            - name: myapp
              image: <Image>
              resources:
                limits:
                  memory: "128Mi"
                  cpu: "500m"
              ports:
                # Port on which the application container accepts incoming requests
                - containerPort: 8080
    ```

2. Copy the following elements to the template:

    - The checksum/config annotation to the spec.template.metadata.annotations section to update the running pods after a change in the previously created ConfigMap object
    - The wallarm sidecar container definition to the spec.template.spec.containers section
    - The wallarm-nginx-conf volume definition to the spec.template.spec.volumes section

    An example of the template with added elements is provided below. Elements for copying are indicated by the Wallarm element comment.

    ```yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      annotations:
        # Wallarm element: annotation to update running pods after changing Wallarm ConfigMap
        checksum/config: '{{ include (print $.Template.BasePath "/wallarm-sidecar-configmap.yaml") . | sha256sum }}'
      name: myapp
    spec:
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            # Wallarm element: definition of Wallarm sidecar container
            - name: wallarm
              image: {{ .Values.wallarm.image.repository }}:{{ .Values.wallarm.image.tag }}
              imagePullPolicy: {{ .Values.wallarm.image.pullPolicy | quote }}
              env:
                - name: WALLARM_API_HOST
                  value: {{ .Values.wallarm.wallarm_host_api | quote }}
                - name: DEPLOY_USER
                  value: {{ .Values.wallarm.deploy_username | quote }}
                - name: DEPLOY_PASSWORD
                  value: {{ .Values.wallarm.deploy_password | quote }}
                - name: DEPLOY_FORCE
                  value: "true"
                - name: WALLARM_ACL_ENABLE
                  value: "true"
                - name: TARANTOOL_MEMORY_GB
                  value: {{ .Values.wallarm.tarantool_memory_gb | quote }}
              ports:
                - name: http
                  # Port on which the Wallarm sidecar container accepts requests
                  # from the Service object
                  containerPort: 80
              volumeMounts:
                - mountPath: /etc/nginx/sites-enabled
                  readOnly: true
                  name: wallarm-nginx-conf
            # Definition of your main app container
            - name: myapp
              image: <Image>
              resources:
                limits:
                  memory: "128Mi"
                  cpu: "500m"
              ports:
                # Port on which the application container accepts incoming requests
                - containerPort: 8080
          volumes:
            # Wallarm element: definition of the wallarm-nginx-conf volume
            - name: wallarm-nginx-conf
              configMap:
                name: wallarm-sidecar-nginx-conf
                items:
                  - key: default
                    path: default
    ```

3. Update the ports.containerPort value in the sidecar container definition following the code comments.
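The checksum/config annotation works because any change to the ConfigMap template changes its hash, which changes the pod template and makes Kubernetes roll the pods. A minimal sketch of the idea (illustration only; the real value is computed by Helm's `sha256sum` template function):

```python
import hashlib

def config_checksum(rendered_configmap: str) -> str:
    # Equivalent in spirit to piping the rendered template through sha256sum
    return hashlib.sha256(rendered_configmap.encode()).hexdigest()

# Different ConfigMap content -> different annotation value -> pods are restarted
before = config_checksum('default: "monitoring"')
after = config_checksum('default: "block"')
print(before != after)  # True
```

Without this annotation, editing only the ConfigMap would leave the already running pods serving the old nginx configuration.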
## Step 3: Updating the Service object in Kubernetes

1. Return to the Helm chart directory → the templates folder and open the template defining the Service object that points to the Deployment modified in the previous step. For example:

    ```yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      selector:
        app: myapp
      ports:
        - port: {{ .Values.service.port }}
          targetPort: 8080
    ```

2. Change the ports.targetPort value to point to the Wallarm sidecar container port (ports.containerPort defined in the Wallarm sidecar container). For example:

    ```yaml
    ...
      ports:
        - port: {{ .Values.service.port }}
          # Wallarm sidecar container port;
          # the value must be identical to ports.containerPort
          # in definition of Wallarm sidecar container
          targetPort: 80
    ```
## Step 4: Updating the Helm chart configuration file

1. Return to the Helm chart directory and open the values.yaml file.

2. Copy the wallarm object definition provided below to values.yaml and update parameter values following the code comments.

    ```yaml
    wallarm:
      image:
        repository: wallarm/node
        tag: 2.18.1-5
        pullPolicy: Always
      # Wallarm API endpoint:
      # "api.wallarm.com" for the EU Cloud
      # "us1.api.wallarm.com" for the US Cloud
      wallarm_host_api: "api.wallarm.com"
      # Username of the user with the Deploy role
      deploy_username: "username"
      # Password of the user with the Deploy role
      deploy_password: "password"
      # Port on which the container accepts incoming requests,
      # the value must be identical to ports.containerPort
      # in definition of your main app container
      app_container_port: 80
      # Request filtration mode:
      # "off" to disable request processing
      # "monitoring" to process but not block requests
      # "block" to process all requests and block the malicious ones
      mode: "block"
      # Amount of memory in GB for request analytics data
      tarantool_memory_gb: 2
      # Set to "true" to enable the IP Blocking functionality
      enable_ip_blocking: "false"
    ```

3. Make sure the values.yaml file is valid using the following command:

    ```
    helm lint
    ```

4. Deploy the modified Helm chart in the Kubernetes cluster using the following command:

    ```
    helm upgrade <RELEASE> <CHART>
    ```

    - <RELEASE> is the name of an existing Helm release
    - <CHART> is the path to the Helm chart directory
**NetworkPolicy object in Kubernetes**

If the application also uses the NetworkPolicy object, it should be updated to reflect the Wallarm sidecar container port specified above.
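As an illustration only (the policy name and pod labels below are assumptions, not part of this guide), an ingress rule admitting traffic to the sidecar port might look like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myapp-ingress        # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: myapp             # must match your pod labels
  policyTypes:
    - Ingress
  ingress:
    - ports:
        # Must match the Wallarm sidecar containerPort, not the app port
        - protocol: TCP
          port: 80
```

The key point is the port: incoming traffic now terminates at the sidecar, so policies allowing only the application port would silently block it.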
## Step 5: Testing the Wallarm sidecar container

1. Get the list of pods using the following command:

    ```
    kubectl get pods
    ```

    The number of containers in the pod should increase, and the status of the pod should be Running:

    ```
    NAME                       READY   STATUS    RESTARTS   AGE
    mychart-856f957bbd-cr4kt   2/2     Running   0          3m48s
    ```

2. Go to Wallarm Console → Nodes via the link below and make sure that a new node is displayed. This created node is used to filter requests to your application.

3. Send a test malicious request to the application as described in these instructions.

4. Go to Wallarm Console → Events via the link below and make sure that an attack is displayed in the list:
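The linked instructions describe the exact test request; as an illustration only, a classic probe that web application firewalls typically flag as a Path Traversal attempt looks like this (the application URL is a placeholder):

```
curl http://<APP_URL>/etc/passwd
```

If the node runs in block mode, the response should be an error status instead of the application's normal reply, and the event should appear in the Events list shortly afterwards.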