
Kubernetes deployment based on Helm charts

These instructions describe the steps to deploy Wallarm as a sidecar container in a Helm chart-based Kubernetes environment.

Prerequisites

  • Local or cloud (EKS, GKE, AKS, etc.) cluster running any version of Kubernetes

  • Application packaged as a Helm chart

  • Pod exposed to the public Internet or other potential sources of malicious web and API attacks

  • Kubernetes Ingress controller or external load balancer (such as AWS ELB or ALB) that adds the X-Forwarded-For HTTP request header containing the real public IP address of the connecting client

  • Wallarm account in the EU Cloud or US Cloud

  • Username and password of a user with the Deploy role in your company's Wallarm account. To add a new user, please follow these instructions

Installation

  1. Create Wallarm ConfigMap.

  2. Update the definition of the Deployment object in Kubernetes.

  3. Update the definition of the Service object in Kubernetes.

  4. Update the Helm chart configuration file.

  5. Test the Wallarm sidecar container.

If you deploy several Wallarm nodes

All Wallarm nodes deployed to your environment must have the same version. The postanalytics modules installed on separate servers must have the same version too.

Before installing an additional node, make sure its version matches the version of the modules already deployed. If the deployed module version is deprecated or will soon be deprecated (4.0 or lower), upgrade all modules to the latest version.

The version of the deployed Wallarm filtering node image is specified in the Helm chart configuration file → wallarm.image.tag.

Step 1: Creating Wallarm ConfigMap

Go to the Helm chart directory → the templates folder and create a wallarm-sidecar-configmap.yaml template with the following content:

apiVersion: v1
kind: ConfigMap
metadata:
  name: wallarm-sidecar-nginx-conf
data:
  default: |
      server {
          listen 80 default_server;
          listen [::]:80 default_server ipv6only=on;
          server_name localhost;
          root /usr/share/nginx/html;
          index index.html index.htm;
          wallarm_mode {{ .Values.wallarm.mode | quote }};
          # wallarm_application 1;
          set_real_ip_from 0.0.0.0/0;
          real_ip_header X-Forwarded-For;
          location / {
                  proxy_pass http://localhost:{{ .Values.wallarm.app_container_port }};
                  include proxy_params;
          }
      }
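
For clarity, this is roughly how the template above renders, assuming wallarm.mode is set to "block" and wallarm.app_container_port to 8080 (both values are illustrative and taken from the examples in this guide):

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name localhost;
    root /usr/share/nginx/html;
    index index.html index.htm;
    # Rendered from {{ .Values.wallarm.mode | quote }}
    wallarm_mode "block";
    # wallarm_application 1;
    set_real_ip_from 0.0.0.0/0;
    real_ip_header X-Forwarded-For;
    location / {
            # Rendered from {{ .Values.wallarm.app_container_port }}
            proxy_pass http://localhost:8080;
            include proxy_params;
    }
}
```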

Step 2: Updating the Deployment object in Kubernetes

  1. Return to the Helm chart directory → the templates folder and open the template defining the Deployment object for the application. A complex application can have several Deployment objects for its different components; find the object that defines the pods actually exposed to the Internet. For example:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers: 
          # Definition of your main app container
          - name: myapp 
            image: <Image>
            resources:
              limits:
                memory: "128Mi"
                cpu: "500m"
            ports:
            # Port on which the application container accepts incoming requests
            - containerPort: 8080 
    
  2. Copy the following elements to the template:

    • The checksum/config annotation to the spec.template.metadata.annotations section to update the running pods after a change in the previously created ConfigMap object
    • The wallarm sidecar container definition to the spec.template.spec.containers section
    • The wallarm-nginx-conf volume definition to the spec.template.spec.volumes section

    An example of the template with added elements is provided below. Elements for copying are indicated by the Wallarm element comment.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      annotations:
        # Wallarm element: annotation to update running pods after changing Wallarm ConfigMap
        checksum/config: '{{ include (print $.Template.BasePath "/wallarm-sidecar-configmap.yaml") . | sha256sum }}'
      name: myapp
    spec:
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            # Wallarm element: definition of Wallarm sidecar container
            - name: wallarm
              image: {{ .Values.wallarm.image.repository }}:{{ .Values.wallarm.image.tag }}
              imagePullPolicy: {{ .Values.wallarm.image.pullPolicy | quote }}
              env:
              - name: WALLARM_API_HOST
                value: {{ .Values.wallarm.wallarm_host_api | quote }}
              - name: DEPLOY_USER
                value: {{ .Values.wallarm.deploy_username | quote }}
              - name: DEPLOY_PASSWORD
                value: {{ .Values.wallarm.deploy_password | quote }}
              - name: DEPLOY_FORCE
                value: "true"
              - name: TARANTOOL_MEMORY_GB
                value: {{ .Values.wallarm.tarantool_memory_gb | quote }}
              ports:
              - name: http
                # Port on which the Wallarm sidecar container accepts requests
                # from the Service object
                containerPort: 80
              volumeMounts:
              - mountPath: /etc/nginx/sites-enabled
                readOnly: true
                name: wallarm-nginx-conf
            # Definition of your main app container
            - name: myapp
              image: <Image>
              resources:
                limits:
                  memory: "128Mi"
                  cpu: "500m"
              ports:
              # Port on which the application container accepts incoming requests
              - containerPort: 8080 
          volumes:
          # Wallarm element: definition of the wallarm-nginx-conf volume
          - name: wallarm-nginx-conf
            configMap:
              name: wallarm-sidecar-nginx-conf
              items:
                - key: default
                  path: default
    
  3. Update the ports.containerPort value in the sidecar container definition as directed by the code comments.

Step 3: Updating the Service object in Kubernetes

  1. Return to the Helm chart directory → the templates folder and open the template defining the Service object that points to the Deployment modified in the previous step. For example:

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      selector:
        app: myapp
      ports:
      - port: {{ .Values.service.port }}
        # Currently points to the application container port
        targetPort: 8080
    
  2. Change the ports.targetPort value so that it points to the Wallarm sidecar container port (the ports.containerPort value defined in the Wallarm sidecar container). For example:

    ...
      - port: {{ .Values.service.port }}
        # Wallarm sidecar container port; 
        # the value must be identical to ports.containerPort
        # in definition of Wallarm sidecar container
        targetPort: 80
    

Step 4: Updating the Helm chart configuration file

  1. Return to the Helm chart directory and open the values.yaml file.

  2. Copy the wallarm object definition provided below to values.yaml and update parameter values following the code comments.

    wallarm:
      image:
        repository: wallarm/node
        tag: 3.6.2-1
        pullPolicy: Always
      # Wallarm API endpoint: 
      # "api.wallarm.com" for the EU Cloud
      # "us1.api.wallarm.com" for the US Cloud
      wallarm_host_api: "api.wallarm.com"
      # Username of the user with the Deploy role
      deploy_username: "username"
      # Password of the user with the Deploy role
      deploy_password: "password"
      # Port on which your main app container accepts incoming requests;
      # the value must be identical to ports.containerPort
      # in the definition of your main app container
      app_container_port: 80
      # Request filtration mode:
      # "off" to disable request processing
      # "monitoring" to process but not block requests
      # "safe_blocking" to block malicious requests originated from graylisted IPs
      # "block" to process all requests and block the malicious ones
      mode: "block"
      # Amount of memory in GB for request analytics data
      tarantool_memory_gb: 2
    
  3. Make sure the values.yaml file is valid using the following command:

    helm lint
    
  4. Deploy the modified Helm chart in the Kubernetes cluster using the following command:

    helm upgrade <RELEASE> <CHART>
    
    • <RELEASE> is the name of the existing Helm release
    • <CHART> is the path to the Helm chart directory

NetworkPolicy object in Kubernetes

If the application also uses a NetworkPolicy object, update it to reflect the Wallarm sidecar container port specified above.
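
A minimal sketch of such an update, assuming the illustrative myapp name and labels used throughout this guide (adapt the selector and rules to your existing object):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myapp
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
  - Ingress
  ingress:
  - ports:
    # Must match ports.containerPort of the Wallarm sidecar container
    - protocol: TCP
      port: 80
```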

Step 5: Testing the Wallarm sidecar container

  1. Get the list of pods using the following command:

    kubectl get pods
    

     The number of containers in the pod should have increased, and the pod status should be Running.

    NAME                       READY   STATUS    RESTARTS   AGE
    mychart-856f957bbd-cr4kt   2/2     Running   0          3m48s
    
  2. Go to Wallarm Console → Nodes and make sure that a new node is displayed. This node is used to filter requests to your application.

  3. Send a test malicious request to the application as described in these instructions.

  4. Go to Wallarm Console → Attacks and make sure that the attack is displayed in the list.