Kubernetes Deployment Based on Helm Charts

Prerequisites

  • Local or cloud (EKS, GKE, AKS, etc.) cluster running any version of Kubernetes
  • Application packaged as a Helm chart
  • Application pod exposed to the public Internet or to other potential sources of malicious web and API attacks
  • Kubernetes ingress controller or external load balancer (like AWS ELB or ALB) that adds the X-Forwarded-For HTTP request header containing the real public IP address of the connecting client (see the example after this list)
  • Wallarm account in the EU cloud or US cloud
  • Username and password of a user with the Deploy role added to your Wallarm account. To add a new user, please follow the instructions
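
When the ingress controller or load balancer sets X-Forwarded-For correctly, a request arriving at the pod carries a header similar to the following (myapp.example.com and 203.0.113.7 are placeholders for your application host and the client's public IP address):

GET / HTTP/1.1
Host: myapp.example.com
X-Forwarded-For: 203.0.113.7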

Installation

  1. Create the Wallarm ConfigMap.
  2. Update the definition of the Deployment object in Kubernetes.
  3. Update the definition of the Service object in Kubernetes.
  4. Update the Helm chart configuration file.
  5. Test the Wallarm sidecar container.

Step 1: Creating Wallarm ConfigMap

Go to the Helm chart directory > the templates folder and create the wallarm-sidecar-configmap.yaml template with the following content:

apiVersion: v1
kind: ConfigMap
metadata:
  name: wallarm-sidecar-nginx-conf
data:
  default: |
    map $remote_addr $wallarm_mode_real {
      default {{ .Values.wallarm.mode | quote }};
      # IP addresses and rules for US cloud scanners
      23.239.18.250 off;104.237.155.105 off;45.56.71.221 off;45.79.194.128 off;104.237.151.202 off;45.33.15.249 off;45.33.43.225 off;45.79.10.15 off;45.33.79.18 off;45.79.75.59 off;23.239.30.236 off;50.116.11.251 off;45.56.123.144 off;45.79.143.18 off;172.104.21.210 off;74.207.237.202 off;45.79.186.159 off;45.79.216.187 off;45.33.16.32 off;96.126.127.23 off;172.104.208.113 off;192.81.135.28 off;35.236.51.79 off;35.236.75.97 off;35.236.111.124 off;35.236.108.88 off;35.236.16.246 off;35.236.61.185 off;35.236.110.91 off;35.236.14.198 off;35.235.124.137 off;35.236.48.47 off;35.236.100.176 off;35.236.18.117 off;35.235.112.188 off;35.236.55.214 off;35.236.126.84 off;35.236.3.158 off;35.236.127.211 off;35.236.118.146 off;35.236.20.89 off;35.236.1.4 off;
      # IP addresses and rules for European cloud scanners
      139.162.130.66 off;139.162.144.202 off;139.162.151.10 off;139.162.151.155 off;139.162.156.102 off;139.162.157.131 off;139.162.158.79 off;139.162.159.137 off;139.162.159.244 off;139.162.163.61 off;139.162.164.41 off;139.162.166.202 off;139.162.167.19 off;139.162.167.51 off;139.162.168.17 off;139.162.170.84 off;139.162.171.141 off;139.162.172.35 off;139.162.174.220 off;139.162.174.26 off;139.162.175.71 off;139.162.176.169 off;139.162.178.148 off;139.162.179.214 off;139.162.180.37 off;139.162.182.156 off;139.162.182.20 off;139.162.184.225 off;139.162.185.243 off;139.162.186.136 off;139.162.187.138 off;139.162.188.246 off;139.162.190.22 off;139.162.190.86 off;139.162.191.89 off;85.90.246.120 off;104.200.29.36 off;104.237.151.23 off;173.230.130.253 off;173.230.138.206 off;173.230.156.200 off;173.230.158.207 off;173.255.192.83 off;173.255.193.92 off;173.255.200.80 off;173.255.214.180 off;192.155.82.205 off;23.239.11.21 off;23.92.18.13 off;23.92.30.204 off;45.33.105.35 off;45.33.33.19 off;45.33.41.31 off;45.33.64.71 off;45.33.65.37 off;45.33.72.81 off;45.33.73.43 off;45.33.80.65 off;45.33.81.109 off;45.33.88.42 off;45.33.97.86 off;45.33.98.89 off;45.56.102.9 off;45.56.104.7 off;45.56.113.41 off;45.56.114.24 off;45.56.119.39 off;50.116.35.43 off;50.116.42.181 off;50.116.43.110 off;66.175.222.237 off;66.228.58.101 off;69.164.202.55 off;72.14.181.105 off;72.14.184.100 off;72.14.191.76 off;172.104.150.243 off;139.162.190.165 off;139.162.130.123 off;139.162.132.87 off;139.162.145.238 off;139.162.146.245 off;139.162.162.71 off;139.162.171.208 off;139.162.184.33 off;139.162.186.129 off;172.104.128.103 off;172.104.128.67 off;172.104.139.37 off;172.104.146.90 off;172.104.151.59 off;172.104.152.244 off;172.104.152.96 off;172.104.154.128 off;172.104.229.59 off;172.104.250.27 off;172.104.252.112 off;45.33.115.7 off;45.56.69.211 off;45.79.16.240 off;50.116.23.110 off;85.90.246.49 off;172.104.139.18 off;172.104.152.28 off;139.162.177.83 off;172.104.240.115 off;172.105.64.135 off;139.162.153.16 off;172.104.241.162 off;139.162.167.48 off;172.104.233.100 off;172.104.157.26 off;172.105.65.182 off;178.32.42.221 off;46.105.75.84 off;51.254.85.145 off;188.165.30.182 off;188.165.136.41 off;188.165.137.10 off;54.36.135.252 off;54.36.135.253 off;54.36.135.254 off;54.36.135.255 off;54.36.131.128 off;54.36.131.129 off;
    }
    server {
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;
        server_name localhost;
        root /usr/share/nginx/html;
        index index.html index.htm;
        wallarm_mode $wallarm_mode_real;
        # wallarm_instance 1;
        {{ if eq .Values.wallarm.enable_ip_blocking "true" }}
        wallarm_acl default;
        {{ end }}
        set_real_ip_from 0.0.0.0/0;
        real_ip_header X-Forwarded-For;
        location / {
                proxy_pass http://localhost:{{ .Values.wallarm.app_container_port }};
                include proxy_params;
        }
    }
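
To make sure the new template renders correctly, you can preview it before deploying. With Helm 2, the -x flag limits the output to a single template (with Helm 3, use --show-only instead); CHART is the path to the Helm chart directory:

# helm template -x templates/wallarm-sidecar-configmap.yaml CHART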

Step 2: Updating the Deployment Object in Kubernetes

  1. Return to the Helm chart directory > the templates folder and open the template defining the Deployment object for the application. A complex application can have several Deployment objects for different components of the application; find the object that defines the pods actually exposed to the Internet. For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers: 
      # Definition of your main app container
      - name: myapp 
        image: <Image>
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        # Port on which the application container accepts incoming requests
        - containerPort: 8080
  2. Copy the following elements to the template:
    • the checksum/config annotation to the spec.template.metadata.annotations section to update the running pods after a change in the previously created ConfigMap object,
    • the wallarm sidecar container definition to the spec.template.spec.containers section,
    • the wallarm-nginx-conf volume definition to the spec.template.spec.volumes section.
    An example of the template with added elements is provided below. Elements for copying are indicated by the Wallarm element comment.
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    # Wallarm element: annotation to update running pods after changing Wallarm ConfigMap
    checksum/config: {{ include (print $.Template.BasePath "/wallarm-sidecar-configmap.yaml") . | sha256sum }}
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      # Wallarm element: definition of Wallarm sidecar container
      - name: wallarm
        image: {{ .Values.wallarm.image.repository }}:{{ .Values.wallarm.image.tag }}
        imagePullPolicy: {{ .Values.wallarm.image.pullPolicy | quote }}
        env:
        - name: WALLARM_API_HOST
          value: {{ .Values.wallarm.wallarm_host_api | quote }}
        - name: DEPLOY_USER
          value: {{ .Values.wallarm.deploy_username | quote }}
        - name: DEPLOY_PASSWORD
          value: {{ .Values.wallarm.deploy_password | quote }}
        - name: DEPLOY_FORCE
          value: "true"
        - name: TARANTOOL_MEMORY_GB
          value: {{ .Values.wallarm.tarantool_memory_gb | quote }}
        ports:
        - name: http
          # Port on which the Wallarm sidecar container accepts requests
          # from the Service object
          containerPort: 80
        volumeMounts:
        - mountPath: /etc/nginx/sites-enabled
          readOnly: true
          name: wallarm-nginx-conf
      # Definition of your main app container
      - name: myapp
        image: <Image>
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        # Port on which the application container accepts incoming requests
        - containerPort: 8080 
      volumes:
      # Wallarm element: definition of the wallarm-nginx-conf volume
      - name: wallarm-nginx-conf
        configMap:
          name: wallarm-sidecar-nginx-conf
          items:
            - key: default
              path: default
  3. Update the ports.containerPort value in the sidecar container definition following the code comments.
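
With the sidecar in place, traffic inside the pod passes through the Wallarm container before reaching the application (the ports are those used in the example above):

client -> Service -> wallarm sidecar container (port 80) -> application container (localhost:8080)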

Step 3: Updating the Service Object in Kubernetes

  1. Return to the Helm chart directory > the templates folder and open the template defining the Service object that points to the Deployment modified in the previous step. For example:
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: {{ .Values.service.port }}
    # Wallarm sidecar container port; 
    # the value must be identical to ports.containerPort
    # in definition of Wallarm sidecar container
    targetPort: 80
  2. Make sure the ports.targetPort value is identical to ports.containerPort from the definition of the Wallarm sidecar container.
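
Once the chart is deployed (see the next step), you can confirm that the Service targets the sidecar port (myapp is the Service name from the example above):

# kubectl describe service myapp | grep TargetPort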

Step 4: Updating the Helm Chart Configuration File

  1. Return to the Helm chart directory and open the values.yaml file.
  2. Copy the wallarm object definition provided below to values.yaml and update the parameter values following the code comments.
wallarm:
  image:
    repository: wallarm/node
    tag: 2.14
    pullPolicy: Always
  # Wallarm API endpoint:
  # "api.wallarm.com" for the EU cloud
  # "us1.api.wallarm.com" for the US cloud
  wallarm_host_api: "api.wallarm.com"
  # Username of the user with the Deploy role
  deploy_username: "username"
  # Password of the user with the Deploy role
  deploy_password: "password"
  # Port on which your main app container accepts incoming requests;
  # the value must be identical to ports.containerPort
  # in the definition of your main app container
  # (8080 in the example above)
  app_container_port: 8080
  # Request filtering mode:
  # "off" to disable request processing
  # "monitoring" to process but not block requests
  # "block" to process all requests and block the malicious ones
  mode: "block"
  # Whether to block requests from blacklisted IP addresses:
  # when set to "true", the wallarm-sidecar-configmap.yaml template
  # enables the wallarm_acl directive
  enable_ip_blocking: "false"
  # Amount of memory in GB for request analytics data;
  # the recommended value is 75% of the total server memory
  tarantool_memory_gb: 2
  3. Make sure the modified chart is valid using the following command (run it from the Helm chart directory):
# helm lint
  4. Deploy the modified Helm chart to the Kubernetes cluster using the following command:
# helm upgrade RELEASE CHART
  • RELEASE is the name of an existing Helm release,
  • CHART is the path to the Helm chart directory.
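
For example, with a release named myapp and the chart located in the current directory (both values are placeholders for your own release name and chart path):

# helm lint .
# helm upgrade myapp .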

NetworkPolicy Object in Kubernetes

If the application also uses a NetworkPolicy object, it should be updated to allow traffic to the Wallarm sidecar container port specified above.
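
For example, a policy that previously allowed ingress traffic to the application port 8080 should now allow traffic to the sidecar port 80 instead. A minimal sketch, assuming the pod labels from the myapp example above:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myapp
spec:
  podSelector:
    matchLabels:
      app: myapp
  ingress:
  - ports:
    # Wallarm sidecar container port instead of the former
    # application port 8080
    - protocol: TCP
      port: 80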

Step 5: Testing Wallarm Sidecar Container

  1. Get the list of pods using the following command:

    # kubectl get pods
    

     The number of containers in the pod should increase by one and the pod status should be "Running".

     NAME                       READY   STATUS    RESTARTS   AGE
     mychart-856f957bbd-cr4kt   2/2     Running   0          3m48s
    
  2. Go to the Nodes section of your Wallarm account and make sure that a new node is displayed. The created node is used to filter requests to your application.
  3. Send a malicious test attack request to the application, for example as shown after this list.
  4. Go to the Events section of your Wallarm account and make sure that the attack is displayed in the list.
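
For item 3, any request with a clearly malicious payload will do; for example, the following request combines SQL injection and XSS patterns in a query string parameter (myapp.example.com is a placeholder for your application's public address):

# curl "http://myapp.example.com/?id='or+1=1--a-<script>prompt(1)</script>'"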
