Wallarm OOB for Traffic Mirrored by NGINX, Envoy and Similar¶
This article explains how to deploy Wallarm as an OOB solution when the traffic mirror is produced by NGINX, Envoy, or a similar solution.
Traffic mirroring can be implemented by configuring a web, proxy, or similar server to copy incoming traffic to the Wallarm services for analysis. With this approach, the original requests are processed by your backends as usual, while a copy of each request is sent to the Wallarm node for out-of-band analysis.
Deployment procedure¶
To deploy and configure Wallarm to analyze a traffic mirror, you need to:
- Deploy the Wallarm node to your infrastructure by one of the following methods:

    - On a machine with a Linux OS from the all-in-one installer
    - To AWS using the Terraform module
    - To AWS using the Machine Image
    - To GCP using the Machine Image
    - To a container-based environment using the NGINX-based Docker image

    Support of mirrored traffic analysis: only NGINX-based Wallarm nodes support mirrored traffic filtration.

- Configure Wallarm to analyze the traffic copy - the instructions above are equipped with the required steps. A hedged sketch of the node-side NGINX settings is given after this list.

- Configure your infrastructure to produce a copy of your incoming traffic and send that copy to a Wallarm node as an additional backend.

    For configuration details, refer to the documentation of the components used in your infrastructure. Below are configuration examples for popular solutions such as NGINX, Envoy, Traefik, and Istio, but the actual configuration depends on the peculiarities of your infrastructure.
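As referenced in step 2 above, here is a minimal sketch of the node-side settings for an NGINX-based Wallarm node. The exact directives for your deployment method are listed in the instructions linked above; the header names below match the mirroring examples that follow, so treat this purely as an orientation.

# Sketch of the Wallarm node NGINX configuration for mirrored traffic analysis.
# Assumptions: the wallarm_force directives of the Wallarm NGINX module are available,
# and the mirroring proxy passes the original server address/port and client IP
# in the X-SERVER-ADDR, X-SERVER-PORT and X-Forwarded-For headers (as in the
# NGINX example below). Adjust trusted networks and header names to your setup.

wallarm_force server_addr $http_x_server_addr;
wallarm_force server_port $http_x_server_port;

# Mirrored traffic carries no real responses, so response metrics are zeroed out.
wallarm_force response_status 0;
wallarm_force response_time 0;
wallarm_force response_size 0;

# Restore the real client IP from the header added by the mirroring proxy.
real_ip_header X-Forwarded-For;
set_real_ip_from 0.0.0.0/0;   # assumption: narrow this to your mirroring sources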
Configuration examples for traffic mirroring¶
Below are examples of how to configure NGINX, Envoy, Traefik, and Istio to mirror incoming traffic to Wallarm nodes as an additional backend.
NGINX¶
Starting with NGINX 1.13, you can mirror traffic to an additional backend. For NGINX to mirror the traffic:

- Configure the `ngx_http_mirror_module` module by setting the `mirror` directive in the `location` or `server` block. The example below mirrors requests received at `location /` to `location /mirror-test`.

- To send the mirrored traffic to the Wallarm node, list the headers to be mirrored and specify the IP address of the machine with the node in the `location` the `mirror` directive points to.
location / {
    mirror /mirror-test;
    mirror_request_body on;
    root /usr/share/nginx/html;
    index index.html index.htm;
}

location /mirror-test {
    internal;
    #proxy_pass http://111.11.111.1$request_uri;
    proxy_pass http://222.222.222.222$request_uri;
    proxy_set_header X-SERVER-PORT $server_port;
    proxy_set_header X-SERVER-ADDR $server_addr;
    proxy_set_header HOST $http_host;
    proxy_set_header X-Forwarded-For $realip_remote_addr;
    proxy_set_header X-Forwarded-Port $realip_remote_port;
    proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
    proxy_set_header X-Scheme $scheme;
    proxy_set_header X-Request-ID $request_id;
}
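A known caveat of `ngx_http_mirror_module` is that, while mirror responses are ignored and do not affect what clients receive, a slow or unresponsive mirror backend can delay the reading of subsequent requests on the same client connection. A possible mitigation, shown below as a sketch with assumed timeout values, is to keep the proxy timeouts for the mirror location short:

location /mirror-test {
    internal;
    proxy_pass http://222.222.222.222$request_uri;
    # Assumed values: short timeouts so that an unavailable Wallarm node
    # cannot stall the handling of the original traffic.
    proxy_connect_timeout 2s;
    proxy_read_timeout 5s;
    proxy_send_timeout 5s;
    # proxy_set_header directives as in the example above
}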
Review the NGINX documentation
Envoy¶
This example configures traffic mirroring with Envoy via a single `listener` listening on port 80 (without TLS) and having a single `filter`. The addresses of the original backend and of the additional backend receiving mirrored traffic are specified in the `clusters` block.
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 80
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: httpbin # <-- link to the original cluster
                  request_mirror_policies:
                  - cluster: wallarm # <-- link to the cluster receiving mirrored requests
                    runtime_fraction:
                      default_value:
                        numerator: 100
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  ### Definition of original cluster
  ###
  - name: httpbin
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: httpbin
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              ### Address of the original endpoint. Address is DNS name
              ### or IP address, port_value is TCP port number
              ###
              socket_address:
                address: httpbin # <-- definition of the original cluster
                port_value: 80
  ### Definition of the cluster receiving mirrored requests
  ###
  - name: wallarm
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: wallarm
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              ### Address of the endpoint receiving mirrored traffic (the Wallarm
              ### node). Address is a DNS name or IP address, port_value is the TCP
              ### port number. The Wallarm mirror schema can be deployed with any
              ### port; the default value is TCP/8445 for the Terraform module and
              ### 80 for the other deployment options.
              ###
              socket_address:
                address: wallarm
                port_value: 8445
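With `numerator: 100` and the default denominator of `HUNDRED`, the `runtime_fraction` above mirrors every matched request. If only a share of the traffic should be copied to Wallarm, for example during an initial rollout, the fraction can be lowered. A minimal sketch of the relevant route fragment (the 10% value is just an illustration), replacing the `request_mirror_policies` block in the route above:

                  request_mirror_policies:
                  - cluster: wallarm
                    runtime_fraction:
                      default_value:
                        numerator: 10        # mirror 10% of the requests
                        denominator: HUNDRED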
Review the Envoy documentation
Istio¶
For Istio to mirror the traffic, you can configure a `VirtualService` with mirroring routes pointing either to an internal endpoint (internal for Istio, e.g. hosted in Kubernetes) or to an external endpoint defined with a `ServiceEntry`:

- To enable mirroring of in-cluster requests (e.g. between pods), add `mesh` to `.spec.gateways`.

- To enable mirroring of external requests (e.g. via a LoadBalancer or NodePort service), configure the Istio `Gateway` component and add its name to `.spec.gateways` of the `VirtualService`. This option is shown in the example below.
---
### Configuration of the destination for mirrored traffic
###
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: wallarm-external-svc
spec:
  hosts:
  - some.external.service.tld # mirroring destination address
  location: MESH_EXTERNAL
  ports:
  - number: 8445 # mirroring destination port
    name: http
    protocol: HTTP
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - ...
  gateways:
  ### Name of the Istio `Gateway` component. Required for handling traffic from
  ### external sources
  ###
  - httpbin-gateway
  ### Special label that enables the routes of this virtual service for requests
  ### from Kubernetes pods (in-cluster communication not via gateways)
  ###
  - mesh
  http:
  - route:
    - destination:
        host: httpbin
        port:
          number: 80
      weight: 100
    mirror:
      host: some.external.service.tld # mirroring destination address
      port:
        number: 8445 # mirroring destination port
---
### For handling external requests
###
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
spec:
  selector:
    istio: ingress
    app: istio-ingress
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "httpbin.local"
Review the Istio documentation
Traefik¶
The following configuration example is based on the dynamic configuration file approach. Traefik also supports other configuration modes, and you can easily adapt the provided example to any of them, as they share a similar structure. A sketch of the corresponding static configuration is given after the example.
### Dynamic configuration file
### Note: entrypoints are described in the static configuration file
http:
  services:
    ### This service maps the original and the Wallarm `services`.
    ### In the `routers` configuration below, please use
    ### the name of this service (`with_mirroring`).
    ###
    with_mirroring:
      mirroring:
        service: "httpbin"
        mirrors:
        - name: "wallarm"
          percent: 100
    ### The `service` to mirror traffic to - the endpoint
    ### that should receive the requests mirrored (copied)
    ### from the original `service`.
    ###
    wallarm:
      loadBalancer:
        servers:
        - url: "http://wallarm:8445"
    ### The original `service`. This service should receive the
    ### original traffic.
    ###
    httpbin:
      loadBalancer:
        servers:
        - url: "http://httpbin:80/"
  routers:
    ### The router must reference the mirroring `service`
    ### (`with_mirroring`) for traffic mirroring to work.
    ###
    with_mirroring:
      entryPoints:
      - "web"
      rule: "Host(`mirrored.example.com`)"
      service: "with_mirroring"
    ### The router for the original traffic.
    ###
    just_to_original:
      entryPoints:
      - "web"
      rule: "Host(`original.example.local`)"
      service: "httpbin"