
Monitor Kibana queries with Packetbeat


Used: packetbeat v6.2.4, kibana v6.2.4, elasticsearch v6.2.4

If you are using X-Pack Monitoring, you have a good overview of your Kibana performance. Sometimes it is necessary to know more. Packetbeat can monitor the HTTP traffic between Kibana and the Elasticsearch node.
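
The capture itself is configured in the http protocol section of packetbeat.yml. A minimal sketch, assuming Kibana talks to Elasticsearch on port 9200 of the monitored host; send_request and send_response make the request and response bodies available for the filtering described below:

packetbeat.interfaces.device: any

packetbeat.protocols:
- type: http
  # Elasticsearch HTTP port that Kibana talks to (assumption)
  ports: [9200]
  # capture the raw request and response so the queries can be inspected
  send_request: true
  send_response: true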

Customize Docker image

I run Packetbeat in Docker. We can extend the official image and add our customizations.

FROM docker.elastic.co/beats/packetbeat:6.2.4
COPY packetbeat.yml /usr/share/packetbeat/packetbeat.yml
COPY packetbeat.keystore /usr/share/packetbeat/packetbeat.keystore
COPY ca.crt /usr/share/packetbeat/ca.crt
USER root
RUN chown packetbeat /usr/share/packetbeat/packetbeat.yml
RUN chown packetbeat /usr/share/packetbeat/packetbeat.keystore
RUN chown packetbeat /usr/share/packetbeat/ca.crt
RUN chmod go-wrx /usr/share/packetbeat/packetbeat.keystore
USER packetbeat
  • packetbeat.yml contains the Packetbeat configuration
  • packetbeat.keystore contains the password used in the above configuration
  • ca.crt contains the root certificate authority of the secured Elasticsearch cluster; both keystore and CA are referenced in packetbeat.yml, as sketched below
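
For reference, keystore entries are resolved with the ${KEY} syntax and the CA is wired in via ssl.certificate_authorities. A minimal sketch, assuming a keystore entry named ES_PWD and a direct Elasticsearch output (my actual configuration ships to Kafka instead, see below):

output.elasticsearch:
  hosts: ["https://elasticsearch:9200"]    # placeholder host
  username: "packetbeat_writer"            # placeholder user
  password: "${ES_PWD}"                    # resolved from packetbeat.keystore
  ssl.certificate_authorities: ["/usr/share/packetbeat/ca.crt"]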

Packetbeat Configuration

Packetbeat will capture far more than you might be interested in, so make sure you filter for the Kibana searches only. Kibana uses the Multi Search API (path = /_msearch) as a POST request to the Elasticsearch node. Filtering only for GET requests will get you nowhere.

The http.request.body contains the interesting part, whereas the http.response.body gives you the answer from Elasticsearch. Since the response stores a lot of text, I drop the http.response.body before ingestion.

processors:    
  - drop_event.when.not.contains:
      query: "search"
  - drop_event.when.equals:
      query: "POST /.kibana/_search"
  - drop_event.when.equals:
      query: "POST /.reporting-*/_search"
  - drop_event.when.equals:
      query: "POST /.reporting-*/esqueue/_search"
  # not interested in the response (or how long it took), duration is given by packetbeat
  - drop_fields:
      when.equals:
        query: "POST /_msearch"
      fields: ["http.response.body"]
  # filter out load balancer
  - drop_event.when:
      and:
      - contains:
          http.response.headers.server: "nginx"
      - equals:
          ip: "10.22.22.221"

The above snippet contains only the filtering. The whole Packetbeat configuration can be found in this gist.

The overall configuration contains an output to Apache Kafka, from where Logstash reads the events and ships them to my monitoring cluster. The important part: ingest the Packetbeat capture into another cluster than the monitored one! Otherwise you create an infinite loop, because every query you run in Kibana to review the results is itself captured again. A dedicated monitoring cluster is recommended.
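
The Kafka output itself is only a few lines in packetbeat.yml. A sketch, with placeholder broker hosts and topic name:

output.kafka:
  hosts: ["kafka1:9092", "kafka2:9092"]   # placeholder broker list
  topic: "packetbeat"                     # placeholder topic consumed by Logstash
  compression: gzip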

Ansible Deployment

In the Ansible playbook.yml, the docker module contains all the run arguments.

---

- hosts: kibana-host
  any_errors_fatal: True
  serial: 1
  vars:
    container_name: packetbeat
    image_name: cinhtau/packetbeat:6.2.4

  tasks:
    - assert:
        that:
          - image_name is not none

    - name: remove old container
      command: docker rm -f "{{ container_name }}" warn=no
      ignore_errors: true

    - name: install new container
      docker:
        name: ""
        image: ""
        state: started
        pull: always
        log_driver: json-file
        log_opt:
          max-size: 10m
        env:
          TZ: Europe/Zurich
        net: host
        cap_add: NET_ADMIN
        restart_policy: on-failure
        restart_policy_retry: 10

Examine Results

In the Kibana instance of the monitoring cluster you can, for example, examine all requests that took longer than 2 seconds.
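
A query string filter in Discover does the job; a sketch assuming Packetbeat's default responsetime field, which is measured in milliseconds:

query:"POST /_msearch" AND responsetime:>2000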
