The breakdown of the workshop will be as follows:
We will begin with a dockerized solution, creating a Dockerfile that deploys Kafka as a Docker container, and look at the simple dashboard provided by Lenses. We will then walk through sample Terraform and Ansible scripts that deploy a Kafka cluster on EC2 instances in AWS. Finally, we will configure Kafka and ZooKeeper as systemd services and examine an example Kafka-to-S3 connector, deployed in AWS, that integrates with S3 buckets.
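As a preview of the connector portion, a minimal sketch of a Kafka Connect S3 sink configuration is shown below. This assumes the Confluent S3 sink connector; the connector name, topic, region, and bucket name are placeholder assumptions, not values from the workshop environment.

```json
{
  "name": "s3-sink-example",
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "tasks.max": "1",
    "topics": "example-topic",
    "s3.region": "us-east-1",
    "s3.bucket.name": "example-kafka-archive",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
    "flush.size": "1000"
  }
}
```

A configuration like this is typically POSTed to the Kafka Connect REST API; `flush.size` controls how many records accumulate before an object is written to the bucket.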
For an overall monitoring solution, we will start with Kafka logging and discuss the different logs that are configured in a simple deployment. We will look at broker, ZooKeeper, server, and Connect logs, and discuss how these logs can be made available to support teams. Additional topics covered in this section include:
- Monitoring each component of a Kafka cluster: services, applications, data flows, brokers, ZooKeeper, Schema Registry, Kafka Connect, the network, and system resources (CPU, disk space, memory)
- JMX Exporter (jmx_exporter) and the Kafka metrics it exposes
- Sample Kafka dashboards provided by Grafana
- Splunk Connect for Kafka - a sink connector that allows a Splunk software administrator to subscribe to a Kafka topic and stream the data to the Splunk HTTP Event Collector (HEC)
- Elasticsearch Metricbeat Kafka module and how it integrates with Jolokia
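Of the tools above, JMX Exporter is usually attached to the broker as a Java agent and translates JMX MBeans into Prometheus metrics. A minimal, hedged sketch of its rules file follows; the file paths in the comment and the single rule shown are illustrative assumptions, not the full configuration shipped with the jmx_exporter project.

```yaml
# Assumed agent attachment (paths are placeholders):
#   KAFKA_OPTS="-javaagent:/opt/jmx_prometheus_javaagent.jar=7071:/opt/kafka-jmx.yml"
lowercaseOutputName: true
rules:
  # Expose broker topic metrics (e.g. MessagesInPerSec) as Prometheus counters
  - pattern: "kafka.server<type=BrokerTopicMetrics, name=(.+)><>Count"
    name: kafka_server_brokertopicmetrics_$1
```

With the agent attached, Prometheus can scrape the broker on the configured port (7071 in this sketch), and the resulting series feed the Grafana dashboards mentioned above.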
We will look at configuring Kafka-related alerts covering in-sync replicas, partitions, broker connectivity, ZooKeeper connectivity, health status, latency, bandwidth, throughput, consumer lag, and preventing message loss in production. Finally, we will wrap up by discussing notification options, including email, Slack, ServiceNow, and PagerDuty.
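To make one of these alerts concrete, here is a hedged sketch of a Prometheus alerting rule for under-replicated partitions. The metric name assumes a JMX-Exporter-style naming scheme and will vary with the actual rules file in use; the threshold and duration are illustrative.

```yaml
groups:
  - name: kafka-alerts
    rules:
      # Fire if any broker reports under-replicated partitions for 5 minutes
      - alert: KafkaUnderReplicatedPartitions
        expr: kafka_server_replicamanager_underreplicatedpartitions > 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Broker {{ $labels.instance }} has under-replicated partitions"
```

Rules like this are evaluated by Prometheus and routed through Alertmanager, which is where the email, Slack, ServiceNow, or PagerDuty receivers would be configured.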