Collection of deployment files for projects deployed in our long-running scenario.
One of the projects used is Strimzi. You can find examples of our deployment configuration in the strimzi folder.
In our examples we deploy 2 Strimzi operators to manage the Main and Mirror Kafka clusters.
Between the clusters there is a KafkaMirrorMaker2 established to mirror data from one cluster to the other.
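A minimal KafkaMirrorMaker2 resource for such a setup looks roughly like the sketch below; the resource name, bootstrap addresses, and mirror patterns are illustrative assumptions rather than the exact values stored in the strimzi folder:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: main-to-mirror            # illustrative name
spec:
  replicas: 1
  connectCluster: mirror          # the alias of the cluster MirrorMaker connects to (the target)
  clusters:
    - alias: main
      bootstrapServers: main-kafka-bootstrap:9092
    - alias: mirror
      bootstrapServers: mirror-kafka-bootstrap:9092
  mirrors:
    - sourceCluster: main
      targetCluster: mirror
      sourceConnector:
        config:
          replication.factor: -1  # -1 = use the broker default replication factor
      topicsPattern: ".*"         # mirror all topics
      groupsPattern: ".*"         # mirror all consumer group offsets
```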
Overall we use 3 Kafka clusters:
- main - cluster with connected producers, consumers, mirror maker, and managed topics via the Topic Operator
- mirror - cluster that receives all data from the main cluster; topics are mirrored and not managed by the Topic Operator
- oauth - cluster configured with OAuth authentication and authorization via Keycloak
To simulate a production environment properly, we also deploy the Strimzi Drain Cleaner tool so that Kafka pods are moved safely during OpenShift cluster updates.
Note that all Kafka clusters are already in KRaft mode.
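A KRaft-mode cluster definition similar to the main cluster is sketched below; names, sizes, and listener settings are illustrative assumptions, not the exact values from the strimzi folder. The maxUnavailable setting is what the Drain Cleaner documentation recommends so that it can take over pod evictions:

```yaml
# A node pool providing combined controller/broker nodes for the "main" cluster
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: dual-role
  labels:
    strimzi.io/cluster: main
spec:
  replicas: 3
  roles:
    - controller
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
---
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: main
  annotations:
    strimzi.io/node-pools: enabled
    strimzi.io/kraft: enabled       # run the cluster in KRaft mode, no ZooKeeper section
spec:
  kafka:
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    config:
      offsets.topic.replication.factor: 3
      default.replication.factor: 3
      min.insync.replicas: 2
    template:
      podDisruptionBudget:
        maxUnavailable: 0           # hand pod evictions over to the Strimzi Drain Cleaner
  entityOperator:
    topicOperator: {}               # topics on main are managed by the Topic Operator
    userOperator: {}
```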
We deploy several instances of Kroxylicious for demo purposes. Currently, we have 3 instances connected to different Kafka clusters:
- kroxy-simple - with just traffic forwarding to the main Kafka cluster
- kroxy-filters - with encryption and schema enforcement filters, connected to the main cluster
- kroxy-oauth - with encryption and oauthbearer filters, connected to the oauth cluster
Each Kroxy instance has its own clients to demonstrate the functionality of the project; the clients point at the proxy's bootstrap address instead of the Kafka cluster's bootstrap service.
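For illustration, a standalone proxy configuration for the kroxy-simple case might look roughly like the sketch below. The key names follow older Kroxylicious releases and may differ in newer ones, and the virtual cluster name and addresses are assumptions, so treat this as a sketch rather than the configuration we actually ship:

```yaml
# Hypothetical sketch: forward traffic to the main cluster without any filters
virtualClusters:
  main-proxy:
    targetCluster:
      # Strimzi bootstrap service of the main cluster (assumed name and namespace)
      bootstrap_servers: main-kafka-bootstrap.strimzi.svc.cluster.local:9092
    clusterNetworkAddressConfigProvider:
      type: PortPerBrokerClusterNetworkAddressConfigProvider
      config:
        bootstrapAddress: localhost:9192   # address the demo clients connect to
# kroxy-filters and kroxy-oauth additionally define filters here (record
# encryption, schema enforcement, oauthbearer), per the Kroxylicious filter docs.
filters: []
```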
The examples also contain a Console deployment with connections to all existing Kafka clusters. To gather metrics we use the internal Prometheus provided by OpenShift. The Console instance is also connected to the Keycloak instance we have available in our deployment.
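For the OpenShift-provided Prometheus to scrape user projects at all, user workload monitoring has to be enabled; on a stock OpenShift install that is done roughly as follows (a sketch of the standard OpenShift setting, not one of the files stored here):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    # Enables the user-workload Prometheus stack that scrapes our namespaces
    enableUserWorkload: true
```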
To make the Kroxylicious schema enforcement filter work properly, we need a Schema Registry.
In our examples we use Apicurio Registry version 2.x, as version 3.x is not production ready yet.
Examples of a simple deployment are available in streams/schema-registry.
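With the Apicurio Registry Operator, a 2.x registry backed by the main Kafka cluster can be declared roughly as follows; the resource name and bootstrap address are assumptions:

```yaml
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
  name: schema-registry          # illustrative name
spec:
  configuration:
    # Store registry content in Kafka topics (kafkasql storage)
    persistence: kafkasql
    kafkasql:
      bootstrapServers: "main-kafka-bootstrap.strimzi.svc:9092"
```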
Next to Strimzi we also use a deployment of Debezium. Debezium is used here as a load generator from databases into Kafka. The databases are filled with data from a static generator.
Note that the Debezium deployment hasn't been revisited for a couple of months and is not updated to the latest versions.
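Debezium connectors are typically declared as Strimzi KafkaConnector resources attached to a KafkaConnect cluster that carries the Debezium plugins; a rough PostgreSQL example is sketched below. The connector name, database coordinates, and the owning KafkaConnect cluster are assumptions, and option names may need adjusting for the older Debezium version mentioned above:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: inventory-connector
  labels:
    # Must match the name of the KafkaConnect cluster running Debezium (assumed)
    strimzi.io/cluster: debezium-connect
spec:
  class: io.debezium.connector.postgresql.PostgresConnector
  tasksMax: 1
  config:
    database.hostname: postgres.debezium.svc
    database.port: 5432
    database.user: debezium
    database.password: changeit        # in practice taken from a Secret
    database.dbname: inventory
    topic.prefix: main.inventory       # Debezium 2.x option (1.x used database.server.name)
    plugin.name: pgoutput
```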
Every deployment and application is monitored by Prometheus and Vector. Because most of the monitoring stack is deployed from automation-hub, we are not storing the deployment files here. However, you can find here the example dashboards for the projects as well as our custom dashboards for proper monitoring of the deployed scenario.
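Strimzi-managed pods expose Prometheus metrics once metricsConfig is set on the corresponding resources, and the user-workload Prometheus then picks them up through a PodMonitor along the lines of Strimzi's own examples; the namespace below is an assumption:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: kafka-resources-metrics
  labels:
    app: strimzi
spec:
  selector:
    matchExpressions:
      # Select pods created by Strimzi for these resource kinds
      - key: "strimzi.io/kind"
        operator: In
        values: ["Kafka", "KafkaConnect", "KafkaMirrorMaker2"]
  namespaceSelector:
    matchNames:
      - strimzi          # namespace with the Kafka clusters (assumed)
  podMetricsEndpoints:
    - path: /metrics
      port: tcp-prometheus
```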
As part of our monitoring stack we use:
- Prometheus
- Grafana
- Thanos
- Loki
- Vector