🗄️ DATABASES & MESSAGING
Data Layer
Managed PostgreSQL, Redis-compatible cache, distributed KV store, Kafka streaming, and multi-model databases.
DATABASE ENGINES
PostgreSQL
Relational · CloudNativePG
HA clustering, automated failover, PITR, connection pooling
Used by: Keycloak, Backstage, Matomo, Harbor, Supabase
Dragonfly
Key-Value (Redis) · Dragonfly Operator
Redis-compatible API, multi-threaded shared-nothing design, higher throughput
Used by: Caching, sessions, rate-limiting
TiKV
Distributed KV · TiDB Operator
ACID transactions, Raft consensus, horizontal scaling
Used by: SurrealDB backend storage
SurrealDB
Multi-Model · Native
Document + graph + KV, SQL-like queries, TiKV backend
Used by: Applications needing flexible data models
Qdrant
Vector · Native
Similarity search, high-dimensional indexing, AI embeddings
Used by: Semantic search, recommendation, AI/ML workloads
Apache Kafka
Streaming · Strimzi Operator
Event streaming, CRD-based management, TLS + SASL auth
Used by: Async communication, event-driven architecture
All Components
CloudNativePG
Production · Kubernetes operator for PostgreSQL with HA clustering, automated failover, and point-in-time recovery.
Role: Manages PostgreSQL clusters for 5+ applications (Keycloak, Backstage, Matomo, etc.)
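A minimal `Cluster` manifest sketches how CloudNativePG is typically configured; the name and sizes below are hypothetical, not taken from this deployment:

```yaml
# Hypothetical example: a three-instance PostgreSQL cluster.
# CloudNativePG runs one primary plus streaming replicas and
# promotes a replica automatically if the primary fails.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: keycloak-db        # hypothetical name
spec:
  instances: 3             # 1 primary + 2 replicas
  storage:
    size: 10Gi
```

Point-in-time recovery is configured separately via the cluster's `backup` section, which points WAL archiving at an object store.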
Dragonfly
Production · Redis-compatible in-memory data store with higher throughput from a multi-threaded, shared-nothing architecture and modern data structures.
Role: High-performance caching layer replacing Redis
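With the Dragonfly Operator, a cache is declared as a `Dragonfly` custom resource; a minimal sketch with a hypothetical name and sizing:

```yaml
# Hypothetical example: a replicated Dragonfly cache.
apiVersion: dragonflydb.io/v1alpha1
kind: Dragonfly
metadata:
  name: cache              # hypothetical name
spec:
  replicas: 2              # 1 master + 1 replica for failover
```

Applications connect with any ordinary Redis client library against the Service the operator creates.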
Strimzi (Apache Kafka)
Production · Kubernetes operator for Apache Kafka with native CRD-based management.
Role: Event streaming platform for asynchronous communication
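With Strimzi, a broker cluster with a TLS + SASL listener is declared as a `Kafka` resource. The sketch below uses hypothetical names and sizes and shows the classic ZooKeeper-based layout; newer Strimzi releases favor KRaft mode with `KafkaNodePool` resources instead:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: events             # hypothetical name
spec:
  kafka:
    replicas: 3
    listeners:
      - name: secure
        port: 9093
        type: internal
        tls: true                  # TLS encryption on the listener
        authentication:
          type: scram-sha-512      # SASL/SCRAM client authentication
    storage:
      type: persistent-claim
      size: 20Gi
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 5Gi
```

Topics and users can likewise be managed declaratively through the `KafkaTopic` and `KafkaUser` CRDs.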
TiKV
Production · Distributed transactional key-value store with ACID transactions and Raft consensus.
Role: Backend storage engine for SurrealDB with strong consistency
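With the TiDB Operator, a TiKV-only cluster (PD for metadata and Raft scheduling plus TiKV stores, no TiDB SQL layer) can be sketched roughly as follows; names, version, and sizes are hypothetical:

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: surrealdb-kv       # hypothetical name
spec:
  version: v7.5.0          # hypothetical version
  pd:                      # Placement Driver: metadata + Raft region scheduling
    baseImage: pingcap/pd
    replicas: 3
    requests:
      storage: 10Gi
  tikv:                    # the key-value stores themselves
    baseImage: pingcap/tikv
    replicas: 3            # data is Raft-replicated across stores
    requests:
      storage: 50Gi
```

SurrealDB is then pointed at the PD endpoints as its storage backend.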
SurrealDB
Production · Multi-model database supporting document, graph, and key-value data models.
Role: Flexible database for applications needing graph + document queries
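The graph + document combination shows up directly in SurrealQL; a small hypothetical sketch (record IDs and fields invented for illustration):

```sql
-- Create two document records and a graph edge between them
CREATE person:alice SET name = 'Alice';
CREATE person:bob   SET name = 'Bob';
RELATE person:alice->knows->person:bob SET since = 2024;

-- Traverse the graph from alice and project a document field
SELECT ->knows->person.name FROM person:alice;
```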
Qdrant
Production · Vector database for similarity search, powering semantic search and AI applications.
Role: Vector embeddings store for AI/ML workloads
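At its core, similarity search ranks stored embeddings by distance to a query vector. The brute-force sketch below (plain Python, cosine similarity, toy 3-dimensional vectors with invented IDs) illustrates the operation that Qdrant performs at scale with approximate indexes such as HNSW over millions of high-dimensional points:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "collection" of named embeddings (real ones are model-generated,
# typically hundreds or thousands of dimensions)
points = {
    "doc-cats":  [0.9, 0.1, 0.0],
    "doc-dogs":  [0.8, 0.2, 0.1],
    "doc-stock": [0.0, 0.1, 0.9],
}

def search(query, top_k=2):
    """Return the top_k point IDs ranked by cosine similarity to query."""
    ranked = sorted(points, key=lambda pid: cosine(query, points[pid]),
                    reverse=True)
    return ranked[:top_k]

print(search([1.0, 0.0, 0.0]))  # the two animal docs outrank the stock doc
```

A real deployment would store vectors in a Qdrant collection and query it through a client library, trading this exact linear scan for approximate but far faster indexed search.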