**What you'll be doing:**
- Work with other engineering teams to gather business requirements around data volumes, throughput, latency, and availability, and translate them into technical requirements for the platform
- Build and maintain a large-scale Kafka platform (including components from the wider Kafka ecosystem) to support a range of big data streaming applications
- Build and integrate the platform with other systems for provisioning, monitoring, and alerting
- Automate the deployment of infrastructure and applications for data systems such as Kafka (see the provisioning sketch after this list)
- Support the rapid growth of the platform by extending its deployment strategy to OpenShift and public cloud Kubernetes environments (EKS/GKE)
- Write and review technical documents, including design, requirements, and process documentation
- Advocate for a culture of platform automation with an everything-as-code approach
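
To make the everything-as-code responsibility concrete, here is a minimal sketch of idempotent topic provisioning using the confluent-kafka Python client. The broker address, topic name, and partition counts are placeholders for illustration, not values from this posting.

```python
# Minimal sketch of topic provisioning as code, using the confluent-kafka
# Python client. Broker address and topic spec below are placeholders.
from confluent_kafka.admin import AdminClient, NewTopic

BOOTSTRAP = "kafka-1.internal:9092"  # assumption: placeholder broker address


def ensure_topics(admin, specs):
    """Create any missing topics; topics that already exist are left alone."""
    existing = set(admin.list_topics(timeout=10).topics)
    wanted = [
        NewTopic(name, num_partitions=spec["partitions"],
                 replication_factor=spec["replication"])
        for name, spec in specs.items()
        if name not in existing
    ]
    if not wanted:
        return
    for name, future in admin.create_topics(wanted).items():
        future.result()  # blocks; raises KafkaException if creation failed
        print(f"created topic {name}")


if __name__ == "__main__":
    admin = AdminClient({"bootstrap.servers": BOOTSTRAP})
    # Hypothetical topic spec; in practice this would live in reviewed config.
    ensure_topics(admin, {"payments.events": {"partitions": 12, "replication": 3}})
```

Run from a pipeline, a script like this keeps topic definitions in version control and makes re-runs safe, since existing topics are skipped.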
**What we are looking for:**
- 7+ years of relevant experience, with strong fundamentals in designing and building distributed systems
- Experience administering distributed data systems such as Kafka, Hadoop, Spark, or Flink
- Proficiency with Linux systems (both physical and virtual) and experience deploying and managing fault-tolerant applications on Linux (a minimal consumer sketch follows this list)
- Solid working experience with containers (Docker, Kubernetes, OpenShift, EKS, GKE, or similar)
- Strong networking and storage fundamentals, with the ability to trace network and I/O bottlenecks
- Experience designing and implementing automated deployment pipelines at both the application and infrastructure level, across multiple projects; ideally with Ansible and Terraform
- Experience in an agile development environment with programming languages such as Python, Go, Java, Kotlin, or Scala
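
As an illustration of the fault-tolerant application pattern referenced above, here is a minimal at-least-once Kafka consumer sketch using confluent-kafka. The group id, topic, and `process()` handler are hypothetical.

```python
# Minimal at-least-once consumer sketch with confluent-kafka. Offsets are
# committed only after a record is processed, so a crashed instance replays
# work on restart rather than dropping it.
from confluent_kafka import Consumer, KafkaException


def process(payload: bytes) -> None:
    ...  # hypothetical business logic


conf = {
    "bootstrap.servers": "kafka-1.internal:9092",  # assumption: placeholder
    "group.id": "example-processor",               # assumption: placeholder
    "enable.auto.commit": False,   # commit manually, after processing succeeds
    "auto.offset.reset": "earliest",
}
consumer = Consumer(conf)
consumer.subscribe(["payments.events"])  # assumption: placeholder topic
try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            raise KafkaException(msg.error())
        process(msg.value())
        consumer.commit(message=msg, asynchronous=False)
finally:
    consumer.close()  # triggers a clean consumer-group rebalance on shutdown
```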
**What gives you an edge:**
- Experience administering Apache Kafka clusters in production, ideally Confluent Platform
- Good knowledge of the wider Kafka ecosystem (Kafka Connect, ksqlDB, REST Proxy, Schema Registry, Confluent Control Center, etc.)
- Good understanding of multi-region Kafka clusters
- Experience deploying applications and infrastructure into the cloud
- Experience with the HashiCorp tool set, specifically Vault for secrets management and Consul for service discovery (see the Vault sketch below)
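
As a sketch of what the Vault integration might look like, here is a fetch of Kafka SASL credentials using the hvac Python client, assuming token auth and a KV v2 secrets engine; the secret path and credential keys are hypothetical.

```python
# Sketch of fetching Kafka SASL credentials from Vault with the hvac client,
# assuming token auth and a KV v2 secrets engine. The secret path and the
# username/password keys are hypothetical.
import os

import hvac

client = hvac.Client(
    url=os.environ["VAULT_ADDR"],
    token=os.environ["VAULT_TOKEN"],  # assumption: token auth for brevity
)
secret = client.secrets.kv.v2.read_secret_version(path="kafka/producer")
creds = secret["data"]["data"]  # KV v2 nests the payload under data.data

producer_conf = {
    "bootstrap.servers": "kafka-1.internal:9092",  # assumption: placeholder
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "SCRAM-SHA-512",
    "sasl.username": creds["username"],
    "sasl.password": creds["password"],
}
```

Reading credentials at startup rather than baking them into images keeps secrets out of version control and lets Vault rotate them centrally.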
Job ID: 122305