MLOps Engineer / AI Infrastructure Engineer (GPU + Kubernetes)
Centraprise
Full-remote · Mid-level · Permanent · DevOps · Data · Backend · United States · Today via LinkedIn
USD 150,000+ per year
Tags
MLOps · GPU Infrastructure · Kubernetes · CUDA · NVIDIA A100 · NVIDIA H200 · InfiniBand · RDMA · MLflow · Kubeflow · Triton
About the role
Role Overview
A hands-on MLOps / AI Infrastructure Engineer to build and operate GPU clusters and Kubernetes environments that support advanced AI workloads.
Responsibilities
- Build and operate high-performance GPU clusters (NVIDIA A100/H200)
- Own end-to-end MLOps pipelines and model deployment
- Build bare-metal Kubernetes clusters for GPU workloads
- Work with high-performance networking, including InfiniBand/RDMA
- Implement and operate MLflow, Kubeflow, and Triton Inference Server for model serving
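For context on the GPU scheduling work described above, a minimal sketch of how a Kubernetes workload requests an NVIDIA GPU through the standard device-plugin resource `nvidia.com/gpu`; the pod name and container image are illustrative assumptions, not part of this posting.

```yaml
# Illustrative sketch only: a minimal pod requesting one NVIDIA GPU.
# Assumes the NVIDIA device plugin is installed on the cluster;
# the name and image are hypothetical examples.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test          # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: cuda-check
      image: nvidia/cuda:12.4.1-base-ubuntu22.04  # example CUDA base image
      command: ["nvidia-smi"]                     # lists visible GPUs
      resources:
        limits:
          nvidia.com/gpu: 1     # request a single GPU via the device plugin
```

On a correctly configured cluster, the scheduler places this pod only on a node with a free GPU, which is the basic mechanism underlying the bare-metal GPU Kubernetes work in this role.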
Requirements
- Deep expertise in GPU infrastructure and Kubernetes
- Strong MLOps experience, especially around model deployment
Nice-to-Haves
- Experience with InfiniBand/RDMA and high-performance networking
- Hands-on experience with MLflow / Kubeflow / Triton
About Centraprise
Centraprise is hiring an MLOps/AI infrastructure engineer to build and operate high-performance GPU clusters and Kubernetes environments for advanced AI workloads. The role focuses on end-to-end MLOps pipelines, high-performance GPU networking, and model serving infrastructure.
Scraped 4/9/2026