xelys jobs

Senior AI Engineer – LLMOps & MLOps

Motion Recruitment

full-remote · senior · permanent · backend · data · product-management · United States · 2 days ago via LinkedIn

Tags

LLMOps · MLOps · RAG · AWS SageMaker · Azure OpenAI · Azure AI Document Intelligence · Terraform · Observability · PromptOps · Vector Databases

About the role

Role Overview

Senior AI Engineer responsible for end-to-end ownership of AI initiatives in a claims processing domain. This is an execution-focused role in the company’s AI Transformation Office, centered on bridging legacy insurance data systems with modern cloud AI services.

Mission

  • Build the automated infrastructure connecting legacy data systems to AWS and Azure AI services.
  • Own the “Ops” of AI: deploy, observe, and scale LLM applications, RAG pipelines, and traditional ML models in a multi-cloud environment.

Key Responsibilities

  • Multi-cloud pipeline execution: Build and maintain automated CI/CD and Continuous Training (CT) pipelines across AWS (SageMaker/Bedrock) and Azure (AI Studio).
  • LLMOps / RAG infrastructure: Implement RAG infrastructure, including vector database management (OpenSearch, Pinecone, or Azure AI Search) and semantic index optimization.
  • Legacy data connectivity: Create secure ingestion and data movement “pipes” from Mainframes, SQL Server, and other on-prem databases into cloud-native MLOps workflows.
  • Automated model evaluation: Implement evaluation frameworks for LLMs (LLM-as-a-judge, ROUGE, METEOR) and validation for traditional ML before deployment.
  • Observability & monitoring: Add real-time monitoring for model drift, hallucinations, latency, and token consumption to manage quality and cost.
  • Infrastructure as Code: Manage AI resources with Terraform or CloudFormation, following Privacy by Design.
  • Advanced analytics integration: Work with teams using Palantir, Databricks, or Snowflake to ensure high-fidelity data flow into production models.
  • IT & security collaboration: Partner with IT/Security on IAM, VPC peering, and firewall configurations.
  • Scalable inference engineering: Optimize serving endpoints for low latency/high throughput, using Docker/Kubernetes and serverless architectures as appropriate.
  • Prompt & model versioning (PromptOps): Ensure auditability with rigorous version control for prompts, model weights, and data snapshots.
  • Data science engineering enablement: Automate feature stores, feature engineering pipelines, and productionize notebooks into hardened microservices.
  • Security & compliance hardening: Implement automated scanning and guardrails (examples mentioned, text cut off).
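The LLMOps/RAG bullet above describes retrieval backed by a vector database. In production that would be OpenSearch, Pinecone, or Azure AI Search, but the core lookup reduces to nearest-neighbor search over embeddings by cosine similarity. A minimal sketch in pure Python, with hypothetical document ids and toy 3-dimensional vectors standing in for real embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, index, k=2):
    """Return the k document ids whose vectors are most similar to the query."""
    ranked = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy "vector database": document id -> embedding (illustrative values only).
index = {
    "claim-123": [0.9, 0.1, 0.0],
    "policy-9":  [0.1, 0.9, 0.0],
    "claim-456": [0.8, 0.2, 0.1],
}
top = retrieve([1.0, 0.0, 0.0], index, k=2)  # -> ["claim-123", "claim-456"]
```

A managed vector store replaces the linear scan with an approximate index (HNSW or similar), but the ranking contract is the same, which is why semantic index optimization matters at scale.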
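The automated-evaluation bullet can be illustrated with a minimal pre-deployment gate. This is a sketch, not the client's actual framework: it computes a toy ROUGE-1 recall in stdlib Python (real pipelines would typically use a library such as `rouge-score` alongside LLM-as-a-judge scoring), and the 0.5 threshold and sample reference/candidate pairs are invented for illustration:

```python
from collections import Counter

def rouge1_recall(reference: str, candidate: str) -> float:
    """Unigram recall: fraction of reference tokens recovered by the candidate."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    overlap = sum(min(n, cand_counts[tok]) for tok, n in ref_counts.items())
    total = sum(ref_counts.values())
    return overlap / total if total else 0.0

def evaluation_gate(pairs, threshold=0.5):
    """Pass/fail a candidate model on mean ROUGE-1 recall over (reference, candidate) pairs."""
    scores = [rouge1_recall(ref, cand) for ref, cand in pairs]
    mean = sum(scores) / len(scores)
    return mean >= threshold, mean

ok, score = evaluation_gate([
    ("the claim was approved for payment", "claim approved for payment"),
    ("policy holder must submit form 12", "submit form 12 to the policy desk"),
])
```

Wiring a gate like this into the CI/CD pipeline is what turns "evaluation framework" into a deployment blocker rather than a dashboard.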
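Likewise, the observability bullet's "model drift" can be sketched with a population stability index (PSI) check comparing a baseline feature distribution against live traffic. The bin count, smoothing constant, and the 0.2 alert threshold below are common rule-of-thumb assumptions, not details from the posting:

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 4) -> float:
    """Population Stability Index between a baseline sample and a live sample,
    using equal-width bins over the combined value range."""
    lo = min(expected + actual)
    hi = max(expected + actual)
    width = (hi - lo) / bins or 1.0
    def histogram(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Smooth empty bins with a 0.5 pseudo-count to avoid log(0).
        return [(c or 0.5) / len(values) for c in counts]
    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]   # illustrative scores
drifted  = [0.7, 0.8, 0.8, 0.9, 0.9, 1.0, 1.0, 1.0]   # shifted distribution
```

A monitor would compute `psi(baseline, live_window)` on a schedule and alert above ~0.2; identical distributions score 0. Hallucination, latency, and token-cost monitors follow the same pattern with different signals.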

Requirements

  • Strong end-to-end ownership of production AI/ML lifecycle, especially LLMOps/MLOps in multi-cloud environments.

Nice-to-haves (implied by responsibilities)

  • Experience deploying RAG systems with vector databases and semantic indexing.
  • Experience with Terraform/CloudFormation, observability, and evaluation frameworks for LLMs.
  • Familiarity with cloud security/IAM networking patterns (e.g., VPC peering, firewall rules).
  • Experience with inference optimization and container/serverless serving.
  • Experience integrating enterprise analytics platforms (e.g., Snowflake/Databricks/Palantir).

About Motion Recruitment

Motion Recruitment is recruiting on behalf of a global technology-enabled insurance risk and benefits solutions company. The client operates in the insurance/claims domain and is building AI capabilities to make claims processing more efficient, leveraging cloud AI services across AWS and Azure.

Scraped 4/4/2026
