Data Engineer (remote)
Claritev
Full-remote · Mid · Permanent · Backend · United States · 2 days ago via LinkedIn
Tags
Data Engineering · SQL · Python · PySpark · Spark · Azure Data Factory · Data Modeling · Data Pipelines · Data Governance · HIPAA
About the role
Data Engineer (Remote) at Claritev, working on healthcare data infrastructure to support business processes, reporting, and advanced analytics while meeting high compliance standards (including HIPAA).
Responsibilities
- Understand business processes and how they are modeled across enterprise systems.
- Collaborate with business users, technology teams, and executives to define data needs and deliver solutions.
- Build and maintain scalable data pipelines, workflows, and integrations between enterprise platforms.
- Implement data engineering best practices for the future state of the data infrastructure.
- Design and maintain data warehouse/database structures, tables, SQL queries, and ingestion pipelines.
- Write complex SQL to transform raw data into accessible models for reporting and downstream analysis.
- Prepare data for predictive and prescriptive modeling; identify data patterns.
- Improve data reliability, efficiency, and quality; triage and analyze end-to-end pipeline issues.
- Partner with analytics, data science, and engineering teams to automate analysis/visualization and advise on data model population.
- Coordinate across departments to communicate clearly and deliver data infrastructure improvements.
- Ensure compliance with HIPAA and associated data governance/security requirements (role treated as high-risk/privileged due to PHI exposure).
Requirements
- Minimum of a high school diploma plus 4 years of related experience; at least 3 years must include:
- Object-oriented programming (OOP)
- SQL
- Schema design and data modeling
- Designing/building/maintaining data processing systems
- Strong communication skills (verbal, listening, written).
- Experience with advanced analytics tools including Python and PySpark.
- Experience with SQL, Spark, and Azure Data Factory (ADF).
- Experience working with data governance, data quality, and data security teams to move pipelines into production under appropriate standards.
- Ability to build/manage pipelines covering transformation, models, schemas, metadata, and workload management.
Nice to Have
- Databricks
- SSIS
- Exposure to big data development with Hive, Impala, and Spark
- Familiarity with Kafka
- Exposure to ML/data science/computer vision/AI/statistics/applied mathematics
About Claritev
Claritev is a healthcare-focused organization aiming to bend the cost curve in healthcare through technology, data, and innovation. The team emphasizes service excellence, accountability, innovation, and strong collaboration across internal and external stakeholders.
Scraped 4/12/2026