Quality Assurance Engineer
DataAnnotation
Full remote · Mid-level · Contract · QA · Other · New Jersey, United States · Today via LinkedIn
Up to 60 USD/hour
Tags
Quality Assurance · AI Model Evaluation · Python · JavaScript · Algorithms · Data Structures · Debugging · Remote · Independent Contract · Coding Challenges
About the role
Quality Assurance Engineer (AI Model Training)
Join the team to train and evaluate AI models by measuring chatbot progress, assessing logic, and solving problems to improve overall quality.
Responsibilities
- Provide AI chatbots with coding challenges and evaluate their outputs
- Assess AI-generated responses for correctness and performance
- Identify issues and help improve model quality through debugging and problem-solving
Requirements
- Proficiency in at least one programming language, ideally Python and/or JavaScript (also acceptable: C#, C++, HTML, SQL, Swift)
- Ability to solve coding problems (e.g., LeetCode, HackerRank-style)
- Ability to explain how your solution solves each coding problem
- Fluent English (native or bilingual)
- Detail-oriented approach
- Experience with algorithms, data structures, and debugging workflows
- Bachelor’s degree preferred (current, in progress, or completed)
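As an illustration of the LeetCode/HackerRank-style problems mentioned above, here is a minimal Python sketch of a classic challenge (Two Sum) with the kind of explanation evaluators would be expected to give. This is a hypothetical example, not taken from DataAnnotation's materials:

```python
def two_sum(nums, target):
    """Return indices of the two numbers in nums that sum to target.

    A single pass with a hash map runs in O(n) time, versus the
    O(n^2) brute-force search over all pairs.
    """
    seen = {}  # maps value -> index where it was first seen
    for i, n in enumerate(nums):
        complement = target - n
        if complement in seen:
            # The partner value was seen earlier; return both indices.
            return [seen[complement], i]
        seen[n] = i
    return []  # no valid pair exists

print(two_sum([2, 7, 11, 15], 9))  # → [0, 1]
```

Being able to state the time/space trade-off (hash map lookup buys O(n) time for O(n) extra space) is the sort of reasoning the "explain your solution" requirement refers to.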
Contract & Work Details
- Remote position (independent contract)
- Choose your own projects and set your own schedule
- Projects are paid hourly up to $60 USD/hour, with bonuses for high-quality and high-volume work
- Payment via PayPal
- Applicants must be located in the United States
About DataAnnotation
DataAnnotation is an organization that leverages AI and human evaluation to improve AI model quality. The role focuses on training and assessing AI chatbots to ensure their responses are correct, performant, and reliable. This work sits at the intersection of AI development, quality assurance, and applied programming tasks.
Scraped 4/1/2026