Quality Assurance Engineer
DataAnnotation
About the role
Quality Assurance Engineer (AI Model Training & Evaluation)
You will help train AI models by evaluating the progress and output quality of AI chatbots, including the logic and correctness of their responses. The work involves reviewing chatbot performance and solving coding problems related to those challenges.
Responsibilities
- Create and/or administer coding challenges for AI chatbots and evaluate their responses
- Assess AI model outputs for correctness and performance
- Debug and resolve issues to improve the quality of each model
- For each coding problem, explain how your solution solves the problem
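To make the last two responsibilities concrete, here is a hypothetical example (not one of DataAnnotation's actual challenges) of the kind of LeetCode-style problem a reviewer might administer to a chatbot, solve independently, and then explain, assuming Python as the working language:

```python
def two_sum(nums, target):
    """Return indices of the two numbers in nums that add up to target.

    Explanation of the approach (the kind a reviewer would write up):
    a single pass with a hash map. For each value we check whether its
    complement (target - value) was already seen; if so, the pair is
    found. This runs in O(n) time and O(n) space, versus O(n^2) for
    the brute-force nested loop.
    """
    seen = {}  # value -> index of its first occurrence
    for i, value in enumerate(nums):
        complement = target - value
        if complement in seen:
            return [seen[complement], i]
        seen[value] = i
    return []  # no valid pair exists


print(two_sum([2, 7, 11, 15], 9))  # → [0, 1]
```

A reviewer would compare the chatbot's answer against a reference solution like this, checking both correctness and the quality of the accompanying reasoning.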
Requirements
- Proficiency in at least one programming language (Python and/or JavaScript), plus the ability to work in one of: JavaScript, Python, C#, C++, HTML, SQL, Swift
- Ability to solve coding problems (e.g., LeetCode/HackerRank-style)
- Fluency in English (native or bilingual)
- Detail-oriented approach
- Experience with algorithms, data structures, and debugging workflows
- Bachelor’s degree preferred (current/in-progress/completed)
Work Details
- Remote position (US only)
- Choose which projects you want to work on; flexible schedule
- Hourly pay of up to $60 USD, with bonuses for high-quality and high-volume work
- Payment via PayPal (no payments requested from applicants)
- Independent contract position
About DataAnnotation
DataAnnotation is a company that provides work training and evaluating AI models, including AI chatbot quality assessment. This role centers on measuring model performance, logic, and coding-challenge output to improve overall quality.
Scraped 4/10/2026