QA Software Test Engineer Code Review - Remote
YO IT Consulting
Fully remote · Mid-level · Contract · QA / Other · Atlanta, GA · Yesterday via LinkedIn
Tags
QA · Software Testing · Code Review · Debugging · Technical Writing · Python · JavaScript · Java · C++ · Dataset Creation
About the role
Role Overview
Independent contractor role supporting high-impact AI research collaborations with leading AI labs. You’ll help build evaluation datasets composed of chat-style code question-and-answer scenarios to assess AI reasoning, explanation quality, and technical judgment (not executable correctness).
Responsibilities
- Craft realistic developer prompts across multiple categories, including:
  - Code review
  - Debugging
  - Error diagnosis
  - Configuration
  - Other coding-related scenarios
- Source and adapt content from real PRs to create authentic situations
- Write clear, technically accurate model responses that demonstrate strong reasoning and explanation quality
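To make the dataset-creation work concrete: a chat-style code Q&A scenario typically pairs a developer prompt with a reference model response and some grading criteria. The sketch below is purely illustrative, using a hypothetical JSON Lines record shape; the actual schema, field names, and rubric would be defined by the AI lab, not by this example.

```python
import json

# Hypothetical record shape for one chat-style evaluation scenario.
# Field names ("category", "prompt", "reference_response", "rubric")
# are illustrative assumptions, not a lab-specified schema.
record = {
    "category": "code_review",
    "prompt": (
        "Review this Python snippet: is `if len(items) == 0:` idiomatic, "
        "and would you change it?"
    ),
    "reference_response": (
        "Prefer the truthiness check `if not items:` for sequences; it is "
        "the idiomatic style recommended by PEP 8 and reads more clearly."
    ),
    "rubric": ["technical accuracy", "reasoning quality", "clarity"],
}

# Serialize as a single JSON Lines entry, a common format for
# evaluation datasets.
line = json.dumps(record)
print(line)
```

Each such record would then be reviewed for technical accuracy and explanation quality before inclusion in the dataset.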
Requirements (Ideal Qualifications)
- 2+ years of experience in software engineering, technical research, or educational content development
- Bachelor’s degree (minimum) in Software Engineering, Computer Science, or a related field; advanced degree preferred
- Strong proficiency in at least one of: Python, JavaScript, Java, C++
- Experience with debugging, testing, and validating code
- Comfortable with technical writing and high attention to detail
Project Details
- Start date: Immediate
- Duration: 1–2 months (may be extended, shortened, or ended based on project needs and performance)
- Commitment: Part-time, 15–25 hours/week (flexible up to 40 hours/week)
- Work mode: Fully remote
Interview & Onboarding
- Upload resume
- 15-minute AI interview (conversational)
- Follow-up with next steps and onboarding details
About YO IT Consulting
YO IT Consulting is an IT consulting firm that supports engineering-focused initiatives and research collaborations. In this role, it engages contractors to help create evaluation datasets for AI labs, focusing on coding-related reasoning and explanation quality.
Scraped 4/8/2026