Lead Software Development Engineer in Test
RadarFirst
Hybrid · Lead · Permanent · QA · Backend · United States · 6 days ago via LinkedIn
Tags
Test Automation · Selenium · Playwright · Cypress · TypeScript · Python · AI in QA · LLM Evaluation · CI/CD · Security Testing
Role Overview
The Lead Software Development Engineer in Test (Lead SDET) architects and evolves the organization’s quality engineering strategy in an AI-driven development environment. You will own the automated testing frameworks and introduce responsible AI-assisted QA practices that raise quality maturity across functional, performance, security, accessibility, and AI-enabled product capabilities.
Responsibilities
- AI-Driven QA Strategy & Framework Ownership
- Own and evolve automated testing frameworks and QA tooling ecosystem
- Define best practices for AI-assisted test case generation, test data generation, and coverage discovery
- Implement responsible AI guardrails for QA (validation, hallucination mitigation, structured outputs, review standards)
- Introduce intelligent test selection and regression analysis to reduce cycle time
- Ensure AI-augmented tests are deterministic, maintainable, and production-ready
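For illustration, a minimal guardrail of the kind described above might validate that an AI-generated test case is well-formed and complete before it enters the suite. The schema and function name here are hypothetical assumptions, not part of the posting:

```python
import json

# Hypothetical guardrail: an AI-generated test case is accepted only if it
# parses as JSON and carries the fields a deterministic test needs.
REQUIRED_FIELDS = {"name", "steps", "expected"}

def validate_generated_case(raw: str) -> dict:
    """Reject malformed or hallucinated output before it enters the suite."""
    try:
        case = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"not valid JSON: {exc}") from exc
    missing = REQUIRED_FIELDS - case.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    if not case["steps"]:
        raise ValueError("test case has no steps")
    return case

# A well-formed generated case passes validation.
good = '{"name": "login", "steps": ["open /login", "submit"], "expected": "dashboard"}'
print(validate_generated_case(good)["name"])  # login
```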
- Test Architecture & Automation Excellence
- Architect scalable automation across unit, integration, API, UI, contract, performance, and security testing
- Apply modern testing principles (e.g., test pyramid / risk-based testing)
- Integrate automation into CI/CD with enforceable quality gates and release-readiness criteria
- Lead root-cause analysis for systemic quality issues and recurring defects
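An enforceable CI/CD quality gate of the kind described can be sketched as a simple threshold check; the thresholds and function name below are illustrative assumptions:

```python
# Hypothetical release-readiness gate: fail the pipeline when the suite's
# pass rate or risk-weighted coverage drops below agreed thresholds.
def quality_gate(pass_rate: float, coverage: float,
                 min_pass_rate: float = 0.98, min_coverage: float = 0.80) -> bool:
    """Return True only if both gate criteria are met."""
    return pass_rate >= min_pass_rate and coverage >= min_coverage

print(quality_gate(0.99, 0.85))  # True
print(quality_gate(0.95, 0.85))  # False: pass rate below threshold
```

In a real pipeline the gate would read these numbers from test and coverage reports and set the job's exit code.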
- AI System Testing & Evaluation
- Design evaluation frameworks for AI-assisted features (ground-truth testing, deterministic validation, prompt robustness)
- Define quality metrics (e.g., regression consistency, precision/recall where applicable, drift detection)
- Add traceability for AI outputs (logging, version-aware regression suites)
- Implement human-in-the-loop validation where appropriate
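As an example of a ground-truth quality metric, precision and recall over labeled outputs can be computed as below. This is a minimal sketch with made-up labels, not a prescribed evaluation harness:

```python
# Hypothetical ground-truth evaluation: compare model outputs to expected
# labels and compute precision/recall for one positive class.
def precision_recall(expected: list[str], predicted: list[str], positive: str):
    tp = sum(1 for e, p in zip(expected, predicted) if e == positive and p == positive)
    fp = sum(1 for e, p in zip(expected, predicted) if e != positive and p == positive)
    fn = sum(1 for e, p in zip(expected, predicted) if e == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

expected  = ["spam", "ham", "spam", "spam"]
predicted = ["spam", "spam", "ham", "spam"]
p, r = precision_recall(expected, predicted, positive="spam")
print(p, r)
```

Running the same labeled set against each model version turns this into the version-aware regression signal the responsibilities describe.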
- Mentorship & Engineering Enablement
- Mentor engineers on modern automation and AI-augmented workflows
- Train teams on prompt engineering for QA validation
- Set review standards for AI-assisted code contributions
- Quality Governance & Continuous Improvement
- Define and track KPIs (defect escape rate, automation coverage by risk, test stability index)
- Provide data-backed release readiness recommendations (Go/No-Go)
- Audit automation ROI and retire low-value tests
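One of the KPIs named above, defect escape rate, is straightforward to compute; this sketch assumes the common definition of production-found defects over total defects:

```python
# Hypothetical KPI calculation: defect escape rate is the share of defects
# found in production rather than before release.
def defect_escape_rate(found_pre_release: int, found_in_production: int) -> float:
    total = found_pre_release + found_in_production
    return found_in_production / total if total else 0.0

print(defect_escape_rate(45, 5))  # 0.1
```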
- Cross-Functional Partnership
- Partner with Engineering, Product, Security, and DevOps to embed quality early (shift-left)
- Influence roadmaps and represent quality in architectural discussions
- Ensure accessibility, security, and privacy testing are integrated into the SDLC
Requirements
- 8+ years in software quality engineering, test automation, or SDET roles
- 3+ years leading automation strategy or acting as a senior technical QA authority
- Hands-on experience using AI tools for test generation/refactoring/maintenance
- Experience evaluating LLM-based systems (prompt robustness, structured outputs, drift detection, versioned regression)
- Proficiency in at least one programming language: TypeScript, Java, or Python
- Experience with modern test frameworks such as Playwright, Cypress, or Selenium
- Demonstrated experience designing/scaling test automation frameworks and CI/CD quality gates
- Strong understanding of distributed systems, HTTP lifecycle, and database querying (SQL/NoSQL)
- Experience mentoring engineers and leading technical QA efforts
Nice-to-haves
- Experience integrating accessibility, security, and privacy testing into the SDLC
Scraped 4/14/2026