
Senior Swift Software Engineer (AI Evaluation)

United States · Contract

About the Role

This role is part of a structured AI evaluation initiative focused on improving the reliability, reasoning accuracy, and clarity of conversational systems in software engineering contexts, particularly Swift and mobile development environments. The work emphasizes how models generate, interpret, and explain code across varying levels of complexity.

This opportunity is ideal for experienced Swift engineers with strong problem-solving skills and a deep understanding of modern iOS development practices. It suits individuals who can independently validate code, identify subtle issues, and assess the quality of technical explanations.

The day-to-day work involves reviewing AI-generated Swift code, executing and validating outputs, and providing structured feedback on correctness and clarity. Precision and consistency are essential to improving system performance.

What You'll Do

  • Evaluate AI-generated responses to coding and software engineering problems
  • Execute and validate Swift code to ensure correctness and performance
  • Identify logical errors, inefficiencies, and edge case failures
  • Annotate outputs with detailed feedback on strengths and weaknesses
  • Assess code readability, maintainability, and architectural soundness
  • Perform fact-checking using reliable technical references
  • Apply standardized evaluation frameworks and scoring criteria
  • Ensure outputs align with expected engineering and conversational standards

Requirements

  • 5+ years of professional experience in software engineering or related fields
  • Strong expertise in the Swift programming language
  • Ability to solve medium to hard algorithmic problems independently
  • Experience executing, testing, and debugging production-level code
  • Strong understanding of data structures, algorithms, and system design principles
  • Familiarity with iOS development frameworks and application architecture
  • High attention to detail in reviewing technical reasoning and outputs
  • Fluent English communication skills
  • Experience using LLMs in coding workflows and understanding their limitations
  • Ability to follow structured evaluation frameworks and guidelines
  • Bachelor’s degree or higher in Computer Science or related discipline
  • Experience contributing to open-source projects with accepted contributions
  • Familiarity with additional programming languages or ecosystems (preferred)
  • Experience in model evaluation, RLHF, or data annotation (preferred)
  • Background in competitive programming or technical assessments (preferred)
  • Experience reviewing code in production environments (preferred)
  • Ability to explain complex technical concepts clearly to varied audiences (preferred)
Application Note: Submitting your profile for this partnered position lets our team quickly review your background and either present you with this specific opportunity or match you with similar AI Training projects.