Please submit your CV in English and indicate your level of English proficiency.

Mindrift connects specialists with project-based AI opportunities for leading tech companies, focused on testing, evaluating, and improving AI systems. Participation is project-based, not permanent employment.

What This Opportunity Involves

We're building a dataset to evaluate AI coding agents: how well a model handles real-world developer tasks. You'll create challenging tasks and evaluation criteria within realistic simulated environments:

- Build virtual companies following a high-level plan: a codebase, infrastructure, and context (conversations, documentation, tickets) that form a realistic environment with development history
- Assemble and calibrate tasks from intermediate states of the virtual company: craft the prompt, define evaluation criteria, and ensure the task is solvable and the evaluation is fair
- Design tasks set in isolated environments that emulate a developer's workstation: a Linux machine with development tools (terminal, CLI), MCP servers (repository, task tracker, messenger, documentation, etc.), and a real web application codebase
- Write tests that accept all correct solutions and reject incorrect ones: neither too strict (breaking on valid approaches) nor too lenient (passing bad ones)
- Iterate on tests with an AI agent: verify they catch real problems, don't miss bad solutions, and don't break on good ones
- Review code written by agents, analyze why an agent failed or succeeded, and design edge cases and adversarial scenarios
- Iterate based on feedback from expert QA reviewers who score your work against quality criteria

What This Is NOT

- Not data labeling
- Not prompt engineering
- Not writing code from scratch: the agent writes most of the code; you guide and evaluate

A significant part of the work is done together with AI: it's very hard to create tasks that challenge frontier models without using frontier models.

What We Look For

This opportunity is a good fit for experienced developers, software engineers, and test automation specialists open to part-time, non-permanent projects. Ideally, contributors will have:

- A degree in Computer Science, Software Engineering, or a related field
- 5+ years in software development, primarily Python (FastAPI, pytest, async/await, subprocess, file operations)
- A background in full-stack development, with experience building React-based interfaces (JavaScript/TypeScript) and robust back-end systems
- Experience writing tests (functional, integration), not just running them
- Experience with Docker containers, and familiarity with infrastructure tools (Postgres, Kafka, Redis)
- An understanding of CI/CD (GitHub Actions as a user: triggers, labels, reading results)
- English proficiency at B2 or above

You don't need to be an expert in every item, but you should be comfortable reading and reasoning about code across the stack.

Why This Is Hard

- Frontier models are already good at coding. Creating a task that genuinely challenges the best models is non-trivial: you need to understand deeply where models fail and which scenarios reveal the difference between a good and a bad solution.
- Tasks have many valid solutions. Writing tests that accept all correct solutions and reject incorrect ones is harder than it sounds (see the sketch below).
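To make "neither too strict nor too lenient" concrete, here is a minimal sketch in Python with pytest. The task, the function name (normalize_tags), and the assertions are hypothetical illustrations, not actual project material; the point is that a test should encode the task's contract, not one solution's incidental choices.

    # Hypothetical task: "normalize_tags(tags) lowercases tags, strips
    # whitespace, and removes duplicates." The spec promises nothing
    # about output order, so many different solutions are correct.

    def normalize_tags(tags: list[str]) -> list[str]:
        # Stand-in for an agent-written solution under evaluation.
        return list({t.strip().lower() for t in tags})

    def test_normalize_tags_contract():
        out = normalize_tags(["  Py ", "py", "ML", "ml "])
        # Right content, in any order. Asserting out == ["py", "ml"]
        # would be too strict: it breaks valid solutions that return
        # the same tags in a different order.
        assert sorted(out) == ["ml", "py"]
        # No duplicates. Checking membership alone would be too
        # lenient: it passes a solution that forgot to deduplicate.
        assert len(out) == len(set(out))
        # Every tag is in canonical (stripped, lowercase) form.
        assert all(t == t.strip().lower() for t in out)

Real project tasks are much larger (web codebases, agent runs, MCP tooling), but the same principle applies at scale: pin down the acceptance criteria, leave the solution space open.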
How It Works

Apply → Pass qualification(s) → Join a project → Complete tasks → Get paid

Effort Estimate

Tasks for this project are estimated to take 20 hours to complete, depending on complexity. This is an estimate, not a schedule requirement; you choose when and how to work. To be accepted, tasks must be submitted by the deadline and meet the listed acceptance criteria.

Compensation

On this project, contributors can earn up to the equivalent of $17 per hour, depending on their level and pace of contribution. Compensation varies across projects based on scope, complexity, and required expertise, so other projects on the platform may offer different earning levels.