Portfolio: AI Engineering Assessment Concept
Scenario: "AI Research Lab Anomaly" - Candidates investigate mysterious model behavior at a fictional AI research facility.
Challenge Design
• Debug a "haunted" neural network exhibiting unexplained behaviors
• Trace data lineage through corrupted training logs
• Collaborate with "AI researchers" (other candidates) via simulated Slack
• Present findings in a mock incident review
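The "haunted" network above is easiest to picture with a concrete seed. The sketch below is purely illustrative (the toy network, constant weights, and parity trigger are all hypothetical, not the actual exercise): a deterministic trigger silently zeroes one hidden activation for certain inputs, so the anomaly is reproducible but non-obvious, and a candidate can localize it by comparing a clean forward pass against the haunted one on trigger and non-trigger inputs.

```python
import numpy as np

# Toy constant weights so the anomaly's effect is deterministic and visible.
W1 = np.full((4, 8), 0.5)   # input -> hidden
W2 = np.ones((8, 2))        # hidden -> output

def clean_forward(x):
    """Reference forward pass with no planted anomaly."""
    h = np.maximum(x @ W1, 0.0)   # ReLU hidden layer
    return h @ W2

def haunted_forward(x):
    """Same network, but a hidden 'ghost' corrupts one unit on trigger inputs."""
    h = np.maximum(x @ W1, 0.0)
    # Planted anomaly: inputs whose rounded sum is even lose hidden unit 3.
    if int(round(float(x.sum()))) % 2 == 0:
        h = h.copy()
        h[3] = 0.0
    return h @ W2

# A candidate's hypothesis test: outputs diverge only on trigger inputs.
x_trigger = np.ones(4)                      # sum = 4 (even) -> anomaly fires
x_normal = np.array([1.0, 1.0, 1.0, 0.0])   # sum = 3 (odd)  -> no anomaly
```

Comparing `haunted_forward` against `clean_forward` on these two inputs isolates the trigger condition, which is exactly the hypothesis-forming behavior the assessment is meant to surface.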
What We Measure
• Systematic debugging approach vs. random experimentation
• Ability to form and test hypotheses
• Communication clarity under pressure
• Willingness to acknowledge uncertainty
• Collaboration vs. competition dynamics
Pilot Results
Piloted with 12 senior ML engineers. The narrative framing raised completion rates from 41% (traditional take-home) to 92% (alternate-reality-game, or ARG, format). Candidates described the experience as "actually fun" and "more relevant than coding challenges."