Machine Learning Engineer, Assessments

Company: Speak

Role: Machine Learning Engineer, Assessments

Job type: Full-time

Posted: 7 hours ago

Salary: $150/hour

Job description

ABOUT US

Our mission is to reinvent the way people learn, starting with language.

Learning a language can change a life by opening doors to new cultures, careers, and communities. Two billion people around the world are actively trying to learn a language, but the best way to learn (one-on-one tutoring) is hard to access at scale and hasn’t been meaningfully improved in decades. Speak is building a human-level, AI-powered tutor in your pocket: a conversation-first experience that lets learners actually speak, get instant feedback, and progress through carefully designed lessons. The result is a complete path from beginner to confident speaker across multiple languages.

Speak first launched in South Korea in 2019, where it has become the number one language learning app, and we now serve learners across many markets and in 15+ languages. Speak is one of the world’s leading AI companies, with over $150M raised in venture investment from OpenAI, Accel, Founders Fund, Khosla Ventures, and others, and a distributed team across San Francisco, Seoul, Tokyo, Taipei, and Ljubljana.

ABOUT THIS ROLE

We’re hiring an ML Engineer, Assessments to help build best-in-class assessment systems across multiple products (Speak for Business, B2C, and new surfaces). You will work in a tight loop with our Assessment Design Lead (Content/Learning Design) and with our Machine Learning, Product, and Engineering teams to turn assessment constructs and rubrics into reliable, scalable scoring and feedback systems.

This role owns the implementation, deployment, and ongoing quality of our assessment algorithms and ML systems. While there is an immediate need to improve and expand production assessments, this work also builds a platform capability that can be reused across the app.

WHAT YOU’LL BE DOING

  • Ship and own assessment ML systems end-to-end
      ◦ Build, deploy, and maintain scoring models/pipelines (feature extraction → model training → inference → feedback generation)
      ◦ Own monitoring, regression tests, and ongoing iteration to maintain accuracy targets
  • Define and operationalize evaluation
      ◦ Implement validation/evaluation frameworks for assessments, including metrics, test sets, and offline/online analysis
      ◦ Translate assessment requirements into measurable acceptance criteria and guardrails
  • Partner deeply with the Assessment Design Lead
      ◦ Co-develop the strategy, together with the Content team, to grow assessments into a core platform at Speak
      ◦ Work in a tight weekly loop to deliver incremental improvements
  • Drive near-term delivery across products
      ◦ Stand up or improve summative assessments (spoken language ability) and bring them reliably to production
      ◦ Prototype and validate formative assessment approaches to measure improvement over weeks/months
  • Support data and labeling strategy
      ◦ Help define data needs for training/evaluation (including psychometric measurement needs)
      ◦ Build or improve pipelines that support label collection and analysis (especially for efficacy studies)

WHAT WE’RE LOOKING FOR

  • Domain expertise in spoken language proficiency assessment (linguistics, applied linguistics, pedagogy, or equivalent experience)
  • Strong experience designing and running evaluation + validation for assessment/scoring systems, and tailoring approaches to a specific product use case
  • 4+ years building automatic proficiency assessment systems (or equivalent depth in closely related scoring/evaluation domains)
  • A PhD is helpful but not required
  • Proven ability to ship ML models to production (not only research), including reliability, monitoring, and iteration
  • Strong generalist ML/analysis skills (statistics, Python, PyTorch/model training)
  • Ability to operate cross-functionally and communicate clearly with non-technical partners (Content/LD, PM, leadership)

NICE TO HAVE

  • Experience with speech/audio ML
  • Experience with psychometrics concepts (reliability/validity, calibration)

HOW WE WORK (COLLABORATION EXPECTATIONS)

This role is designed to be highly collaborative with the Assessment Design Lead. Success depends on a tight loop where constructs/rubrics and model outputs co-evolve — not a sequential handoff.

WHY WORK AT SPEAK

  • Join a fantastic, tight-knit team at the right time: we're growing very quickly, we've most recently raised our Series C from some of the top investors in the Valley, and we've achieved product-market fit in our initial markets. You'd join at a magical time when a single person could significantly change the course of the company.
  • Do your life's work with people you’ll love working with: we care strongly about our craft and want every person at Speak to feel like they're growing every day. We believe in the idea that working with people you both enjoy and have respect for makes everything better. We hire thoughtfully and only work with people we admire deeply.
  • Global in nature: We're live in over 40 countries and launching in a number of new markets soon. We have dedicated offices in San Francisco, Ljubljana, Seoul, and Tokyo, and you’ll have the opportunity to talk to users in each of these regions on a regular basis as well as travel.
  • Impact people's lives in a major way: Learning a language is one of the most life-changing skills a person can acquire, and right now 99% of people never achieve their goal because the process is broken. We’re helping millions of people achieve their goals and improve their lives.

Speak does not discriminate based upon race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.
