ROAR — what I lead at Stanford
A curated walkthrough of the platform itself and the engineering work behind it: videos, live demos, and the parts I personally own.
About ROAR
ROAR is an academically licensed platform for assessing foundational reading skills, developed by the Stanford Reading & Dyslexia Research Program. It is browser-based, free to schools, available in English and Spanish, and integrated with Clever and ClassLink for single sign-on. It is currently deployed in 309 K-12 school districts and 2,700+ schools, with 873K+ assessment runs to date.
The Foundational Literacy suite — letter knowledge, phonological awareness, single-word recognition, and sentence-reading efficiency — fits inside a thirty-minute literacy block per student. Beyond that, we have a ROAR-Comp suite for reading comprehension, a ROAM suite for math, and assessments for visual attention and executive functioning. Score reports come back the same day, broken down by skill area and color-coded by support level.
See ROAR in action
Try the assessments yourself
Live demos of individual ROAR assessments. Free, no signup — roar.stanford.edu/#demo. This is the most direct way to feel what a student feels.
How students experience ROAR
- Guide to the Student Dashboard: what a student sees at login and during an assessment.
How educators use it
- Guide to Logging In to Your Educator Account
- Guide to the Group Score Report: navigating classroom-, grade-, or district-level results.
- Guide to the Individual Score Report: interpreting one student's result and the recommended next steps.
The bigger picture
- Introduction to ROAR: the overview if you want one video to watch.
- ROAR in Schools: how districts and teachers actually use it day to day.
- ROAR: Breaking Barriers for Older Struggling Readers: a specific use case for older students.
More on the platform and the underlying research at roar.stanford.edu; the dashboard itself lives at roar.education.
What I lead
- The engineering organization. Grew it from one engineer to six, plus QA. I set the hiring bar, the operating cadence, and the standards for code review, release, and incident response.
- Platform modernization. Leading the backend migration from Firestore to PostgreSQL/CloudSQL — data model redesign, schema work, staged rollout against active production traffic. Moving fragile research-era logic into explicit, testable services.
- Data governance and youth safety. Technical owner for FERPA-aligned audit logging and data lifecycle architecture, anchored to SOC 2 readiness. Accessibility (VPAT), least-privilege access, vendor security review, incident response.
- Reliability foundations. The monitoring, error tracking, deployment, and on-call practices that took ROAR from research-cadence software to a platform districts schedule around for classroom use.
- AI-accelerated team practice. Coding agents (Claude Code), AI code review on every PR, generative testing on highest-risk paths — backed by a ~70-page domain-knowledge document that grounds every AI-assisted change.
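To make the staged-rollout point concrete, here is a minimal sketch of the dual-write pattern commonly used for this kind of live migration. Everything here is illustrative — the class names, the `RunRecord` shape, and the rollout flag are hypothetical stand-ins, not ROAR's actual code.

```python
"""Illustrative dual-write sketch for a staged Firestore -> PostgreSQL
migration. All names here are hypothetical, not ROAR's actual code."""

from __future__ import annotations
from dataclasses import dataclass


@dataclass
class RunRecord:
    run_id: str
    student_id: str
    score: int


class DualWriteRunStore:
    """Writes go to both backends; reads come from whichever backend a
    rollout flag selects, so traffic can shift gradually and roll back fast."""

    def __init__(self, legacy_store, new_store, read_from_new: bool = False):
        self.legacy = legacy_store       # e.g. a Firestore-backed store
        self.new = new_store             # e.g. a PostgreSQL-backed store
        self.read_from_new = read_from_new

    def save(self, record: RunRecord) -> None:
        # The legacy store stays the source of truth until cutover:
        # write it first, then best-effort mirror to the new store.
        self.legacy.save(record)
        try:
            self.new.save(record)
        except Exception:
            # A failed mirror write must not fail the request;
            # a backfill/reconciliation job repairs the gap later.
            pass

    def get(self, run_id: str) -> RunRecord | None:
        primary = self.new if self.read_from_new else self.legacy
        return primary.get(run_id)


class InMemoryStore:
    """Stand-in backend so the sketch is self-contained."""

    def __init__(self):
        self._rows: dict[str, RunRecord] = {}

    def save(self, record: RunRecord) -> None:
        self._rows[record.run_id] = record

    def get(self, run_id: str) -> RunRecord | None:
        return self._rows.get(run_id)
```

The useful property of this shape is operational: flipping `read_from_new` moves read traffic without a deploy, and flipping it back is an instant rollback while both stores keep receiving writes.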
More detail on any of these is in the short resume or the long-form CV.
On what you'd see today
ROAR is mid-migration on the backend (Firestore → PostgreSQL/CloudSQL), so the production build the videos and demos show is the version we're refactoring, not the version we'd ship today. That gap — the choices we made the first time and the ones we're making now — is the most interesting part of an engineering-leader conversation about ROAR. Happy to walk through it.
More
- How I think about engineering more broadly — about me
- Other projects (pyAFQ, Cloudknot, Groupyr) — projects
- Resume — short version or long-form CV