Case Studies

Overview

These case studies demonstrate product judgment, technical depth, and execution leadership from my tenure at Montai, an AI-driven drug discovery startup. Each follows the format: Context → Ownership → Decision Frame → Outcome → Reflection.

Focus: Decision systems, not achievements. What options existed, what I chose, why, and what I learned.


Scaling AI-Driven Drug Nominations

Product Strategy + Technical Architecture | 2023-2024

Scaled compound nominations 26× (250 → 6,500+ per program) while improving hit-to-lead rates from 5% to 27%. Built end-to-end data pipeline automating ML predictions and designed phased rollout strategy balancing speed and quality.

Key decisions: Phased scaling (prove → scale → refine) vs immediate optimization, quality gates to prevent stakeholder trust erosion, balancing exploration and exploitation.
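The quality-gate idea can be sketched as a simple filter-then-rank step applied before nominations go out. All field names and thresholds here are illustrative placeholders, not Montai's actual criteria:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A candidate compound with model predictions (hypothetical fields)."""
    compound_id: str
    activity_score: float   # predicted probability of target activity
    novelty: float          # dissimilarity to known hits, 0-1
    synthesizable: bool     # passes a synthesis-feasibility check

def passes_quality_gate(c: Candidate,
                        min_activity: float = 0.7,
                        min_novelty: float = 0.3) -> bool:
    """Gate nominations so scaled volume does not erode stakeholder trust."""
    return (c.activity_score >= min_activity
            and c.novelty >= min_novelty
            and c.synthesizable)

def nominate(candidates: list[Candidate], batch_size: int) -> list[Candidate]:
    """Rank gated candidates and cap the batch size to phase the rollout."""
    gated = [c for c in candidates if passes_quality_gate(c)]
    gated.sort(key=lambda c: c.activity_score, reverse=True)
    return gated[:batch_size]
```

The cap on `batch_size` is what makes the phased prove → scale → refine strategy explicit: volume only grows once a smaller gated batch has demonstrated acceptable hit rates.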


From Data Crisis to Data Culture

Execution Leadership + Incident Response | 2025

Led response to critical data integrity failure (STAT6 predictions missing, 6-week program delay). Established postmortem process and data governance framework preventing recurrence (3 incidents in 2024 → 0 in 2026).

Key decisions: Targeted fix + governance uplift vs quick patch or comprehensive rebuild, blameless postmortem culture, proactive monitoring investment post-crisis.


Learning Agendas: Research Rigor for Product Decisions

Product Strategy + PhD Transfer | 2025

Designed decision framework reducing R&D cycle time 20% (~10 weeks → ~8 weeks) by pre-defining success criteria and pivot triggers. Applied academic experimental design to product/strategy decisions.

Key decisions: Lightweight one-page format vs heavyweight docs, enforcing pre-commitment to decision criteria, balancing rigor and pragmatism for scientist adoption.
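A one-page learning agenda with pre-committed criteria could be modeled roughly as below; the structure and field names are an illustrative sketch, not the actual template:

```python
from dataclasses import dataclass, field

@dataclass
class LearningAgenda:
    """Lightweight decision record: commit to criteria before the data arrives."""
    question: str                 # the decision this experiment informs
    success_criteria: list[str]   # pre-committed evidence meaning "proceed"
    pivot_triggers: list[str]     # pre-committed evidence meaning "pivot"
    review_date: str              # decide on this date regardless of ambiguity
    notes: list[str] = field(default_factory=list)

    def decide(self, evidence_met: set[str]) -> str:
        """Evaluate only against pre-committed criteria, blocking post-hoc
        rationalization of whatever results happened to come in."""
        if any(t in evidence_met for t in self.pivot_triggers):
            return "pivot"
        if all(s in evidence_met for s in self.success_criteria):
            return "proceed"
        return "continue"
```

Forcing the criteria into the record before results exist is what shortens the cycle: the review meeting checks evidence against a fixed list instead of re-litigating the question.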


Build vs Buy: Strategic Analysis for Analog Generation

Strategic Analysis + Executive Communication | Q4 2025

Quantitative analysis guiding $250k+ partnership decision: XtalPi external compounds vs improving internal generative model. ROI modeling revealed internal model needed 50× accuracy improvement; recommended hybrid approach (external for near-term + internal investment for long-term).

Key decisions: Build vs buy rarely binary (hybrid optimal), quantitative framing transforms opinions into evidence, strategic patience requires near-term wins.
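The break-even framing behind an "N× improvement needed" headline can be sketched in two small functions. All numbers and names below are illustrative placeholders, not figures from the actual XtalPi analysis:

```python
def cost_per_validated_hit(cost_per_compound: float, hit_rate: float) -> float:
    """Expected spend to obtain one experimentally validated hit."""
    return cost_per_compound / hit_rate

def required_hit_rate(internal_cost_per_compound: float,
                      external_cost_per_compound: float,
                      external_hit_rate: float) -> float:
    """Hit rate the internal model must reach to match the external option
    on cost-per-hit. Dividing by the current internal hit rate yields the
    'N× accuracy improvement needed' headline."""
    return (external_hit_rate * internal_cost_per_compound
            / external_cost_per_compound)
```

Collapsing both options onto a single cost-per-hit axis is what turns the debate from opinion into evidence: the hybrid recommendation follows when the required improvement factor is large but plausibly closable over time.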


Standardizing Montai’s App Ecosystem with R Shiny

Technical Architecture + Developer Experience | 2024

Converged fragmented app development (Python/Streamlit/Dash + R/Shiny + notebooks) onto single framework, cutting development time 66% (3 weeks → 1 week). Built proof-of-concept Nomination App and reusable component library.

Key decisions: R Shiny (team skills + data-heavy use case) vs Python frameworks vs multi-framework flexibility, trading long-term flexibility for near-term velocity, standardization as social + technical choice.


Additional Case Studies

Preventing Metric Theater in Drug Discovery ML

Product Strategy + Evaluation Frameworks | 2024

Designed combined offline + online evaluation framework de-risking $2M+ compound decisions by catching model degradation 3 months earlier. Established monitoring infrastructure reused by 3+ model teams.
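A minimal version of a combined offline + online gate might look like the following; the metrics and thresholds are hypothetical, chosen only to show why a single offline check invites metric theater:

```python
def offline_ok(holdout_auc: float, baseline_auc: float,
               max_drop: float = 0.02) -> bool:
    """Offline gate: held-out AUC must not drop more than max_drop vs baseline."""
    return baseline_auc - holdout_auc <= max_drop

def online_ok(predicted_hit_rate: float, observed_hit_rate: float,
              max_gap: float = 0.05) -> bool:
    """Online gate: calibration check of predictions against live assay results."""
    return abs(predicted_hit_rate - observed_hit_rate) <= max_gap

def model_healthy(holdout_auc: float, baseline_auc: float,
                  predicted_hit_rate: float, observed_hit_rate: float) -> bool:
    """Both gates must pass: an offline score can stay flat while the
    model drifts away from live lab outcomes, and vice versa."""
    return (offline_ok(holdout_auc, baseline_auc)
            and online_ok(predicted_hit_rate, observed_hit_rate))
```

Running the online check continuously, rather than only at release time, is what surfaces degradation months before a benchmark refresh would.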


Reducing Pipeline Latency for Decision Velocity

Technical Architecture + Data Engineering | 2023-2024

Automated data pipeline reducing latency from 2-3 days to same-day updates. Enabled real-time analytics dashboards used for investor updates and program decisions.


Themes Across Case Studies

Decision Systems Over One-Off Analyses

  • Learning Agenda framework (reusable structure, not one project)
  • Build vs Buy quantitative analysis (template for future decisions)
  • Data Quality governance (postmortem process, not one incident fix)

Product Judgment + Technical Depth

  • Strategic analysis (XtalPi ROI modeling) + technical execution (Shiny framework)
  • Quality gates (nomination filtering) + pipeline automation (dbt implementation)
  • PhD rigor (experimental design) + startup pragmatism (lightweight formats)

Candid Reflection on Tradeoffs

  • Phased scaling delays (perfect vs done tension)
  • Framework lock-in risks (Shiny standardization)
  • Crisis response delays (monitoring investment should’ve been proactive)