About
From Bench Science to Decision Systems
I didn’t plan to leave academic research. I loved the rigor—designing experiments where the question itself is ambiguous, building evaluation frameworks from scratch because no playbook exists. But somewhere between optimizing multi-omics pipelines and presenting at lab meetings, I realized the most valuable skill I’d developed wasn’t technical depth. It was designing decision systems under resource constraints.
In my PhD, I wasn’t just analyzing data. I was making judgment calls: Which experiments maximize learning per dollar? How do we design validation that catches false positives before they waste months of work? What counts as “sufficient evidence” when stakeholders disagree on the underlying question? These weren’t biology problems—they were decision-architecture problems.
When I transitioned to drug discovery ML, I brought that research-grade rigor with me. The questions changed (Which model architecture balances precision vs. recall for this specific drug target? How do we evaluate “model quality” when ground truth won’t exist for two years?), but the decision-making muscle stayed the same: first-principles thinking about what counts as evidence, explicit tradeoffs, and systems that prevent “metric theater.”
Most data science leaders I meet are strong on either product intuition (prioritization, stakeholder alignment) or technical depth (architecture, evaluation frameworks), but rarely both. My PhD gave me something different: the ability to design evaluation frameworks for problems where the right answer isn’t obvious. That’s the bridge between “PhD-trained in multi-omics analysis” and “product-tested in drug discovery ML.”
What I Bring
Research-Grade Rigor: I design evaluation systems from first principles, not templates. I know how to ask “what counts as evidence here?” and build measurement systems that answer it.
Product Judgment + Technical Depth + Leadership: Most people have two of the three (product + leadership, or technical + leadership). I bring all three: I can frame the problem, design the architecture, and ship the outcome.
Decision Evidence, Not Achievements: I document decision systems, not activity. My case studies show the problem context, the decision frame, the tradeoffs, and the measurable outcomes. No “metric theater.”
Why This Matters
In a world of AI-generated insights and “data-driven” buzzwords, the bottleneck isn’t generating analyses—it’s making good decisions under uncertainty. The teams that win aren’t the ones with the most dashboards. They’re the ones with clear decision systems: North star + guardrails, explicit ownership, and repeatable evaluation frameworks that prevent false confidence.
That’s what I build. That’s what I bring. That’s what my decision portfolio documents.
Want to see how I think? Start with my case studies or read about my consulting services.