
Learning Agendas: Bringing Research Rigor to Product Decisions

product strategy · organization · medium

Designed a decision framework that cut R&D cycle time by 20% through pre-defined success criteria and pivot triggers, applying academic experimental design to product strategy

Context

By 2025, Montai ran multiple concurrent R&D experiments — AI model iterations, assay validations, Anthrolog generation improvements. Each experiment had implicit goals but lacked explicit success criteria. The result: debates about “when to pivot” and “when to scale” became opinion-driven rather than evidence-backed.

Facts:

  • By 2025: Multiple concurrent experiments (AI models, assays, Anthrolog generations)
  • Problem: Unclear success criteria per experiment (when to pivot? when to scale?)
  • Example confusion: an AI model “improved accuracy,” but the gain didn’t translate to better compound selection
  • Stakes: Wasted months on meandering experiments without clear learning goals

The core issue traced back to a fundamental principle from my PhD training: experiments without pre-defined hypotheses produce data, not learning. In academic research, you write your aims before running experiments. In biotech R&D, we were running experiments and retroactively deciding whether results “felt good enough.” This had to change.

Ownership

I owned:

  • Framework design (inspired by academic experimental design)
  • Template structure (hypothesis, metrics, decision gates)
  • Pilot with STAT6/OX40 programs
  • Dissemination (poster, Ops presentation, team feedback integration)

I influenced:

  • Program-specific agenda content (with Jake Ombach, scientists)
  • Leadership adoption (CTO + team feedback by 10/28/25 deadline)
  • Integration into quarterly planning (2026 OKRs)

Decision Frame

Problem statement:

Establish a lightweight decision framework that imposes research rigor on R&D experiments to reduce cycle time and increase pivot clarity, constrained by:

  • No pre-existing template (creating from scratch)
  • Risk of bureaucracy (scientists may see it as overhead, not value)
  • Need exec buy-in (not just bottom-up adoption)

Options considered:

Option A: Continue informal learning (Slack + ad-hoc meetings)

  • Pros: No process overhead, flexible
  • Cons: Insights slip through cracks, repeated debates
  • Risk: Slow iteration, missed pivots

Option B: Heavyweight experimental design doc per project

  • Pros: Thorough, academic rigor
  • Cons: Time-consuming, likely ignored
  • Risk: Process theater, not actual use

Option C: Lightweight “Learning Agenda” (one-page)

  • Pros: Quick to create, forces clarity, actionable
  • Cons: May oversimplify complex experiments
  • Risk: Becomes checkbox exercise if not enforced

Decision: Chose Option C because:

Facts (from archaeology, pp. 12-13):

  1. Concise format increases adoption (one page on Confluence/poster)
  2. Pre-planned decision gates reduce debate time (agreed criteria upfront)
  3. Visible in program reviews (not hidden in docs)
  4. Example: the STAT6 agenda’s enrichment thresholds stopped an underperforming screen 2 weeks early

Option C balanced rigor with pragmatism.
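To make the “pre-planned decision gates” idea concrete, here is a minimal sketch of a one-page agenda as a data structure: a hypothesis, a metric, and thresholds agreed on before any data is seen. This is an illustration, not Montai’s actual template; the field names, the program example, and all threshold values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DecisionGate:
    """A pre-committed threshold agreed on before data is seen."""
    metric: str
    scale_if_at_least: float   # meet or beat this -> scale the experiment
    pivot_if_below: float      # fall below this -> pivot

    def decide(self, observed: float) -> str:
        if observed >= self.scale_if_at_least:
            return "scale"
        if observed < self.pivot_if_below:
            return "pivot"
        return "continue"  # inconclusive zone: keep running the experiment

@dataclass
class LearningAgenda:
    """One-page agenda: hypothesis, metric, and a pre-committed gate."""
    program: str
    hypothesis: str
    gate: DecisionGate

# Hypothetical example, loosely modeled on the STAT6 enrichment screen
agenda = LearningAgenda(
    program="STAT6",
    hypothesis="Screen enrichment predicts better compound selection",
    gate=DecisionGate(metric="fold enrichment",
                      scale_if_at_least=3.0,
                      pivot_if_below=1.5),
)

print(agenda.gate.decide(1.2))  # below the pivot threshold -> "pivot"
```

The point of the structure is that `decide` is deterministic once the thresholds are set: the debate happens when the agenda is written, not after the results arrive.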

Constraints:

  • 1-month timeline to pilot (Q3 2025 urgency)
  • Team skepticism (scientists value science, not “frameworks”)
  • Need exec sponsorship (CTO feedback required)

Outcome

Primary outcome:

Cut decision cycle time 20% (~10 weeks → ~8 weeks) while increasing stakeholder clarity on project goals:

  • Adoption: All major programs (AHR, NRF2, STAT6, OX40) had agendas by late 2025
  • Usage: Team consulted agendas in decision meetings (not shelf-ware)
  • Example impact: Stopped underperforming analog screen 2 weeks earlier (learning agenda guardrails triggered pivot)

The cultural shift mattered more than the time savings. Learning Agendas moved the organization from opinion-driven debates (“I think this model is good enough”) to evidence-based pivots (“The agenda said we’d pivot if accuracy didn’t reach X, and it didn’t reach X”). Pre-commitment to decision criteria eliminated retrospective rationalization and made failure a legitimate outcome rather than a political liability.

Metrics:

  • Decision cycle time: ~10 weeks → ~8 weeks (20% reduction, major decisions)
  • Stakeholder clarity: 4.5/5 “understand project goals” (vs 3.8/5 before, internal survey)
  • Adoption rate: 100% of programs in quarterly reviews (Q1 2026)

Guardrails maintained:

  • Agendas stayed lightweight (1 page, not doc sprawl)
  • Flexibility preserved (could update questions if strategy changed)
  • No-blame culture (a failed experiment counted as learning, not a liability)

Second-order effects:

  • Template for other teams (engineering adopted for tech experiments)
  • Influenced 2026 planning (every initiative needed clear success criteria)
  • Became interview artifact (showed org maturity to candidates)

Limitations acknowledged:

  • Upfront time investment (kickoff slightly slower, saved time later)
  • Some scientists initially felt constrained (“locked-in” to metrics)
  • Framework only as good as enforcement (requires discipline)

Reflection

What I’d do differently:

The rollout exposed gaps in my change management approach:

  • Pilot with friendly team first (not announce org-wide immediately)
  • Create 2-3 example agendas before rollout (not just template)
  • Pair with decision-making workshop (teach framework, not just distribute)

The template alone wasn’t enough — teams needed examples and coaching to see how Learning Agendas applied to their specific experiments. By launching broadly without pilots, I created confusion and had to backfill with one-on-one sessions. A slower, example-driven rollout would have accelerated actual adoption.

What this taught me about decision-making:

This project validated a core thesis about PhD → Product skill transfer:

  • Academic experimental design translates directly to business decisions — the logic of hypothesis → test → pivot works whether you’re running gels or evaluating AI models
  • Pre-commitment to decision criteria reduces politics — when stakeholders agree on thresholds before seeing data, debates shift from “is this good enough?” to “did we hit the bar?”
  • Lightweight structure beats heavyweight docs — scientists adopted one-page agendas because they didn’t feel like bureaucracy; thoroughness without pragmatism kills adoption

How this informs future decisions:

Three principles now shape how I design decision systems:

  • Always define success criteria before starting work, not retroactively — I now refuse to approve projects without clear “we’ll pivot if X” statements
  • Decision frameworks are products — they need user research, design iteration, and adoption strategies, not just documentation
  • Cultural change requires artifacts plus enforcement — the Learning Agenda template worked because program reviews explicitly required agendas, not just because the doc existed

Factual Evidence Citations:

  • Project Inventory, p. 7 (Learning Agenda entry)
  • Decision Systems, pp. 12-13 (detailed framework description)
  • Quantitative Outcomes (cycle time metric)
  • Conecta Ops notes (10/28/25 feedback deadline)