Published: Nov 29, 2025
Product Discovery

10 Situational patterns for product discovery workflows

These patterns are practical tools for improving how Claude interprets and executes complex discovery workflows. They strengthen decision clarity and preserve execution quality when workflows introduce edge cases. We will cover the following examples:

1. Handling conflicting instructions

2. Disambiguation protocols

3. Priority and precedence rules

4. Acronym and abbreviation consistency

5. Cross-reference validation

6. Version and iteration tracking

7. Stakeholder context preservation

8. Source attribution clarity

9. Assumption documentation

10. Partial completion handling

1. Handling conflicting instructions

When two sections of a skill give contradictory instructions, specify which takes precedence. This situation appears when a discovery workflow requires Claude to satisfy two incompatible obligations, for example, generating a comprehensive evidence base for user research while also adhering to strict length, formatting, or compliance constraints.

❌ Conflicting without resolution:

Section 2: Always include all user quotes in the report.
Section 5: Keep reports under 2000 words maximum.

What if all quotes exceed 2000 words?

✅ Explicit conflict resolution:

<conflict_resolution_rules>
IF instructions conflict THEN apply this precedence order:

1. HIGHEST: Safety and compliance requirements (GDPR, legal obligations)
2. HIGH: Explicit user directives in current conversation
3. MEDIUM: Critical requirements in <critical_requirements> section
4. LOW: General guidance and best practices

Example conflict:
- Section 2: “Include all user quotes” (MEDIUM priority - critical requirement)
- Section 5: “Keep under 2000 words” (LOW priority - best practice)
- Resolution: Include all quotes (MEDIUM beats LOW), 
note: “Report exceeds recommended length to preserve all evidence”

IF conflict involves same priority level THEN
- Ask user for clarification
- Document the conflict in output
- State which path was chosen and why
</conflict_resolution_rules>
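
If your discovery workflow is backed by tooling, the same precedence order can be encoded directly. A minimal Python sketch of that idea follows; the Instruction type, the priority labels, and the resolve_conflict helper are illustrative names, not part of any Claude API.

from dataclasses import dataclass

# Precedence order from the rules above: a lower number wins.
PRECEDENCE = {"SAFETY": 1, "USER_DIRECTIVE": 2, "CRITICAL": 3, "GUIDANCE": 4}

@dataclass
class Instruction:
    text: str
    level: str  # one of the PRECEDENCE keys

def resolve_conflict(a: Instruction, b: Instruction) -> dict:
    """Return the winning instruction, or flag a tie for user clarification."""
    rank_a, rank_b = PRECEDENCE[a.level], PRECEDENCE[b.level]
    if rank_a == rank_b:
        return {"action": "ask_user",
                "note": f"Conflict at same priority level: '{a.text}' vs '{b.text}'"}
    winner, loser = (a, b) if rank_a < rank_b else (b, a)
    return {"action": "apply", "winner": winner.text,
            "note": f"Overrode '{loser.text}' (lower priority: {loser.level})"}

# Section 2 vs Section 5 example: CRITICAL beats GUIDANCE, so all quotes stay in.
print(resolve_conflict(Instruction("Include all user quotes", "CRITICAL"),
                       Instruction("Keep reports under 2000 words", "GUIDANCE")))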

✅ PM framework conflicts:

<prioritization_framework_conflicts>
IF multiple prioritization frameworks give different rankings THEN

Frameworks that might conflict:
- Opportunity Scoring (Importance vs Satisfaction gap)
- Cost of Delay (User-Business value over time)
- Value vs Effort matrix
- MoSCoW (Must, Should, Could, Won’t)

Resolution protocol:
1. Apply all frameworks to same feature set
2. Document rankings from each framework
3. Highlight where they disagree
4. Provide synthesis: “Feature X scores high in Opportunity 
(8.2 importance, 3.1 satisfaction) but shows low Cost of Delay 
($12K/month vs Feature Y at $85K/month)”
5. Recommend: Let stakeholders choose based on strategic priorities
6. NEVER hide conflicts - make them visible for decision-making
</prioritization_framework_conflicts>

2. Disambiguation protocols

Define default assumptions when information is ambiguous, especially for common discovery workflows. This situation often occurs when discovery tasks reference ambiguous terms such as “recent feedback,” “users,” or “priority,” and Claude must adopt consistent defaults so the output does not drift or reinterpret context mid-workflow.

❌ No disambiguation guidance:

Analyze user feedback from the discovery phase to identify patterns 
that could inform our roadmap prioritization. 

- Search Google Drive for interview transcripts, survey responses, 
and support ticket summaries. 
- Extract key themes around user pain points, feature requests, 
and workflow friction. 
- Cross-reference findings with our current product strategy to 
assess strategic alignment. 
- Synthesize insights into a Confluence page that highlights 
opportunities for the next planning cycle, with emphasis on 
feedback that validates or contradicts our core product hypotheses. 
- Include direct quotes to support each theme and indicate pattern 
strength across the feedback sources.

✅ Explicit disambiguation protocol:

<disambiguation_defaults>
When information is ambiguous, use these defaults:

Timeframe ambiguity:
- “Recent feedback” = Last 30 days
- “Current quarter” = Q4 2025 (Oct-Dec)
- “This sprint” = Current 2-week sprint cycle

User/Stakeholder ambiguity:
- “Users” = End users (NOT internal stakeholders)
- “Customers” = Paying customers (NOT free tier users)
- “Team” = Product team (Designer, PM, Engineers)
- “Stakeholders” = Leadership + Product team

Document ambiguity:
- “Latest version” = Most recently modified file
- “Draft” = Documents in /drafts/ folder
- “Final” = Documents in /final/ or /published/ folder

IF defaults don’t match context THEN
- Ask for clarification before proceeding
- Document assumption in output: “Assumed ‘users’ means 
end users based on context”
</disambiguation_defaults>
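
The same defaults can live in a small lookup that the workflow consults before interpreting a request, so the assumption is applied and documented in one step. The sketch below mirrors the block above; the resolve helper and its return shape are hypothetical.

# Disambiguation defaults mirroring the block above.
DEFAULTS = {
    "recent feedback": "last 30 days",
    "current quarter": "Q4 2025 (Oct-Dec)",
    "users": "end users (not internal stakeholders)",
    "customers": "paying customers (not free tier)",
    "latest version": "most recently modified file",
}

def resolve(term: str) -> tuple[str, str]:
    """Return (interpretation, note) for a possibly ambiguous term."""
    key = term.strip().lower()
    if key in DEFAULTS:
        return DEFAULTS[key], f"Assumed '{term}' means {DEFAULTS[key]} based on skill defaults"
    return term, f"No default for '{term}' - ask the user for clarification"

interpretation, note = resolve("recent feedback")
print(interpretation)  # last 30 days
print(note)            # the assumption gets documented in the output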

✅ PM-specific disambiguations:

<pm_disambiguation_defaults>
Common PM ambiguities and defaults:

“Impact” ambiguity:
- Default: User-facing impact (NOT engineering complexity)
- Could mean: Revenue impact, user satisfaction, engagement metrics
- Clarify: “Impact measured by [specific metric]”

“Priority” ambiguity:
- Default: Business priority (NOT technical priority)
- Use: P0 (critical), P1 (high), P2 (medium), P3 (low)
- NOT: “important”, “urgent”, “ASAP” (too vague)

“User research” ambiguity:
- Default: Qualitative interviews or usability tests
- Could include: Surveys, analytics, A/B tests, customer support tickets
- Clarify: “User research refers to [specific research type]”

“Feature” vs “Epic” vs “Story”:
- Feature: User-facing capability (e.g., “Export to PDF”)
- Epic: Large body of work spanning multiple sprints
- User Story: Single implementable unit (e.g., “As a user, I can...”)
- Use consistent definitions throughout skill
</pm_disambiguation_defaults>

3. Priority and precedence rules

Specify which instruction wins when multiple could apply simultaneously. This is particularly relevant when discovery tasks span strategy, research, and communication, and Claude needs a stable hierarchy to decide whether audience needs, document purpose, or completeness requirements should take priority in moments of conflict.

❌ Multiple applicable instructions:

Instruction A: Include detailed technical implementation notes.
Instruction B: Keep analysis high-level for executive audience.
Instruction C: Document everything for future reference.

Which wins for a strategy document?

✅ Clear precedence hierarchy:

<precedence_hierarchy>
When multiple instructions could apply, use this hierarchy:

LEVEL 1 (Highest): Audience-specific requirements
- IF executive audience THEN high-level only (beats detail requirements)
- IF technical audience THEN include implementation details
- IF mixed audience THEN use layered approach (summary + appendix)

LEVEL 2: Document purpose requirements
- IF strategy document THEN focus on why and what (NOT how)
- IF requirements document THEN focus on what and how (NOT why)
- IF analysis document THEN focus on findings and evidence

LEVEL 3: Completeness requirements
- Include all required sections
- May abbreviate optional sections based on Level 1 and 2

LEVEL 4 (Lowest): Style preferences
- Formatting preferences
- Presentation preferences

Example application:
- Audience: Executive (Level 1)
- Document: Strategy (Level 2)
- Instruction conflict: “detailed notes” vs “high-level”
- Resolution: High-level wins (Level 1 beats Level 4 style preference)
</precedence_hierarchy>
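
One way to make the hierarchy mechanical is to walk it top-down and let each level constrain the ones below it. The sketch below keeps only two levels for brevity; the decide_approach name and the rule table are illustrative.

# Hierarchy from above: Level 1 constrains Level 2, and so on.
RULES = [
    (1, "audience", {
        "executive": "high-level only",
        "technical": "include implementation details",
        "mixed": "layered approach (summary + appendix)",
    }),
    (2, "purpose", {
        "strategy": "focus on why and what, not how",
        "requirements": "focus on what and how, not why",
        "analysis": "focus on findings and evidence",
    }),
]

def decide_approach(audience: str, purpose: str) -> list[str]:
    """Walk the hierarchy top-down; higher levels constrain everything below."""
    context = {"audience": audience, "purpose": purpose}
    decisions = []
    for level, dimension, options in RULES:
        value = context[dimension]
        if value in options:
            decisions.append(f"Level {level} ({dimension}={value}): {options[value]}")
    return decisions

# Executive + strategy: Level 1 rules out detailed technical notes before any
# lower-level completeness or style preference is even considered.
for decision in decide_approach("executive", "strategy"):
    print(decision)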

✅ PM deliverable precedence:

<pm_deliverable_precedence>
When creating PM deliverables, apply precedence:

For OKRs:
1. HIGHEST: Measurable outcomes (must have metrics)
2. HIGH: Ambitious but achievable (stretch goals)
3. MEDIUM: Time-bound (quarterly or annual)
4. LOW: Inspirational language

Resolution: Metric clarity beats inspirational language
Example: “Increase user engagement by 25%” (clear metric) beats “Delight users with amazing features” (vague)

For User Stories:
1. HIGHEST: User value articulation (why user needs this)
2. HIGH: Acceptance criteria (how we know it’s done)
3. MEDIUM: Implementation notes (how we might build it)
4. LOW: Nice-to-have context

Resolution: User value beats implementation preferences
Example: Focus on user problem, not specific technical solution

For Prioritization:
1. HIGHEST: Strategic alignment (company OKRs)
2. HIGH: User impact (research-backed)
3. MEDIUM: Business metrics (revenue, retention)
4. LOW: Team preferences or technical interest

Resolution: Strategy beats team preferences
Example: Prioritize strategically aligned features even if team prefers different work
</pm_deliverable_precedence>

4. Acronym and abbreviation consistency

Define acronyms once and use them consistently throughout, especially for PM frameworks and methodologies. This pattern becomes important when discovery workflows combine multiple frameworks, and inconsistent labels force Claude to guess whether different terms refer to the same model or entirely separate methodologies. It ensures Claude does not reinterpret or expand abbreviations mid-workflow, which prevents drift when synthesizing research or prioritization outputs.

❌ Inconsistent acronym usage:

Section 1: Use KANO model for prioritization.
Section 3: Apply the Kano satisfaction curve framework.
Section 5: Score features using K.A.N.O.
Section 7: Evaluate using the KANO analysis.

Are these referencing the same framework?

✅ Consistent acronym usage:

<acronym_definitions>
Define all acronyms at first use, then use consistently:

KANO (Kano Model: Must-have, Performance, and Delighter attributes): Prioritization framework for classifying features by expected user satisfaction impact.

Throughout skill, ALWAYS use: KANO (NOT “Kano satisfaction curve”, NOT “K.A.N.O.”, NOT “Kano model framework”)

Other PM acronyms to define consistently:
- OKR (Objectives and Key Results): Goal-setting framework
- KPI (Key Performance Indicator): Success metric
- MVP (Minimum Viable Product): Early product version used for validation
- MoSCoW (Must have, Should have, Could have, Won’t have): Prioritization method
- JTBD (Jobs To Be Done): User research framework
- TAM/SAM/SOM (Total Addressable, Serviceable Addressable, Serviceable Obtainable Market): Market sizing structure
- NPS (Net Promoter Score): Satisfaction metric
- CSAT (Customer Satisfaction Score): Sentiment measure
- DAU/MAU (Daily/Monthly Active Users): Engagement metrics

First mention format:
“Use KANO (Kano Model: Must-have, Performance, Delighter attributes) to classify and prioritize features.”

Subsequent mentions:
“Apply KANO classification to the candidate features.”
NEVER: “Apply the Kano satisfaction curve scoring” or “Use the K.A.N.O. framework”
</acronym_definitions>
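
Consistency like this is easy to lint. The sketch below scans a draft for the non-canonical spellings called out above; the variant list and the lint_acronyms name are illustrative, and a real skill would extend the list with its own acronyms.

import re

# Non-canonical spellings called out above, mapped to the canonical form.
VARIANTS = {
    r"\bK\.A\.N\.O\.?": "KANO",
    r"\bKano satisfaction curve\b": "KANO",
    r"\bKano model framework\b": "KANO",
    r"\bjobs[- ]to[- ]be[- ]done\b": "JTBD",
}

def lint_acronyms(text: str) -> list[str]:
    """Return a warning for each non-canonical acronym spelling found in text."""
    warnings = []
    for pattern, canonical in VARIANTS.items():
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            warnings.append(f"Use '{canonical}' instead of '{match.group(0)}'")
    return warnings

print(lint_acronyms("Score features using K.A.N.O. and the jobs-to-be-done lens."))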

✅ PM-specific acronym consistency:

<pm_acronym_consistency>
Common PM acronyms that must be consistent:

Research Methods:
- FIRST use: “JTBD (Jobs To Be Done) framework”
- THEN use: “JTBD” (NOT “Jobs to be Done”, NOT “jobs-to-be-done”)

Metrics:
- FIRST use: “DAU (Daily Active Users) and MAU (Monthly Active Users)”
- THEN use: “DAU” and “MAU” (NOT “daily active users”, NOT “active users”)

Frameworks:
- FIRST use: “KANO (Kano Model: Must-have, Performance, Delighter attributes)”
- THEN use: “KANO” (NOT “Kano scoring”, NOT “Kano model”, NOT “K.A.N.O.”)

Product Types:
- FIRST use: “B2B (Business-to-Business) and B2C (Business-to-Consumer)”
- THEN use: “B2B” and “B2C” (NOT “business to business”, NOT “B-to-B”)

Document abbreviations throughout skill, NOT just at first use:
CORRECT: “Apply KANO classification to Feature A”
INCORRECT: “Apply the Kano satisfaction curve to Feature A” (too verbose and inconsistent)
</pm_acronym_consistency>

5. Cross-reference validation

Specify how to handle broken references to external files or sections that may not exist. This is particularly relevant when discovery processes evolve over time and older instructions still reference outdated sections or deprecated templates, making it necessary for Claude to verify whether the referenced material exists before using it.

❌ No validation for references:

Consult /examples/product-strategy-template.md for structure.
See Section 7.3 for detailed methodology.
Reference the OKR framework document.

What if these don’t exist?

✅ Cross-reference validation:

<cross_reference_validation>
Before referencing external files or sections, validate they exist:

File Reference Protocol:
1. Attempt to load file using: docker-desktop-mcp:obsidian_get_file_contents
2. IF file exists THEN use as reference
3. IF file not found THEN
   - Log: “Reference file not found: [filepath]”
   - Use fallback: Generic best practices
   - Note in output: “Unable to access template - used standard structure”
   - DO NOT block workflow for missing reference files

Section Reference Protocol:
1. Verify section exists in current document
2. IF section exists THEN reference by name or header
3. IF section not found THEN
   - Log: “Section reference invalid: [section name]”
   - Rewrite instruction to be self-contained
   - DO NOT reference non-existent sections

Example handling:
“Consult /examples/product-strategy-template.md for structure”
→ Try loading file
→ IF fails THEN proceed with: “Use standard product strategy structure: 
Vision, Strategy, Roadmap, Success Metrics”
</cross_reference_validation>
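
In tooling terms, the fallback behaviour is a try/except around the file load. The sketch below uses plain local file access as a stand-in for whatever retrieval tool the skill actually calls; the path and fallback text are assumptions taken from the example above.

from pathlib import Path

FALLBACK_STRUCTURE = ("Standard product strategy structure: "
                      "Vision, Strategy, Roadmap, Success Metrics")

def load_reference(path: str) -> tuple[str, str | None]:
    """Return (content, warning). A missing file degrades to the fallback instead of blocking."""
    try:
        return Path(path).read_text(encoding="utf-8"), None
    except OSError:
        return FALLBACK_STRUCTURE, f"Reference file not found: {path} - used standard structure"

content, warning = load_reference("/examples/product-strategy-template.md")
if warning:
    print(warning)  # log the miss, note it in the output, keep going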

✅ PM document reference validation:

<pm_document_references>
Common PM documents that might not exist:

Product Strategy Documents:
- TRY: /knowledge-base/product-strategy/vision-2025.md
- FALLBACK: Use standard vision format (Vision statement, Target market, Value proposition)

User Research:
- TRY: /user-research/persona-[name].md
- FALLBACK: Ask user for persona details or use generic user segment

Templates:
- TRY: /examples/doc-templates/okr-template.md
- FALLBACK: Use standard OKR format (Objective: [Qualitative goal], Key Results: [3-5 measurable outcomes])

Roadmaps:
- TRY: /roadmaps/q4-2025-roadmap.md
- FALLBACK: Request current roadmap from user or proceed without

NEVER block workflow if reference document missing.
ALWAYS provide reasonable fallback behavior.
</pm_document_references>

6. Version and iteration tracking

Include version information and document what changed between iterations. This situation often appears when product managers refine strategy documents, OKRs, or research plans over time, and Claude needs explicit version history to interpret changes correctly and maintain continuity across iterations.

❌ No version tracking:

---
Agent Name: Product Strategy Agent
---

✅ Version tracking with changelog:

---
Agent Name: Product Strategy Agent
Version: 2.3
Last Updated: 2025-11-22
Changelog:
- v2.3 (2025-11-22): Added competitive analysis section
- v2.2 (2025-11-15): Updated OKR success criteria
- v2.1 (2025-11-01): Fixed market sizing calculation
- v2.0 (2025-10-15): Major rewrite - added TAM/SAM/SOM framework
- v1.0 (2025-09-01): Initial version
---

Why this helps Claude:

- Understand evolution of the skill over time
- Know which features are new vs established
- Reference specific versions when debugging
- Track what changes improved/degraded performance

✅ PM deliverable versioning:

<pm_deliverable_versioning>
Version PM documents systematically:

Strategy Documents:
- Major version (1.0 → 2.0): Strategic direction change
- Minor version (1.1 → 1.2): Tactical adjustments
- Patch version (1.1.1 → 1.1.2): Typo fixes, clarifications

OKRs:
- Version by quarter: “OKRs Q4 2025 v1.0”
- Update version for mid-quarter adjustments: “OKRs Q4 2025 v1.1”

Roadmaps:
- Version by timeframe: “Product Roadmap 2025 H2 v2.0”
- Date each update: “Last updated: 2025-11-22”

User Stories:
- Version when acceptance criteria change significantly
- Track: “User Story v1.0 → v2.0: Added mobile acceptance criteria”

Document in changelog:
- What changed (feature added, requirement removed, metric adjusted)
- Why it changed (user feedback, business pivot, technical constraint)
- Impact (affects Q4 OKRs, requires roadmap update)
</pm_deliverable_versioning>
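
If versions are tracked programmatically, the major/minor/patch convention maps onto a simple bump helper. The change-type labels follow the strategy-document rules above; the bump_version function itself is an assumed helper, not an existing tool.

def bump_version(version: str, change_type: str) -> str:
    """Bump a version string using the major/minor/patch convention above."""
    parts = [int(p) for p in version.split(".")]
    while len(parts) < 3:
        parts.append(0)  # treat "1.1" as "1.1.0"
    major, minor, patch = parts
    if change_type == "strategic_direction_change":
        return f"{major + 1}.0"
    if change_type == "tactical_adjustment":
        return f"{major}.{minor + 1}"
    if change_type == "clarification":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"Unknown change type: {change_type}")

print(bump_version("1.1", "tactical_adjustment"))         # 1.2
print(bump_version("1.1.1", "clarification"))             # 1.1.2
print(bump_version("1.2", "strategic_direction_change"))  # 2.0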

7. Stakeholder context preservation

Maintain context about who requested what and why, which is especially important for product decisions. This pattern becomes critical when discovery work involves multiple stakeholders with different goals, and Claude must maintain clarity on who initiated a request, the decision horizon, and the intended audience to avoid producing analysis that is accurate but misaligned.

❌ No stakeholder context:

Review the PRD for the MVP scope and produce an assessment of 
whether it should be accelerated into the next Qx product roadmap 
planning cycle.

Extract the user problem, success criteria, open assumptions, 
and known risks from the PRD. Identify any gaps based on recent 
interview transcripts in Google Drive and provide a concise 
recommendation on prioritization with a focus on product implications.

✅ Stakeholder context preserved:

<stakeholder_context>
Research Request Details:
- Requested by: Sarah Chen (VP Product)
- Request date: 2025-11-20
- Context: Board requested competitive analysis for Q1 planning
- Decision deadline: 2025-12-01
- Audience: Executive team (CEO, CTO, VP Product, VP Sales)
- Expected outcome: Go/no-go decision on feature investment
- Budget implications: $500K engineering investment if approved

Research Parameters:
- Feature X scope: [details]
- Analysis depth: Executive summary (high-level, NOT technical deep-dive)
- Format: Slide deck (NOT detailed report)

This context affects:
- Analysis depth (high-level for executives)
- Timeline urgency (2-week deadline)
- Recommendation format (clear go/no-go with justification)
</stakeholder_context>
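
The same context also travels well as structured data that each workflow step can read instead of rediscovering it. The field names below mirror the block above; the dataclass is a hypothetical sketch, not a required schema.

from dataclasses import dataclass, field

@dataclass
class StakeholderContext:
    """Who asked for the work, why, by when, and what shape the answer must take."""
    requested_by: str
    request_date: str
    context: str
    decision_deadline: str
    audience: list[str] = field(default_factory=list)
    expected_outcome: str = ""
    output_format: str = ""

request_context = StakeholderContext(
    requested_by="Sarah Chen (VP Product)",
    request_date="2025-11-20",
    context="Board requested competitive analysis for Q1 planning",
    decision_deadline="2025-12-01",
    audience=["CEO", "CTO", "VP Product", "VP Sales"],
    expected_outcome="Go/no-go decision on feature investment",
    output_format="Slide deck, executive-summary depth",
)
# Downstream steps read this object instead of rediscovering who wanted what.
print(request_context.decision_deadline)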

✅ PM decision context:

<pm_decision_context>
Preserve context for product decisions:

Feature Prioritization:
- Requested by: [Product Manager name]
- Driver: [User feedback/Business goal/Technical debt]
- Strategic alignment: [Company OKR or initiative]
- Stakeholders: [Who needs to approve]
- Decision criteria: [What factors matter most]

User Research:
- Commissioned by: [PM or UX researcher]
- Research question: [Specific hypothesis to test]
- User segment: [Which users to study]
- Timeline: [When results needed]
- Use case: [How findings will be applied]

Roadmap Changes:
- Proposed by: [Team member]
- Reason: [Market change/Customer request/Technical discovery]
- Affected features: [What needs to shift]
- Trade-offs: [What gets delayed]
- Approval needed from: [Stakeholders]

WHY preserve context:
- Understand decision constraints
- Tailor analysis to audience
- Prioritize relevant information
- Explain recommendations appropriately
</pm_decision_context>

8. Source attribution clarity

Distinguish between primary sources, secondary sources, assumptions, and Claude’s synthesis. This pattern becomes essential when Claude needs to combine research, analytics, stakeholder input, and market data, and must avoid blending facts, assumptions, and internal opinions into a single undifferentiated narrative.

❌ Unclear attribution:

Users want faster load times, the market for high-performance 
tools is expanding quickly, and competitors are doubling down 
on speed improvements. Faster performance will likely drive 
better retention and justify premium pricing, so we should treat 
this as a top priority for the next cycle.

✅ Clear source attribution:

<source_attribution>
Attribute each claim to its source type:

PRIMARY SOURCES (direct evidence):
- User research: “Users want faster load times” 
[Source: User interview transcript, Participant 3, 14:32, 2025-11-15]
- Analytics data: “Load time averages 4.2 seconds” 
[Source: Google Analytics, Nov 1-15, 2025]
- Customer feedback: “27 support tickets mention slow performance” 
[Source: Zendesk export, Q4 2025]

SECONDARY SOURCES (reports, articles):
- Market research: “Market growing 23% annually” 
[Source: Gartner Report Q3 2025, p.12]
- Competitive analysis: “Competitors investing in performance” 
[Source: TechCrunch article, Nov 10, 2025]

ASSUMPTIONS (not backed by evidence):
- Assumption: “Users will pay premium for faster performance” 
[Basis: Industry best practices, NOT validated with our users]
- Assumption: “Performance improvements will increase retention” 
[Basis: Correlation observed, causation NOT proven]

SYNTHESIS (Claude’s analysis):
- Synthesis: “Given user feedback, market trends, and competitive 
pressure, performance optimization is high priority”
- Note: This is interpretation combining multiple sources, 
NOT a direct quote

ALWAYS distinguish these clearly in output.
NEVER present assumptions as facts.
</source_attribution>
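
Tagging every claim with its source type keeps the distinction enforceable rather than stylistic. The categories below come straight from the block above; the Claim structure and render_claim helper are illustrative.

from dataclasses import dataclass

SOURCE_TYPES = {"primary", "secondary", "assumption", "synthesis"}

@dataclass
class Claim:
    text: str
    source_type: str  # one of SOURCE_TYPES
    citation: str

def render_claim(claim: Claim) -> str:
    """Render a claim with its attribution so assumptions never read as facts."""
    if claim.source_type not in SOURCE_TYPES:
        raise ValueError(f"Unknown source type: {claim.source_type}")
    return f"[{claim.source_type.upper()}] {claim.text} (Source: {claim.citation})"

print(render_claim(Claim("Users want faster load times", "primary",
                         "User interview transcript, Participant 3, 2025-11-15")))
print(render_claim(Claim("Users will pay premium for faster performance", "assumption",
                         "Industry best practices, not validated with our users")))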

✅ PM-specific attribution:

<pm_source_attribution>
Product Management source types:

User Evidence:
- “Users struggle with onboarding” 
[Source: 8 of 12 user interviews, 2025-11-10 to 2025-11-18]
- “NPS dropped from 42 to 38” 
[Source: Quarterly NPS survey, Q3 vs Q4 2025]

Business Data:
- “Revenue grew 15% QoQ” 
[Source: Finance dashboard, Q3 to Q4 2025]
- “Churn increased 3%” 
[Source: Customer success metrics, Oct-Nov 2025]

Market Intelligence:
- “Competitor launched similar feature” 
[Source: Product Hunt announcement, Nov 5, 2025]
- “Industry standard is 2-second load time” 
[Source: Nielsen Norman Group usability guidelines]

Internal Opinions:
- “Engineering estimates 6 weeks” 
[Source: Tech lead estimate, team planning meeting, Nov 18, 2025]
- “Sales team requests feature urgently” 
[Source: Sales kickoff feedback, Nov 12, 2025]

PM Judgment:
- Recommendation: “Prioritize performance over new features” 
[Basis: Synthesizing user evidence + business data + competitive pressure]
- Note: This is PM judgment, not guaranteed outcome

Attribution prevents:
- Treating opinions as facts
- Confusing correlation with causation
- Overstating confidence in uncertain claims
</pm_source_attribution>

9. Assumption documentation

Make implicit assumptions explicit, which is especially critical for hypothesis testing and product decisions. This is particularly relevant when discovery insights inform investment decisions, and Claude needs to surface all underlying assumptions to prevent hidden risks from propagating through strategy documents, roadmaps, or business cases.

❌ Hidden assumptions:

Test hypothesis: Feature X will increase engagement.
Expected outcome: projected 20 percent increase in DAU, 
which should make the feature viable for the next planning cycle.

Which assumptions are hidden here? What user behavior? What timeframe? What measurement?

✅ Explicit assumptions:

<assumption_documentation>
Document ALL assumptions underlying the work:

Hypothesis: Feature X will increase user engagement

EXPLICIT ASSUMPTIONS:
1. User behavior assumption:
   - Assumption: Users will discover feature organically (NOT requiring marketing push)
   - Risk: If users don’t find it, engagement won’t change
   - Validation: Test with small user group before full launch

2. Timeframe assumption:
   - Assumption: Impact measurable within 30 days
   - Risk: True impact might take 60-90 days to stabilize
   - Validation: Track metrics for 90 days, not just 30

3. Measurement assumption:
   - Assumption: DAU is best engagement metric
   - Risk: Users might engage differently (longer sessions but less frequently)
   - Validation: Also track session duration, feature usage frequency

4. Baseline assumption:
   - Assumption: Current DAU baseline will remain stable without feature
   - Risk: Seasonal variation or market changes could affect baseline
   - Validation: Compare to control group without feature

5. Causation assumption:
   - Assumption: Feature X directly causes engagement increase (NOT correlation)
   - Risk: Other factors (marketing campaign, bug fixes) might drive increase
   - Validation: A/B test with control group

DOCUMENT assumptions in hypothesis testing plan.
REVISIT assumptions after data collection.
ADJUST conclusions if assumptions violated.
</assumption_documentation>
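
Each assumption pairs naturally with its risk and planned validation, and keeping the three together as structured data makes them hard to lose. The sketch below reuses the first two assumptions from the example above; the Assumption type is an illustrative name.

from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str
    risk: str
    validation: str

hypothesis_assumptions = [
    Assumption("Users will discover the feature organically",
               "If users don't find it, engagement won't change",
               "Test with a small user group before full launch"),
    Assumption("Impact is measurable within 30 days",
               "True impact might take 60-90 days to stabilize",
               "Track metrics for 90 days, not just 30"),
]

for number, assumption in enumerate(hypothesis_assumptions, start=1):
    print(f"{number}. {assumption.statement}")
    print(f"   Risk: {assumption.risk}")
    print(f"   Validation: {assumption.validation}")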

✅ PM assumption documentation:

<pm_assumption_documentation>
Common PM assumptions to make explicit:

Market Sizing Assumptions:
- TAM assumption: “Total market is $5B” 
[Based on: Gartner report, assumes market definition includes X but excludes Y]
- SAM assumption: “Serviceable market is $500M” 
[Based on: Geographic reach limited to US/EU, assumes no regulatory barriers]
- SOM assumption: “Obtainable market is $50M Year 1” 
[Based on: 10% capture rate, assumes sales team capacity, no major competitors]

Pricing Assumptions:
- Willingness to pay: “Users will pay $99/month” 
[Based on: 15 user interviews, assumes value prop resonates, NOT validated at scale]
- Price elasticity: “10% price increase = 5% churn” 
[Based on: Industry benchmark, NOT our specific data]

Resource Assumptions:
- Engineering capacity: “2 engineers for 8 weeks” 
[Based on: Tech lead estimate, assumes no major blockers or scope creep]
- Designer availability: “1 designer shared 50%” 
[Based on: Current allocation, assumes no competing priorities]

Timeline Assumptions:
- Launch date: “Ship by Q1 2026” 
[Based on: Current roadmap, assumes no dependencies delayed, no scope changes]
- Adoption rate: “50% users adopt within 3 months” 
[Based on: Historical adoption patterns, assumes comparable feature complexity]

DOCUMENT assumptions in:
- Product briefs
- Business cases
- Roadmap plans
- Hypothesis tests
- Prioritization decisions
</pm_assumption_documentation>

10. Partial completion handling

Define acceptable partial outputs when full completion isn’t possible, especially for research and analysis tasks. This is particularly relevant when discovery timelines are tight and Claude needs to return the best possible output with the information available, while transparently noting limitations so decisions are not made on a false sense of completeness.

❌ No partial completion guidance:

Conduct a competitive analysis of the five primary vendors in our segment. Pull product capabilities, pricing models, target segments, go-to-market positioning, and differentiation claims from public sources.

Synthesize the findings into a comparative view and highlight strategic opportunities for us in the next planning cycle.

What if only 3 of 5 competitors are analyzed due to lack of data?

✅ Partial completion protocol:

<partial_completion_protocol>
When full completion is not possible, define minimum acceptable output:

Competitive Analysis Example:

IDEAL COMPLETION (100%):
- 5 competitors analyzed
- All 10 criteria evaluated for each
- Market positioning map created
- Recommendations provided

MINIMUM ACCEPTABLE (60%):
- At least 3 competitors analyzed
- At least 6 core criteria evaluated (product features, pricing, target market, strengths, weaknesses, differentiation)
- High-level comparison provided
- Recommendations with caveat: “Analysis incomplete due to limited data”

UNACCEPTABLE (<60%):
- Fewer than 3 competitors
- Fewer than 6 criteria
- No comparison or recommendations

Partial Completion Handling:
IF 100% completion not possible THEN
1. Assess what percentage is achievable (e.g., 80%)
2. IF >= 60% THEN
   - Proceed with partial completion
   - Clearly mark missing sections: “Data unavailable for Competitor D and E”
   - Note limitations: “Recommendations based on 3 of 5 competitors - may not represent full competitive landscape”
   - Suggest: “Complete analysis requires access to [specific data sources]”
3. IF < 60% THEN
   - Stop work
   - Report: “Insufficient data for meaningful analysis”
   - Recommend: “Acquire [specific data] before proceeding”

ALWAYS communicate completion percentage and limitations.
</partial_completion_protocol>
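
The 60% gate above reduces to a small decision helper. The thresholds (3 competitors, 6 criteria) are the ones in the protocol; the assess_partial_completion function and its return shape are assumptions for illustration.

def assess_partial_completion(competitors_done: int, criteria_done: int,
                              competitors_target: int = 5,
                              criteria_target: int = 10) -> dict:
    """Apply the 60% gate above to a competitive analysis in progress."""
    completion = min(competitors_done / competitors_target,
                     criteria_done / criteria_target)
    if competitors_done >= 3 and criteria_done >= 6:
        return {"action": "proceed", "completion": f"{completion:.0%}",
                "caveat": "Analysis incomplete due to limited data"}
    return {"action": "stop", "completion": f"{completion:.0%}",
            "reason": "Insufficient data for meaningful analysis"}

print(assess_partial_completion(competitors_done=3, criteria_done=6))  # proceed at 60%
print(assess_partial_completion(competitors_done=2, criteria_done=4))  # stop and report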

✅ PM partial deliverable handling:

<pm_partial_deliverable_handling>
PM deliverables with partial completion thresholds:

User Research:
- IDEAL: 12+ user interviews across 3 segments
- ACCEPTABLE: 6-8 interviews in 2 segments (note: “Limited to [segments], missing [segment]”)
- UNACCEPTABLE: <6 interviews (insufficient sample size)

OKR Development:
- IDEAL: 3-5 objectives, each with 3-5 measurable key results
- ACCEPTABLE: 2-3 objectives with 2-3 key results each (note: “Focused OKR set due to [constraint]”)
- UNACCEPTABLE: 1 objective or vague key results without metrics

Product Roadmap:
- IDEAL: 12-month roadmap with themes, epics, and rough timeline
- ACCEPTABLE: 6-month roadmap with epics (note: “Near-term focus, extended roadmap pending [dependency]”)
- UNACCEPTABLE: Feature list without timeline or themes

Market Analysis:
- IDEAL: TAM/SAM/SOM with supporting data, trends, competitive landscape
- ACCEPTABLE: TAM/SAM with estimates, high-level competition (note: “SOM pending validation, competitive intel limited to public sources”)
- UNACCEPTABLE: Market size without source or methodology

Business Case:
- IDEAL: Costs, benefits, risks, timeline, ROI calculation, sensitivity analysis
- ACCEPTABLE: Costs, benefits, basic ROI (note: “Sensitivity analysis pending [data]”)
- UNACCEPTABLE: Benefits without costs or timeline

Handling Partial Deliverables:
1. State completion percentage upfront
2. Highlight missing components
3. Explain why completion limited (time, data, access)
4. Provide path to full completion
5. Adjust confidence in recommendations accordingly
</pm_partial_deliverable_handling>
