
The Product Management Practice

The irreducible core of product management, and why most organizations get it backwards.

#product #strategy #engineering

TL;DR: Product management isn't about roadmaps, work tickets, or being the "CEO of the product." It's about understanding needs, forming hypotheses, and delivering outcomes. The rest is theater.

Product management has accumulated decades of cruft. Frameworks, certifications, elaborate rituals involving colored sticky notes. Strip away the methodology theater and you're left with three things:

  1. Deeply understand the needs and opportunities in serving users, the market, and the business.
  2. Establish hypotheses and curate prioritized initiatives to address those needs and opportunities.
  3. Deliver outcomes for the business and its customers.

That's it. Everything else is implementation detail. Alignment, logistics, and coordination are important parts of the PM job, but too many companies confuse Project Managers with Product Managers.

Getting to Deep Understanding

You have to survey the territory before you can chart a roadmap through it. That territory is the evolving space of needs and opportunities across users, the market, and the business. The product manager's job is to explore that space, become an expert in it, and synthesize a representation of the landscape that orients the rest of the organization as plans are made and evaluated.

This means engaging meaningfully and regularly with current and potential customers, exploring competitive products, and staying current with the evolving practices and trends of a given space. Note the word meaningfully. Reading NPS scores in a dashboard is not customer engagement. Watching session recordings while eating lunch is not customer engagement. Having actual conversations where you shut up and listen? That's customer engagement.

The critical shift: product managers need to focus on needs and opportunities over the initiatives or features that may address them. Own opportunity domains, not solutions or features.

The difference matters. "We need a notification system" is a solution. "Users are missing time-sensitive updates and it's costing them money" is an opportunity. One closes doors. The other opens them.

Structure Before Solutions

Before jumping to initiatives, a well-defined opportunity should include:

  1. A carefully quantified and qualified statement of the problem or opportunity
  2. A specification of the audience and their jobs to be done
  3. Metrics that will evaluate efficacy of any solution
  4. A narrowly scoped set of initial requirements
  5. An explicit statement of what is not in scope

That last one is underrated. Defining the negative space (what you're deliberately not addressing) is as important as defining what you are. If you can't articulate what you're not doing, you don't actually have scope. You have a direction and some aspirations.
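The five elements above can be captured as a simple structured record, which also makes the "no non-scope means no scope" rule mechanical. A minimal sketch; the field names are illustrative, not a standard template:

```python
from dataclasses import dataclass

@dataclass
class Opportunity:
    """A structured opportunity definition (field names are illustrative)."""
    problem_statement: str           # carefully quantified and qualified
    audience: str                    # who, and their jobs to be done
    success_metrics: list[str]       # how any solution will be evaluated
    initial_requirements: list[str]  # narrowly scoped
    non_scope: list[str]             # what is deliberately excluded

    def is_complete(self) -> bool:
        # An opportunity without explicit non-scope is a direction, not a scope.
        return bool(self.problem_statement and self.audience
                    and self.success_metrics and self.non_scope)
```

The point of encoding it this way isn't tooling; it's that "complete" has a checkable definition, and an empty non-scope list fails it.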

Hypotheses, Not Feature Lists

Product managers must bring good product sense and judgment to bear in creating hypotheses about which needs and opportunities to address, and the qualitative and quantitative criteria for evaluating success. "Good product sense" is doing a lot of work in that sentence. It's the part that can't be taught in a certification program, which is why certification programs don't mention it.

They must also work with customers and internal stakeholders to curate and prioritize initiatives that test those hypotheses. Prioritization balances the impact or value realized for customers and the business against the cost and strategic implications of delivery.

The hypothesis framing matters because a feature is either shipped or not shipped. A hypothesis can be validated, invalidated, or refined. One framing is binary and not tied to outcomes. The other is iterative and learning-oriented.

Outcomes Over Output

Product management is not about shipping features. It's about delivering outcomes. This is what product management owns, and it's what the practice should be evaluated by. If your performance review focuses on "features shipped" as a metric, your organization has confused activity with progress. This is extremely common. It is also why most products are mediocre.

Product managers need to understand the required outcomes for both customers and the business, and how to measure and track those outcomes relative to initiatives or features.

They also need to engage in the full scope of delivering an initiative to users: the logic of a feature, the experience of using it, and its go-to-market positioning. Shipping is not the finish line. Shipping is the starting line for measurement and evolution.

Product Execution

Understanding the what and why is necessary but not sufficient. You also need to execute, and execution has its own discipline.

Big Rocks First

Once the organization has a shared map of the opportunity space and focuses on achieving outcomes rather than just shipping features, tactical prioritization gets much easier.

The "big rocks, pebbles, and sand" analogy (popularized by Stephen Covey) is useful here. You have a fixed amount of big rocks, pebbles, and sand, and a jar. If you add the sand first, then the pebbles, then the big rocks, they won't fit. If you start with the big rocks, then add the pebbles and let them settle, and finally add the sand, everything fits in that same jar. This analogy has been around long enough to feel clichéd, which is a shame, because it's genuinely useful. Don't let the motivational-poster energy distract you from the physics.

Applied to product work: the organization needs to identify its big rocks (the major strategic initiatives, the work that requires multiple teams to collaborate) and make sure those are estimated and planned on an org-wide basis first. Once the big rocks are placed, individual teams can align their remaining capacity against their potential initiatives to achieve agreed-upon outcomes.
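Big-rocks-first sequencing can be sketched as a greedy capacity fill. This is a toy model under obvious simplifying assumptions (point-based capacity, a pre-agreed set of org-wide initiatives); the names and numbers are illustrative:

```python
def plan_capacity(teams, big_rocks, backlog):
    """Place org-wide 'big rocks' first, then fill each team's remaining
    capacity from its own prioritized backlog. A sketch, not a planner.

    teams:     {team: capacity in points}
    big_rocks: [(team, cost), ...] -- strategic work planned org-wide first
    backlog:   {team: [(initiative, cost), ...]} in priority order
    """
    remaining = dict(teams)
    plan = {team: [] for team in teams}
    for team, cost in big_rocks:          # big rocks go into the jar first
        remaining[team] -= cost
    for team, items in backlog.items():   # then pebbles and sand, greedily
        for name, cost in items:
            if cost <= remaining[team]:
                plan[team].append(name)
                remaining[team] -= cost
    return plan, remaining
```

Because only the big rocks are fixed org-wide, each team can reshuffle its own fill (the second loop) without escalating anything.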

This also means teams can reprioritize on the fly without escalating every change to senior management. They stay agile and effective rather than stuck in prioritization purgatory waiting for someone three levels up to bless a sequencing change.

Team-level prioritization should be a joint effort between Product and all relevant stakeholders.

Prioritization as a Practice

Prioritization is the ongoing process of making informed decisions about sequencing, managing relative trade-offs, and allocating finite resources. Prioritization processes should be balanced, transparent, repeatable, and lightweight enough to use regularly. Emphasis on lightweight. If your prioritization framework requires a spreadsheet with a ton of columns, you've built a procrastination engine, not a decision-making tool.

Prioritization must account for at least these elements:

Organizational strategy. The company's strategy and long-term plans are key guides to prioritization. There should be clear alignment between proposed projects and the company's strategy, including specifics like revenue targets.

Value delivered. The value proposition for users and the business, combined with revenue impact, clarifies the types of outcomes a given project or initiative might achieve.

Estimation of cost. The estimated effort, resources, ongoing cost to serve, and opportunity cost for an initiative must be considered. Critically, estimation and resource allocation must happen at a team or individual level. Teams and individuals are not fungible, and not all projects can or should be staffed by any team. "We'll just move some engineers over from Team X" is almost always a fantasy. Context switching costs are real, team dynamics matter, and that mythical fungible engineer exists only in executive slide decks.

Organizational coordination cost. Projects that span multiple teams have costs above and beyond the direct effort required to deliver the work. As teams coordinate and interact around various pieces in flight, time and effort must be allocated more carefully, fragmenting or displacing blocks of delivery capacity that would otherwise stay contiguous.

Tactical implications. Prioritization decisions are moves played in a game that unfolds over time. The tactical value of a candidate project includes its impact on future conditions, not just its immediate value. If two projects have similar immediate value but one is a prerequisite for a high-value project queued up later, the enabling project has more tactical value. Projects that let you validate or falsify core business hypotheses quickly can also have higher tactical value than their immediate business impact would suggest.
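One way the elements above might combine into a lightweight score. The dimensions and weights here are illustrative, not a prescribed framework; the point is that the arithmetic stays simple enough to rerun every planning cycle:

```python
def priority_score(initiative, weights):
    """Lightweight weighted score over the prioritization elements above.
    Dimension names and weights are illustrative assumptions, not a
    standard framework.
    """
    benefit_dims = ("strategy_fit", "value", "tactical_value")
    benefit = sum(weights[d] * initiative[d] for d in benefit_dims)
    # Coordination cost is counted alongside direct effort, per the text.
    cost = initiative["effort"] + initiative["coordination_cost"]
    return benefit / cost  # value per unit of (finite) delivery capacity
```

Note that two initiatives with identical immediate value diverge under this score as soon as one has higher tactical value, which is exactly the prerequisite-project effect described above.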

Returning to Non-Scope

Remember the "explicit non-scope" from earlier? It's not just a product definition artifact. It's an active prioritization tool.

When someone proposes adding something mid-flight, the non-scope list is your first line of defense. Either the idea was already considered and deliberately excluded (point to the list), or it's genuinely new and needs to go through proper prioritization (put it in the backlog). Either way, you have a process rather than a negotiation. "Not in scope for this phase" is doing real work. It acknowledges the idea has merit while creating space to focus. It's more honest than pretending the idea doesn't exist.

Estimation Is Not Scheduling

Estimation is more than building up a thesis of how long things will take. Yes, it's useful to know when things will be done, especially for market-facing deliverables. But the higher value in estimation is developing a repeatable practice of decomposing problems and building execution plans that accurately reflect teams' capacity and velocity. The real output of estimation isn't a date. It's a shared understanding of what "done" means and what has to happen to get there. The date is a side effect.

Abstract scales like Fibonacci story points are often more useful than time-based values like engineering weeks. Most teams are bad at granular time estimation, and time-based estimates carry false confidence and anxiety around unrealistic deadlines. Abstract scales keep estimates symbolic, where consistency over multiple iterations matters more than the hours or weeks estimated.

Estimation should be driven by the teams accountable for delivering the work. Projects that cross team boundaries need touchpoints with other teams as part of the estimation process.
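A sketch of how abstract points turn into a forecast: only through measured velocity, never by converting points to hours directly. The function and variable names are assumptions for illustration:

```python
import math

def forecast_sprints(backlog_points, velocity_history):
    """Forecast how many sprints a point total implies, given observed
    per-sprint velocity. Points only become dates through measurement;
    consistency across iterations is what makes them useful.
    """
    velocity = sum(velocity_history) / len(velocity_history)
    return math.ceil(backlog_points / velocity)
```

For example, a 55-point backlog against an observed velocity of roughly 20 points per sprint forecasts three sprints. The date falls out of the team's own history, which is the sense in which it is a side effect rather than the output.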

The Operational Burden Trap

Every feature you ship comes with an operational burden. The size of that burden depends almost entirely on how carefully you shipped it.

Functionality delivered with rigor (hardened, tested, documented) tends to settle into an ongoing ops burden of around 30% of the original delivery effort over the following quarter or two. That's manageable.

Functionality shipped with undue haste (corners cut, documentation thin, edge cases hand-waved) can balloon to 50-60% of the original effort in ongoing support. These problems compound as systems interconnect. This is how velocity grinds to a halt. Teams ship fast for a few quarters, then spend the next year fighting fires in code they barely remember writing. Near-term speed becomes mid-term paralysis.

This isn't an argument against moving fast. It's an argument for understanding the actual cost of what you're shipping, including the cost of how you ship it.
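The compounding effect can be made concrete with a toy model: assume each quarter's shipped work permanently adds some fraction of its effort as ongoing support load. The 30% and 50-60% figures come from the rough observations above; everything else here is a simplifying assumption:

```python
def capacity_after(quarters, ops_rate):
    """Fraction of capacity left for new work after N quarters, if each
    quarter's output permanently adds `ops_rate` of its effort as ongoing
    ops burden. A deliberately crude model to show the compounding shape.
    """
    free = 1.0       # fraction of capacity available for new work
    total_ops = 0.0  # accumulated ongoing support load
    for _ in range(quarters):
        shipped = free               # this quarter's new delivery
        total_ops += shipped * ops_rate
        free = max(0.0, 1.0 - total_ops)
    return free
```

Under this model, the careful team (30% burden) still has roughly a quarter of its capacity free for new work after a year, while the hasty team (60% burden) is almost entirely consumed by support. The curve, not the exact numbers, is the point.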

Knowing What "Enough" Looks Like

Understanding what enough means is an underrated discipline.

When defining success criteria for an initiative (performance targets, feature completeness, quality bars), the temptation is to aim for the best possible outcome. But "best possible" often consumes disproportionate resources for marginal gains. Going from 100ms to 30ms response time sounds impressive. But if your competitors are at 150ms and your users can't perceive the difference below 80ms, you've spent engineering months on bragging rights.

The question isn't "how good can we make this?" It's "how good does this need to be for us to achieve the outcome we're after?" Those are very different questions, and only one respects finite resources.

This applies recursively: enough security, enough availability, enough performance, enough polish. The discipline is knowing when you've crossed from "necessary" to "nice to have" and having the conviction to stop.

Innovation Tokens

Dan McKinley wrote something of a manifesto ("Choose Boring Technology") many years ago, and it's been an immensely valuable lens for building technology products. The core argument: use the most boring thing that works for your needs, where "boring" means tested, well understood, widely used, and unlikely to be a source of instability or risk. Every startup that chose a cutting-edge database because it was "webscale" and then spent six months debugging replication issues instead of building product features is nodding along right now. You know who you are.

The framework also provides guidance on investing your "innovation tokens" (your capacity to build, maintain, and operate novel technologies) only in areas where it will provide meaningful differentiation for your product or business.

You have a limited number of these tokens. Spend them where they matter. Spend them where they create competitive advantage. Don't spend them on your build system.

Appendix: A Go-Live Checklist

The "delivery with care" vs "delivery with haste" tradeoff can be operationalized. This checklist, when followed, tends to produce that 30% ops burden rather than the 60% one:

This isn't bureaucracy. It's the difference between shipping something you'll be proud of in six months and shipping something you'll be apologizing for.

Further Reading