Product - Quality Improvements vs. New Features: Part II

Following my blog post, "Product: Quality Improvements vs. New Features" [link], a friend and former Google colleague of mine wrote in with the following: 


Nelson, thanks for writing this post. As a PM, I've often struggled with quality improvements vs. new features. Using a notion of value is good, but have you seen good approaches to quantifying value? I personally haven't seen many. And typically find that it boils down to a "gut feeling" judgement of the group of people working on the product or maybe some attempt at determining financial value, which can also be pretty loose. (Maybe a follow-up blog post?) Is it even important to do so much analysis in fast moving markets?


Unfortunately, I've done quite a bit of financial forecasting in my life. When I was an intern at Reader's Digest, I devised the core financial model / projections (which the finance department seemed to have adopted). There, forecasting seemed to permeate every ounce of the company's DNA. Finance was a dominant group at the company -- not because of any force of personality -- but because the company depended very heavily on forecasting for guidance on which initiatives to pursue and not pursue. Thus, for nearly every project I worked on, the first thing I did was create a set of financials. 

I broadly think forecasting is terrible. It's generally done terribly and ripe for corruption. So why is it done terribly? Forecasting requires three distinctly different skill sets: 1) a core understanding of finance / analytics; 2) creativity / an intuitive gut feel for numbers and how data / markets work; 3) simplicity in the display and construction of information. (Without this last one, forecasting boils down to a black box / "just trust me" -- the execs *should* understand exactly how you arrived at that number, and that process, along with the number, should be able to stand up to their scrutiny. I always strove for my models to be understandable to a layperson. The assumptions are clearly laid out, and -- with some guidance -- someone else can understand how the model works and why I constructed it that way.)

Think about this combination of traits, though -- it's rare (but not impossible) to find a really great finance person who also has a great intuitive feel for product and how products grow, and who can display information in a simple way. (BTW, some of the *worst* models I've seen come from consultants. They're invariably good at #1. Decent at #2. Terrible at #3. If you've seen a slide deck on this type of material from a consultant and the room's eyes glaze over -- you know what I'm talking about.) The alternative is to take someone who is good at product and have them build a model. The problem here is that the PM suffers from both a lack of time to do this well and potentially little to no experience with finance.

Ignoring all this -- let's say you've come up with a forecast. Maybe you even think it's a good forecast -- then what's the problem? The problem is that forecasts are frequently abused. When resources are allocated based on forecasts -- what will happen? Well, self-interested individuals (which we all are) will artificially (either consciously or subconsciously) inflate their numbers so they're more likely to get resources. Then when those numbers don't pan out, maybe they get a slap on the wrist -- but rarely is the penalty for an incorrect forecast anything more than mild.

Also -- most corporations don't work as a dictatorship, and in some ways, I think this type of work should. Let me explain. Resources are typically allocated in a series of meetings, or at the minimum, in a meeting where a group of execs weighs in. When a set of forecasts is shown, there easily could be open warfare over the legitimacy of each individual forecast (with obvious self-interests at play, such as which group will get resources). It's a colossal waste of time. The way it could work well would be if the CEO basically commissions each of the forecasts -- either from the same individual / group or from a set of individuals / groups that he trusts. Then he can refine and filter that information in the context of other information. And there's the big difference: the forecasts are actually being used for the allocation of resources -- not for political maneuvering over resources. The CEO can then ask, "X and Y seem pretty close in discounted lifetime value. I'm concerned about your assumption in Y. Why did you choose to go with that number instead of a lower number?" The discourse is around getting to the best answer without, hopefully, ancillary considerations -- or if there are ancillary considerations, those are acknowledged to everyone involved.

So is forecasting a waste of time? Not at all. I actually found forecasting to be an enormously useful part of my work and background. The reason is that when you build a model, you understand all the sensitivities of the business and which metrics you *have* to hit. When I was in college, a friend and I developed a business plan for Princeton's Business Plan Competition for a startup called "Grocers Solution". Whereas Webvan, HomeGrocer, and other folks would be a one-stop shop (buy groceries online and have them delivered), we would partner with existing supermarkets to deliver groceries. Seemed like a decent idea. Here was the problem. When I built the financial model, there was absolutely no way I could make it profitable by eating into the margin of the groceries. Even when I somewhat artificially tweaked the margins upwards to see what happened, it wasn't close. However, when I tacked on a $4.95 delivery charge -- all good. I'll note that Webvan / HomeGrocer went out of business, those types of businesses generally went out of business -- and the one that has persisted, FreshDirect, has thrived, with a delivery charge. I'm not saying I was right / prescient -- I'm just pointing out that that's the value of doing forecasting. It very simply said, "If you want to charge the same prices as those found in a supermarket -- which you probably have to for psychological reasons -- you can't offer this service without charging for it."
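The kind of dead-end I hit can be reproduced with a toy per-order model. To be clear, every number below is a hypothetical illustration, not a figure from the original plan:

```python
# Toy per-order economics for a grocery-delivery partnership.
# All figures are invented for illustration, not the original model's numbers.

def profit_per_order(basket=100.0, margin_share=0.03,
                     delivery_fee=0.0, delivery_cost=7.0):
    """Profit on one delivered order.

    basket: average order size in dollars
    margin_share: the slice of the supermarket's margin we capture
    delivery_cost: driver, vehicle, and handling cost per drop
    """
    return basket * margin_share + delivery_fee - delivery_cost

# Eating into grocery margin alone: a few percent of a $100 basket is a few
# dollars, nowhere near a ~$7 delivery cost. Even doubling the margin share
# doesn't close the gap.
margin_only = profit_per_order()                      # still negative
margin_tweaked = profit_per_order(margin_share=0.06)  # still negative

# Tack on a $4.95 delivery charge and the order turns profitable.
with_fee = profit_per_order(delivery_fee=4.95)        # positive

print(margin_only, margin_tweaked, with_fee)
```

The point isn't the particular numbers -- it's that the model makes the delivery charge a visible, testable lever rather than an afterthought.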

This type of work would allow us to focus our efforts (overcoming customer resistance to a delivery charge) and potentially not even start the company if we thought that -- or the alternative of getting supermarkets to pay a service fee -- was a non-starter.

So, getting back to Wayne's original questions: whether this work boils down to "gut" feel, and whether it's necessary in fast-moving markets.

In a world where the analysis process is corrupted, this analysis is broadly a waste of time. But let's say it's a world where the process is not corrupted -- and I want to emphasize *process*. The process is critical: how it's commissioned, how it's done, and how it's reviewed. That involves getting people you trust, having a single exec sponsor who reviews, critiques, and then approves, and then using the result as a significant (but not sole) determining factor in the allocation of resources. In this world, I think the analysis is absolutely critical.

1) It tells you how the business works. Maybe you realize the business needs 10 million users before it's profitable. Maybe you realize the business needs customers to actually pay you money (i.e. can't monetize sufficiently via ads). Maybe you realize that the support costs are so high that the business is unprofitable on a fundamental basis. Gut feeling only gets you so far -- it guides you in terms of where you are and what you need to look at and, frankly, you may even be right 80-90% of the time. But this is just work. It's work that gets you a little bit more information -- but potentially critical information. There's nothing quite like staring at a model and thinking, "I don't see how this business makes sense with less than 10 million users..."

2) It *is* good for comparative purposes. Of course, this is when you trust the people doing the analysis and the process -- but if you run two pieces of analysis, and one projects $10 million in revenue and the other $50 million -- that's a clear difference and something that gut feel will not necessarily tell you. I want to emphasize this. One of my favorite interview questions was to ask a candidate to estimate Wal-Mart's annual revenue in the United States. Few people had an intuitive (and accurate) gut feel for this number. In other words, after they came up with a projection, a lot of folks didn't know if their projection felt high or low. That's a projection for a store that probably all of them have been to and in an industry that's readily understandable (brick and mortar retail). Imagine then trying to project the benefit of a new feature for a mobile ads product. Or the benefit of expanding Zipcar to Des Moines, Iowa. Or the revenue from digging a new oil well in Western Canada. My point here is that all of them require a lot of specialized knowledge to have a good intuitive sense -- and it's rare, even for people who work in that industry, to get to that stage. At the minimum, the way I think about it is, why guess? Why guess when the alternative is just work?
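For what it's worth, the Wal-Mart question is tractable if you decompose it rather than guess at the total. The inputs below are my own rough assumptions (store count, foot traffic, basket size), not figures from any filing -- the point is the structure:

```python
# Back-of-envelope estimate of Wal-Mart's US annual revenue.
# All three inputs are rough assumptions for illustration only.

us_stores = 4_500                    # order-of-magnitude US store count
weekly_customers_per_store = 30_000  # shoppers through one store per week
avg_basket = 50.0                    # dollars spent per visit

annual_rev_per_store = weekly_customers_per_store * avg_basket * 52
estimate = us_stores * annual_rev_per_store
print(f"${estimate / 1e9:.0f}B per year")
```

These assumptions land in the hundreds of billions of dollars -- the right neighborhood -- and, unlike a raw guess, each input can be sanity-checked and argued about separately.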

3) When the analysis is good, it properly frames the conversation. Is it about revenue? Is it about number of customers? Is it about cost savings? When you stack product A vs. product B and product A has higher revenue -- then someone goes, "But wait, there's XYZ benefit to this too." That's great. Then you can have a larger conversation about what you (as a company) truly care about. It's not blind adherence to a stack ranking in terms of revenue. Maybe there are ancillary benefits. Maybe you're keeping a partner happy. Maybe it's an expected value type equation -- here's what we project in $ but there's a 15% chance of a huge payday. Maybe it's competitive -- you need to invest in an area to keep a competitor distracted or investing in an area they don't want to. You no longer have a mishmash of ideas with unknowable value. Here's where you put a stake in the ground and say -- you want this valued? Here's the value.
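The "expected value type equation" is worth making concrete, since it's how a smaller base-case projection can still win the stack ranking. The dollar figures here are invented for illustration:

```python
# Expected-value framing for comparing two projects.
# All dollar figures are invented for illustration.

def expected_value(base_case, upside=0.0, upside_prob=0.0):
    """Base-case projection plus a probability-weighted long-shot payoff."""
    return base_case + upside_prob * upside

# Product A: solid, predictable projection.
product_a = expected_value(base_case=10_000_000)

# Product B: projects less today, but carries a 15% shot at a huge payday.
product_b = expected_value(base_case=6_000_000,
                           upside=50_000_000, upside_prob=0.15)

print(product_a, product_b)  # B edges out A once the upside is priced in
```

Writing it down this way forces the room to argue about the 15% and the payday size -- specific, checkable assumptions -- instead of trading gut feelings.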