While on the subject of Stanley Milgram, there is another very interesting experimental technique he pioneered that I think is directly relevant to product managers and marketers. The issue is this: how can you determine how people truly feel about something? Asking them (e.g. via surveys or focus groups) can be problematic, because all sorts of biases creep in. For instance, how you frame the question has a strong impact on the results (there is a nice little example of framing bias by Paul Kedrosky here).
While market analysts can adjust for these biases in some cases, things become particularly troublesome when the questions are more personal and/or sensitive. People tend to respond in accordance with a social desirability bias, which basically means they will tell you what they believe to be the more socially acceptable response, rather than what they really think. How do you estimate the amount of popular support for white supremacy or neo-Nazi groups? Almost no one who holds such a view will own up to it. Election polls in many countries severely understate the support for extremist political parties for the same reason. A famous example is the 2002 French presidential election, in which the far-right candidate Jean-Marie Le Pen came second in the first round of voting - a shocking result. (In the United States, a rough analogy would be David Duke winning the Republican primary.)
Milgram devised a particularly clever experimental technique that attempted to deal with these types of issues. It was called the "lost-letter" technique, and it worked as follows. Milgram would address letters to a fictitious group whose affiliation would be clear from its name (e.g. "Society For The Advancement of White Supremacy"). The letters would be stamped and addressed to a post office box that Milgram had set up ahead of time. Hundreds of these letters would then be placed at selected locations, looking as though they had somehow been lost and just needed to be dropped in a mailbox to be sent on their way. He then monitored the post office box to see how many letters came in.
Milgram guessed that a degree of sympathy for the organization named on the envelope would make it more likely that someone would actually send the letter on its way rather than ignore it. And the anonymous and indirect nature of the transaction would make it a lot easier to act on your true feelings about an issue, rather than on a socially acceptable one. To factor out the "noise" from non-responses and other random events, Milgram would do the same thing with another set of letters, this time addressed to a completely neutral-sounding organization (e.g. "Industrial Corp, Inc."). This served as the control data against which the responses to the sensitive letters could be measured: the greater the response relative to the control, the greater the support for the viewpoint in question.
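To make the comparison concrete, here is a minimal sketch of how the return rates might be compared against the control batch. The counts are hypothetical, invented purely for illustration (they are not Milgram's data), and the simple two-proportion z-test is just one reasonable way to check whether the difference is more than noise.

```python
from math import sqrt

def return_rate_comparison(returned_test, dropped_test,
                           returned_control, dropped_control):
    """Compare the return rate of the 'sensitive' letters against the
    neutral control batch using a two-proportion z-test."""
    p_test = returned_test / dropped_test
    p_control = returned_control / dropped_control
    # Pooled proportion under the null hypothesis that the rates are equal
    pooled = (returned_test + returned_control) / (dropped_test + dropped_control)
    se = sqrt(pooled * (1 - pooled) * (1 / dropped_test + 1 / dropped_control))
    z = (p_test - p_control) / se
    return p_test, p_control, z

# Hypothetical counts: 120 of 400 "sensitive" letters returned vs. 280 of 400 controls
p_test, p_control, z = return_rate_comparison(120, 400, 280, 400)
print(f"test: {p_test:.0%}, control: {p_control:.0%}, z = {z:.2f}")
```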
The responses to the various sets of letters he tested with were not particularly revelatory. It's the technique that was his real accomplishment, not the results of the experiment.
Sunday, April 22, 2007
The Deployment Post Mortem
Your product has just shipped. Customers are buying it in droves and they love it. It saves them time and money, makes them better at their work, and it even slices bread!
That's wonderful, and these moments are what we Product Managers live for. However, things don't always go according to plan. What happens when things don't go so well? Inevitably, a customer deployment will go poorly. The customer won't be happy with your product, and may even uninstall it. What can you do in a situation like this? One approach is what I will call the Leo Tolstoy approach. The great writer said: "All happy families resemble one another; each unhappy family is unhappy in its own way." In other words, treat the failure as an outlier, an anomaly; rely on your customer support to patch things up as best they can, and move on.
A better approach is to look at the failure as an early-warning mechanism. This is your opportunity to understand whether the failure is indeed an anomaly (as some will inevitably turn out to be) or whether it points to a systemic problem that you need to take steps to address. As Product Managers, we are attuned to issues in the field and actively search for patterns or trends that point to a broader problem. Depending on the type of problem, PMs should have a checklist handy to make sure they get the information they need. The two most common types of issues I have come across are:
Failure in the field. The software caused an outage in the customer's IT environment. A PM checklist should include:
- What was the problem exactly? Get a detailed explanation.
- Was it due to an unexpected software/hardware configuration?
- Was the configuration explicitly not supported? If so, why was it installed? Where did our internal process fail?
- Was the configuration explicitly supported? If so, go back to problem determination. What must be done to include this use-case in future QA plans?
- Was the configuration neither explicitly supported nor unsupported? If so, is this a configuration we did not expect to see and did not plan for? Is it an unusual combination that we're unlikely to see again? If it is likely we will see it again, we should add it to the QA plans. If it turns out to be a fairly common configuration that was not part of the QA plan, we need to go back and revisit the entire QA process: how closely does the QA test matrix map to what is encountered in the field? Do we need to do more research to ensure adequate QA coverage?
- In the problem determination phase, as we uncover the root cause, can we generalize the effects? That is, could the same problem manifest itself in other ways? How do we prevent those other problems as well?
- Once we understand the root cause, can we generalize a "graceful degradation" principle from it? Can we build in checks so that when faced with an analogous unknown situation, the software is able to adjust or "degrade" its functionality rather than cause an outage? (A rough sketch of the idea follows this list.)
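By way of illustration, here is a minimal sketch of that graceful-degradation check. Everything in it is hypothetical: the host object, the supports() capability test, and the counter-reading methods are invented for the example and are not from any particular product. The point is simply that an unrecognized configuration triggers a logged fallback to reduced functionality instead of an outage.

```python
import logging

def collect_metrics(host):
    """Gather detailed metrics where the platform supports it; otherwise fall
    back to a reduced feature set instead of failing the whole deployment."""
    try:
        # Capability check up front: only use the richer data source if the
        # host explicitly advertises support for it.
        if not host.supports("extended_counters"):
            raise RuntimeError("extended counters unavailable")
        return host.read_extended_counters()
    except Exception as exc:
        # Unknown or unsupported configuration: log it, degrade gracefully,
        # and keep the rest of the product running.
        logging.warning("Falling back to basic metrics on %s: %s", host, exc)
        return host.read_basic_counters()
```

The same pattern generalizes: whenever problem determination uncovers an assumption the software was silently making, that assumption becomes an explicit check with a defined fallback.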
Mismatched expectations. The product works as specified, but it doesn't do what the customer thought it would do. They don't see value in the product. A PM checklist should include:
- Did we oversell or overpromise product functionality? If we did, this could point to at least two different problems:
- If we oversold, was it because the sales force was not trained adequately to know exactly what the product does and does not do? We will then need to work out a more comprehensive training plan for the sales team.
- If we oversold, was it because the sales team felt cornered into "improvising" in the field? This could be an early signal that the product is not meeting the needs of the target market very well, and the sales team feels pressured into setting unrealistic expectations.
- Figuring out whether it is a training issue or a product mismatch issue is extremely important - one can be fixed relatively easily, while the other may require major realignment.
- Did we demonstrate value? Even if the product is implemented and works flawlessly, did the customer realize the benefits and/or return on investment they were looking for?
- If the customer did realize benefits, but is not aware of them or unable to articulate them, then we need to showcase these benefits better and make it easy for the customer to see them. Perhaps this calls for additional reports, a dashboard, or a set of running statistics that enables customers to quantify the value they have received.
- If they did not realize their expected benefits, why not? If the software was meant to replace some manual activity, is the manual activity still being carried out? Is there duplication of effort because some component was not correctly or adequately integrated?
Product Bytes Newsletter
I've been a fan of Rich Mironov's newsletter, Product Bytes, for a couple of years now and wanted to provide a link to it for anyone who may be interested. It's a refreshingly BS-free take on the art and science of Product Management, and I always learn something new in every issue. It's a bit skewed towards enterprise software, reflecting Rich's long experience in that area, but I think the basic ideas apply quite broadly.