In the past decade, the “lean startup” movement has had, arguably, a greater impact on the approach that product designers, engineers, and entrepreneurs take than any other. The fundamental concept is undeniably compelling: craft a version of your product (and company) that requires the least amount of time and effort to validate whether the problem you’re solving is an important one that real customers will pay for or use. This “minimum viable product” will help you learn faster, iterate faster, and survive longer on less money. It’s a powerful way to overcome many of the problems that plague (and often prematurely kill) young companies and new product efforts.
But it also leads people to create a lot of crappy, barely useful products. I learned this the hard way.
A cautionary tale
In late 2014 and early 2015, I worked with big data and data science teams at Moz, my marketing software company, in order to design a minimum viable product (MVP) that would help people identify websites that Google might consider to be spam. Our customers wanted to know this because getting linked to from sites labeled “spam” could be harmful for SEO. We decided to build a “spam score” that would show them how likely it was that a particular website was considered by Google to be spam.
We didn’t have the focus or bandwidth to build an exceptional version of the product. It took five employees three months’ worth of work to launch a minimally viable version of the feature. We knew it was less than perfect, but at launch, we figured, despite these issues, our MVP would still help a lot of people and, like all good MVPs, it would help us learn more about our customers and what they wanted from a spam-identifying product long term. We figured that something to help our customers and community was better than nothing.
Spam Score launched on March 30, 2015, and while we did receive a good bit of positive feedback, we also got a lot of criticism, confusion, and questions. The score’s design was suboptimal. The percentage risk model wasn’t intuitive. These were things we knew would happen in the design and construction phase but pushed to the back burner in favor of a faster release.
Marie Haynes, one of the world’s foremost experts in the field of web spam and Google penalty issues, left a comment in the launch blog post that summed up a lot of the sentiment around the release:
I wanted to like this tool, but I am really concerned that it could do more harm than good. Perhaps I have misunderstood its purpose.
We’d talked to Marie during the development of the metric. We knew her concerns. We knew she was massively influential in the space and that approval and support from her (and others like her) would be a great barometer for our success at solving the problem, but we chose to launch while we were still “embarrassed” by our first version of the product, rather than waiting until we could develop something better. Perfect is the enemy of done, right?
Six months after launch, looking at our product performance metrics, we noted that Spam Score had become mildly popular with a small group of our customers (about 5% of the folks who regularly used Open Site Explorer visited the Spam Score section), but it had no observable impact on free trials, on conversion rate, on retention, or on growth of subscriptions overall. In other words, we’d probably have seen exactly the same performance in our customer base and growth rate if we’d never launched Spam Score.
Great use of (at least) $500,000 in data collection, research, and engineering time, eh? Thank god I’m the founder … otherwise I might have been shown the door.
Do MVPs have to be so minimally viable?
The problem with MVPs is that if you launch to a large customer base or a broad community, you build brand association with that first version. To expect your initial users (who are often the most influential, early-adopter types—the same ones who’ll amplify the message about what you’ve put out to everyone else in your field) to perceive an MVP as an MVP is unrealistic.
In my experience, our customers (and potential customers) don’t see new things and think: “Oh, this must be their initial stab, and while it’s not exactly what I want or need, I can see that it’s a product I should pay attention to and help support, because eventually I can imagine it getting to the place where it really is useful and helpful to me.”
Instead, they (usually) see new things and think: “Is this interesting? Does it do what I need? Is it way better than what I already use? Is it worth the hassle of learning something new and switching away from what I’ve always done?” and if the answer to those questions is a “no,” or even “Well, maybe, but I’m not quite sure,” your product is unlikely to have substantive impact.
Worse, I’ve found that when we launch MVPs, the broad community of marketers and SEOs who follow my company perceive our quality to be shoddy and our products to be inferior. I’ve termed this brand reputation that follows an initially incomplete, minimally viable product’s launch the “MVP hangover.” It seems to follow the product and even the broader brand around for years, long after we’ve iterated and improved to make the product truly exceptional and best-in-class.
My theory about MVPs applies differently to different stages of your organization, based mostly on reach:
For an early-stage company with little risk of brand damage and a relatively small following and low expectations, the MVP model can work wonderfully. You launch something as early as possible, you test your assumptions, you learn from your small but passionate audience, and then you iterate until you’ve got something extraordinary. Along the way, your (tiny) organization is associated with an ever-improving product, and by the time large groups of influencers and potential customers hear about you, you’re in great shape to be perceived as a leader and innovator.
Had Slack, for instance, become publicly associated with its initial MVP group chat product (which wasn’t nearly as good, as feature-rich, or as compelling in user experience as HipChat or Yammer), it’s likely that it never would have achieved the great success it did. But because Slack started small, with an internal-only product that slowly spread until it was truly exceptional and ready to earn broad adoption and reach, the model of iterating internally on a minimum viable product but waiting until that product was exceptional before launching worked wonders.
Conversely, if you already have a big following with high expectations, publicly launching a traditional MVP (one that leans more to the “minimum” side of the acronym than the “viable” side) can be disastrous. If you’ve reached a certain scale (which could vary depending on the reach of your organization versus the size of your field), perception and reputation are huge parts of your current and future success. A not-up-to-par product launch can hurt that reputation in the market and be perceived as a reason to avoid your company/product by potential customers. It can carry an MVP hangover for years, even if you do improve that product. And it can even drag down perception of your historic or currently existing products by association.
Tesla’s a great example of an early-stage company that could not afford to launch an MVP. Prior to producing its first publicly available vehicle (the Tesla Roadster, in 2008), its reputation and reach were already so vast by virtue of Elon Musk’s fame and the media surrounding the formation, growth, and, later, government loans to the company that anything less than extraordinary would have sundered its public perception and perhaps shuttered the organization.
The alternative: An exceptional viable product
My proposal is that we embrace the reality that MVPs are ideal for some circumstances but harmful in others, and that organizations of all sizes should consider their market, their competition, and their reach before deciding what is “viable” to launch. I believe it’s often the right choice to bias to the EVP, the “exceptional viable product,” for your initial, public release.
It is absolutely the right move to first build an MVP. Developing every feature to perfection before you have anything people can test in the real world can be devastating. But depending on your brand’s size and reach, and on the customers and potential customers you’ll influence with a launch, I’d urge you to consider whether a private launch of that MVP, with lots of testing, learning, and iteration to a smaller audience that knows they’re beta testing, could be the best path.
I think that’s my biggest lesson from the many times I’ve launched MVPs over my career. Sometimes, something is better than nothing. Surprisingly often, it’s not.
This article has been adapted from the book “Lost and Founder: A Painfully Honest Field Guide to the Startup World.”