My team had a huge, annoying debate about MVP vs MLP, and it didn't take me long to notice that it doesn't matter. They ended up with MVP, and that's just fine. AGP is just another label that I don't think really helps with any actual decisions. Once you have agreement that you don't really know what's viable, lovable, or appropriate yet anyway, the wording is just bike-shed painting.
The underlying idea for all of these is that you build incrementally, and as early as possible[1], launch to real customers who can give you useful feedback and start imposing market discipline on your product. Only then can you find out what's actually important to them.
[1] defining "possible" is usually the sticking point. This needs to be discussed in detail, not in general terms about whether it's "viable" or "lovable". What actual features do you want feedback on, and what set of operations/uses is anyone willing to pay for?
Once you have agreement that you don't really know what's viable, lovable, or appropriate yet anyway, the wording is just bike-shed painting.
It seems to me like there are meaningful differences between these words: viable, lovable and appropriately gray. Viable is saying "the line is here to the left", lovable is saying "the line is here to the right", and appropriately gray is saying "there are tradeoffs involved in where you put the line".
defining "possible" is usually the sticking point. This needs to be discussed in detail, not in general about whether it's "viable" or "lovable". What actual features do you want feedback on, and what set of operations/uses is anyone willing to pay for?
This sounds similar to what my argument about AGPs is getting at, e.g. that you have to think about which line you want to target. Or am I misunderstanding your point?
It seems to me like there are meaningful differences between these words: viable, lovable and appropriately gray.
It seems that way, but I haven't found it to be the case in actual product discussions. The details and unknowns overwhelm the semantic and general differences.
I'm having a little bit of trouble understanding what you mean by this.
The way I see it (roughly) is that you can have a line somewhere that says "at this point the product becomes viable", and you can have another line that says "at this point the product becomes lovable". One thing you can argue about is where those lines belong. What exactly does it mean for a product to be viable? To be lovable?
A second thing you can argue about is which of those lines you should target (for a hypothesis test release).
A third thing you can argue about is where a current iteration of the product lies. E.g. with features A, B and C, where would you place the product along this spectrum? What if we added feature D? How far to the right does feature D move us? What about feature E?
With that stuff said, perhaps you are saying that people more or less agree on the first and second things, so there isn't that much to discuss there, but there are substantial disagreements on the third thing, and thus most of the conversation is spent on this third thing. Is that what you mean?
I'm guessing the point is that the difficulty of the third thing makes paying attention to the finer distinctions within the first two things less useful.
The correct position on the viability axis will depend on the type of business you want to build. Experimenting on real users has one major cost: reputation. Starting too far to the left on the viability spectrum risks users remembering your first crappy product and not trusting you anymore. This reputational cost is very impactful for businesses that rely on subscriptions or long-term relationships with clients, so those businesses ought to start out already kind-of-viable. But if your business is something like a product that you only sell once, without any long-term client relationship, then you can afford to alienate early users with a possibly crappy first product, and you should test stuff as soon as possible.
The correct position on the viability axis will depend on the type of business you want to build. Experimenting on real users has one major cost: reputation.
That makes sense. I agree that reputation is a big factor to consider.
It's also worth mentioning that in The Lean Startup, the author mentions that you can consider releasing products under a different brand name for this reason.
But if your business is something like a product that you only sell once, without any long-term client relationship, then you can afford to alienate early users with a possibly crappy first product, and you should test stuff as soon as possible.
In retrospect, this is an important thing that I failed to bring up in my OP. Better late than never though. Thanks for prompting me to talk about it.
I'd like to distinguish between two different things. 1) Feedback releases and 2) hypothesis test releases (I just made these terms up). In the former, you are just pushing new code and getting feedback from users about it. In the latter, you are intending it as a hypothesis test, where if the results are good you keep going and if they are bad you pivot. For feedback releases, I think there are few reasons not to be super aggressive about it.
But I think that in talking about MVPs vs AGPs vs MLPs, the crux of the disagreement is regarding that second type of release: hypothesis test releases. MVPs say that you can do this quite early on. MLPs say the bar is higher and you want to make sure the product is lovable before you pivot. My point about AGPs is instead of just defaulting to something like "viable" or "lovable", you should consider the tradeoffs at play.
I've often heard MVPs used to describe feature sets - i.e. what features do we need to include in the first release for this to be a useful product to some people.
The set of all feature sets is discrete, and so it makes much more sense for this to occasionally be black and white.
So an MVP for pet food is:

- List of available pet food
- Option to click on pet food and pay for it online

Things which may not be part of the MVP:

- Create accounts
- Have baskets
- Discounts
- Advertising

Etc.
Yes, but we don't know what that discrete feature set actually looks like. We have some amount of confidence that a given feature set will be an accurate test, and adding more features increases this confidence.
I don't know if an MVP is always about testing the waters.
It can also be about getting started earning revenue and getting customers. And so it can mean "What is the minimum set of features, without which there's nothing even to advertise - this product would have zero advantages over what already exists"
That is sounding to me like it has a similar problem. How do you know when you reach that point of surpassing "zero advantages over what already exists"? Maybe you thought that feature A was unnecessary but it turned out you were wrong. Maybe you thought that the set of features A, B and C was what you needed, when in reality just A and B would have sufficed.
Epistemic status:
Here is how I understand the idea of a minimum viable product (MVP).
Starting out, you have some hypothesis that you want to test. E.g. that people will want to buy pet food online. How would you go about testing this hypothesis?
You could go to town and spend 18 months building out a beautiful website that has pixel perfect designs with an accompanying iOS and Android app, but that is a little bit excessive. To test your hypothesis, doing all of that isn't really necessary.
On the other hand, you could just throw together a Google Doc with products, pictures and prices and tell people to email you if they want to place an order. But I'd argue that this wouldn't be enough. You wouldn't get a good enough test of your hypothesis. People who otherwise would be willing to buy pet food online might not place orders because a Google Doc like this feels sketchy. In other words, the risk of a false negative is too high.
With that context, let me explain what an MVP says. An MVP says to keep "moving to the right" until you have a hypothesis test that is "viable", and then once you hit that point, stop. You don't want to be to the left of the line because that would mean you can't trust the results of your experiment. But you also don't want to be to the right of the line because that would mean acting on assumptions that haven't been validated.
Shades of gray
Something has always rubbed me the wrong way about this. I don't like the idea that there is this line where you declare that something is viable. "To the left of this line things aren't viable, to the right of this line things are viable." You don't hear it said this explicitly, but given how people talk about MVPs, I think that it is oftentimes implied.
Here is how I see it. You never really get to 100% viability. There is always some chance of false negatives. The more time you spend improving the product, the smaller that chance of false negatives gets. So you have to weigh the pros and the cons of investing more time into the product. The pro of investing more time is that you lower the chance of false negatives, and the con is that you expend resources (time, money, etc.).
In other words, viability is not black and white. It has shades of gray. So then, as an alternative to thinking about MVPs, maybe it'd make sense instead to think about AGPs: Appropriately Gray Products.
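To make that tradeoff concrete, here is a toy sketch in Python. To be clear, none of it comes from anywhere real: the payoff, the prior, the cost per week, and the shape of the false-negative curve are all numbers I made up purely for illustration. The point is just that the expected value of running your hypothesis test peaks somewhere in the gray middle, rather than at some sharp line labeled "viable".

```python
# Toy model of the "appropriately gray" tradeoff. All numbers and curve
# shapes below are made-up assumptions for illustration, not real data.

VALUE_IF_HYPOTHESIS_TRUE = 1_000_000  # assumed payoff if the idea works and your test detects it
PRIOR_HYPOTHESIS_TRUE = 0.3           # assumed prior probability that the idea works
COST_PER_WEEK = 20_000                # assumed cost of each extra week of polish

def false_negative_rate(weeks_of_polish: int) -> float:
    """Chance the test wrongly looks bad: shrinks with polish but never hits zero."""
    return 0.6 * (0.7 ** weeks_of_polish)

def expected_value(weeks_of_polish: int) -> float:
    """Expected payoff of launching the hypothesis test after a given amount of polish."""
    p_detected_success = PRIOR_HYPOTHESIS_TRUE * (1 - false_negative_rate(weeks_of_polish))
    return p_detected_success * VALUE_IF_HYPOTHESIS_TRUE - weeks_of_polish * COST_PER_WEEK

if __name__ == "__main__":
    for weeks in range(13):
        print(f"{weeks:2d} weeks of polish: "
              f"false-negative rate {false_negative_rate(weeks):4.0%}, "
              f"expected value ${expected_value(weeks):,.0f}")
    best = max(range(13), key=expected_value)
    print(f"Best tradeoff in this toy model: about {best} weeks of polish")
```

With these particular made-up numbers the sweet spot lands at around three weeks of polish; change the assumptions and the sweet spot moves. That's the whole point of "appropriately gray": where you stop depends on the tradeoffs, not on crossing a line.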
All models are wrong, some are useful
You've heard that expression before right? That all models are wrong, and some are useful? It's something that has always stuck with me.
The reason I bring it up is because I anticipate a counterargument: sure, treating viability as a sharp line is technically wrong, but the MVP is a wrong-but-useful model, so why not keep using it?
It's a good counterargument. Here are my thoughts.
Lenses
You know what else is useful? Having lots of different models of how the world works. This really hit me the other day as I was reading Philosophy of Therapy.
I'm not necessarily making the point (yet) that AGPs are a better way of looking at things than MVPs. Right now I'm just saying that they are a different lens, and that having different lenses in your toolkit is helpful. Sure, maybe one of your tools is your favorite — it's the first one you reach for and you use it more often than not — but sometimes it isn't the right tool for the job and it helps to have alternatives.
For this point about different lenses, I feel like the bar is pretty high if you want to say "No, screw all of your other lenses, you should just use this one lens."
MVP is too far to the left
I haven't spent too much time investigating alternatives to MVPs, but I know that they are out there. At one company I worked at, we used what's called a Minimum Lovable Product. MLPs are a thing. If you google it, you'll see lots of blog posts written about MLPs. The way I think about it, MLPs are like MVPs, but the line is further to the right. (Yes, you can quibble that the axis of viability is different from the axis of lovability.)
My impression is that if you look at the alternatives to MVPs, they're mostly moving the line to the right, not to the left. And I think that this trend is important. I think it points towards the line for MVPs being too far to the left.
Another thing that makes me think this, other than instincts/intuition, is reversed stupidity. I suspect that people have a decently strong bias toward saying, "Ugh, this way of doing things is so dumb. I hate it so much. We should push in the other direction," and then proceeding to push too hard in the other direction.
Let me be more concrete. Before MVPs, a "hail mary" or waterfall-y approach was popular, where you would spend a lot of time and money building an awesome and very feature-full product. I get the impression that people who pushed for MVPs saw this old way of doing things as stupid and reacted against it. "No, no, no! Why would you spend all of this time and money building this elaborate product without releasing it and testing it against real users!"
I agree that the old way was stupid (i.e. way too far to the right), so I understand where the MVP people were coming from. I'm just saying that it is human nature to "reverse stupidity" and push too hard in the other direction. Given this human nature, I actually think the burden of proof falls more on showing that they didn't push too hard in the other direction in this specific situation, and I don't see a reason to believe that they didn't.