Classic mistakes in product management:
1) assuming the user understands what they want/need - this is rarely the case. Figuring out what they really need is your job.
2) assuming that what you are building is something the user wants - until people use it, you have no proof for this. Lots of startups fall into the trap of building stuff that nobody wants or needs.
3) assuming what users ask for is actually what they need - always figure out why they are asking for this, whether they are actually going to use it if you build it (I've had cases where we built stuff that was never used), and what it is worth to them.
4) assuming that what your sales people say the customer wants is actually what they want or need. This one is tricky. I've had sales people go "unless you build X, I can't close the deal" and then you build X and it doesn't make a difference. Reason: the sales person's analysis was wrong.
Especially with new products, figuring out if it is something users want is tricky. Do users actually like the new thing? They won't be asking for it because it is a new thing. You have to pitch and explain the thing to them and even then they still might not get it. Only when you show them the thing and they like it will you get some confirmation that this might be something they want/need.
The classic example is selling cars when they were first invented: all customers ever asked for was faster horses.
The problem is that no one actually dug in, which is why I say 80% of product people are a net negative.
If someone asks for something, dig in.
Think about it like this.
I go to a mechanic and tell them to replace my alternator. They do. I'm unhappy with it and tell them they did a bad job.
vs
I go to a mechanic and tell them to replace my alternator. They ask me WHY I want the alternator replaced. I tell them my vehicle isn't running and I need it to transport me from A to B. They then start asking me questions. Turns out it was the solenoid; they replace it, my vehicle successfully transports me from A to B, and I think they did a great job.
This is why senior developers often make better product people than product people. Too often you'll see PMs who will tell you they have 1-2, sometimes up to 5 years of experience as a developer. But quite often they don't actually understand much, because you can't match a veteran developer with someone who coded for a year or two and then got a certificate for a product job.
People need to dig in. It doesn't matter what your assumption is; if you're not digging in and not having conversations, you're making suboptimal decisions.
Fully agreed.
Once worked in a place that hired a design researcher and they were amazing; they took the time to dig into what you're talking about here: the actual issues, not the words that came out of people's mouths. Had a lot of good insights, which were ultimately wasted on that particular business, but that's for different reasons.
The most successful company I ever worked at figured out a way to run low-cost user experiments with high-quality feedback and a quick turnaround.
About 4/10 failed, 5/10 did OK, and then with 1/10 they would absolutely hit it out of the park - usually on something that looked about as promising as the others. They did this frequently enough that they clobbered the competition, who were mostly just trying to do normal product management stuff like user surveys, etc.
The most interesting thing was that most people, if you'd asked them beforehand, would have gone "I don't think you can turn this into an experiment" and then immediately adopted a different strategy. Somehow they always figured out a creative way of injecting a low-cost experiment where most people would have imagined it wasn't feasible to run any experiment at all. It required quite a lot of pressure/leadership from above.
Concrete examples sound interesting here in so far as you’re able to share
One example was when they decided to test out the idea of sending particular kinds of price change notification emails to users. They created one checkbox on a form our users could click and set up a trigger that fired a notification to somebody in the Philippines, who would construct the emails manually from the database. They then iterated on those emails, creating them manually the whole time.
Once the email was iterated upon and the idea was proven (i.e. that users actually wanted it in the form that existed), it was handed back to me to automate properly, which in that case took about 7-10 days of dev work.
Or not, if it turned out the customers simply weren't interested. There were plenty of experiments like that where I created a checkbox or something and then we quietly removed it 2 weeks later.
It was a refreshing change from past jobs where I'd spent weeks working on a feature asked for by users that we were so sure would be a game changer only to see 1 or 2 people actually use it.
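If it helps to picture the mechanics, here's a minimal sketch of that kind of "checkbox plus manual fulfillment" setup. It's purely illustrative; the names (OptInStore, notify_ops) are made up and not from the actual system. The point is that ticking the box only records interest and pings a human, and no automated email pipeline exists until the idea has proven itself.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class OptInStore:
    """Tracks which users ticked the (hypothetical) price-change-notification box."""
    opted_in: dict = field(default_factory=dict)

    def set_opt_in(self, user_id: str, enabled: bool) -> None:
        self.opted_in[user_id] = enabled


def notify_ops(user_id: str, enabled: bool) -> None:
    """Stand-in for whatever internal alert (email, chat ping, ticket) tells a
    human operator to start or stop hand-crafting emails for this user."""
    action = "opted in to" if enabled else "opted out of"
    print(f"[{datetime.now(timezone.utc).isoformat()}] "
          f"{user_id} {action} price-change emails: handle manually.")


def handle_checkbox_change(store: OptInStore, user_id: str, enabled: bool) -> None:
    # The only automation is recording the preference and alerting a person;
    # the emails themselves stay manual until the idea is proven.
    store.set_opt_in(user_id, enabled)
    notify_ops(user_id, enabled)


if __name__ == "__main__":
    store = OptInStore()
    handle_checkbox_change(store, "user-42", True)   # ops now emails this user by hand
    handle_checkbox_change(store, "user-42", False)  # user changed their mind, or the test ended
```

If nobody ticks the box, you've learned roughly what weeks of automation work would have taught you, for the price of one form field and a saved query.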
It's the same advice as building an MVP, do everything manually first, then automate once you've proven the market, basically Paul Graham's advice of doing things that don't scale. The Doordash founders hand-delivered food first before building an app.
Obviously, that does not apply if the market already exists, which it usually does. In that case you have to be different and better enough than the alternatives.
Why doesn't it apply? Even if there were a lot of food delivery companies, it would still be good to gain traction through doing it manually instead of (or before) spending months building out a software solution. Even better, because one would personally deliver the food, they can talk to customers and ask what problems they have currently with other food delivery competitors, thereby learning what differentiating advantage would work well when creating the software solution.
In other words, doing things that don't scale is not just useful for validating the market, it's arguably even more useful for validating one's specific implementation of a product, and also derisking the business model as a whole rather than having high capex initially.
If you're building in an established category, you should bring product intuition, honed by deep knowledge of the existing products, and confirmation of their weaknesses from their users. You are not going to win by rapidly cornering the market ("blitzscaling") because it's already cornered. You want to disrupt it with something better and different. I think the focus on "things that don't scale" takes away from this more important requirement.
Of course, you should try to validate assumptions as quickly and cheaply as possible. If "doing things that don't scale" means temporarily not automating things involved in validating assumptions, then fine. But don't manually do things that are not part of that calculus.
Thanks! Seems very pragmatic. It seems the secret ingredient is having a responsive user base big enough for statistical significance, though. Something we've struggled with in a recent startup.
Would love to hear the examples and how they were turned into experiments.
Absolutely.
95% of the suggestions product creators receive are solutions. A product creator's job is to work out what problem their customer is trying to solve. It goes to the same motivation as root cause analysis when analysing an outage: the actual problem is nestled deep under layers of semantic shielding.
Whether your product is trying to solve that problem is another question, but at least you can understand how others perceive and try to use your product, which goes a long way to getting a product to be more broadly adopted.
"Semantic shielding" is a term I have not heard before. But it perfectly summarizes most of the interactions with customers and stakeholders.
I honestly made it up on the spot, so don't take it as proper nomenclature. However, to describe it another way: the unconscious mind synthesises its emotional choice, which the conscious mind presents as rational fact. If you want to know how the mind arrived at that conclusion, you've got to go on a fact-finding mission, i.e., asking why.
However, asking that question without triggering a negative response is an art form within itself. Rarely can you be so direct, which is why it's often much easier to watch how a person interacts with a product rather than asking them why they did what they did.
the more common name for it is post-hoc rationalization.
Causal shielding feels like a better term in this case
I love this term. It describes perfectly something I encountered during my days as the sole technical support for a small ISP. I got really good at routing around it in my customers' brains.
I would ask "have you installed anything?", they would say no.
I would ask "do you have any new programs or stuff?", they would say yes.
Because to them, installing something was like adding something under the hood of a car. Since putting in a new program didn't involve taking the cover off the computer, there wasn't anything "installed".
There was a mismatch in our definitions of terms. I had to figure out how to route around it. I got really sensitive to things like that, and to routing around it.
Semantic mismatch might be another term for it in my case.
An XY problem[0] about A-B!
[0] https://en.m.wikipedia.org/wiki/XY_problem
Thank you for this link! I finally understand why one of my former coworkers' answers always frustrated me as much as they did.
I would always ask my Slack questions in XY format, but lead with the actual question, followed by a full paragraph explaining exactly why I believed X to be the solution for Y, and what other letters I had tried before concluding that X was the next likeliest solution for Y.
Something in her brain was programmed to see an XY question and immediately assume the asker was wrong about X. Not once did she read the following paragraph. I would have to reiterate it sentence by sentence as she asked as many "what abouts" to try to challenge my X as I had sentences in the paragraph. It drove me BONKERS.
But I never understood how there was such a disconnect between my frustration and how much customers actually apparently liked dealing with her. Her default approach to XY questions was probably very effective with people who actually did not know the product in-depth, but was profoundly aggravating to those of us whose job it was to be familiar with all the other letters, who really, truly only wanted the answer about X and did not need someone to rethink our entire process.
I have so little patience for that kind of person. In Slack (well, Mattermost, but it's essentially the same thing) you can copy the direct link to a message and it'll re-embed it if you link to it in a subsequent message. I'll do that if someone is asking questions I just answered, because really, what's the point of a conversation if you're ignoring my half of it?
I started so many replies with, "As I mentioned," that I needed a keyboard shortcut for it.
The challenge here can often be that even though the client doesn't really need it, they are convinced they do. Even after digging deeper, you understand there are other issues, for instance marketing or sales, that might not even be related to your software.
But if they are a key client, you might end up building it anyway, understanding it won't solve their core issues but doing so in order to keep them happy.
This is one reason why it's better to have lots of small clients than a few big clients, because over time, the few big ones will increasingly dictate your product strategy in ways you will not want to go. Jason Fried from Basecamp has a good article about this [0].
[0] https://www.inc.com/magazine/201206/jason-fried/huge-account... (Archive link: https://archive.is/IHmwL)
That happens; then you need to figure out how to handle them. Sometimes when you show them what they really need, 'the light goes on' and they don't care about what they asked for anymore (or worse, someone else shows it to them and they don't buy from you at all despite your customer focus). Other times they remain convinced they need that thing, and so you need to give it to them lest they buy from someone else.
This is a hard problem to figure out. Good luck.
It's a good old XY problem. Funnily enough, I've met customers who don't want to be bothered with being asked why; they just want someone who does what they're told. Some junior engineers are the same: they just want the answer to what was asked instead of "the real problem".
The XY problem is poorly-framed.
First of all, it's often a false dichotomy: you can answer the X directly and dig deeper to find Y.
Second, it places the questioner in a "normal" bucket: surely they must have the same boring mundane needs as everyone else, and we just need to figure out which one it is (Y). But some people really are working on something interesting, and need to understand X to do so.
Thirdly, even if Y really is better for them, they still might benefit from an answer to X to avoid asking future variations of X.
It's hard for me to say this without wondering if I am being rude, but did you mean to say "dig down"? At least to me, "to dig in" means to prepare for a long-drawn confrontation (like an argument or a battle). Maybe I am wrong, but my brain just wouldn't let me go on to the next comment without pointing it out.
I can relate to the senior developers in PM roles. I only worked with one such PM, but he was amazing.
Most PMs come from a non-tech background; they're more project-management types, and it shows.
I look at the Segway as a counterexample. Maybe people really did just want a faster way to walk through a city. No offense to Dean Kamen; he has a very impressive set of accomplishments.
I fully support performing user studies to better understand the problem domain and the feature space. I think your bullet points illustrate some of the challenges in finding the right balance. Unfortunately, I’ve run into way more Segway builders than I have car inventors. They build something based on founder intuition or just bad user studies and then glibly dismiss customer requests as calls for faster horses. I lack the time, energy, and desire to adapt to whatever custom workflow is added because I supposedly don’t know my domain well enough to know what I want.
There’s probably a B2C and B2B split in there, but I’ve never seen that distinction made in application of some of this advice. I know you aren’t advocating for the outright dismissal of user feedback, but I’ve seen that interpretation time and time again. I’d love for us to get a new metaphor or apocryphal story.
People obviously wanted a Segway, they just didn't want it at that price point.
People even obviously wanted a Segway in Europe, where the cities have impressive support for bikes.
If people hadn't wanted a Segway (but way cheaper), we wouldn't have the streets littered with cheap electric scooters everywhere you go.
I don’t see how it’s obvious people wanted a Segway. Cost is one problem. They’re also large, making them difficult to store at either end of a destination. They’re a nuisance and safety risk for pedestrians. Riders kept toppling them. They’re an even bigger problem than scooters when the battery dies mid-journey. I’ll grant some people were really enthusiastic about them, but I don’t see that translating into mass market appeal.
Even if I’m dead wrong and there’s a throng of people just waiting for a Segway fire sale, it doesn’t invalidate my point. If your product solves the problem by ignoring the economics of your target customer base that’s just as problematic as building the wrong product.
Electric scooters have seen a lot more traction. Both teams can’t use the supposed Henry Ford quote to say the customer doesn’t know what they want. Or, rather they can, but that doesn’t mean they have any more of a clue than the customer does.
My point is maybe we should stop being so antagonistic to the customer. You’ll find no shortage of instances where the customer is wrong. But there’s plenty of others where the customer is right and the product team is wrong. I’m tired of dealing with products that think they know better than me because they think they’re inventing cars.
Style is a big factor. Segway riders look awkward. Scooters—with the feet placed in front of each other—streamline the body and look more natural.
How in god's name did we get Segways 20 years before kids could grab a simple electric scooter?
Somebody invested a lot of time and money with an incredibly single-minded focus.
I don't see anyone being antagonistic. The attitude you are thinking of as antagonism is, to me, more like: we're treating them like all other humans, namely lacking good articulation skills and rarely analyzing deeply what it is they truly want.
And it's not about customers being "right" or "wrong". It's about making a sale. And many customers will not give you several chances.
Hence, the strategy of "do your absolute very best to figure out what they really need, even if that means not immediately accepting what they say they want as the truth" is optimizing for the very limited number of tries you get to get it right.
It's quite logical IMO and there's nothing antagonistic or demeaning in this process.
> I don’t see how it’s obvious people wanted a Segway.
I assume what bryanrasmussen means is: Any city-dweller will see electric scooters, electric unicycles, 'hoverboards' and e-bikes from time to time.
So the makers of the Segway were correct in identifying a demand for an urban transit option other than walking, bicycles, cars, mobility scooters and public transit. They were correct in making it electric, about the speed of a fast leisure cyclist, with a range a little above 10 miles, and operated from a standing position.
However, they were incorrect about the price and size. Segways were 10x too expensive (even ignoring inflation) and at 100lbs+ were too large and heavy to carry inside or on public transit.
It wasn't just the price point, it was the bulk/footprint of the thing being fairly non-pedestrian friendly and difficult to transport/store. Scooters are a massive improvement in that regard.
it's pretty common that the first iteration of a piece of hardware is big.
Of course, but the problem is the lack of adoption didn't really push it to become smaller or substantially cheaper and more accessible. A modern Segway has a similar footprint.
Maybe impressive by US standards but don’t underestimate us, we managed to transform perfectly walkable / cyclable cities and dense rail networks into car centric cities and highways everywhere in less than a century.
In fact I'd even say that people didn't need cars, but that cars created so many opportunities for extending human cities that once we got there, we made ourselves dependent on cars.
And in retrospect, it was hardly a good direction to take for our future.
So, well, it seems plausible that what people really needed was just better bikes and trains but that they all bought into the car "freedom" marketing.
Ok, let's go back and change things as a thought experiment. How could this have played out differently (in America).
Are you suggesting that America could be as large and diverse as it is now just using bikes and trains?
Are you also suggesting most of the 230 million US drivers would prefer this as a better solution?
Sometimes it looks like nobody even knows streetcars existed anymore.
I completely agree with you. Sadly though, we inhabit a reality where the first semi-solution to a problem wins. That something is pervasive is IMO 95% of the time about whether it was first to market -- not if it was solving the problem perfectly.
Cars and many internet protocols (and software) are a perfect demonstration of this tendency.
It's not just marketing though. Cars do provide a level of freedom not given by other forms of transport. I say this as someone who started to drive relatively late in life, it completely changes what options are open to you on the weekend for example.
"assuming that what your sales people say the customer wants is actually what they want or need. This one is tricky. I've had sales people go "unless you build X, I can't close the deal" and then you build X and it doesn't make a difference. Reason: the sales person's analysis was wrong."
This is SUPER common, and very hard for organizations to counter. As salespeople usually interact the most with users, PMs tend to just do what they say.
It's particularly common with big whale enterprise clients, where the salesperson is trying to close a very long sales process or the client has you by the short hairs in the contract. I remember one case where we had to implement a very complex and time-consuming custom SAML integration to a platform where we just had one client asking for it, because that client was too big to ignore. Once we integrated SAML, the number of end users actually using the SAML login option was in the single digits. It was just a checklist item for some purchasing officer flunky.
For SAML in particular I can definitely relate here. I will say though, that for SaaS products that you want large companies to use, you owe it to yourself to have an auth layer that is immensely flexible and pluggable. Auth is a huge mess and large companies nearly all have single-sign-on solutions, sometimes strange bespoke ones, and so you gotta expect to potentially build a zillion different auth integrations if you want to sell to large companies.
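To make "pluggable" concrete, here's a rough sketch of the kind of narrow provider interface I mean. The class and method names are hypothetical, not any particular library's API; the idea is just that each SSO scheme (SAML, OIDC, something bespoke) implements the same two methods and gets registered per tenant, so the rest of the app never cares which one is in play.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class AuthenticatedUser:
    external_id: str   # identifier asserted by the identity provider
    email: str
    display_name: Optional[str] = None


class AuthProvider(ABC):
    """Each login integration implements this interface and nothing else."""

    @abstractmethod
    def login_url(self, return_to: str) -> str:
        """Where to send the browser to start the login flow."""

    @abstractmethod
    def handle_callback(self, params: dict) -> AuthenticatedUser:
        """Validate the provider's response and map it to our user model."""


class PasswordProvider(AuthProvider):
    """Default username/password login; no external IdP involved."""

    def login_url(self, return_to: str) -> str:
        return f"/login?next={return_to}"

    def handle_callback(self, params: dict) -> AuthenticatedUser:
        # Real code would verify a password hash here.
        return AuthenticatedUser(external_id=params["username"],
                                 email=params["username"])


class SamlProvider(AuthProvider):
    """One customer's SAML setup; more can be registered the same way."""

    def __init__(self, idp_sso_url: str):
        self.idp_sso_url = idp_sso_url

    def login_url(self, return_to: str) -> str:
        return f"{self.idp_sso_url}?RelayState={return_to}"

    def handle_callback(self, params: dict) -> AuthenticatedUser:
        # Real code would parse and verify a signed SAML assertion here.
        return AuthenticatedUser(external_id=params["name_id"],
                                 email=params["email"])


# Providers are looked up per tenant, so each big customer can bring their own SSO.
PROVIDERS: Dict[str, AuthProvider] = {
    "default": PasswordProvider(),
    "bigcorp": SamlProvider("https://sso.example.com/saml"),
}
```

The win is that landing the next whale with a weird IdP becomes "write one more provider" rather than another bespoke project threaded through the whole codebase.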
This is exactly why we built WorkOS. Check it out if you’re looking for SAML/SCIM for your app. (I’m the founder.)
https://workos.com/single-sign-on
Sure, but if you are planning to go from SME to enterprise you need to think strategically for the long term. You'll probably need to have more developers to deal with the complexities and hand-holding you've touched on (auth is just one of the issues and not even the most complex) and probably non-technical roles for all the compliance and so on.
The danger of landing just one or two "whales" is that you bend your product backwards for these clients, but without thinking through the TCO it can cost you more in the long run and put your business in more jeopardy than when you rely on a few hundred SMEs.
The fact that no one used it doesn't mean that building it was a mistake. Maybe the purchase officer got it wrong, but the salesperson who requested it be built got it right, since the deal did in fact depend on it.
Perhaps. But then it's important to look at the total cost of ownership: was it worth building a complex feature for one client that adds to the technical debt of a code base and tied up developer resources that could have been better employed elsewhere? Could we have pushed back on this feature or was it really a deal-breaker?
In other words, perhaps landing this one client was important to that salesperson, and perhaps in the short term to the company, but in the long term it may have been better to have lost that one deal. Very often these decisions are made with little calculus about the TCO.
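To make that calculus concrete, a back-of-envelope version might look like this (all numbers are invented for illustration, not from any real deal): compare the deal's revenue against the build cost plus ongoing maintenance plus the roadmap work the feature displaces.

```python
# Back-of-envelope TCO check for a custom feature built to win one deal.
# Every figure below is a made-up placeholder.

deal_annual_revenue = 120_000           # what the whale pays per year
build_cost = 2 * 20 * 8 * 150           # 2 devs * 20 days * 8h * $150/h = 48,000
annual_maintenance = 0.20 * build_cost  # rough rule of thumb: ~20%/yr upkeep
opportunity_cost = 30_000               # roadmap work displaced (rarely counted)

first_year_cost = build_cost + annual_maintenance + opportunity_cost
print(f"Year-one cost of the custom feature: ${first_year_cost:,.0f}")
print(f"Margin left from the deal in year one: ${deal_annual_revenue - first_year_cost:,.0f}")
```

Even with friendly assumptions the margin gets thin once maintenance and displaced roadmap work are counted, and that is exactly the part that usually goes unmeasured.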
I saw a head of sales literally kill a business by endlessly doing this, when in reality he was deflecting from his own lack of ability. There was always a clearly articulated reason why it was someone else's issue that he wasn't closing deals, and by the time that issue was fixed as a company emergency he had come up with two or three new ones.
This is a very common story. Unfortunately the incentives and general historical inertia around sales as a role don't always gel with the long term picture.
It's a very human story, but it's one I've had to try to put preventative structures around for my whole career.
The default, "your job is just to sell and your reward structures reflect that" creates an environment where the easiest path is to over-promise. Hence when that sales role sits too high in the org structure you naturally end up in the endless sell what you don't have -> scramble, sell -> scramble cycle.
It seems very obvious that the best salesman would sell the lowest number of features for the highest price, but the incentives are typically based on the top line figure instead which is nice for cash flow but not for profitability.
So far I've managed to solve it somewhat by building internal barriers, but they take constant maintenance.
I'm sure the better solution is to lean the incentives towards profitability but the implications are complex.
A land and expand strategy could maybe help mitigate it, what do you think? Keep the focus on small deals constantly closing, and then follow up to upsell them. Separate spiffs for each part of the sales motion.
On the SaaS side, yeah I think that's a good move.
In this case there's a large hardware component of the sale which incentivises the software over-promising. There's also a lot of competition (think a tender process).
It's really about shifting the small decisions, removing that marginal "yes, we can do that" which didn't need to be there at all for the sale to close.
You could have engineering estimate the cost of the new feature to deduct from the profit?
On the other hand, great sales people have a knack for understanding what customers really want, as well as non-technical obstacles to selling like procurement policies or management hierarchy. You ignore them at your peril.
Most people didn't want faster horses, they didn't even have slow horses.
What they wanted was time off and something to do with it.
Most potential customers then.
I don't understand.
Ford introduced the 5-day work week (reduced from 6) and the "Five-Dollar Day", and reduced work days from 9 hours to 8.
No-one talked about horses.
The notion of a work week presumes the existence of work, which is enabled by a product for Ford to sell: cars. How they manage the company, and what benefits they confer to employees is a separate question.
I still don't understand what point you are trying to make, or how it relates to product development.
I'm saying you're mixing up the subject. We're talking about what people want in the context of product development, you are not. The work week has nothing to do with product.
Instead of faster horses, Ford's mass production process was the product. All those workplace changes were to expand the pie.
I sometimes sum this up as: Don't listen to your customer. Watch your customer.
Obviously this needs to be taken with a grain of salt, but seeing how users behave is often more insightful than asking them what they want. You just need to set up the environment for watching in a way that lets you learn what you want to learn.
Great point. We have a customer who uses our tool in a sensitive environment - I'm not even allowed to watch them use it. This is mind bogglingly frustrating as I can only go by what they say, whereas half an hour actually seeing what they do with it would progress the tool and our relationship in leaps and bounds.
Can't you set up a test environment with fake data, and ask them to simulate their work there?
It's less than ideal, but it would allow you to see and understand their workflows.
We worked on a large overhaul of the customer support system for Big Grocery Store a couple years ago, and the PMs and designers spent a lot of time sitting down with the users and watching them walk through their current flow. They observed, asked about pain points, and then iterated on designs that the users could actually interact with. They had about 70% of the functionality fleshed out before we ever started implementation, which felt really good. We still hit plenty of roadblocks and gaps, but it felt much better in terms of understanding what the users actually needed
I often join the daily standup meeting of the support and deployment team so I can 'watch' them, and it has been really valuable. Sometimes I give them tips & tricks, sometimes I just listen to newcomers' questions to learn what is not simple enough, learn some awful workaround they came up with instead of asking/opening a ticket, and also learn their bad habits.
Absolutely and I would also add: passively monitor and listen to them. What are they complaining or asking about in communities/social media? What are they talking about in conferences and industry events?
"unless you build X, I can't close the deal"
There are things where this is true: compliance, SSO, enterprise integrations.
But most of the time this signals the product has no USP and it also signals that you should get rid of your sales staff.
What's USP?
https://en.m.wikipedia.org/wiki/Unique_selling_proposition
As mentioned in the other reply, "unique selling point" - something that makes your customers want to buy your product. If you take it away, customers won't buy.
Instead of working on their USP, too many startups throw everything at the wall and see what sticks, because they don't know why their customers buy or should buy their product.
Similar, but not the same, is vision-driven product development vs. checklist-driven development.
You know what literally no one asked for though? A faster horse controlled by a touchscreen, or a horse which spies on you to show you ads, or changes its crappy electron UI every 5 days thanks to corporate branding follies.
Yes those touchscreens in cars are just ridiculous distractions and should probably be limited in their use by legislation.
Instead we basically got legislation requiring their presence. At any rate, backup cameras are required now and probably no one is going to let all that screen space go unused when not backing.
Yeah, it's amazing how much this whole thread explains why modern software and tech products are such garbage which doesn't do anything that users want.
If I reuse the crappy car analogy from above: modern software engineers and product managers will look at your broken alternator and tell you that you need a new LCD screen on the dashboard, while gaslighting you that the alternator isn't actually something that needs to work in a car. Despite you really needing it for the damn thing to run.
Also keep in mind that there are products out there where the real customer isn't the user, so giving users what they want is secondary to building the features that matter to the revenue stream. Obviously the major social media platforms are guilty of this, but so are retail websites and airline booking apps. They may even employ dark patterns, which are clearly not in the users' interests, and aren't even necessarily desired by the paying customers.
Majority of B2B SaaS too. It's the only reason SAP is what it is
Absolutely. The folks who make the decisions to buy aren't the ones that have to use it.
https://en.wikipedia.org/wiki/Principal%E2%80%93agent_proble...
For the benefit of younger devs, I'd like to add that this is a rookie sales mistake that indicates lack of software sales experience.
Definitely not a rookie mistake but quite common. Especially if a competitor has X. Then it's up to sales to convince the client they don't need X, which makes it an even tougher proposition than if they just asked for X in a vacuum
Can you expand on this a bit, or recommend an article that does?
1 and 3 are the same; and if you have 4, then add 5) assuming what devs say is actually what they want or need.
They are similar but there's a difference between a customer explicitly asking for something / demanding it, and soliciting feedback.
Salespeople are often weak and wrong. I was top 0.3% nationwide, and my claim to fame was homing in on deals that I could tell were real deals and finishing them. Most salespeople are either oblivious or too hopeful and will make you spin your wheels with false hope. It's not that a salesperson who talks to users and buyers is worthless. But if you can't tell whether your salesperson is spectacular or just average, you're going to have trouble.
Your top sales tip for a solo SaaS founder?
Happy holidays! ^_^
If you have to build first in order to pitch and show it, then necessarily you've had to assume that what you were building is something they want. Maybe your point was to get new features quickly into the hands of users to test the assumption early, but that has its own significant downsides.
I like the way you phrased it though, in terms of "don't assume foo" instead of "do bar", because it implies that there's really no replacing good old-fashioned _thinking_ with blindly following a 5-step plan to success.
It argues for validating your assumptions as quickly as you can, which is why MVPs, prototypes, and click demos can be a good thing. It doesn't always work, though. Digging in and burning through a few million of investor money before your product encounters real users is the expensive way to do it.
I suppose the trick is knowing when to trust their judgement. The customer isn't always wrong, and often has a lot of context you'll never fully understand.
"Show me why you need this" can sometimes help, but not always.
I will add one more case that's often overlooked:
5) assuming they are willing to pay for what they want/need
The "faster horses" comment is universally attributed to Henry Ford, but it appears he didn't actually say it. Here's an article that shows the results of researching the source, and has some interesting observations about Ford's innovations and the blinders that early success can cause.
https://hbr.org/2011/08/henry-ford-never-said-the-fast
The classic mistake is not asking the right person within the customer's corporate hierarchy.
Product management should be about making things more efficient for end users, the low-tier workers, and not the purchasing manager nor even the CTO.
Another great example is video games.
Gamers will come up with all sorts of ideas that sound cool but have nothing fun to them except the initial novelty, particularly when it comes to simulation style games (eg "I should be able to walk around my spaceship and be able to do spacewalks to patch up the hull after micrometeorite impacts").
But on the other hand it is also very common for companies/developers to take this too far and blame the players for being wrong in not enjoying their game.
A good solution to #4 is to get the commitment in writing first. In the sales contract, say they will pay X upfront and you will build F by Y date.
This solves many problems. Customer can’t back out. Gives sales person clear next steps. Clearly defines feature details and deadline.
Just make sure you set deadlines you can definitely hit.