You know how this goes. The design team runs three rounds of research, tests multiple prototypes, and brings data to the roadmap review. Then the PM says the sprint cannot wait. The first version ships. No validation. Just assumptions in production.
Atlassian's inaugural State of Product 2026, surveying over 1,000 product professionals, found that 40% of product teams do little or no experimentation at all. That is not a minority edge case. That is four in ten teams making product decisions without checking whether those decisions map to reality.
The default framing is that teams skip validation because they are busy or under-resourced. That framing misses the deeper problem. When a team ships without validating, they are not just skipping a research step. They are shipping their internal assumptions as if those assumptions were verified user needs.
The difference matters. A team that knows it is guessing can at least hedge. A team that treats its assumptions as facts builds confidently toward the wrong target.
This is why the same Atlassian report found that 84% of product teams worry their current products will not succeed in the market. Teams that skip validation are not just operating with less data. They are operating with misplaced confidence: high certainty about things they have not tested, low certainty about what users actually need.
What happens after a feature ships without validation follows a predictable pattern. Adoption comes in lower than expected. Post-launch feedback reveals that users want the feature to work differently. Marcus, the PM, calls for iteration. Alex and the engineering team absorb the rework. Timelines slip. And somewhere in the retrospective, the phrase "design misunderstood the user" appears.
Sarah, the designer, did not misunderstand the user. The team skipped the step that would have surfaced the user's actual mental model. But by that point, the conversation has moved on to delivery timelines, and the root cause gets buried.
The numbers on rework confirm the cost. Forrester's 2025 Total Economic Impact study, commissioned by UserTesting, found that validating designs before development reduces iteration cycles by 25%, saving enterprises approximately $2.5 million in developer costs. Code Climate's analysis of DORA research found that average dev teams rework 26% of their code before release, costing mid-sized companies upwards of $4.7 million annually.
These are not abstract figures. They are the downstream invoice for the decision not to validate.
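A quick back-of-envelope model makes the invoice concrete. The rework and savings rates below come from the studies cited above; the team size and loaded engineer cost are illustrative assumptions, not figures from either report.

```python
# Back-of-envelope model of annual rework cost.
# The 26% rework fraction (Code Climate / DORA) and 25% iteration
# reduction (Forrester / UserTesting) are from the cited research;
# the headcount and loaded cost per engineer are hypothetical.

def annual_rework_cost(engineers: int,
                       loaded_cost_per_engineer: float,
                       rework_fraction: float) -> float:
    """Estimate the yearly cost of code reworked before release."""
    return engineers * loaded_cost_per_engineer * rework_fraction

# Hypothetical mid-sized org: 100 engineers at a $180k loaded cost,
# reworking 26% of their output before release.
baseline = annual_rework_cost(100, 180_000, 0.26)

# If validating designs before development cuts iteration cycles
# by 25%, the avoided rework cost is roughly:
savings = baseline * 0.25

print(f"baseline rework: ${baseline:,.0f}")          # ~$4.7M
print(f"avoided with validation: ${savings:,.0f}")   # ~$1.2M
```

The point of the sketch is not the exact figures, which shift with headcount and salary assumptions, but that the model lands in the same multi-million-dollar range the studies report for mid-sized companies.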
Every research and testing platform publishes content about better research practices. Maze positions itself as continuous product discovery. UserTesting publishes ROI content for enterprise buyers. Dovetail focuses on methodology for teams already running research. What none of them address is the 40% who do no validation at all. Their content assumes the reader is already a practitioner. That assumption excludes four in ten product teams.
Why do those teams skip validation? Four root causes appear consistently: time pressure compresses the process until research becomes a sprint casualty; competitive mimicry treats "the market leader has it" as sufficient validation; team overconfidence grows from months of proximity to a build; and in many organisations, design is not framed as a revenue function, so its methods are treated as optional overhead.
Each of these is understandable. None of them accounts for the downstream cost.
Airtable's product management trends research found that only one in three product teams say their workflows are truly efficient and repeatable. That statistic and the 40% no-validation figure are not independent data points. They are describing the same underlying condition.
Teams without efficient, repeatable workflows make ad-hoc calls about what to skip. Validation is one of the first things to go. The result is lower confidence in what ships, lower efficiency in how it ships, and a pattern that becomes self-reinforcing: teams that skip validation get burned by post-launch iteration, which creates more time pressure, which leads to more skipped validation.
The same Atlassian report found that 80% of product teams do not involve engineers early in the process. For Alex and engineering teams everywhere, the validation skip is not isolated. It is part of a broader pattern of alignment gaps across the product development cycle.
The solution is not a better research process. Teams that skip validation are not missing better tools or methodologies. They are missing the framing that connects the skip to its cost.
Validation is not a UX formality. It is a cost avoidance decision. The 25 minutes Sarah spends testing a prototype with five users is a lever against thousands of dollars of developer rework for every iteration cycle it saves. The research round Marcus wants to skip has a price tag attached. That price tag only becomes visible weeks after the feature ships, when it shows up as rework, iteration backlog, and post-launch firefighting.
The disconnect is not between designers who want to research and PMs who do not. It is between the moment a product decision is made and the moment its consequences appear. Validation closes that gap before it becomes a cost. Until teams connect those two moments explicitly, 40% will keep shipping assumptions as features.
