My brother is a Product Manager. I'm a Product Designer. We worked at different companies and had the same conversation every week: what are you actually building right now, and why is that the priority? That friction, repeated often enough, became a brief.
01 · The Problem
The roadmap problem is usually framed as an internal alignment issue. It isn't only that.
A 2016 Label Insight survey found that 94% of consumers are more likely to be loyal to businesses that offer complete transparency, and 73% said they'd pay more for products that guarantee it. Bain & Company found that a 5% increase in customer retention can increase profits by anywhere from 25% to 95%. A public roadmap isn't just a communication tool. Used well, it's retention infrastructure.
Product Gears is a SaaS tool that lets companies publish their product roadmaps, collect customer suggestions, and close the feedback loop when something gets built. My brother brought the PM perspective. I brought the design. We were also, conveniently, the exact users we were building for.
02 · Architecture
Before any research, I knew what kind of product this was: two-sided. The admin — a PM or product team — needs a dashboard to manage suggestions, set statuses, merge duplicates, and control what customers see. The end customer needs a simple, frictionless interface to submit ideas, follow progress, and feel heard. Those two users never interact directly. But every design decision I made for one had consequences for the other.
Mixing admin capabilities with the customer-facing surface — which several competitors do — creates confusion on both sides. The PM doesn't want customers to see internal labels. The customer doesn't want to navigate a backend. I designed them as genuinely separate interfaces from the start: their own navigation logic, their own information hierarchy, a shared data layer underneath.
Admin interface
Manage suggestions, set statuses, merge duplicates, and control what customers see. Designed for speed — PMs triaging input in short focused sessions, not exploring a product.
Customer-facing interface
Submit ideas, follow progress, receive notifications when status changes. Designed for trust — customers who feel heard stay. Customers who submit and hear nothing leave.
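The separation above can be sketched as two projections over a single record in the shared data layer: the admin sees everything, the customer sees only the public roadmap fields. This is a minimal illustration, not Product Gears' actual schema; all field and function names here are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Suggestion:
    # One record in the shared data layer (illustrative fields only)
    title: str
    status: str              # e.g. "planned", "in_progress", "shipped"
    votes: int
    internal_label: str      # admin-only triage note
    merged_into: Optional[int] = None

def admin_view(s: Suggestion) -> dict:
    """Admin surface: every field, including internal triage data."""
    return vars(s).copy()

def customer_view(s: Suggestion) -> dict:
    """Customer surface: only what the public roadmap exposes."""
    return {"title": s.title, "status": s.status, "votes": s.votes}

s = Suggestion("Dark mode", "planned", 42, "low effort, high demand")
assert "internal_label" not in customer_view(s)
```

The point of the sketch is that neither surface owns the data: both are read models over the same record, so a status change made in the admin dashboard is immediately the truth the customer page renders.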
03 · Research
I ran desk research, a CSD matrix, competitive benchmarking using Nielsen's 10 Heuristics, and remote interviews with Product Managers and Designers recruited through PM communities on WhatsApp and Telegram. The benchmarking told me what the market had already normalized. The interviews told me where the actual friction was.
PMs dreaded duplicates
The fear wasn't getting feedback — it was getting the same feedback forty times and having to manage it manually. Any tool that increased input volume without solving the management problem would create more work than it solved.
Silence broke trust
Customers who submitted ideas and heard nothing felt ignored. The absence of a notification when a suggestion changed status broke the trust the tool was supposed to build. A transparency product that goes quiet after receiving input contradicts its own premise.
Both of these insights went straight into the feature architecture. Neither was obvious before the research — which is exactly what research is for.
04 · What Shipped
The MVP scope came from a MoSCoW exercise that forced explicit decisions about what the product would and wouldn't do at launch. Features like Zapier and GitHub integrations and Apple login were deferred, not arbitrarily, but because their absence didn't break the core use case.
Usability testing ran two parallel scripts: one through the admin interface, one through the customer-facing page. No critical navigation failures. The two-sided architecture held — users understood instinctively which surface they were on and what it was asking them to do.
Merge suggestions: deferred → must-have
Multiple participants flagged unprompted that managing duplicates without it would make the product more of a burden than a tool. Moved pre-launch.
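What "merge" has to preserve follows from the research: votes should accumulate on the canonical suggestion, and followers of the duplicate should keep receiving status updates. A minimal sketch, assuming each suggestion tracks its own votes and followers; the structure and names are illustrative, not the shipped implementation.

```python
def merge(canonical: dict, duplicate: dict) -> dict:
    """Fold a duplicate suggestion into the canonical one."""
    canonical["votes"] += duplicate["votes"]
    # Followers of the duplicate move over (deduplicated), so they
    # still get notified when the canonical suggestion changes status.
    canonical["followers"] = sorted(
        set(canonical["followers"]) | set(duplicate["followers"])
    )
    # The duplicate stays as a pointer rather than being deleted,
    # so links to it can redirect instead of breaking.
    duplicate["merged_into"] = canonical["id"]
    duplicate["status"] = "merged"
    return canonical

a = {"id": 1, "votes": 12, "followers": ["ana", "bo"], "merged_into": None, "status": "planned"}
b = {"id": 2, "votes": 5, "followers": ["bo", "cy"], "merged_into": None, "status": "planned"}
merge(a, b)
assert a["votes"] == 17 and a["followers"] == ["ana", "bo", "cy"]
assert b["merged_into"] == 1
```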
Customer notifications: reprioritized up
For the same reason the research had pointed to: the feedback loop wasn't complete without them. A suggestion submitted with no status update is a broken promise.
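Closing that loop is a small fan-out: when a suggestion's status changes, every follower gets a notification. A hedged sketch of the mechanism, with hypothetical names; the real product would queue these rather than build strings inline.

```python
def set_status(suggestion: dict, new_status: str, outbox: list) -> None:
    """Change a suggestion's status and notify every follower."""
    old = suggestion["status"]
    if new_status == old:
        return  # no change, no noise
    suggestion["status"] = new_status
    for user in suggestion["followers"]:
        outbox.append(
            f"{user}: '{suggestion['title']}' moved from {old} to {new_status}"
        )

outbox: list = []
s = {"title": "Dark mode", "status": "planned", "followers": ["ana", "cy"]}
set_status(s, "in_progress", outbox)
assert len(outbox) == 2
```

The guard against no-op changes matters as much as the fan-out itself: a transparency tool that spams followers with non-updates erodes the same trust it's meant to build.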
05 · What I'd Do Differently
Pressure-test the revenue model earlier
The research validated the problem clearly. It didn't deeply validate whether PMs at small-to-mid companies would pay for a standalone tool, or whether they'd expect it as a feature inside something they already used. That question should have been explicit in the interviews.
Scope the design system to the MVP
Building a full component library alongside an MVP added overhead the project didn't need yet. A lightweight token set and a handful of reusable components would have been enough. I was conflating "system that supports building" with "system as the deliverable."
The post-launch plan was too passive. Following Hotjar behavior and prioritizing from there is the right instinct, but without pre-defined hypotheses, you end up with data and no direction. Instrumentation needs a question to answer, not just behavior to record.
Product Gears shipped. The core loop worked. The lessons it taught about two-sided architecture and the gap between designing for a hypothesis versus designing for evidence were more durable than any single feature decision. Those traveled forward.