Brandwatch, 2024
Find out why users are leaving

Tracks Covered
Problem Discovery, UX Architecture, Rapid Prototyping
Team Involved
Project Manager, FE and BE Engineers, Customer Success Managers
Tools Used
Figma, Miro, Productboard
Timeline Spanned
8 months
⟢ Overview
The TL;DR
About Measure from Brandwatch
Measure, a B2B SaaS social analytics tool from Brandwatch, helps businesses track their social media performance through customisable dashboards, showing them where efforts are succeeding and where to adjust their content strategies.

Social Analytics with Measure
Problem
Declining retention, loss of competitive edge, and at the centre — frustrated users. How does a product overcome a web of issues rooted in API restrictions, complicated workflows, and poor in-platform education?

Problems to the Left and Right
Method
Identify the core product issues driving user churn and their interconnections.
Explore solutions ranging from immediate refinements to long-term structural changes to tackle these issues.
Develop a product-wide shared understanding of the problem and the need for a solution.

Project Methodology
Solution
We discovered that user frustrations were symptoms of an inefficient dashboard structure, which forced surrounding features to compromise and adapt, leading to suboptimal user experiences.
We illustrated all of this in a Problem Canvas, which became a key tool in aligning internal stakeholders around a shared understanding of the core product challenges and the need for change.
By creating a Now-Next-Later roadmap, we identified urgent changes to implement immediately, while also laying the groundwork for more foundational and impactful reworks in the future. This process played a key role in shaping the long-term vision for the product.
As a result, the solutions we proposed have been included in the 2025 Product Roadmap.
1
Aligned vision across the product
80%
Stakeholder buy-in for the action plan
3 out of 5
'Now' and 'Next' solutions added to Product Roadmap

⟢ Research
When a Crack in The Foundation Surfaces
So, what you're saying is that we're the problem?
After discussing the product's declining retention with the product manager, I cross-checked the issues mentioned in cancellation reasons against the user feedback we had in Productboard and uncovered a myriad of issues. Interestingly enough, the more issues I uncovered, the more they started to seem... related? 🤨
Issues Mentioned in User Feedback Recorded on Productboard
Enter affinity diagramming, make it vertical, and voilà, the initial Problem Canvas. The issues were starting to look more like symptoms of a bigger problem.
Initial Problem Canvas with Miro
The user and business problems (in lime green and orange, respectively) were all symptoms of cracks in how the data in the dashboards was structured. These structures, however, were set in place because of the nature of the data sent to us by the social media networks.
I needed to validate this canvas and further understand the problem from a different perspective.
Bothering my favourite people
Our Customer Success Managers were power users of Measure and incredibly close to our users and their needs. We wanted to ask them:
How users used the two different categories of metrics
How often they filtered their dashboards


Workshop Summaries Made with Miro
Their responses gave us a better understanding of our users' behaviours. They highlighted how the product was not supporting users in achieving their goals, forcing them to redefine those goals around what was possible within Measure.
We then presented the Problem Canvas to them. They provided their feedback and added their own inputs (in grey) to the canvas, expanding it further and making the whole scope of the problem crystal clear to us.
Problem Canvas with CSM Input (In Grey) on Miro

⟢ Problem
It Was, in Fact, All Related
Bingooooo!
The way channel and content metrics were mixed into our dashboards was causing a lot of issues for both users and the business. Using the Lean Canvas, we identified our key questions to address this:
01
How might we better support our users in terms of in-platform education and transparency regarding the metrics we offer?
02
How might we redesign how users select metrics so that they are aware of which metrics are available for each visualisation and each social network?
03
How might we offer more visualisation options and settings while managing the differences between our different data sources?
04
How might we make the process of dashboard and widget creation more intuitive and easier to follow?

Lean Canvas with Miro
A Problem Shared is a Problem Halved
*Respectfully* stalking our competitors
Interestingly enough, the social networks make the same metrics available to everyone, so how come our competitors didn't seem to be experiencing the same problems? What we found was telling! Unlike us, they:
01
Supported cross-network analysis
In theory, it shouldn't work, but through a specific back-end technique, it can be done. It isn't accurate, but it does fulfil the need for comparative analysis.
02
Tightly wove mentions of channel/content metrics into all appropriate UI elements
This made it easier for users to understand what exactly they had available to them and lessened chances for confusion.
03
Structured their creation flows much more simply
Their flows made much more sense than ours did.
Competitor Analysis with Miro
10 braincells hard at work
With the problems identified and inspiration from the competitors gathered, I led an ideation workshop with the Measure team. It was a healthy mix of 10 people trying to solve these problems from the perspectives of Product, Design, Research, and Engineering.
We exchanged ideas on how to answer the HMWs, referencing our competitors' approaches here and there. After gathering and sharing ideas, we grouped similar ones together and evaluated how well they addressed the HMWs. We then voted on the ones we believed best answered their respective HMWs.

Ideation Workshop with Miro
⟢ Solution
Enter the Game Plan
First things first
After voting, we organised the winners into a usability vs. effort matrix and approached it from 3 perspectives:
From the perspective of the back-enders
From the perspective of the front-enders
From the perspective of product (product x design x user research)
We then averaged the placement of each idea in those matrices into a new matrix and made adjustments wherever the team found it necessary:

Prioritisation Matrix with Miro
Quick, think fast!
It was clear that a structural change to the core flow of Measure was needed, so we shifted the workshop towards imagining the "ideal" version of Measure. Keeping the HMWs in mind, we brainstormed a reworked flow that could enable all the solutions we had prioritised.
With everyone's ideas, I quickly put together a low-fidelity prototype, tweaking it in real time as the team provided feedback. The prototype was nowhere near perfect, but its importance lay in giving us a shared vision.
Current flow

Current Flow in Miro, Visualised in Measure
Users had to select a visualisation before selecting their metrics, often resulting in choosing one that did not support the metric they had in mind.
Users had to commit to creating a widget before they could apply filters to it, preventing them from experimenting with the data freely.
Ideal flow

Ideal Flow in Miro, Visualised with Figma
Users can select from different data sources, with descriptions and supplemental info to help them understand what they're choosing.
Users can freely play around with metrics, visualisations, and filters at the same time, no longer needing to commit to a selection only to realise it doesn't match their needs.
Is it over yet?
After more than 3 hours, our social batteries and brain juices were pretty much exhausted… but we did have something to show for it!
A shared understanding of how important it is that we earnestly solve these problems.
A prototype showcasing to internal stakeholders a vision of a much-improved, user-friendly Measure.
A plan for what to do now, next, and later:

Workshop Next Steps with Miro
Yes, it's over… the first part anyways.
With the refined Problem Canvas, How Might We's, lo-fi prototype, and roadmap, we now had a solid case to pitch to our stakeholders. What started out as a simple discussion between the product manager and me turned into a full-fledged, stakeholder-supported objective in the 2025 Measure Official Product Roadmap.
The project has since been handed over to another product designer, who has continued where I left off and, *as of writing*, has started testing the roadmap items with users.
⟢ Reflection
Now, That's a Wrap!
Ann-Louise has gained the power up of hindsight!
This entire discovery was a side project squeezed into small blocks of free time between all our other large projects. It spanned over 8 months. 8 months is a long time.
And while it would've been near impossible to push any of the solutions into the current, already full 2024 roadmap, a lot more could have happened within that time. Looking back, we could have run user testing to validate our assumptions about each category of solutions before going into 2025.
Ann-Louise has levelled up her problem discovery skills!
I really tested my ability to think critically during this project, repeatedly asking myself 'why?'. Why this? Why not that? Why is it like this? And why is it not like that? And a bajillion other whys. It genuinely took a lot of brain power, but I'm proud to have made such an impact on the vision of Measure.
- END -
To Jaime, bless you for enduring those 3 hour long discussions that left us absolutely drained. And to Jesse, your mentorship has made me a better product designer and person. Thank you both.