For years, we’ve relied on fixed-term refresh models: refresh every three years, or five, or whatever the spreadsheet says. It’s predictable and scalable, but it’s wrong.
These rigid models ignore the lived experience of the user, the actual performance of the device, and the financial realities of modern IT operations. Some employees receive new hardware they don’t need, while others are left struggling with outdated machines that don’t support them.
That shouldn’t be the case. We need a more responsive approach that’s driven by telemetry but anchored in user sentiment. While the concept may sound somewhat straightforward, breaking the mold is anything but. For my team and me, it took the better part of six months to go from idea to a fully operational telemetry-based refresh model.
That journey taught us a lot about data, about people, and about how to build systems that actually serve the workforce. And it wasn’t just theory – this approach delivered over $40 million in savings globally, while improving employee satisfaction with their devices. Here’s what we learned and how you can apply it to build a refresh strategy that actually works.
Start with Sentiment, Then Scale It
Telemetry is powerful, but it’s not self-explanatory. Frequent blue screens are a clear sign of user frustration, but other signals are harder to interpret: high CPU usage might suggest friction, or simply reflect heavy software use. That’s why we made it a point not to assume.
We validated our telemetry against real user feedback, running targeted surveys across key personas, including executives, new hires, and field workers, and asking them to rate their satisfaction with their devices. Instead of surveying everyone, we focused on a small but representative sample. We ensured the data was statistically robust, then extrapolated those insights across the broader user base, clearly marking them as inferred.
This gave us a strong baseline to correlate technical signals with real-world sentiment — without overburdening the business. It was a pragmatic approach that balanced rigor with respect for employees’ time, and it worked.
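The sampling step above can be sketched with the standard sample-size formula for estimating a proportion. The confidence level and margin of error below are illustrative defaults, not the parameters from our actual surveys:

```python
import math

def sample_size(z: float = 1.96, p: float = 0.5, margin: float = 0.05) -> int:
    """Minimum sample size to estimate a proportion (e.g. the share of
    dissatisfied users in a persona) within +/- margin.
    z=1.96 corresponds to 95% confidence; p=0.5 is the worst case."""
    return math.ceil((z ** 2) * p * (1 - p) / margin ** 2)

# Worst-case sample per persona at 95% confidence, +/-5% margin:
print(sample_size())  # 385
```

This is why a few hundred well-chosen respondents per persona can stand in for thousands of users: past that point, extra surveys barely narrow the margin of error.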
Flatten the Telemetry
Raw telemetry data is messy, so the key step is to flatten it into simple, person-level metrics:
- “David has 7 blue screens per week”
- “Jane hits 90% CPU three times a week”
Each metric became an attribute in our dataset, tied to a user. This allowed us to compare satisfaction scores with telemetry, spotting patterns that revealed deeper insights. We moved beyond abstract numbers to real experiences, which made the data far more meaningful.
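A minimal sketch of that flattening step might look like this. The event names, fields, and 90% CPU cutoff are hypothetical, not our production schema:

```python
from collections import defaultdict

# Raw telemetry: one record per event, roughly as it arrives from device agents.
events = [
    {"user": "david", "type": "bluescreen"},
    {"user": "david", "type": "bluescreen"},
    {"user": "jane", "type": "cpu_sample", "cpu_pct": 92},
    {"user": "jane", "type": "cpu_sample", "cpu_pct": 45},
]

def flatten(events):
    """Collapse raw events into simple person-level metrics."""
    metrics = defaultdict(lambda: {"bluescreens": 0, "high_cpu_hits": 0})
    for e in events:
        m = metrics[e["user"]]
        if e["type"] == "bluescreen":
            m["bluescreens"] += 1
        elif e["type"] == "cpu_sample" and e["cpu_pct"] >= 90:
            m["high_cpu_hits"] += 1
    return dict(metrics)

print(flatten(events))
# {'david': {'bluescreens': 2, 'high_cpu_hits': 0},
#  'jane': {'bluescreens': 0, 'high_cpu_hits': 1}}
```

Once every user is a single row of counts like this, joining telemetry to survey scores becomes a trivial lookup rather than a data-engineering project.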
Correlate, Automate, and Trigger
With the data in place, it was easier to identify reliable predictors of satisfaction. It also made it possible to debunk a few assumptions. For example, users with moderately frequent blue screens were no more unhappy than others, while users with sustained high CPU usage consistently reported low satisfaction. From there, we were able to build logic: “If X happens, and Y is true, then trigger a refresh.”
This logic fed into an automated workflow, refreshing devices when they truly required it, rather than relying solely on their age.
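That “if X and Y, then refresh” logic can be expressed as a small rules function. The thresholds below are placeholders for illustration, not the values we actually shipped:

```python
def should_refresh(metrics: dict, device_age_months: int) -> bool:
    """Trigger a refresh on experience signals, with age only as a qualifier.
    All thresholds are illustrative."""
    bluescreens = metrics.get("bluescreens_per_week", 0)
    high_cpu = metrics.get("high_cpu_hits_per_week", 0)
    if bluescreens >= 5:  # acute instability: refresh regardless of age
        return True
    if high_cpu >= 3 and device_age_months >= 24:  # sustained friction on older kit
        return True
    return False

print(should_refresh({"bluescreens_per_week": 7}, device_age_months=12))   # True
print(should_refresh({"high_cpu_hits_per_week": 3}, device_age_months=30)) # True
print(should_refresh({}, device_age_months=60))  # False: age alone never triggers
```

The last case is the point of the whole model: an old but healthy device stays in service, while a young but failing one gets replaced.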
Don’t Trust the System Blindly
One of the biggest lessons in our hardware refresh journey? Models change, and what correlated last year might not hold today. As systems evolve and user behavior changes, your telemetry logic needs to be recalibrated.
To combat this, we built in regular checks and balances to revalidate our assumptions, update thresholds, and stay honest about what the data was really telling us.
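One way to sketch those periodic checks is to recompute the correlation between each telemetry metric and fresh survey scores, flagging any metric whose predictive power has drifted. The drift tolerance and the sample numbers below are invented for illustration:

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def revalidate(metric_values, satisfaction, baseline_r, drift=0.2):
    """Flag a metric for recalibration if its correlation with current
    satisfaction scores has moved more than `drift` from the baseline."""
    r = pearson(metric_values, satisfaction)
    return abs(r - baseline_r) > drift, r

# Hypothetical quarterly check: weekly bluescreen counts vs. 1-5 satisfaction.
flag, r = revalidate([0, 1, 5, 7, 2], [5, 4, 2, 1, 4], baseline_r=-0.9)
print(flag, round(r, 2))  # False -0.99  (still predictive, no recalibration)
```

Running a check like this on a schedule turns “stay honest about the data” from a good intention into a routine.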
Make It Make Business Sense
This wasn’t just a technical project; it was also a financial one. Refresh decisions affect depreciation schedules, leasing terms, and residual value. In a global organization, external factors like labor costs also play a crucial role in determining what makes sense. To get it right, we worked closely with finance to understand the true cost implications.
That partnership gave us the credibility and the clarity to speak the business’s language, and that made all the difference when engaging senior stakeholders. We didn’t present telemetry charts or technical data but instead laid out real options that represented real people: “If we refresh at threshold A, it will cost B and improve satisfaction by C, possibly improving productivity by D.”
Because we framed these trade-offs in business terms, leaders trusted the model – a big contributor to shifting IT from a cost center to a strategic partner.
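Trade-off statements like that can come straight out of a small scenario model. Every figure below is invented purely to show the shape of the options we put in front of stakeholders:

```python
def scenario(threshold, affected_users, unit_cost, sat_lift_pts):
    """One row of a stakeholder options table: refresh everyone past
    `threshold`, at `unit_cost` each, for an estimated satisfaction lift.
    All inputs are hypothetical."""
    return {
        "threshold": threshold,
        "cost": affected_users * unit_cost,
        "satisfaction_lift_pts": sat_lift_pts,
    }

options = [
    scenario("5+ bluescreens/wk", affected_users=800, unit_cost=1200, sat_lift_pts=6),
    scenario("3+ bluescreens/wk", affected_users=2500, unit_cost=1200, sat_lift_pts=9),
]
for o in options:
    print(f"{o['threshold']}: ${o['cost']:,} for +{o['satisfaction_lift_pts']} pts")
```

Presenting two or three rows like this lets leadership choose a threshold as a business decision, rather than signing off on a technical one.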
Refreshed Thinking
Contrary to fears, our approach didn’t lead to mass refreshes; it led to precision. Some users were refreshed quickly, while others kept their devices longer, with confidence that those machines were still serving them well enough to perform their best.
The results speak for themselves: improved employee satisfaction, smarter use of IT resources, and over $40 million in savings – all achieved without sacrificing performance or experience.
In my opinion, the future of experience management is not about blindly chasing happiness scores or blindly trusting telemetry. It’s about combining data, sentiment, and financial insight to make smarter decisions, at scale.
If you’re still refreshing devices on a fixed schedule, it’s time to rethink. The data is already there, and your employees are ready to tell you what they need.