Introduction
Why do you care if your customers are happy? In our increasingly fast-paced world, service providers like you and your peers are racing to deliver the most value to the customer by maximising perceived brand value, and therefore your share of the overall revenue the customer pays. Everyone wants to avoid becoming commoditised and joining their competitors in a race to the bottom, so differentiation is the name of the game. But differentiation only works if users find, understand and use the differentiating aspects of your service, and then in time come to depend on them.
There are many ways to define customer happiness – the most basic being that the customer pays the bill! But that’s a lagging indicator: if you’re lucky enough to have your customers pay annually in advance, it might be 12 months until you discover that they aren’t going to pay the next bill, by which time it’s far too late to fix the relationship.
So what do you need? For a start, you need leading indicators of customer happiness. These can be obtained through people-centric processes such as measuring the Net Promoter Score (NPS), but that’s slow and laborious, or by tracking Twitter sentiment, but that’s not sufficiently precise (not everyone throws a public fit on Twitter when they are unhappy – most just stop using the product).
First, connect
With today’s increasingly connected devices, a whole new way of measuring customer happiness has opened up: with a connected device, Product Managers have the golden opportunity to measure directly and continuously how (and whether) each user is using the device.
We can therefore automate the process of measuring customer happiness... and even automate ‘closing the loop’, taking action to recover a souring customer relationship before it is too late. The techniques described here should be seen not just as a side benefit of connecting a device, but increasingly as one of the strong reasons for doing so.
As background on the general opportunities and challenges of connecting products, you might like to read our other white papers on this topic.
Metrics for happiness
So let’s assume that we’ve designed our product to collect usage data and we’ve launched it into the real world, at least in trials. What metrics should we collect, and how can we interpret them?
Is it working?
The most basic metric is probably "is the product working?", because if it’s not then the user can’t possibly be happy. For a connected product, "working" can be non-trivial even to define, let alone measure. I remember, in the early days of the relationship between my previous company AlertMe and its biggest customer British Gas, when we had a few thousand customers live, a meeting where BG essentially said "your stuff doesn’t work" and we disagreed – "yes it does, look at these stats". We discovered that – despite both companies having worked together for many months on the proposition that became Hive – we had somehow neglected to actually define "working"! Points to consider include:
- Any connected product will occasionally be offline for reasons beyond your control (e.g. home broadband going down) – should those stats be included?
- If a part of a device is non-functional for a while (e.g. its internet connection) but the device isn’t actually using that part (e.g. the user doesn’t look at the device online) – should those stats be included? This is the "if a tree falls in the forest and there is no-one to hear it, does it still make a sound?" question – almost philosophical, but important to agree on.
- If the user intentionally does things which make the device non-functional (e.g. removing the batteries, or turning off their broadband at night) – do you count that?
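One way to avoid the kind of misunderstanding we had is to write the agreed definition down as code. Here’s a minimal sketch in Python – the interval record and its cause tags are illustrative assumptions, not a real schema – showing one possible set of answers to the questions above, captured in one place:

```python
from dataclasses import dataclass

@dataclass
class OfflineInterval:
    # Hypothetical record of one period of non-function; fields are assumptions.
    device_id: str
    cause: str          # e.g. "broadband_down", "batteries_removed", "device_fault"
    user_noticed: bool  # did the user try to use the device during the outage?

def counts_against_sla(interval: OfflineInterval) -> bool:
    """Encode the team's agreed definition of 'not working'."""
    if interval.cause in ("batteries_removed", "broadband_off_by_user"):
        return False  # user-caused outages don't count (third bullet above)
    if interval.cause == "broadband_down" and not interval.user_noticed:
        return False  # the tree fell but nobody heard it (second bullet above)
    return True       # everything else counts against the SLA
```

The specific answers matter less than the fact that both parties can read, and agree on, the same definition.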
Any process expert will tell you that to improve something you must first measure it. Define Key Performance Indicators (KPIs) – which might include "device connected", "application running" and so on – and combine them into a Service Level Agreement (SLA) such as "at least 95% of devices should be working at least 95% of the time over any rolling 30 days".
Then you can plot that metric across all devices, and for individual devices, so customer-by-customer you can see whether you’ve met "good enough". It’s important to know whether a device is working right now, but if a customer calls up unhappy then it’s equally important to know that, although it might be working right now, it has been up and down like a yo-yo over the past 30 days. You can then draw up a list of target customers who need proactive remediation or outreach, maximising the effectiveness of your Operations team, and even create alerts when you are in danger of missing SLAs on individual customers, or on the device estate as a whole.
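To make that concrete, here is a hedged sketch (Python; the data shapes are assumptions for illustration) of a rolling 30-day SLA check that yields both the per-customer remediation list and an estate-level alert condition:

```python
from datetime import timedelta

UPTIME_TARGET = 0.95   # each device: working at least 95% of the time
ESTATE_TARGET = 0.95   # estate: at least 95% of devices meeting their target
WINDOW = timedelta(days=30)

def uptime_fraction(offline_intervals, now):
    """Fraction of the trailing 30-day window during which a device was working.
    offline_intervals: non-overlapping (start, end) datetime pairs that count
    against the SLA (per the agreed definition)."""
    window_start = now - WINDOW
    downtime = timedelta()
    for start, end in offline_intervals:
        overlap = min(end, now) - max(start, window_start)
        if overlap > timedelta():
            downtime += overlap
    return 1.0 - downtime / WINDOW

def sla_report(intervals_by_device, now):
    """Per-device failure list, plus whether the estate as a whole passes."""
    failing = [dev for dev, ivals in intervals_by_device.items()
               if uptime_fraction(ivals, now) < UPTIME_TARGET]
    passing_fraction = 1 - len(failing) / max(len(intervals_by_device), 1)
    return failing, passing_fraction >= ESTATE_TARGET
```

The `failing` list is exactly the proactive-outreach list described above, and the estate-level flag can drive an alert before the SLA is actually breached.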
Is the customer using it?
"Technically working" is a necessary-but-not-sufficient criterion. Sadly, many systems are engineered to measure just this and nothing else, but what's really needed is to take a resolutely customer-centric perspective of the "is it working?" question. Is it doing what they expect? Is it making their lives better? Is it, in a word, making them happy? If the answer to any of these is "no" then – from their perspective – it is not working for them. So now we get to the question of interaction between the product and the user. For the first time in the history of the world we can now measure this, for every customer, live and continuously. If it’s a hair dryer, how often is it used? For how long? At what time of day? If it’s a heating system, how often are the settings changed? How often does the user check or change the settings? Even if it’s a piece of infrastructure with no obvious users, there are still goodness indicators we can measure, for example with a traffic-light system we can measure how well the traffic is flowing.
We can detect which parts of the product the user is using – have they progressed from "newbie" to "guru"? Have they discovered that great new feature? Are they doing things the hard way, e.g. turning the temperature up manually when they could be enabling automatic occupancy control?
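A sketch of how such feature-adoption signals might be derived (the feature names and thresholds here are hypothetical, chosen to fit the heating example):

```python
# Hypothetical feature names and thresholds, for illustration only.
CORE_FEATURES = {"set_temperature", "view_schedule"}
ADVANCED_FEATURES = {"occupancy_control", "holiday_mode"}

def usage_stage(events):
    """Classify a user from their event log of (feature_name, timestamp) pairs."""
    used = {name for name, _ in events}
    if not used:
        return "dormant"
    if used & ADVANCED_FEATURES:
        return "guru"
    if used >= CORE_FEATURES:
        return "established"
    return "newbie"

def doing_it_the_hard_way(events):
    """Heavy manual temperature-setting, but occupancy control never tried."""
    manual = sum(1 for name, _ in events if name == "set_temperature")
    return manual > 20 and "occupancy_control" not in {n for n, _ in events}
```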
Closing the loop
Making these measurements is one thing, but we need to take action in response. We can probably spot customers who are at risk of churn because they are not interacting with the product at all, and reach out to re-engage them, perhaps with messaging, a gift or some other kind of assistance or ‘stroke’.
And perhaps we can also spot systematic problems (technical or usability) and automatically respond to them. Examples from my previous company AlertMe included automatically dispatching batteries when they ran low, or radio repeaters when the radio signal was weak. Automatic processes are good at reacting speedily, and a customer who experiences a problem that is rapidly solved can often become a greater fan than a customer who has never had a problem, because they see that you really have your act together.
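Both kinds of response – re-engaging the silent customer and fixing the technical fault – can be expressed as simple rules over the telemetry. A hedged sketch, in which the field names and thresholds are assumptions rather than any real product’s schema:

```python
def remediation_actions(device):
    """Map one device's latest telemetry to a list of automatic actions.
    device: dict of telemetry fields; names and thresholds are illustrative."""
    actions = []
    if device["battery_level"] < 0.15:
        actions.append(("dispatch_batteries", device["id"]))
    if device["radio_rssi_dbm"] < -85:
        actions.append(("dispatch_repeater", device["id"]))
    if device["days_since_last_interaction"] > 30:
        actions.append(("send_reengagement_message", device["id"]))
    return actions
```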
Interaction - good or bad?
In the above we have talked about interaction as though it’s a metric which is always positive. In the social media world this is generally true – clicks, likes, retweets etc. are always seen as a good thing, and the engineers in companies like Facebook and Twitter are driven by a relentless push to increase interaction, to keep your eyeballs stuck to their screen, not someone else’s.
But in a connected product some types of interaction can indicate problems, and so act as a negative indicator.
If you’re deploying, say, a heating controller, then in the early days interactions (e.g. button-presses on the device) are generally a good sign: the user is engaging with the device, teaching it their requirements, and by implication getting value. However, sustained (or suddenly) high levels of interaction (frantic button-pushing) may indicate either that the user is trying to compensate for the product not working, or that they have not understood how the product works – both problems that need urgent attention.
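One simple way to detect such frantic interaction automatically is to compare a user’s recent interaction rate against their own historical baseline. A minimal sketch, where the window sizes and threshold are assumptions:

```python
from statistics import mean, pstdev

def frantic_interaction(daily_counts, recent_days=3, z_threshold=3.0):
    """True if the last few days' interactions are far above this user's norm.
    daily_counts: interactions per day, oldest first."""
    if len(daily_counts) < recent_days + 7:
        return False  # not enough history to establish a baseline
    baseline = daily_counts[:-recent_days]
    recent = daily_counts[-recent_days:]
    mu, sigma = mean(baseline), pstdev(baseline) or 1.0
    return mean(recent) > mu + z_threshold * sigma
```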
Another reason why sustained interaction can be a negative metric goes to the very nature of the IoT phenomenon. There are more and more connected devices in the world – the number is growing at roughly 10x per decade – which means more connected devices per person. Since human attention is finite, the attention-per-device must go down at the same rate. Therefore IoT devices increasingly need to disappear into the fabric of our lives – becoming what was originally termed UbiComp – Ubiquitous Computing. They should just get on with doing whatever their job is whilst requiring the minimum of interaction.
So usage will typically follow a pattern of an intense honeymoon followed by a reduction to a lower plateau, possibly with occasional temporary peaks (e.g. the change of season for a heating controller) or dips (e.g. a vacation).
Every product proposition is different, but happy customers will tend to exhibit certain types of usage pattern over time. By correlating those patterns with other happiness measures gathered with more effort – e.g. interviewing customers to discover their Net Promoter Score – it may be possible to get yourself into the enviable position of being able to infer the NPS for your entire device estate, continuously, without the cost or customer-hassle of continuous interviews.
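As a toy illustration of that correlation step (a real model would need proper validation, and every variable here is an assumption): fit a simple line between a usage metric and surveyed NPS on a sample of customers, then apply it estate-wide.

```python
from statistics import mean

def fit_line(xs, ys):
    """Least-squares line through (x, y) points; assumes xs are not all equal."""
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def implied_nps(all_usage, surveyed_usage, surveyed_nps):
    """Estimate estate-wide NPS from a usage metric (e.g. active days per month),
    calibrated on the subset of customers who actually answered the survey."""
    slope, intercept = fit_line(surveyed_usage, surveyed_nps)
    return mean(slope * u + intercept for u in all_usage)
```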
Show them the trends
In all the above, it’s worth considering giving everyone in your company visibility into these key customer-happiness metrics – and, even more importantly, their trends – to ensure that everyone is able to put the customer first, and has the justification to take action when the trends are heading the wrong way.

Does any of this make any real difference?
You may be wondering whether any of this really makes a difference, and I can share a piece of strong evidence that it does. In about 2010, when British Gas had deployed what became Hive in the first 8,000 homes, they measured the change in Net Promoter Score that had occurred. NPS is a number which ranges from -100 if your customers all hate you to +100 if everyone loves you. As you might expect, NPS in the utility world tends to be pretty mediocre, varying little between suppliers or over time.
When BG measured the change in NPS across their statistically significant initial customer base of 8,000, they discovered a jaw-dropping, game-changing improvement of +50 in NPS. This is because the customer relationship had entirely changed – from getting a bill once every few months to having an app in their hand that could control their home: something they interacted with several times a day and showed to their friends down the pub.
That is the radical promise of connected devices: to underpin any company’s transition from being a product company to being a service company, and to transform the value of its brand.
Conclusion
When a company deploys a website, the tools they used to build it tell them nothing about the customer experience of the users of the site. Are the users bouncing? Are they converting? In short, are they happy? That absolutely vital data must be collected with different tools such as Google Analytics or KissMetrics, which can track users’ individual and collective experience of the website. Armed with that information, the site designer can then continually improve the customer journey to maximise retention, increase referral, and ultimately maximise revenue.
There is a very strong analogy here with connected devices. When any company connects a product, it’s essential to measure its users’ experience of that product, for all the same reasons. DevicePilot can be that tool.