Introduction
Connected products are sufficiently complex that an in-field trial is often an essential stage between benchtop R&D and mass production. This paper discusses the goals of an IoT trial and, most importantly, how to tell when you’ve finished your trial and are ready for prime time.
A trial is, by definition, an experiment. You hope everything will go smoothly, but you should anticipate surprises and expect to learn something; if you were confident there’d be no surprises, you wouldn’t be running a trial at all. But a trial isn’t an open-ended learning process: it should have definite goals, both of risk reduction (what were the mistaken assumptions?) and of confidence building (are we really ready to launch?).
Here we discuss the three key areas which, in our experience, define IoT trial success. Paying attention to them helps ensure successful mass deployment.
1. Working Tech?
Particularly for technology-led companies, this is probably the most obvious question a trial must answer. IoT technology stacks are complex: there are lots of moving parts, therefore lots of integration points, and the technology is deployed in the real world, which is a messy and uncontrolled place. For many IoT propositions the “happy path”, where everything is working, may be trivial to code for, but there are so many ways a product can fall off that path when exposed to the real world (and real users who won’t follow instructions) that handling them may account for 90%, or even 99%, of your code and effort. Trials help uncover those unhappy paths.
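As a toy illustration (the sensor API and the valid range are invented for this sketch), compare a happy-path reading routine with one that has to survive the real world; the point is how much of the second function is defensive handling rather than core logic:

```python
# Happy path: one line of core logic.
def read_temperature(sensor):
    return sensor.read()

# Real world: dropouts, garbage values, retries. The sensor API
# (a read with a timeout) and the plausible range are assumptions.
def read_temperature_robust(sensor, retries=3):
    for _ in range(retries):
        try:
            value = sensor.read(timeout_s=2.0)
        except TimeoutError:
            continue                      # radio dropout: try again
        if -40.0 <= value <= 85.0:        # reject out-of-range garbage
            return value
    return None                           # caller must handle "no reading"
```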
So it’s important to have an effective customer-support process in place during trials, to capture and diagnose the failure modes, and ideally to make them repeatable so that well-defined bugs can be handed to the tech team for fixing.
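As a minimal sketch of what that process might capture (the field names here are ours, not from any particular support tool), a structured failure report helps turn a vague complaint into a well-defined bug:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class FailureReport:
    """One captured in-field failure; all fields are illustrative."""
    device_id: str
    firmware_version: str
    reported_at: datetime
    symptom: str                                    # what the user saw
    device_logs: list[str] = field(default_factory=list)
    repro_steps: list[str] = field(default_factory=list)
    reproducible: bool = False   # hand to the tech team only once True
```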
Trials must be large enough to uncover problems at the quality level your production ambitions demand. For example, one device behaving strangely in a 100-user trial could be dismissed as a one-off, but a 1% in-field failure rate translates to an unacceptable 1,000 unhappy customers if you plan to ship 100k units. It may be sensible to plan trials at multiple scales, or to enlist “beta” customers, perhaps even paying ones, in an early rollout stage where you can still be vigilant to problems.
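The scale arithmetic is worth making concrete. Under the standard “rule of three”, a trial of n units that sees zero failures still only bounds the true failure rate to roughly 3/n at 95% confidence, so even a clean 100-unit trial is consistent with thousands of unhappy customers at production scale:

```python
# Back-of-envelope sketch of what a clean trial actually tells you.
trial_size = 100
upper_bound = 3 / trial_size            # rule of three: ~3% still plausible
planned_units = 100_000

print(f"Zero failures in {trial_size} units still allows a failure rate "
      f"up to ~{upper_bound:.0%} (95% confidence), i.e. up to "
      f"~{upper_bound * planned_units:,.0f} unhappy customers "
      f"at {planned_units:,} units.")

# And the failure rate the paragraph above treats as unacceptable:
print(f"1% of {planned_units:,} units = {planned_units * 0.01:,.0f} failures")
```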
Programmers know that good tests give you good “code coverage”: the tests exercise every part of the code. Likewise, a good IoT trial should test every technical part. Particularly important technical areas to test are:
- the physical hardware, which you can’t change later: durability, usability, battery life, memory capacity, processing power, radio capabilities and so on.
- “get out of jail” technologies, such as the ability to do software upgrades in the field. Your production code will have bugs, missing features and security flaws, so upgrades are not optional; nor are they trivial to do. For example, if a code upgrade fails (it will), is there a rollback or secure-bootloader process to recover the device? Having to physically replace a production device is costly, and the reputational damage can be worse. A sketch of one common rollback scheme follows this list.
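One common pattern for meeting this requirement is A/B (“dual-slot”) firmware images guarded by a boot-attempt counter: the bootloader tries a newly installed image a few times, and if the new firmware never confirms a healthy boot it falls back to the known-good image. Below is a minimal Python sketch of the bookkeeping; the slot layout, attempt limit and the mark_boot_successful hook are illustrative, and a real implementation lives in the bootloader and flash layout:

```python
# Minimal A/B-slot update bookkeeping (illustrative, not a real bootloader).
SLOTS = {
    "A": {"version": "1.0.0", "healthy": True,  "boot_attempts": 0},
    "B": {"version": None,    "healthy": False, "boot_attempts": 0},
}
active = "A"          # the known-good slot we are currently running from
MAX_ATTEMPTS = 3

def other(slot):
    return "B" if slot == "A" else "A"

def install_update(version):
    """Write the new image to the inactive slot; the running slot stays intact."""
    SLOTS[other(active)] = {"version": version, "healthy": False, "boot_attempts": 0}

def select_boot_slot():
    """Bootloader policy: try a pending image a few times, then roll back."""
    candidate = SLOTS[other(active)]
    if candidate["version"] and not candidate["healthy"]:
        if candidate["boot_attempts"] < MAX_ATTEMPTS:
            candidate["boot_attempts"] += 1
            return other(active)   # give the new image another chance
        # attempts exhausted without confirmation: stay rolled back
    return active                  # the known-good image

def mark_boot_successful(slot):
    """New firmware calls this once it is up and, say, has reached the cloud."""
    global active
    SLOTS[slot]["healthy"] = True
    active = slot
```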
Is it working?
As you go from trials into production this is a question which you and your customers will increasingly ask, and you may well discover that you and your customer don’t share the same definition. For example:
- Do you count situations where a device can’t possibly be working because the user has prevented it from working, for example by not replacing the batteries or by disconnecting it from the internet?
- If a device is down at a time when the user isn’t trying to use it, does that count?
Whatever the definition, you need to know what level of uptime you are actually achieving, what level is acceptable, and whether the former is approaching the latter as your trials conclude. The toy calculation below shows how far apart two reasonable definitions can land.
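Here is the same 30-day window scored under a strict definition (every outage counts) and a lenient one (only faults that are genuinely yours count); the outage log and the user_caused flag are invented for illustration:

```python
# Toy uptime calculation under two definitions of "working".
outages = [
    {"hours": 5.0, "user_caused": True},   # e.g. user let the batteries die
    {"hours": 2.0, "user_caused": False},  # genuine device/cloud fault
]
period_hours = 30 * 24                     # a 30-day trial window

total_down = sum(o["hours"] for o in outages)
our_fault_down = sum(o["hours"] for o in outages if not o["user_caused"])

print(f"Strict uptime (all outages count):  {1 - total_down / period_hours:.3%}")
print(f"Lenient uptime (only our faults):   {1 - our_fault_down / period_hours:.3%}")
```

The same month of data lands roughly 0.7 percentage points apart under the two definitions, which can easily be the difference between passing and failing an uptime target.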