In 2006 Pilgrim Beart co-founded AlertMe, the Connected Home platform powering the British Gas Hive brand in the UK and the Lowe's Iris brand in the USA. AlertMe was sold to British Gas in 2015 for $100m, having successfully deployed millions of connected devices inside home networks – without enabling a ‘botnet’. Today Pilgrim heads up DevicePilot, used by IoT companies to manage their products from trials to scale-up. DevicePilot helps manage the many processes around connected devices – including security.
In this paper Pilgrim discusses five simple New Year resolutions you can make to ensure the security of your connected product – helping you sleep well at night for the rest of 2017.
What’s the problem?
The fantastic thing about technology in general - and the Internet in particular - is that it offers power at scale. Over the past decade it’s become possible for anyone to access almost the sum total of human knowledge in an instant from their phone - but unfortunately the technology genie doesn’t grant this power only to the good guys.
In the past a burglar could attack my house physically, which was frightening enough; now, thanks to the internet, that same burglar can attack a million houses from the comfort of their bedroom, which is potentially terrifying. When one considers the potential to disrupt not just individuals’ lives but also our public infrastructure, this is clearly a threat to be taken very seriously – one which is now playing out in the media in a weekly litany of IoT security breaches.
While individual hackers often escape justice, liability for a security breach falls on whoever dropped the security ball. If you leave your front door open and someone burgles your family’s house, then although the burglar is the actual criminal it’s you who’ll probably take the blame.
Security is one of those Black Swan topics which all companies need to confront before it’s too late to avoid a disaster of (at least) brand-damaging proportions.
So what should I do?
Security is a potentially enormous subject, so it’s not realistic to provide an exhaustive guide, or even a complete checklist. Because security is a process, we have instead provided a handful of high-level principles which, if you pay attention to them throughout the development and deployment of your connected devices, should help you avoid the many security traps for the unwary.
You can use all these principles regardless of which IoT platform and technology you’re running.
Resolution 1: Take security seriously from day one
In Field of Dreams, Kevin Costner’s character is told “If you build it, he will come” – but where hackers are concerned it’s more a case of “if you don’t build it, he will come”. Secure early and often.
At DevicePilot we’ve seen many cases where companies get a long way down the track in connecting their products (e.g. into field trials with real users) without apparently having considered security at all. We see plenty of interfaces that are publicly open on the internet, or that use insecure communications (e.g. HTTP instead of HTTPS). Sometimes these insecure interfaces offer direct access to sensitive customer data; sometimes they even offer control over dangerous machines. It’s hard to believe that any company would intentionally plan for this outcome, so the situation seems to arise simply because no-one takes responsibility for security early on.
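Insecure endpoints of this kind are easy to catch mechanically before they ship. As a minimal sketch (the endpoint list and function name are hypothetical, purely for illustration), a pre-release check might simply flag any configured interface that isn’t served over HTTPS:

```python
from urllib.parse import urlparse

def find_insecure_endpoints(endpoints):
    """Return the endpoints that use plain HTTP rather than HTTPS."""
    return [url for url in endpoints if urlparse(url).scheme != "https"]

# Hypothetical endpoint list, for illustration only
endpoints = [
    "https://api.example.com/v1/devices",
    "http://api.example.com/v1/control",   # insecure – would be flagged
]
print(find_insecure_endpoints(endpoints))
```

A check like this run in a build pipeline costs almost nothing, yet catches exactly the class of mistake described above before real users are exposed to it.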
It is never too early to take the pledge! In fact, establishing good security principles early on sets a way of working which in the long run means less work. Note that it’s often impossible to add effective security after a product has been launched into the field – the horse has well and truly bolted.
You then need to continue to take security seriously. However careful you are, your product will go to market with security holes – all products do, whether in your software stack or in your application code. So don’t be an ostrich: have a process for reacting rapidly when you find security problems, which of course includes the ability to securely remote-patch your products.
Resolution 2: Use experts
Security is a field of expertise like any other, so if you want to get it right then it’s sensible to consult the experts.
Although it’s increasingly possible to buy software stacks which encapsulate much of the required security process, the software engineers doing the implementation still need a good grasp of security fundamentals, even if they’re not implementing detailed security code themselves. This is because it’s all too easy to accidentally misuse these stacks and blow a massive hole in your security. One frequent example is developers accidentally shipping private keys in devices because they don’t really understand the principles of public-key cryptography.
There’s a saying you may have heard test professionals use: “If it hasn’t been tested, it doesn’t work”. This may sound like exaggeration, but in practice it is spot-on: in any complex system the chances of everything working perfectly first time are effectively zero, so until you’ve tested it you can be sure that it doesn’t work. The same is true of security: however good your implementation team, you should assume that your product has security holes until proved otherwise, and employ professionals to find them – before a hacker does.
Hire security consultants to carry out a security audit, lifting the lid on your software and internal processes to spot bad practice. Also consider the more “black-box” approach of penetration testing, where hired “white hat” experts probe your offering from the outside – in the same way that hackers will once you launch.
Resolution 3: Minimise attack surface
“Attack surface” is a phrase capturing the idea that the larger the interface between your product and the outside world, the more security flaws it might have.
A key place to enforce security is at the API (Application Programming Interface) – the interface where external services (including your user interface) read from and write to your core service. Here, less is more: the more complicated your interface, the less likely anyone is to understand all its implications … and therefore the less likely it is to be secure.
Store as little data as possible, in as few places as possible. Gathering data into a database makes that database a big target for hackers, and data seems to have a natural tendency to proliferate. An anecdote we heard is that during a recent audit a bank discovered that personal data was being stored in more than 2,000 unlisted, unauthorised databases – e.g. spun up for tests or demos – which is asking for trouble.
Related to that, consider how to sensibly minimise staff access to data, particularly personal data. You might for example divide staff into “developers”, who need access to source code, and “users”, who don’t. Developers should use test data for testing, not live data; users can then be given logins which restrict their access to only what they need, and that access can be audited.
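The shape of such a scheme is simple. The sketch below is purely illustrative – the role names and permitted actions are assumptions, not a real access-control system – but it shows the two essentials: each role is granted only what it needs, and every access attempt is recorded for later audit:

```python
# Illustrative role-based access sketch; roles and actions are hypothetical.
ROLE_PERMISSIONS = {
    "developer": {"read_source", "read_test_data"},   # no live customer data
    "user": {"read_own_records"},                     # only what they need
}

access_log = []  # every attempt is recorded, so access can be audited

def can_access(role, action):
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    access_log.append((role, action, allowed))
    return allowed

assert can_access("developer", "read_test_data")
assert not can_access("developer", "read_live_customer_data")
```

In a real deployment the permission table would live in your identity system rather than in code, but the principle – least privilege plus an audit trail – is the same.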
Resolution 4: Minimise the need for trust
Trust makes the world go round, but it is easy to abuse, so a secure process is one which requires you to place as little trust as possible in third parties, thereby giving those parties as little power as possible to subvert your security. An example from our AlertMe days: we found that whenever we deployed computers into manufacturing facilities in the Far East they became riddled with malware (viruses and Trojans). So we devised a process which did not require manufacturing to be secure – devices left the factory containing only vanilla code, and security (certificates etc.) was established and enforced later, on first use.
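A first-use provisioning flow of this kind can be sketched very simply. The classes and the use of a random token below are illustrative assumptions – a real scheme would use certificates and mutually-authenticated TLS – but the key property is visible: the factory image contains no secrets at all, and credentials only come into existence on first boot:

```python
import secrets

class Device:
    """Leaves the factory with vanilla code and no secrets on board."""
    def __init__(self):
        self.key = None  # nothing secret in the factory image

    def first_boot(self, server):
        # Security is established on first use, not in the factory:
        # the device generates its own secret and registers it.
        self.key = secrets.token_hex(32)
        server.register(id(self), self.key)

class Server:
    def __init__(self):
        self.registered = {}
    def register(self, device_id, key):
        self.registered[device_id] = key

server = Server()
d = Device()
assert d.key is None          # an attacker in the factory learns nothing
d.first_boot(server)
assert d.key is not None and id(d) in server.registered
```

Because the factory never handles anything secret, a compromised manufacturing line gains no power over deployed devices.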
It is also important to limit access to individual people and systems. For example, don’t protect interfaces with a single universal password, as it will be shared between people, sent by email and lost track of. (For some people, a secret is something they tell only one person at a time!) Instead, keys should be cryptographically strong and issued individually to each person or service that needs access to an interface. Then the use of those keys can be tracked and audited, access can be limited to what each user needs, keys can be individually revoked if compromised, and if necessary individuals can be held liable for the secrecy and use of their key (much as credit card users are responsible for keeping their PINs and banking passwords secret).
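A minimal sketch of per-user key issuance, again with hypothetical function names, shows how cheaply this is achieved: keys are generated with a cryptographically-strong source, the server stores only their hashes (so a database leak doesn’t leak keys), every check is logged, and any single key can be revoked without affecting anyone else:

```python
import hashlib
import secrets

issued = {}   # sha256(key) -> owner; the server never stores raw keys
audit = []    # (owner, allowed) records for later auditing

def issue_key(owner):
    key = secrets.token_urlsafe(32)          # cryptographically strong
    issued[hashlib.sha256(key.encode()).hexdigest()] = owner
    return key                               # handed once, to one person

def check_key(key):
    owner = issued.get(hashlib.sha256(key.encode()).hexdigest())
    audit.append((owner, owner is not None))
    return owner

def revoke(owner):
    for digest, who in list(issued.items()):
        if who == owner:
            del issued[digest]

alice_key = issue_key("alice")
assert check_key(alice_key) == "alice"
revoke("alice")
assert check_key(alice_key) is None   # a compromised key, individually revoked
```

Contrast this with a shared password, where revocation means resetting everyone at once and the audit trail cannot tell users apart.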
Resolution 5: Secure from end-to-end
Security is an end-to-end problem. In the Internet of Things the communications chain often involves several different devices: in a Smart City application, for example, data might hop from one lamppost to another, then to a gateway, then through a cellular link to a cloud service, then through the internet to the application. In the early IoT days each link used a different standard, and although each link might have been secure, a malicious user breaking into an intermediate routing point (e.g. by physically attacking a gateway) could break into the chain and see everything – with no-one able to detect that this had happened.
This is not how the Internet in general works, and IoT must now play catch-up: the Internet assumes that all intermediate points are vulnerable (indeed, public) and that security must therefore be established end-to-end. The increasing ability of edge devices to support Internet Protocol is exciting partly because it allows a true Internet of Things – using the Internet from end to end – and therefore allows the well-known end-to-end security protocols that we already rely on elsewhere to be applied to the IoT.
Read more about this subject by Googling “Alice and Bob”.
A final point to consider is whether you can keep data encrypted even when it’s at rest (i.e. stored on a disk in your cloud service). It is tempting to assume either that cloud services are invulnerable, or that if they are compromised then all is lost. But if one considers data-at-rest as just a point along the journey from one end to the other, then there’s a strong argument for keeping the data encrypted, particularly if it’s highly sensitive. For example, if you are handling video data from within people’s homes, you might need to decode the video metadata so that you can index the video within your service, but you might want to keep the actual video pixels encrypted so that the video never appears in the tabloids.
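The split between clear metadata and opaque payload can be sketched as below. A loud caveat: the XOR keystream here is a toy stand-in purely to keep the sketch self-contained – a real service would use an authenticated cipher such as AES-GCM from a vetted library, and the record layout and field names are hypothetical:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy XOR keystream for illustration ONLY -- never use this for real
    # data; a production system would use an authenticated cipher (AES-GCM).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

device_key = secrets.token_bytes(32)   # stays with the device/owner
pixels = b"...raw video frame bytes..."

record = {
    "metadata": {"camera": "hallway", "ts": 1483228800},  # indexable in clear
    "payload": xor_cipher(pixels, device_key),            # opaque at rest
}

assert record["payload"] != pixels                          # stored encrypted
assert xor_cipher(record["payload"], device_key) == pixels  # owner can decrypt
```

The service can search and index the metadata freely, but a breach of its storage yields only ciphertext for the sensitive payload.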
Resolution 6: Security is a process
Here’s a final “meta” New Year resolution to tie together the previous five. As we observed in the introduction, it’s best to think of security as an ongoing process. So are there any general good process techniques we can use to get security right? Sure – here are three:
- Audit: Assume that you have security problems, and frequently audit your systems and device estate to find them. Side benefit: by measuring the results of this audit over time you will see whether this vital metric is getting better or worse.
- Enforce: Any process can be done badly or not at all, particularly if humans are involved. So include checks and balances to enforce good process – a way to ensure that, for example, deployment cannot be completed until security provisioning has occurred.
- Automate: Done manually, security can add cost or complexity for your ops team or end-users. But the power-at-scale principle can be applied to carry out the above security processes automatically, making it efficient to correctly audit and enforce security (and, more generally, quality) at scale.
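An automated audit of the device estate can be as simple as the sketch below. The fleet records and the “firmware”/“uses_tls” fields are hypothetical stand-ins for whatever your own devices report, but the pattern – scan every device, flag the failures, reduce the result to one trackable metric – is the point:

```python
# Hypothetical device records, as might be pulled from a device-management API
fleet = [
    {"id": "dev-001", "firmware": "2.1", "uses_tls": True},
    {"id": "dev-002", "firmware": "1.9", "uses_tls": False},  # fails audit
    {"id": "dev-003", "firmware": "2.1", "uses_tls": True},
]

def audit(devices, required_firmware="2.1"):
    """Return (compliance score, list of failing device ids)."""
    failing = [d["id"] for d in devices
               if d["firmware"] != required_firmware or not d["uses_tls"]]
    score = 1 - len(failing) / len(devices)
    return score, failing

score, failing = audit(fleet)
print(f"{score:.0%} compliant; needs attention: {failing}")
```

Running this on a schedule and plotting `score` over time gives exactly the better-or-worse trend the Audit point above calls for, with enforcement as simple as blocking deployment while the failing list is non-empty.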
Conclusion – take security seriously in 2017 and sleep at night
The internet provides the good guys with huge power at scale but, as recent news has shown, that power is all too easily abused by the bad guys to do “hacking at scale”. The good news is that a handful of basic principles can help you get your security balance right – and the power of technology can be used to apply them consistently and efficiently.
The more technically-minded might like to go on to read the DevicePilot White Paper “IoT Security” here