Executive Summary
How can you innovate and scale faster? Based on a talk delivered at AWS Summit London in June 2017 - and leaning on his experience building multiple successful IoT startups, including the company which built British Gas Hive - Pilgrim Beart suggests ways to address some of the key challenges of modern innovation. In particular, Pilgrim explores the dynamics of early market ecosystems and how to make them work in your favour, and how to make the best use of your most precious asset - your people.
Building from best-of-breed parts
In a previous white paper we explored how the structure of the IoT market is evolving towards an open ecosystem (similar to the Web) where it’s increasingly possible for companies to build connected products out of off-the-shelf parts. Ecosystems emerge because they are economically efficient: allowing customer companies to concentrate on delivering their unique value-add, rather than re-inventing wheels, and allowing vendor companies to amortise development across greater scale.
So any company building an IoT solution can now follow roughly these steps:
- Identify what parts are needed (in our 2016 workshop “A-Z of the IoT Ecosystem” we attempted to enumerate a complete list of these parts)
- Draw up a shortlist of vendors for each part, analyse the strengths and weaknesses of each, and check whether there are any gaps between parts
- Select a best-of-breed vendor for each part
- Integrate
- Trial (which is usually when the need for something like DevicePilot becomes fully apparent)
- Scale-up!
As the pace of technology continues to increase, it’s not just a case of picking the right pieces for today, but also of architecting so that the pieces can be swapped out later as they improve, fail, merge and separate. It’s also worth considering whether your entire proposition might become a piece in someone else’s architecture.
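As an illustration of that swap-out flexibility, here is a minimal sketch (in Python, with entirely hypothetical names - this is not DevicePilot's or any vendor's actual API) of hiding a bought-in part behind a thin interface so it can be replaced without touching the rest of the solution:

```python
# Minimal sketch: isolate a bought-in part behind a thin interface so it can
# be swapped out later. All names here are hypothetical, for illustration only.
from abc import ABC, abstractmethod


class TelemetryStore(ABC):
    """The only surface the rest of our code is allowed to touch."""

    @abstractmethod
    def write(self, device_id: str, metric: str, value: float) -> None:
        ...


class InMemoryStore(TelemetryStore):
    """Stand-in used during trials; a vendor-backed store can replace it later."""

    def __init__(self):
        self.points = []

    def write(self, device_id, metric, value):
        self.points.append((device_id, metric, value))


def record_reading(store: TelemetryStore, device_id: str, temperature: float):
    # Application code depends only on the abstract interface, so swapping
    # vendors is a one-line change where the store is constructed.
    store.write(device_id, "temperature", temperature)


if __name__ == "__main__":
    store = InMemoryStore()
    record_reading(store, "device-001", 21.5)
    print(store.points)
```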
The various vendors of the parts must collaborate to ensure that there are no gaps between their parts, and that they integrate easily.
People increasingly precious
I’m now into my fifth decade on this planet, which means I’ve spent about thirty years working in start-ups, doing innovation in several different new markets. During that time I’ve noticed a gradual but profound shift which now amounts to something quite revolutionary. In the 1980s and 1990s we used to invest huge amounts of people-time in writing and optimising code – we had to, because of the severe constraints on compute and memory. But since then compute has halved in price every 18 months - and every 12 months for storage - while the number and cost of people has remained roughly constant. The cumulative effect is that it's increasingly important to optimise for people time by sweating the machines. The next few sections of this paper riff on this theme. Interestingly, it seems to mean that – far from stealing people’s jobs – machines are making people increasingly precious.
Optimise Late
That old-fashioned way of hand-coding also brought the unfortunate side-effect of hard-wiring the details of the solution, which ties the hands of the machines and the underlying frameworks and prevents them from continuously delivering benefits.
Let’s take a specific example¹. Imagine that we want to do edge-detection - a common function required for image-recognition. We could hand-code it in C, which probably takes several days of effort to optimise and results in something quite fragile (several security problems of recent years stem from the fact that C provides no run-time checking). So, recognising the importance of optimising for developer productivity, let's instead implement it in a higher-level language such as Python. Our developer productivity will increase (and the resulting code will be significantly more robust), but it runs slower – 3x slower. It’s a tribute to the army of software engineers who built the underlying frameworks that it’s only 3x slower.
One reason for the increased productivity is that Python is a much more expressive language – rather than iterating through each pixel individually, we can take advantage of patterns such as mapping (applying) a function across all pixels. This takes us towards a higher-level, declarative expression of our problem, rather than the low-level, imperative C implementation, and gives our machines much more flexibility in how they solve it.
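To make the contrast concrete, here is a minimal sketch (using NumPy and a deliberately simple horizontal-gradient edge detector - not the code from the cited article) of the same operation written imperatively and declaratively:

```python
# Minimal sketch of imperative vs declarative style, using a simple
# horizontal-gradient edge detector on a stand-in greyscale image.
import numpy as np

image = np.random.rand(480, 640)  # stand-in for a greyscale image

# Imperative, C-style: we spell out the pixel-by-pixel loop ourselves,
# leaving the runtime little freedom in how to execute it.
edges_loop = np.zeros_like(image)
for y in range(image.shape[0]):
    for x in range(image.shape[1] - 1):
        edges_loop[y, x] = abs(image[y, x + 1] - image[y, x])

# Declarative: state *what* we want (the difference between neighbouring
# columns) and let the framework decide *how* - it can vectorise,
# parallelise, or, with the right toolchain, target other hardware.
edges_vec = np.abs(np.diff(image, axis=1))

assert np.allclose(edges_loop[:, :-1], edges_vec)
```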
And it turns out that – sticking with high-level Python – we can now compile to programmable hardware (FPGAs on e.g. AWS F1) and run 11x faster than our hand-coded C. This then gives us the amazing win-win of higher programmer productivity (i.e. faster, cheaper) and execution which not only gets faster according to some learning curve, but can also take radical leaps forward.
We can consider this an example of “optimising late” – by leaving flexibility in how the problem is solved, it can be optimised late. The later you optimise, the more information you have to do a good job – and the better use you make of your precious developers.
¹ This is a real example taken from https://www.nextplatform.com/2017/06/05/python-coils-around-fpgas-broader-accelerator-reach/ . There are many other examples where a declarative approach is beneficial, e.g. map-reduce frameworks.
Ops increasingly precious
Operations people are increasingly precious, too. Anyone who’s worked with physical servers - or even virtual servers such as AWS EC2 - has experienced a disk filling up or a server crashing, usually at 3am. It takes a sizeable Ops team to provide effective 24/7 support for this kind of infrastructure. And we know from our experience building Hive at AlertMe, and from the experiences of our DevicePilot customers, that Ops can easily be 50-90% of the ongoing cost of IoT deployment and growth.
So it’s not surprising that the theme of the last decade – DevOps – is now giving way to the concept of “No-Ops”. If you use a serverless pattern, built on services such as AWS S3, Lambda and DynamoDB, then you have no disk to fill up and no server to crash. Amazon’s Ops people have more scale and experience in doing the underlying Ops support than you can ever afford.
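As a minimal sketch of that pattern (the table name, event shape and field names below are illustrative assumptions, not a DevicePilot or AWS-prescribed schema), a Lambda handler can store a device reading in DynamoDB with no server or disk for you to manage:

```python
# Minimal sketch of the serverless pattern: an AWS Lambda handler that stores
# an incoming device reading in DynamoDB. The table name "device-readings"
# and the event shape are assumptions made for illustration.
import json
from decimal import Decimal

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("device-readings")  # hypothetical table name


def handler(event, context):
    # There is no server or disk for us to manage here: AWS runs the function
    # on demand and DynamoDB handles storage, replication and scaling.
    reading = json.loads(event["body"])
    table.put_item(Item={
        "device_id": reading["device_id"],
        "timestamp": reading["timestamp"],
        # DynamoDB expects Decimal rather than float for numeric values
        "value": Decimal(str(reading["value"])),
    })
    return {"statusCode": 200, "body": json.dumps({"stored": True})}
```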
Attention increasingly precious
The end-user is increasingly precious too – or at least their time is. Let’s look at IoT in the home as an example. A decade ago in 2007 we typically had one connected device in our homes – our PC. That was the year that the iPhone was launched, and now we have not only iPhones, iPads and the like, but also IoT devices such as Hive, Nest, Sky etc. – there’s a great mobile app called “Fing” which finds all the devices on your network. Give it a try and you’ll probably find more than 10 devices connected.
And this trend will continue. In another decade typical homes will have 100+ connected devices. I’ve no idea what half of those will be, but I’m confident it will happen because the costs of connecting devices are falling and the benefits of connecting are rising, and those kinds of technological curves usually drive inexorable change.
The problem this creates is that there are still the same number of humans in the picture, with the same finite amount of attention. And so the amount of attention they can spend per device has to be much smaller.
This in turn means that increasingly we have to consider the customer’s needs first (such as their limited attention), use those to drive the product/proposition definition, and in turn use that to drive the technology. Too often in the past the chain of causation has run in the opposite direction, with customers forced to put up with whatever the technology dictated – but no more.
Iterate faster – experiments!
Most of us will have heard of techniques such as Agile, Continuous Integration, Continuous Deployment and Test-Driven Development. They vastly improve technology delivery by breaking up the mammoth 18-month waterfall projects of the past, which by the time they launched had inevitably overrun and failed to deliver what the market (by then) wanted. But they are about more than just tech development.
Short sprints allow users to try the product early and often, and to feed back. This prevents the deliverables from drifting away from market need and encourages a mind-set of continuous experiments – it’s OK for an experiment to fail if it only took the team a week to try it. This in turn allows a process of “co-creation” – jointly developing a product with the help of its users.
Summary
We’ve seen that the challenge is basically to move faster. A great way to achieve that is to build as much as possible from off-the-shelf parts which are themselves moving faster - but that does necessitate coping with some flux.
- We should aim to write less code, and to optimise late, enabling the machines and the people who create new frameworks for them to hand us future step-change benefits almost for free. The serverless pattern of AWS services such as Lambda and S3 increasingly allows a "no-Ops" approach.
- We should aim to co-create with our customers by rapid iteration, and we should seek to “buy not build” the parts needed for our solution, so we can focus our precious people time on the things that no-one else can do.
- And finally the environment which increasingly makes all of this possible – and to which we willingly contribute in the expectation of that reward – is the ecosystem. The IoT ecosystem is gathering pace today as customers identify the "best-of-breed" vendors of each part. As an aside, I note from our customer engagements at DevicePilot that at least half of our pipeline customers are either using AWS IoT, actively moving to it, or at least considering it.
About the author
Pilgrim Beart FIET is a Computer Scientist, serial entrepreneur and lifelong innovator. His career in innovation began when he joined a startup as a teenager, continued through three startups in Silicon Valley in the 1990s, and has since included co-founding five technology companies in the UK. Presently CEO of DevicePilot, his previous startup was AlertMe, which developed Europe's most successful Connected Home platform, known in the UK as Hive, and was sold to British Gas in 2015 for $100m.
About DevicePilot
DevicePilot is a Cloud service which helps companies deploy their IoT devices at scale. A complete Service Assurance solution, it provides visibility, monitoring and automation across your entire device estate, from trials to scale-up.
DevicePilot:
- Runs on Amazon Web Services (AWS) and is an AWS Advanced Technology Partner
- Integrates seamlessly with AWS IoT - so you can use DevicePilot to manage your AWS IoT devices without writing a single line of code
- Is an AWS Marketplace Partner - so you can buy DevicePilot through your AWS account
Contact us today to explore the value of choosing DevicePilot as part of your IoT ecosystem.