Connected communities and the circular flow of data
Sensors are permeating our lives. They’re all around us – from smartphones that record temperature and vibration, to water pumps underneath our streets that monitor the flow of water.
The Internet of Things (IoT) is the use of the data captured from those sensors to drive smarter action, whether automated or human-led. Its ubiquity creates connections between areas of our lives that were previously separate – for example, the triangle of dependencies emerging between public services, manufacturers and insurers.
Circular data flow
We have reached a point where we can control individual lightbulbs with our smartphones. Using the insight from those connections transforms the way homes can be managed and safeguarded. It affects our insurance premiums because there is data on whether a tap has been left running, whether a house is occupied when it should be empty, or whether electricity usage spikes. Manufacturers are developing new products to meet growing demand for connected devices and to be part of this new ecosystem.
Let’s extend the lighting example into the public sector. Connected street lighting is self-maintaining, with the ability to predict when lights need changing. City councils and insurers can get a real-time understanding of what’s going on in the neighbourhood; in turn, insurance premiums and council spending can be more effectively targeted. The flow of data is circular: the lightbulb is connected to the insurer; manufacturers get maintenance data for future product design.
While the data from connected devices is itself quite straightforward, it's the volumes that are challenging: in this case, approximately 13 million homes, each with multiple devices. Supercomputing power is essential for the efficient and timely capture and processing of high volumes of data for decision-making. In the real-time world, any latency has consequences. Here, supercomputing is less about data storage and more about how to compute – or hypercompute – the data deluge. And the analysis isn't binary: with many streams of data coming in, supercomputing algorithms interpret what's happening in multi-dimensional environments and prescribe responses.
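To make the compute-over-storage point concrete, here is a minimal sketch of stream processing: readings are evaluated as they arrive, keeping only a small rolling window per device rather than storing the full history for batch analysis. The device name, window size and spike threshold are illustrative assumptions, not details from the article.

```python
from collections import deque

def detect_spikes(readings, window=5, factor=2.0):
    """Flag readings that exceed `factor` times the rolling mean.

    `readings` is any iterable of (device_id, value) pairs; processing
    is incremental, so nothing is stored beyond the rolling window.
    """
    history = {}  # device_id -> recent values
    for device_id, value in readings:
        recent = history.setdefault(device_id, deque(maxlen=window))
        if len(recent) == window and value > factor * (sum(recent) / window):
            yield (device_id, value)  # anomaly: act on it now, don't batch it
        recent.append(value)

# Hypothetical electricity readings (kWh) from one connected home
stream = [("meter-1", v) for v in [0.3, 0.4, 0.3, 0.35, 0.3, 1.2, 0.3]]
print(list(detect_spikes(stream)))  # the 1.2 kWh spike is flagged
```

At 13 million homes the same per-device, fixed-memory pattern is what keeps latency bounded: each reading costs a small constant amount of work regardless of how long the stream has been running.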
Supercomputing can be used with local government and public agencies to build data hubs that stream anonymised data in connected communities. Real-time data on activities and movements is analysed, together with information on council assets (council offices, crime data, council gym activities, cycle hire schemes and so on), to enable a council to deliver better community outcomes. This is about using data and supercomputing to diagnose what's happening and direct resources (both real-time and planned) in a better-targeted, more holistic way – for example, which streets should be lit and when, which public spaces should be staffed, where local policing should be visible, and where urban redevelopment should be focused. This data can also be shared with insurers and manufacturers so they can take action.
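A hub of this kind need not expose individual households. As a hedged sketch of how anonymisation might work before data leaves the hub (the field names and the threshold are assumptions for illustration), per-household events can be reduced to area-level counts, with thinly populated areas suppressed in the style of a k-anonymity threshold:

```python
from collections import Counter

def anonymise(events, k=3):
    """Aggregate per-household events into area-level counts,
    dropping household identifiers and suppressing any area with
    fewer than `k` contributors (a simple k-anonymity-style rule)."""
    counts = Counter(area for _household_id, area in events)
    return {area: n for area, n in counts.items() if n >= k}

# Illustrative cycle-hire docking events: (household_id, area)
events = [(1, "north"), (2, "north"), (3, "north"), (4, "south"), (5, "south")]
print(anonymise(events))  # {'north': 3}; 'south' suppressed (only 2 households)
```

The council still sees where activity concentrates, while nothing identifying an individual household survives the aggregation step.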
Digital Vision for Supercomputing & Big Data
This article is part of the Atos Digital Vision for Supercomputing & Big Data opinion paper. The challenge for any organisation is how to turn data into tangible advantage. Becoming truly data-driven is perhaps our most definitive step into the digital age. In our Digital Vision for Supercomputing & Big Data, we explore the implications for organisations and what lies ahead.