Artificial Intelligence as the second phase for IoT – Part 1

Posted on: December 5, 2018 by Jose Gato

The Fourth Industrial Revolution strongly relies on IoT-related technologies and devices, since "they have great potential to continue to connect billions more people to the web, drastically improve the efficiency of business and organizations and help regenerate the natural environment through better asset management"[1].

On one hand, after more than ten years of research and innovation, IoT is now a mature, transformative technology, with billions of devices deployed across a huge number of real production environments and use cases, and it has demonstrated its business and societal impact. On the other hand, the landscape is still too complex: ecosystems are crowded with countless platforms and protocols, which hinder interoperability and create silos. Moreover, security and privacy are centre stage, as recent massive attacks on safety-critical infrastructures have leveraged unsecured IoT devices[2].

At this moment we are immersed in the Second Phase of IoT: it is time to exploit all the gathered data (and the unlimited data still to come) to turn these pieces of silicon, chips, copper and steel into more functional devices. According to a recent Accenture report[3], "by 2020, smart sensors and other Internet of Things devices will generate at least 507.5 zettabytes of data".

What is next? How do we address the challenge of transforming data into intelligence? Different data analysis techniques have appeared in recent years, but IoT scenarios are inherently complex, and AI is a natural partner for this second phase. Now it is time to focus on using all this data, and AI technologies are mature enough to provide the tools for the required analysis.

Among the different technologies that make up AI, Machine Learning is the most suitable for IoT scenarios. At its core, it detects patterns and behaviours in gathered data: the more data you have, the more past experience can be analysed and the better the resulting models. These trained models are then used to predict future situations and to behave intelligently based on experience.

Gathering data from the past is not an issue for IoT scenarios, and the expected intelligence of devices can be modelled from these patterns: for instance, automatically setting the temperature of a thermostat (depending on the time of day, number of people in the room, personal comfort, external weather, etc.), predicting traffic flows (depending on the time of day, holidays, events, weather), and many other examples.
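The thermostat example can be sketched in a few lines: a minimal "learn from past experience" predictor that averages the setpoints chosen in the most similar past situations (a simple nearest-neighbour approach). All data, feature choices and function names here are illustrative assumptions, not a real implementation.

```python
import math

# Hypothetical past observations: (hour_of_day, people_in_room, outside_temp_C)
# paired with the setpoint the user actually chose. All values are made up.
history = [
    ((7, 2, 5.0), 21.0),
    ((9, 0, 8.0), 17.0),
    ((13, 3, 12.0), 20.5),
    ((19, 4, 6.0), 22.0),
    ((23, 1, 3.0), 18.5),
]

def predict_setpoint(features, k=3):
    """Predict a thermostat setpoint as the average of the k most
    similar past situations (plain Euclidean distance)."""
    ranked = sorted(history, key=lambda item: math.dist(item[0], features))
    nearest = ranked[:k]
    return sum(setpoint for _, setpoint in nearest) / len(nearest)

# A weekday evening with three people in the room and mild weather:
print(round(predict_setpoint((18, 3, 7.0)), 1))
```

The same idea scales up to real ML models: more history and more correlated data sources (weather feeds, calendars, presence sensors) yield better patterns.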

As these examples show, models and patterns rely not only on the amount of data but also on the integration (and correlation) of data sources.

Which factors (data sources) affect intelligent behaviour? Some are trivial, but others require deeper study (butterfly effect?[4]).

Similarly, other kinds of techniques, such as Image Recognition, detect patterns and shapes in images. Imagine smart farming scenarios where drones detect the size and colour of different plants (based on prior analysis of millions of images), enabling smarter farming.

Even though AI techniques have been applied at the cloud level for several years (as a matter of fact, the concept, applied to Computer Science, was coined in the 1950s[5]), the challenges brought about by the explosion of IoT demand different solutions, closer to the vast number of devices that are constantly generating and streaming huge amounts of data.

The first and foremost impact, which can be easily inferred, is scalability. A cloud-based AI system that must handle the information of thousands or millions of devices at the same time quickly becomes untenable. The computational cost of processing all that data (e.g., training the models/neural networks), together with the transmission time required to deliver the information to the appropriate server(s) (networking cost), introduces a non-negligible latency that prevents these solutions from actuating in near real time.
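A back-of-envelope calculation illustrates the networking side of this argument. Every figure below (device count, payload size, message rate, uplink capacity) is an assumption chosen for illustration, not a measurement:

```python
# Back-of-envelope estimate of the networking cost of a purely
# cloud-based AI pipeline. All figures are illustrative assumptions.
DEVICES = 1_000_000       # connected sensors
BYTES_PER_MSG = 1_000     # ~1 KB payload per reading
MSGS_PER_SEC = 1          # one reading per second per device
UPLINK_GBPS = 10          # assumed aggregate uplink to the cloud

ingress_bps = DEVICES * BYTES_PER_MSG * MSGS_PER_SEC * 8  # bits per second
uplink_bps = UPLINK_GBPS * 1e9

print(f"ingress: {ingress_bps / 1e9:.1f} Gbit/s")
print(f"uplink utilisation: {ingress_bps / uplink_bps:.0%}")
```

With these (modest) numbers, a million one-per-second sensors already consume most of a 10 Gbit/s uplink before any training or inference has happened, which is why pushing computation closer to the devices is attractive.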

Moreover, it is also worth highlighting that transmitting all this information over the Internet opens the door to a wide range of potential threats and vulnerabilities (e.g., tampering, man-in-the-middle attacks, sensitive data disclosure) and makes it more complex to comply with privacy protection regulations such as the GDPR.

All in all, bringing the potential of AI directly to IoT devices seems a sensible way to overcome these drawbacks. I will explain more in my next post.


[1] Marr, Bernard, "Why Everyone Must Get Ready For The 4th Industrial Revolution". Forbes (blog). Retrieved 2016-12-12.

[2] C. Kolias, G. Kambourakis, A. Stavrou and J. Voas, "DDoS in the IoT: Mirai and Other Botnets," in Computer, vol. 50, no. 7, pp. 80-84, 2017.





About Jose Gato
Head of Internet of Everything Lab, Research & Innovation, Atos Spain
Jose Gato Luis holds a Master's Degree in Computer Engineering and has more than ten years of experience in the ICT sector in the fields of software development, open source, and innovative technologies. He currently works as Head of the Internet of Everything Lab at Atos Research & Innovation (ARI), leading technical work in multiple IoT research projects on integration & interoperability, security, data gathering and, more recently, ML technologies applied to IoT. He also has previous experience managing European and national research projects based on open source, mobile technologies, social networks, augmented reality, data mining and semantics, and logistics and transport. In addition, he has taught open source software technologies on a Master's Degree programme at the University Rey Juan Carlos.