Do you know what makes IoT systems so easy to kill? Quality!
When web-cams or DVRs are mass-produced and sold today, they all contain exactly the same software: typically a full Linux system including a web server, a database, assorted supporting components, and the actual application you bought the device for. Bit for bit, each unit is a perfect digital copy.
It also very likely includes some obscure bug: nothing you’d notice when you actually use the device, but something that allows a crook to take the gadget you just bought and paid for and turn it into a member of a robot army answering to him, not you.
Mirai, the botnet that brought down major parts of the Internet in 2016, was just a demonstration of that power, proof that cheap IoT simply cannot be made safe… right?
By the very nature of how IT has been done for decades, once the crook finds a return-oriented programming (ROP) exploit on one device, all he has to do is gather his army, because a million others run exactly the same binary code. A ROP exploit chains together snippets of code that already sit in the binary, so it hinges on those snippets being at exactly the addresses the attacker expects. That’s why it’s worth the trouble: invest once, gain big time!
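The economics of that reuse can be sketched in a few lines of Python. This is a toy model with made-up byte values, not a real exploit: a “firmware image” is just a byte string, and the “gadget” is a byte pattern whose offset the attacker finds once and hard-codes.

```python
# Toy model (hypothetical byte values): a "firmware image" is just a byte
# string, and a ROP "gadget" is a byte pattern at a known offset.
firmware = (
    b"\x55\x48\x89\xe5"   # some code the device actually needs
    b"\x48\x31\xc0"
    b"\x5d\xc3"           # the bytes the attacker wants to reuse as a gadget
    b"\x90\x90"
)

def gadget_offset(image: bytes, gadget: bytes) -> int:
    """Return the offset of the gadget in the image, or -1 if absent."""
    return image.find(gadget)

# A million devices, each a bit-for-bit copy of the same image:
fleet = [firmware] * 1_000_000

gadget = b"\x5d\xc3"                      # e.g. 'pop rbp; ret' on x86-64
offset = gadget_offset(fleet[0], gadget)  # found once, on one device...

# ...and valid on every single device in the fleet:
assert all(gadget_offset(image, gadget) == offset for image in fleet)
print(f"one exploit, {len(fleet):,} devices")
```

One analysis pays off a million times over, which is precisely the attacker’s business model.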
And it’s only because we are being foolish about quality!
Not too little! Too much! Let me explain:
Programmers don’t write the binary code that gets deployed. Pretty much every part of the Linux software stack that winds up in this tiny computer of yours was written in some kind of high-level language. Only at compile time does the optimizer work through perhaps hundreds of functionally identical variants of the code the programmer wrote, to come up with the one variant it judges “best”: typically the fastest, though today it might be the most energy-efficient. From there the linker takes all these chunks of binary object code and fits them together in some preferred order, perhaps alphabetically, perhaps grouping things it thinks belong together, perhaps simply one after the other. In fact, almost any order works, because modern CPUs jump quickly in any direction.
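To make “functionally identical variants” concrete, here is a sketch in Python rather than machine code: three different ways to write the same tiny function, hypothetical stand-ins for the kind of equivalent forms an optimizer weighs and discards.

```python
# Three functionally equivalent variants of the same small function, the
# sort of choice an optimizer makes hundreds of times per compilation unit.
def clamp_v1(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

def clamp_v2(x, lo, hi):
    return min(max(x, lo), hi)

def clamp_v3(x, lo, hi):
    return lo if x < lo else (hi if x > hi else x)

# Same result from all three, for any input:
for x in range(-10, 20):
    assert clamp_v1(x, 0, 9) == clamp_v2(x, 0, 9) == clamp_v3(x, 0, 9)
```

A caller cannot tell the three apart by their results; only their shape differs. The compiler keeps one and throws the rest away.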
And IoT devices rarely care about speed anyway: they don’t run your SAP or a scientific workload. Mostly they just wait for a bit of action, and 99% of the variants the optimizer threw away as not “good enough” would do just as well.
As a matter of fact: They would do better! As would a linker that’s actually a bit messy and not so nit-picky about the proper order in which code winds up loaded into memory.
Because if each IoT device were to get another one of these functionally equivalent pieces of code, each computing the very same result in a slightly different manner, the crook would have a real problem: he’d have to find a different exploit for every single device he wants to control. He’d never try, because he’d know that “That crime don’t pay!”
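The effect on the attacker can be sketched by extending the earlier toy model (again with hypothetical byte values): every device links the identical functional pieces in its own order, the way a deliberately “messy” linker might, and the attacker’s hard-coded gadget offset stops generalizing.

```python
from itertools import permutations

# Toy illustration (hypothetical byte values): the same functional pieces,
# linked in a different order on every device.
pieces = [
    b"\x55\x48\x89\xe5",  # piece A
    b"\x48\x31\xc0",      # piece B
    b"\x5d\xc3",          # piece C: the bytes the attacker wants as a gadget
    b"\x90\x90",          # piece D
]

# Each "device" gets the identical pieces, joined in its own order:
fleet = [b"".join(layout) for layout in permutations(pieces)]

gadget = b"\x5d\xc3"
offset = fleet[0].find(gadget)   # the attacker finds the gadget on device 0

# Reusing that hard-coded offset now works on very few other devices:
hits = sum(1 for img in fleet if img[offset:offset + len(gadget)] == gadget)
print(f"exploit offset reusable on {hits} of {len(fleet)} devices")
```

Every image still contains every piece, so every device still works; but the reused offset hits only 2 of the 24 toy layouts. Scale the same idea up to real binaries and the invest-once-gain-big business model collapses.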
When we enter new territory, we tend to do things just the way we have always done them back home. We may be ready for new threats and challenges, but we rarely stop to ask whether the old threats and constraints still exist in the new place.
We carry the very same software build process we use on big, expensive servers in our data centers, where it serves us well, into the wild and dangerous spaces of the Internet of Things, and then we despair: the complex security processes that counter threats in the data center are economically impossible to duplicate across a vastly bigger population of cheap, small devices. And by doing so, we quite literally let the solution run through our fingers, or rather through those of the optimizing compiler.
All we need to do is deploy all these variants the optimizer discarded in the domain where speed doesn’t even matter, and we’ll wind up with a million diverse Things which have a totally new quality: Not worth exploiting!
This blog is part of a series on Next-generation architecture. For more information, read the Journey 2022 Resolving Digital Dilemmas Thought Leadership report, researched and written by the Atos Scientific Community.