Perhaps the better question is: would any IoT standards reduce cybercriminal activity? The internet of things has yet to reach maturity, which means manufacturers and programmers haven't settled on a standardized set of protocols for either security or personal safety. In fact, at this point in the game, device makers can barely agree on communication channels: Z-Wave, Bluetooth, and Wi-Fi all vie to become the dominant protocol.
And yet the technology hasn't paused to allow standards to catch up. Developers are busy thinking of new and even more fantastical uses for automation, without a great deal of thought as to how to secure these innovations. The rise of driverless cars and autopilot systems, such as Tesla's autonomous steering feature, highlights just how dangerous this tech can be when not properly secured. In 2015, professional researchers were able to hack into a Jeep Cherokee and remotely cut the transmission, effectively demonstrating that the IoT industry has larger issues on its hands than some mishandled personal data. Meanwhile, there are other risks of bodily harm and property loss: last year, University of Michigan researchers devised a malware app to break into leading smart home security devices and steal the PIN that opens the front door.
There are hundreds of other cases just like these, all of which drive home the need for better standards and a more methodical approach to device and network security. Of course, that's easier said than done. Several challenges stand in the way of such standardization, from the lack of uniform testing procedures to the difficulty, at this point, of assigning legal responsibility when something does go wrong. Here are some considerations to keep in mind as IoT progresses into a fully fledged ecosystem.
Problems with open software versus customized solutions
Security experts believe that one of the problems endemic to smart devices is that manufacturers rush to push out new products. In their haste to beat competitors to market, they often rely on open-source software with known security flaws.
Even proprietary platforms can cause issues, however. Take, for instance, Samsung's SmartThings framework, which offers an open API for app developers. Security failures in SmartThings allowed apps to access every function and device in a connected home, rather than only the functions they actually needed. Essentially that meant an enterprising hacker could develop an innocuous-seeming app, like the battery life monitor one group of researchers created in a University of Michigan study, that quietly gained access to high-security functionality, such as the locking feature on a smart security system.
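The fix for this class of flaw is the principle of least privilege: an app declares the capabilities it needs up front, and the hub refuses any call outside that grant. The sketch below illustrates the idea in Python; the class and capability names are hypothetical, not SmartThings' actual API.

```python
# Hypothetical sketch of capability-scoped device access (illustrative only,
# not SmartThings' real API). A "battery monitor" app granted only
# read_battery cannot reach the door lock, no matter what it tries.

class CapabilityError(Exception):
    """Raised when an app calls a capability it was never granted."""

class DeviceHub:
    def __init__(self):
        # device name -> {capability: handler}; toy stand-ins for real devices
        self._devices = {
            "front_door": {"lock": lambda: "locked", "unlock": lambda: "unlocked"},
            "sensor_1": {"read_battery": lambda: 87},
        }

    def grant(self, requested):
        """Return a handle restricted to the requested capabilities only."""
        return ScopedHandle(self, set(requested))

    def _invoke(self, granted, device, capability):
        if capability not in granted:
            raise CapabilityError(f"app not granted '{capability}'")
        return self._devices[device][capability]()

class ScopedHandle:
    def __init__(self, hub, granted):
        self._hub = hub
        self._granted = granted

    def call(self, device, capability):
        return self._hub._invoke(self._granted, device, capability)

hub = DeviceHub()
battery_app = hub.grant(["read_battery"])          # narrow grant at install time
print(battery_app.call("sensor_1", "read_battery"))  # allowed: 87

try:
    battery_app.call("front_door", "unlock")       # over-privilege is blocked
except CapabilityError as err:
    print(err)
```

In the Michigan study, the platform effectively handed every app the equivalent of a full grant; enforcing a narrow grant at install time, as sketched here, is what closes that hole.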
The research, of course, spurred Samsung into action, and the company released multiple security updates over the following months. However, the root issue is not uncommon: Samsung purchased SmartThings when it was merely a fledgling IoT startup and simply assumed the developers had already implemented the high security standards expected of a major technology company. This illustrates the core of the problem: mainstream tech companies and appliance manufacturers are so eager to snatch up innovative IoT properties that they forget they must nurture these startups past their cobbled-together beginnings.
Testing devices proves challenging for developers
That said, developers should not be held completely responsible for security flaws. Testing these devices involves so many conditions and factors that it's often impossible to replicate every real-life environment an IoT product might encounter, and such recreations are expensive and time-consuming to construct.
Additionally, such devices are only as strong as their weakest link: many rely on interaction with systems and services maintained externally by a completely separate third party. For instance, a developer using Amazon's source code in their product has to trust that Amazon's protocols are complete and its networks secure. In some cases, QA testers may not even be able to access such subcomponents during their device's trial run. Issues like these make it difficult to tease out not only flaws in device security, but also responsibility in the case that security is breached.
Legal responsibility: a missing ingredient for IoT security
Cohesive standards don't just factor into device manufacturing: all providers involved in the IoT device industry need to agree on a standardized process for designating legal responsibility. A number of players, including the Department of Homeland Security, are currently working together to define legal standards for IoT security. The DHS best practices recommendations advise that designers tackle the responsibility of securing devices from the very first conceptual phases. In this way, designers can build device functionality that accounts not just for handling various tasks, but also for how the product behaves when those operations are interrupted.
Additionally, developers and manufacturers need to take a more integrated perspective when designing devices, offering multiple-level security checks at the network, application, and physical layers. Of course, that may sound wearying to homeowners who already believe that IoT setup makes for a confusing and time-consuming home improvement project. The challenge for developers will be in building complex security structures that don’t hamper device usability. Still, with so much attention on cybersecurity, universal standards are likely only a hair’s breadth away. All we have to do now is wait.
About Erin Vaughan
Erin Vaughan is a blogger, gardener and aspiring homeowner. She currently resides in Austin, TX where she writes full time for Modernize, with the goal of empowering homeowners with the expert guidance and educational tools they need to take on big home projects with confidence.