If we want to build a world that doesn’t require an internet connection but can still make use of ubiquitous sensors, as Alasdair argues above, then we need smarter sensors. For an idea of what that might look like, I consulted with Pete Warden, an expert in machine learning.
Warden is the former CTO of Jetpac, a company purchased by Google in 2014. He recently left Google and is now thinking about how to disconnect devices from the internet by running machine learning on microcontrollers and other devices at the edge. By putting machine learning directly on a sensor, engineers can build devices that don’t need an internet connection for basic tasks.
A microcontroller can handle sensing tasks such as wake-word detection for a limited set of phrases, which could enable voice activation for lamps or speakers. We could also put object recognition on a sensor (recognizing people or rodents, say) to build devices such as people counters or rat traps.
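To make the idea concrete, here is a toy sketch of what on-device wake-word detection looks like in spirit: extract a few features from an audio frame, run a small fixed model that was trained offline and flashed with the firmware, and act locally with no network round trip. This is an illustration I wrote, not code from Warden's work; the weights, threshold, and feature scheme are all made up, and a real device would use something like MFCC features and a small neural network.

```python
import math

# Hypothetical weights, "trained" offline and baked into the firmware.
# A locked-down sensor would ship with these fixed and pre-tested.
WEIGHTS = [0.8, -0.5, 0.3, 0.9]
BIAS = -0.2
THRESHOLD = 0.7

def features(audio_frame):
    """Toy feature extraction: mean absolute amplitude in four sub-bands.
    A real device would compute spectral features such as MFCCs."""
    n = len(audio_frame) // 4
    return [sum(abs(s) for s in audio_frame[i * n:(i + 1) * n]) / n
            for i in range(4)]

def detect(audio_frame):
    """Run the fixed model on one frame; True means the wake word fired.
    Everything happens locally -- no cloud call, no connectivity."""
    x = features(audio_frame)
    logit = sum(w * xi for w, xi in zip(WEIGHTS, x)) + BIAS
    score = 1.0 / (1.0 + math.exp(-logit))  # logistic activation
    return score > THRESHOLD

# A loud frame trips the detector; near-silence does not.
loud = [0.9, -0.9] * 32
quiet = [0.01, -0.01] * 32
print(detect(loud), detect(quiet))
```

The point of the sketch is the architecture, not the model: the decision is made entirely on the device, so turning on a lamp never requires an internet connection.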
In a research paper written with several others, Warden argues that if we build such smarter sensors, we should also make them incredibly simple, abstracting away much of the complexity currently associated with running machine learning on a sensor. He and his fellow authors offer four use cases: keyword detection, object recognition, transcribing numbers or letters from an analog sensor into digital form, and finger-tap recognition. They suggest that such sensors wouldn’t have connectivity, and would ship with a pre-tested and authenticated ML algorithm already loaded on the device.
Smarter sensors would make it easier for developers to build products using those sensors. And having the sensor locked down, with an established algorithm already embedded on the device, would ensure a level of security and accountability. Yes, there would be costs to this approach, such as the loss of the full customization that could otherwise reduce a sensor's overall cost and power usage.
However, I do think that eliminating the need for the ML experts and embedded hardware engineers currently required to design a custom sensor would more than make up for those costs. And if a device took off, it might then make sense to engineer a custom sensor for the workload, much as cloud computing companies build their own servers to tune for optimal power and performance characteristics.
There is a lot to the paper, and if you are interested in stepping back to rebuild the IoT without some of the layers of complexity and weakness it currently features, I recommend spending some time reading it. With smarter sensors we can provide a lot of functionality without incurring ongoing cloud costs. We can also create an audit trail for ML algorithms and reduce the attack surface available to hackers.
This audit trail, along with the testing of such sensors before they get deployed in the field, also offers regulators and academics the chance to test algorithms for bias or unwanted societal effects. For example, it would be much harder for someone who wanted to build a wake-word detection sensor keyed to the phrase “abortion” or “baby” to get their algorithm into devices unless they were an expert at machine learning and building embedded systems.
The paper showcases a movement, and a potential business plan for a company that makes sensors according to its principles. I’d like to see such an effort take place, because I don’t believe the benefits of TinyML belong to the few firms with the engineering capabilities to design and implement them. I’d like to see what could be done if such smart sensors were available to everyone.