MIT researchers’ new tech will bring deep learning to IoT


In what could eventually be a great leap for smart devices, MIT researchers have developed a system that brings deep learning neural networks to small devices, like the tiny computer chips in wearable medical devices, household appliances, and the roughly 250 billion other objects that constitute the Internet of Things (IoT).

Called MCUNet, the system designs compact neural networks that deliver “unprecedented” speed and accuracy for deep learning on IoT devices, despite those devices’ limited memory and processing power.

The research, according to an announcement by MIT, will be presented at next month’s Conference on Neural Information Processing Systems.

The lead author is Ji Lin, a PhD student in Song Han’s lab in MIT’s Department of Electrical Engineering and Computer Science. Co-authors include Han and Yujun Lin of MIT, Wei-Ming Chen of MIT and National Taiwan University, and John Cohn and Chuang Gan of the MIT-IBM Watson AI Lab.

The Problem

IoT devices often run on microcontrollers — computer chips with no operating system, minimal processing power, and less than one thousandth of the memory of a typical smartphone. That makes pattern-recognition tasks like deep learning difficult to implement on IoT devices directly. For complex analysis, IoT-collected data is instead often sent to the cloud, leaving it vulnerable to hacking.
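To put those constraints in perspective, here is a rough feasibility check in Python. The 1 MB flash budget echoes Han’s quote later in this article; the SRAM budget and all model and layer sizes are illustrative assumptions, not MCUNet’s measured numbers.

```python
# Back-of-the-envelope feasibility check for a microcontroller target.
# The 1 MB flash budget comes from Han's quote in this article; the
# 320 KB SRAM budget and all layer/model sizes are assumptions.

FLASH_BUDGET = 1 * 1024 * 1024   # bytes of flash for weights + code
SRAM_BUDGET = 320 * 1024         # bytes of RAM for activations

def flash_needed(num_params, bytes_per_weight=1):
    """Weights dominate flash; int8 quantization costs 1 byte per weight."""
    return num_params * bytes_per_weight

def peak_sram(activation_sizes):
    """Peak RAM is roughly the largest input+output activation pair."""
    return max(a + b for a, b in zip(activation_sizes, activation_sizes[1:]))

# A MobileNetV2-scale model (~3.4M parameters) overflows flash even
# when quantized to int8...
print(flash_needed(3_400_000) <= FLASH_BUDGET)   # False: 3.4 MB > 1 MB
# ...while a network a fifth that size fits comfortably.
print(flash_needed(700_000) <= FLASH_BUDGET)     # True

# Activations must also fit: these per-layer sizes (bytes) are made up.
print(peak_sram([150_000, 200_000, 120_000, 60_000]) <= SRAM_BUDGET)  # False
```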

The Solution

With MCUNet, Han’s group co-designed two components needed for “tiny deep learning” — the operation of neural networks on microcontrollers. One component is TinyEngine, an inference engine that directs resource management, akin to an operating system. TinyEngine is optimized to run a particular neural network structure, which is selected by MCUNet’s other component: TinyNAS, a neural architecture search algorithm.

Designing a deep network for microcontrollers isn’t easy, again because of those tight memory limits.

So Lin developed TinyNAS, a neural architecture search method that creates custom-sized networks. “We have a lot of microcontrollers that come with different power capacities and different memory sizes,” Lin told MIT News. “So we developed the algorithm [TinyNAS] to optimize the search space for different microcontrollers.”

The customized nature of TinyNAS means it can generate compact neural networks with the best possible performance for a given microcontroller — with no unnecessary parameters. “Then we deliver the final, efficient model to the microcontroller,” says Lin.
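As a rough illustration of the two-stage idea (first pick a search space that fits the chip, then search within it), here is a hedged Python sketch. The SearchSpace class, its cost formulas, and the use of mean FLOPs as an accuracy proxy are simplified stand-ins, not the researchers’ exact method.

```python
import random

class SearchSpace:
    """Toy search space defined by a width multiplier and input resolution."""
    def __init__(self, width, resolution):
        self.width, self.resolution = width, resolution

    def sample(self):
        # Vary depth randomly to mimic sampling an architecture; the
        # cost formulas below are stand-ins, not a real profiler.
        depth = random.randint(8, 20)
        flash = int(1_000_000 * self.width ** 2 * depth / 14)
        sram = int(3 * self.resolution ** 2 * 16 * self.width)
        flops = flash * self.resolution ** 2 // 100
        return {"flash": flash, "sram": sram, "flops": flops}

def fits(model, flash_budget, sram_budget):
    return model["flash"] <= flash_budget and model["sram"] <= sram_budget

def choose_search_space(spaces, flash_budget, sram_budget, n=500):
    """Keep the space whose feasible samples have the highest mean
    FLOPs: bigger models that still fit tend to be more accurate."""
    best, best_score = None, -1.0
    for space in spaces:
        feasible = [m for m in (space.sample() for _ in range(n))
                    if fits(m, flash_budget, sram_budget)]
        if feasible:
            score = sum(m["flops"] for m in feasible) / len(feasible)
            if score > best_score:
                best, best_score = space, score
    return best

spaces = [SearchSpace(w, r) for w in (0.3, 0.5, 0.7, 1.0)
          for r in (96, 128, 160, 224)]
best = choose_search_space(spaces, flash_budget=1_000_000, sram_budget=320_000)
print(best.width, best.resolution)
# Stage two (not shown): run an architecture search *within* the chosen
# space to produce the final network for this microcontroller.
```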

To run that tiny neural network, a microcontroller also needs a lean inference engine. A typical inference engine carries some dead weight — instructions for tasks it may rarely run. The extra code poses no problem for a laptop or smartphone, but it could easily overwhelm a microcontroller. “It doesn’t have off-chip memory, and it doesn’t have a disk,” says Han. “Everything put together is just one megabyte of flash, so we have to really carefully manage such a small resource.” Cue TinyEngine.
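One way to manage so little memory, in the spirit of what Han describes, is to plan every buffer at compile time: because the network is fixed in advance, each activation can be assigned a static offset in a single memory arena, and no runtime allocator (or operating system) is needed. The ping-pong layout below is a simplified illustration, not TinyEngine’s actual scheduler.

```python
# Compile-time memory planning sketch: assign every layer's activation
# a static offset in one arena, so no allocator is needed at runtime.
# The two-ended "ping-pong" layout is an illustrative simplification.

def plan_arena(activation_sizes):
    """Alternate layer outputs between the two ends of one arena."""
    arena = max(a + b for a, b in zip(activation_sizes, activation_sizes[1:]))
    offsets = []
    for i, size in enumerate(activation_sizes):
        # even layers write at the start of the arena, odd at the end
        offsets.append(0 if i % 2 == 0 else arena - size)
    return arena, offsets

sizes = [150_000, 96_000, 64_000, 32_000]   # bytes per layer (assumed)
arena, offsets = plan_arena(sizes)
print(arena)     # 246000: the peak adjacent input+output pair
print(offsets)   # [0, 150000, 0, 214000]
```

Because each layer reads from one end of the arena and writes to the other, consecutive buffers never overlap, and total RAM use equals the single worst input-plus-output pair rather than the sum of all layers.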

The researchers developed their inference engine in conjunction with TinyNAS. TinyEngine generates the essential code necessary to run TinyNAS’ customized neural network. Any deadweight code is discarded, which cuts down on compile-time. “We keep only what we need,” says Han. “And since we designed the neural network, we know exactly what we need. That’s the advantage of system-algorithm codesign.”
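A minimal sketch of that code-generation idea follows: instead of shipping an interpreter that supports every possible operator, the generator emits a call for exactly the layers the searched network uses, so unused kernels are never compiled in. The template strings and layer list here are hypothetical; the real TinyEngine emits C kernels specialized to each layer’s shape.

```python
# Emit inference code for one specific network. Operator kinds the
# network never uses simply never appear in the generated source.
# Templates and layer names are hypothetical placeholders.

KERNEL_TEMPLATES = {
    "conv3x3": "conv3x3(in={i}, out={o});",
    "dwconv":  "depthwise_conv(ch={o});",
    "avgpool": "global_avg_pool();",
    "fc":      "fully_connected(in={i}, out={o});",
}

def generate_inference_code(layers):
    """One line of generated code per layer, nothing else."""
    return "\n".join(KERNEL_TEMPLATES[kind].format(i=i, o=o)
                     for kind, i, o in layers)

net = [("conv3x3", 3, 16), ("dwconv", 16, 16),
       ("avgpool", 16, 16), ("fc", 16, 10)]
print(generate_inference_code(net))
```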

After codesigning TinyNAS and TinyEngine, Han’s team put MCUNet to the test.

The Challenge

MCUNet’s first challenge was image classification. The researchers used the ImageNet database to train the system on labeled images, then tested its ability to classify novel ones. On a commercial microcontroller they tested, MCUNet successfully classified 70.7 percent of the novel images — the previous state-of-the-art neural network and inference engine combo was just 54 percent accurate.

“Even a 1 percent improvement is considered significant,” says Lin. “So this is a giant leap for microcontroller settings.”
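For reference, the figures above are top-1 accuracy: the share of held-out images whose single highest-scoring predicted label matches the ground truth. A toy computation (the labels below are stand-ins, not ImageNet data):

```python
# Top-1 accuracy: fraction of images whose argmax prediction is correct.
predictions = ["cat", "dog", "plane", "dog"]   # model's top guesses
labels      = ["cat", "dog", "plane", "cat"]   # ground truth
top1 = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(f"{top1:.1%}")   # 75.0%
```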

Source: MIT News

Image credit: MIT
