TensorFlow Lite for Microcontrollers

Translation / TF Community Translation Team

Introduction: We previously introduced "The Future of Machine Learning – Miniaturization (Part 1) & (Part 2)", and many readers expressed great interest and raised questions. This article provides a detailed introduction to TensorFlow Lite for Microcontrollers. Readers who are interested in or working in this area are welcome to get in touch with us.

TensorFlow Lite for Microcontrollers is an experimental port of TensorFlow Lite, designed for microcontrollers and other devices with only a few kilobytes of memory.

It can run directly on "bare metal" without requiring operating system support, any standard C/C++ libraries, or dynamic memory allocation. The core runtime fits in 16 KB on an Arm Cortex-M3; with enough operators to run a speech keyword detection model, the total footprint is 22 KB.

Getting Started

To quickly get started and run TensorFlow Lite for Microcontrollers, please read Getting Started with Microcontrollers.

Note: Getting Started with Microcontrollers link

https://tensorflow.google.cn/lite/microcontrollers/get_started

Why Microcontrollers Are Important

Microcontrollers are typically small, low-power computing devices, often embedded in hardware that requires only basic computation, such as household appliances and IoT devices. Billions of microcontrollers are produced each year.

Microcontrollers are usually optimized for low power consumption and small size, but at the cost of reduced processing power, memory, and storage. Some microcontrollers have features to optimize performance for machine learning tasks.

By running machine learning inference on microcontrollers, developers can add AI to a wide range of hardware devices without relying on a network connection, avoiding the bandwidth and power costs of connectivity and the high latency it can introduce. Running inference on-device also helps protect privacy, since no data leaves the device.

Features and Components

  • C++ API, with a core runtime that requires only 16 KB on a Cortex-M3

  • Uses the standard TensorFlow Lite FlatBuffer schema

  • Pre-generated project files for popular embedded development platforms like Arduino, Keil, and Mbed

  • Optimized for multiple embedded platforms

  • Example code demonstrating speech keyword detection

Development Workflow

The process for deploying a TensorFlow model to a microcontroller is as follows:

  1. Create or Obtain a TensorFlow Model. The model must be small enough to fit on your target device after conversion, and it can only use supported operations. If you want to use operations that are not currently supported, you can provide your own implementations.

  2. Convert the Model to a TensorFlow Lite FlatBuffer. Use the TensorFlow Lite Converter to convert the model to the standard TensorFlow Lite format. You may want to output a quantized model, since quantized models are smaller and more efficient to execute.

  3. Convert the FlatBuffer to a C Byte Array. The model is stored in read-only program memory and provided as a simple C file. Standard tools (for example, xxd) can be used to convert the FlatBuffer to a C array.

  4. Integrate the TensorFlow Lite for Microcontrollers C++ Library. Write your microcontroller code to run inference using the C++ library.

  5. Deploy to Your Device. Build the program and deploy it to your device.
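As a concrete illustration of step 3, the sketch below shows roughly what the generated C file looks like and a cheap sanity check you can run before handing the buffer to the interpreter. The names `g_model` and `g_model_len` and the byte values are hypothetical (tools like `xxd -i` pick names from the input filename); the one documented fact relied on here is that a TensorFlow Lite FlatBuffer carries the file identifier "TFL3" at bytes 4 through 7.

```cpp
#include <cstddef>
#include <cstring>

// Hypothetical output of step 3: the converted FlatBuffer emitted as a C
// array and placed in read-only program memory. The bytes shown are
// illustrative, not a real model.
alignas(16) const unsigned char g_model[] = {
    0x1c, 0x00, 0x00, 0x00,  // FlatBuffer root offset (illustrative)
    'T',  'F',  'L',  '3',   // FlatBuffer file identifier for TFLite models
    // ... remaining model bytes elided ...
};
const size_t g_model_len = sizeof(g_model);

// Sanity check: a TensorFlow Lite FlatBuffer has "TFL3" at bytes 4..7.
// Catching a truncated or mis-converted array here is much easier than
// debugging an interpreter failure on-device.
bool LooksLikeTfliteModel(const unsigned char* buf, size_t len) {
  return len >= 8 && std::memcmp(buf + 4, "TFL3", 4) == 0;
}
```

Keeping the model in a `const` array lets the linker place it in flash rather than RAM, which matters on devices with only a few kilobytes of working memory.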

Note: Build and Convert Model link

https://tensorflow.google.cn/lite/microcontrollers/build_convert#%E8%BD%AC%E6%8D%A2%E6%A8%A1%E5%9E%8B

Understanding C++ Library link

https://tensorflow.google.cn/lite/microcontrollers/library

Supported Platforms

One of the challenges of embedded software development is the presence of many different architectures, devices, operating systems, and build systems. Our goal is to support as many popular combinations as possible and to make it easy to add support for other devices.

If you are a product developer, you can download build instructions or pre-generated project files for supported platforms such as Arduino, Keil, and Mbed.

If your device is not yet supported, adding support may not be difficult. You can learn about the process in the README.md.

Note: README.md link

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/README.md#how-to-port-tensorflow-lite-micro-to-a-new-platform

Portable Reference Code

If you have not yet considered a specific microcontroller platform, or just want to try out the code before starting the porting process, the simplest method is to download platform-independent reference code.

Note: Reference Code link

https://drive.google.com/open?id=1cawEQAkqquK_SO4crReDYqf_v7yAwOY8

The archive contains many folders, each containing the source files needed to build a binary file. Each folder has a simple Makefile, and you should be able to load the files into almost any IDE and build them. We also provide a pre-configured Visual Studio Code project file, so you can easily browse the code in a cross-platform IDE.

Note: Visual Studio Code link

https://code.visualstudio.com/

Goals

Our design goals are to make the framework readable, easy to modify, well-tested, easy to integrate, and fully compatible with TensorFlow Lite through a consistent file structure, interpreter, API, and kernel interface.

You can read more about the design goals and trade-offs.

Note: Goals and Trade-offs link

https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro#goals

Limitations

TensorFlow Lite for Microcontrollers is designed around the specific constraints of microcontroller development. If you are working on a more powerful device (for example, an embedded Linux device such as a Raspberry Pi), the standard TensorFlow Lite framework may be easier to integrate.

Consider the following limitations:

  • Only supports a limited subset of TensorFlow operations

  • Only supports a limited number of devices

  • Low-level C++ API requires manual memory management
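The last limitation deserves a short illustration. In this library, the caller provides a fixed block of working memory (a "tensor arena") up front, and the runtime allocates everything from it without ever calling malloc. The sketch below is a hypothetical, simplified allocator of that style, not the library's actual implementation; it just shows the pattern the C++ API expects you to manage by hand.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical fixed-buffer "arena" allocator, sketching the manual memory
// management style used on microcontrollers: all working memory comes from
// one statically allocated buffer, so no heap is ever touched.
class ArenaAllocator {
 public:
  ArenaAllocator(uint8_t* buffer, size_t size)
      : buffer_(buffer), size_(size), used_(0) {}

  // Hands out 8-byte-aligned chunks; returns nullptr when the arena is
  // exhausted. The caller must check for this - there is no exception,
  // no resize, and no free.
  void* Allocate(size_t bytes) {
    size_t aligned = (used_ + 7) & ~size_t{7};  // round up to 8 bytes
    if (aligned + bytes > size_) return nullptr;
    used_ = aligned + bytes;
    return buffer_ + aligned;
  }

  size_t used() const { return used_; }

 private:
  uint8_t* buffer_;
  size_t size_;
  size_t used_;
};
```

When you integrate the real library, you pass a similarly fixed-size buffer to the interpreter, and choosing a large enough size for your particular model is part of the manual memory management the API requires.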

Note: Limited Subset link

https://tensorflow.google.cn/lite/microcontrollers/build_convert#%E6%94%AF%E6%8C%81%E7%9A%84%E6%93%8D%E4%BD%9C

If you want to learn more about TensorFlow Lite for Microcontrollers, refer to the following documentation, which explores many of the topics mentioned in this article in depth:

  • TensorFlow Lite for Microcontrollers overview (https://tensorflow.google.cn/lite/microcontrollers/overview)

  • FlatBuffers (https://google.github.io/flatbuffers/)

  • Speech keyword detection example code (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/micro_speech)

  • Building and Converting Models (https://tensorflow.google.cn/lite/microcontrollers/build_convert#%E8%BD%AC%E6%8D%A2%E6%A8%A1%E5%9E%8B)

  • Understanding the C++ Library (https://tensorflow.google.cn/lite/microcontrollers/library)

  • Supported Platforms (https://tensorflow.google.cn/lite/microcontrollers/overview)

  • Adding Support: README.md (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/README.md#how-to-port-tensorflow-lite-micro-to-a-new-platform)
