Meet Sipeed’s TinyMaix: An Open-Source Lightweight Machine Learning Library For Microcontrollers

Sipeed TinyMaix is an open-source machine learning library designed for microcontrollers. It is lightweight enough to run on the Microchip ATmega328 MCU found in the Arduino UNO board and its many clones.

The core code of TinyMaix, which was created during a weekend hackathon, is roughly 400 lines, compiles to a binary of about 3 KB, and uses very little RAM, allowing it to run MNIST handwritten-digit classification on an ATmega328 MCU with only 2 KB of SRAM and 32 KB of flash.

TinyMaix emphasizes:

  1. Small footprint: the core code (tm_layers.c + tm_model.c + arch_O0.h) has fewer than 400 lines, the compiled .text section is under 3 KB, and MNIST classification uses less than 1 KB of RAM.
  2. A simple user interface: just load and run models.
  3. INT8/FP32 model support.
  4. Compatibility with multiple architectures: ARM SIMD/NEON, MVEI, RV32P, and RV64
  5. Full support for static memory configuration.

While other machine learning libraries, such as TensorFlow Lite for Microcontrollers, microTVM, or NNoM, already exist, Sipeed claims that TinyMaix is a simpler TinyML library. It does not depend on libraries like CMSIS-NN and forgoes many newer features. Following this design objective, compiling TinyMaix requires only five source files.

The project’s GitHub repository includes complete instructions for usage, training, and model conversion from Keras H5 or TensorFlow Lite formats, and the source code is released under the permissive Apache 2.0 license. Although it was not yet available at the time of writing, Sipeed is also working to add support for online model training via MaixHub.

According to their article, future TinyMaix features may include:

  • A Concat OP for MobileNet v2 support (but it uses twice the memory and may be slow)
  • An INT16 quantized model format for higher accuracy and better SIMD/RV32P acceleration, at the cost of a larger footprint
  • Winograd convolution optimization for faster inference, at the expense of more RAM and memory bandwidth




Tanushree Shenwai is a consulting intern at MarktechPost. She is currently pursuing her B.Tech from the Indian Institute of Technology (IIT), Bhubaneswar. She is a Data Science enthusiast and has a keen interest in the scope of application of artificial intelligence in various fields. She is passionate about exploring new advancements in technologies and their real-life applications.
