On-device machine learning opens up new and exciting opportunities for our devices. Because inference runs locally rather than on a server, it reduces latency and makes interactions feel more responsive. Even better, on-device ML does not require network connectivity at all.
Development teams deploying on-device ML on Android today encounter some common challenges. Apps are size constrained, so bundling and managing additional libraries just for ML can be a significant cost. And unlike server-based ML, the compute environment is highly heterogeneous, with wide differences in performance, stability, and accuracy across devices. Maximizing reach often leads companies to target older, more broadly available APIs, which limits their ability to use the latest advances in ML.
To address these problems, Google released a new mobile ML stack to support developers using on-device machine learning. The Android ML Platform is built around TensorFlow Lite. Among other goals, it aims to ease the difficulty of deploying models across the wide range of device types, and to improve inference speed so that predictions come back more quickly once a trained model is on the device.
With the Android ML Platform, developers get built-in on-device inference essentials, including optimized performance and reduced APK size. It also provides a consistent API across different versions of the operating system, making on-device ML more reliable to build against.
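To make the inference workflow concrete, here is a minimal Kotlin sketch of running a single prediction with the TensorFlow Lite `Interpreter` API, which sits at the core of this stack. The model file name, input shape, and `numClasses` parameter are illustrative placeholders, not part of the platform's documented surface; a real app would load the `.tflite` file from its assets and match the tensor shapes of its own model.

```kotlin
import org.tensorflow.lite.Interpreter
import java.io.File

// Sketch: run one inference pass over a float feature vector with a
// bundled .tflite classification model. Assumes a model whose input is
// a single float array and whose output is one row of class scores.
fun classify(modelFile: File, input: FloatArray, numClasses: Int): FloatArray {
    Interpreter(modelFile).use { interpreter ->
        // Output buffer shaped [1, numClasses] to receive the scores.
        val output = Array(1) { FloatArray(numClasses) }
        interpreter.run(arrayOf(input), output)
        return output[0]
    }
}
```

Because the interpreter runs entirely on the device, this call involves no network round trip, which is where the latency and offline benefits described above come from.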