PyTorch Lightning Team Introduces Lightning Flash, Which Allows Users To Infer, Fine-Tune, And Train Models On Their Data

Source: https://pytorchlightning.ai/

What is Flash?

Flash is a collection of tasks for fast prototyping, baselining, and fine-tuning scalable Deep Learning models, built on PyTorch Lightning. It offers a seamless experience from baseline experiments to state-of-the-art research. It enables users to build models without being intimidated by the details, and to experiment flexibly with Lightning when they need complete versatility.

PyTorch Lightning is an open-source Python library that provides a high-level interface for PyTorch. It has received an excellent response for decoupling research code from engineering boilerplate, and for enabling seamless distributed training, logging, and reproducibility of Deep Learning research code. Many research labs and AI companies around the globe use Lightning to simplify the training of PyTorch models.

Starting a Deep Learning project can be quite overwhelming: it takes time to get a baseline model running on a new dataset or an out-of-domain task. With Flash, users can create an image or text classifier in a few lines of code, without needing custom modules or research experience, as in the sketch below.
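The quick-start from the Lightning Flash announcement fine-tunes an image classifier on an example dataset in a handful of lines. The sketch below follows the early Flash 0.x API; module paths and argument names (for example, flash.vision and valid_folder) have shifted in later releases.

```python
import flash
from flash.core.data import download_data
from flash.vision import ImageClassificationData, ImageClassifier

# 1. Download an example dataset (ants vs. bees)
download_data("https://pl-flash-data.s3.amazonaws.com/hymenoptera_data.zip", "data/")

# 2. Organize the data into a DataModule
datamodule = ImageClassificationData.from_folders(
    train_folder="data/hymenoptera_data/train/",
    valid_folder="data/hymenoptera_data/val/",
)

# 3. Build the task, taking the number of classes from the data
model = ImageClassifier(num_classes=datamodule.num_classes)

# 4. Fine-tune with a frozen pretrained backbone
trainer = flash.Trainer(max_epochs=1)
trainer.finetune(model, datamodule=datamodule, strategy="freeze")

# 5. Save the fine-tuned model for later inference
trainer.save_checkpoint("image_classification_model.pt")
```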

The standard workflow for any new Deep Learning project follows the same three-step cycle: find an example repository that tackles a similar task, adapt it to your data, and iterate. But such example repositories rarely scale to production training and inference, and most are not maintained beyond a few weeks or months. Data loading is usually hardcoded to specific benchmark datasets, which leads to multiple incompatible standards for representing a Task. It is nearly impossible to modify or build upon the code without first mastering how that particular codebase works.

With Flash, users can quickly get a Lightning baseline to benchmark their experiments against, using proven backbones for common data patterns. Flash replaces this cycle, allowing users to focus on the science rather than the infrastructure. It gives data scientists, developers, and Kagglers easy access to Lightning’s power, and makes baselining trivial for more experienced researchers.

In keeping with PyTorch Lightning’s aim of getting rid of boilerplate, Flash intends to make training, fine-tuning, and inference with Lightning models quick and flexible. Users can override their Task code with Lightning and PyTorch to find the right level of abstraction for their skill set.
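Because a Flash Task is ultimately a LightningModule, dropping down a level of abstraction is ordinary subclassing. A minimal sketch, assuming the early flash.vision module layout:

```python
import torch
from flash.vision import ImageClassifier

# Override a single Lightning hook to customize training while keeping
# the rest of the Task's behavior intact.
class MyClassifier(ImageClassifier):
    def configure_optimizers(self):
        # Swap the default optimizer for SGD with momentum
        return torch.optim.SGD(self.parameters(), lr=1e-3, momentum=0.9)

model = MyClassifier(num_classes=2)
```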

PyTorch Lightning GitHub: https://github.com/PyTorchLightning/pytorch-lightning

How Flash Works

Flash comprises a collection of Tasks. Flash Tasks are laser-focused objects that use SOTA approaches to solve everyday problems, and they are designed to make inference, fine-tuning, and training harmonious. Flash currently supports image classification, image embedding, tabular classification, text classification, summarization, and translation.
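In the early releases, these Tasks live in domain-specific modules. The import layout below matches Flash 0.x and is shown as a sketch; names may have moved in later versions:

```python
# Domain-specific task modules in early Flash releases
# (the layout may differ in later versions).
from flash.vision import ImageClassifier, ImageEmbedder
from flash.tabular import TabularClassifier
from flash.text import TextClassifier, SummarizationTask, TranslationTask
```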

Flash Tasks include all the essential information needed to solve the problem at hand: the number of class labels to predict, the number of columns in the given dataset, and details of the model architecture used, such as the loss function and optimizer. Users can choose which architecture and training behavior to use for a given implementation by overriding the loss function or optimizer.
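A Task’s constructor exposes these knobs directly. The argument names below (backbone, loss_fn, optimizer, learning_rate) follow early Flash releases and are a sketch rather than a stable API:

```python
import torch
import torch.nn.functional as F
from flash.vision import ImageClassifier

# Configure the architecture and optimization at construction time.
model = ImageClassifier(
    num_classes=10,
    backbone="resnet34",         # pick a different pretrained backbone
    loss_fn=F.cross_entropy,     # override the loss function
    optimizer=torch.optim.Adam,  # override the default optimizer
    learning_rate=1e-3,
)
```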

Flash is the first high-level framework to provide consistent support for distributed training and inference of Deep Learning models. Built on PyTorch Lightning, Flash Tasks can be trained and fine-tuned on any hardware, including CPUs, GPUs, and TPUs, without any changes to the code.
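Since flash.Trainer subclasses the Lightning Trainer, switching hardware is a matter of Trainer flags. The flag names below follow Lightning 1.x, and each line assumes the corresponding hardware is actually available:

```python
import flash

# Same task code, different hardware: only the Trainer flags change.
trainer = flash.Trainer(max_epochs=3)                               # CPU
# trainer = flash.Trainer(max_epochs=3, gpus=2, accelerator="ddp")  # multi-GPU
# trainer = flash.Trainer(max_epochs=3, tpu_cores=8)                # TPU
```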

Flash GitHub: https://github.com/PyTorchLightning/lightning-flash

PyTorch Lightning: https://pytorchlightning.ai/

Source: https://medium.com/pytorch/introducing-lightning-flash-the-fastest-way-to-get-started-with-deep-learning-202f196b3b98
