NVIDIA AI Releases the TensorRT Model Optimizer: A Library to Quantize and Compress Deep Learning Models for Optimized Inference on GPUs

Generative AI, despite its impressive capabilities, is held back by slow inference speed in real-world applications. Inference speed is the time a model takes to produce an output after it is given a prompt or input. Unlike their analytical counterparts, generative AI models require complex calculations to generate creative text, images, or other outputs. Imagine a generative model asked to create a realistic image or video of a complex scene: it must account for lighting, texture, and object placement, all of which demand significant processing power. These hefty compute demands make such models expensive to run at scale. 

As these models grow in size and complexity, the need to efficiently produce results to serve numerous users simultaneously continues to escalate. Accelerated inference speeds are crucial for generative AI to reach its full potential. Faster processing allows for smoother user experiences, quicker turnaround times, and the ability to handle larger workloads, which are all essential for practical applications. 

Researchers from NVIDIA aim to accelerate the inference speed of generative AI models by expanding their inference offerings. There is a growing need for robust model optimization techniques that reduce memory footprint and accelerate inference while maintaining model accuracy. NVIDIA's researchers address these challenges by introducing the NVIDIA TensorRT Model Optimizer, a comprehensive library of cutting-edge post-training and training-in-the-loop model optimization techniques.

Current methods for model optimization often lack comprehensive support for advanced techniques such as post-training quantization (PTQ) and sparsity. Techniques like filter pruning and channel pruning remove unnecessary connections within the model, streamlining calculations and accelerating inference. In contrast, quantization methods convert the model's data to lower-precision formats, reducing memory usage and enabling faster computations. These methods provide the fundamentals but often lack the calibration algorithms required for accurate quantization. Further, achieving 4-bit floating-point inference without compromising accuracy remains a challenge. In response to these limitations, NVIDIA's TensorRT Model Optimizer offers advanced calibration algorithms for PTQ, including INT8 SmoothQuant and INT4 AWQ. Moreover, it addresses the accuracy drop of 4-bit inference by providing Quantization Aware Training (QAT) integrated with leading training frameworks.
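To make the role of calibration concrete, here is a minimal sketch of post-training quantization using simple max-based calibration for symmetric INT8. This is illustrative only: the function names are hypothetical, and Model Optimizer's SmoothQuant and AWQ algorithms use far more sophisticated calibration than the max-of-samples heuristic shown here.

```python
# Illustrative PTQ sketch: calibrate a symmetric INT8 scale from sample
# activations, then quantize/dequantize values with it. Hypothetical names;
# not the TensorRT Model Optimizer API.

def calibrate_scale(samples, num_bits=8):
    """Pick a scale so the largest observed magnitude maps to the int range."""
    max_abs = max(abs(x) for x in samples)
    qmax = 2 ** (num_bits - 1) - 1  # 127 for INT8
    return max_abs / qmax if max_abs > 0 else 1.0

def quantize(x, scale, num_bits=8):
    """Round to the nearest integer level and clamp to the representable range."""
    qmax = 2 ** (num_bits - 1) - 1
    q = round(x / scale)
    return max(-qmax - 1, min(qmax, q))

def dequantize(q, scale):
    """Map an integer level back to an approximate real value."""
    return q * scale

calibration_data = [0.02, -1.27, 0.64, 0.9, -0.33]
scale = calibrate_scale(calibration_data)   # 1.27 / 127 = 0.01
q = quantize(0.5, scale)                    # stored as the integer 50
print(dequantize(q, scale))                 # close to 0.5, within one scale step
```

The point of the calibration step is visible in the last lines: a poorly chosen scale would either clip large values or waste integer levels, which is exactly the failure mode that algorithms like SmoothQuant are designed to avoid for activation outliers.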

The TensorRT Model Optimizer leverages advanced techniques such as post-training quantization and sparsity to optimize deep learning models for inference. With PTQ, developers can reduce model complexity and accelerate inference while preserving accuracy. For example, leveraging INT4 AWQ, a Falcon 180B model can fit onto a single NVIDIA H200 GPU. In addition, QAT enables 4-bit floating-point inference without degrading accuracy by computing scaling factors during training and incorporating simulated quantization loss into the fine-tuning process. The Model Optimizer also offers post-training sparsity techniques, providing additional speedups while preserving model quality.
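The idea behind QAT's "simulated quantization loss" can be sketched with a toy example: the forward pass uses a fake-quantized weight so the training loss sees quantization error, while gradient updates are applied to the full-precision weight (the straight-through estimator). This is a hypothetical illustration in plain Python, assuming a fixed 4-bit grid; it is not the Model Optimizer's QAT implementation.

```python
# Toy QAT sketch: fit y = w * x while constraining w to a 4-bit grid.
# The loss is computed on the fake-quantized weight, but the update is
# applied to the full-precision latent weight (straight-through estimator).

def fake_quant(w, scale, num_bits=4):
    """Simulate low-precision storage: quantize, then immediately dequantize."""
    qmax = 2 ** (num_bits - 1) - 1
    q = max(-qmax - 1, min(qmax, round(w / scale)))
    return q * scale

scale, lr = 0.25, 0.01   # assumed quantization step and learning rate
w = 0.0                  # full-precision "latent" weight
data = [(1.0, 1.3), (2.0, 2.6), (3.0, 3.9)]  # true slope is 1.3

for _ in range(200):
    for x, y in data:
        w_q = fake_quant(w, scale)     # forward pass uses the quantized weight
        grad = 2 * (w_q * x - y) * x   # gradient of squared error w.r.t. w_q
        w -= lr * grad                 # ...passed straight through to w

print(fake_quant(w, scale))  # lands on a 4-bit-representable value near 1.3
```

Because training already accounts for the rounding error, the deployed low-precision weight sits at a grid point the loss has vetted, which is how QAT recovers the accuracy that direct post-training rounding to 4 bits would lose.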

The TensorRT Model Optimizer has been evaluated, qualitatively and quantitatively, on various benchmark models to ensure its efficiency across wide-ranging tasks. In tests on a Llama 3 model, INT4 AWQ delivered a 3.71x speedup over FP16. When FP8 and INT4 were compared against FP16 on different GPUs, FP8 achieved a 1.45x speedup on the RTX 6000 Ada and a 1.35x speedup on the L40S without FP8 MHA; INT4 performed similarly, with a 1.43x speedup on the RTX 6000 Ada and a 1.25x speedup on the L40S without FP8 MHA. When the optimizer is used for image generation, INT8 and FP8 produce images whose quality is almost identical to the FP16 baseline while speeding up inference by 35 to 45 percent.

In conclusion, the NVIDIA TensorRT Model Optimizer addresses the pressing need for accelerated inference speed for generative AI. By providing comprehensive support for advanced optimization techniques such as post-training quantization and sparsity, it enables developers to reduce model complexity and accelerate inference while preserving model accuracy. The integration of Quantization Aware Training (QAT) further facilitates 4-bit floating-point inference without compromising accuracy. Overall, the Model Optimizer achieved significant performance improvements, as evidenced by MLPerf Inference v4.0 results and benchmarking data.
