Google AI Proposes ‘MLGO’: A Machine Learning Guided Compiler Optimization Python Framework

Since the invention of modern computers, there has been constant demand for optimization and faster code compilation. Large data-center programs benefit greatly from optimization, while mobile and embedded systems, as well as software installed on protected boot partitions, require smaller code. As the field has matured, ever more complicated heuristics have severely constrained the remaining headroom, hindering both maintenance and further advancement. Recent studies have demonstrated that compiler optimization can benefit significantly from replacing complex heuristics with ML strategies. Nevertheless, adopting ML in general-purpose, industrial-strength compilers remains challenging. To address this problem, a group of Google Research engineers has presented “MLGO: a Machine Learning Guided Compiler Optimizations Framework,” the first broad industrial-grade framework for systematically integrating ML techniques with LLVM, a well-known open-source industrial compiler infrastructure used to build critical high-performance software. MLGO uses reinforcement learning to train neural networks that produce decision policies capable of replacing heuristics in LLVM. The team has disclosed two MLGO optimizations for LLVM: the first uses inlining to reduce code size, and the second uses register allocation to improve code performance. Both optimizations are available in the LLVM source tree and have been deployed in real-world applications.

MLGO reduces code size by making inlining decisions that allow redundant code to be removed. Production code consists of thousands of functions that call one another (caller-callee pairs). During the inlining phase, the compiler traverses the call graph over all caller-callee pairs and decides whether or not to inline each pair. This is a sequential decision-making process: earlier inlining decisions change the call graph and can therefore influence subsequent decisions and the final outcome. The inline/no-inline decision was formerly made by a heuristic that became increasingly hard to improve. MLGO replaces this heuristic with a machine learning model: during the call-graph traversal, the compiler consults a neural network to decide whether to inline a given caller-callee pair, then applies the decisions sequentially until the entire call graph has been traversed.
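The traversal described above can be sketched as a worklist loop in which a policy object stands in for the neural network. This is a minimal illustrative sketch, not the actual LLVM/MLGO API; all class, method, and feature names here are hypothetical, and the size-threshold policy merely imitates what a trained model would decide.

```python
from collections import deque

# Toy sketch of MLGO-style inlining: a learned policy replaces the
# hand-written inline/no-inline heuristic during call-graph traversal.
# All names are illustrative, not the actual LLVM/MLGO interfaces.

class CallGraph:
    def __init__(self, edges, sizes):
        self.edges_ = list(edges)   # (caller, callee) pairs
        self.sizes = dict(sizes)    # function -> instruction count
        self.inlined = []

    def edges(self):
        return list(self.edges_)

    def features(self, caller, callee):
        # Real MLGO extracts many IR features; here, just the callee size.
        return {"callee_size": self.sizes[callee]}

    def inline(self, caller, callee):
        self.inlined.append((caller, callee))
        # Inlining the callee's body exposes the callee's own calls
        # as new caller-callee pairs rooted at the caller.
        new = [(caller, c) for (f, c) in self.edges_ if f == callee]
        self.edges_.extend(new)
        return new

class SizePolicy:
    """Stand-in for the trained neural network: inline small callees."""
    def __init__(self, threshold):
        self.threshold = threshold

    def decide(self, features):
        return features["callee_size"] <= self.threshold

def inline_pass(graph, policy):
    # Sequential decision process: each decision can mutate the graph
    # and therefore influence later decisions.
    worklist = deque(graph.edges())
    while worklist:
        caller, callee = worklist.popleft()
        if policy.decide(graph.features(caller, callee)):
            worklist.extend(graph.inline(caller, callee))
    return graph.inlined

# main -> helper -> leaf; inlining helper into main exposes the new
# pair (main, leaf), which is then decided on in turn.
g = CallGraph(edges=[("main", "helper"), ("helper", "leaf")],
              sizes={"main": 50, "helper": 4, "leaf": 2})
decisions = inline_pass(g, SizePolicy(threshold=5))
print(decisions)  # [('main', 'helper'), ('helper', 'leaf'), ('main', 'leaf')]
```

Note how the decision on `(main, leaf)` only exists because an earlier decision rewrote the graph, which is precisely why inlining is framed as a sequential decision problem rather than a set of independent choices.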
MLGO trains the decision network with RL, using policy-gradient and evolution-strategies methods. Online RL alternates between running compilation (to gather data) and training (to refine the policy). During the inlining phase, the compiler consults the model under training when deciding whether or not to inline; once compilation is complete, it produces a log of the sequential decision process. The trainer then uses the log to update the model, and this process repeats until a satisfactory model is obtained. The trained policy is embedded in the compiler to provide inline/no-inline decisions during compilation; unlike in the training scenario, the policy does not produce a log. The model is turned into executable code with TensorFlow's XLA ahead-of-time (AOT) compilation, which avoids a TensorFlow runtime dependency and minimizes the extra time and memory cost that ML model inference would otherwise introduce at compilation time.
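The compile-log-train loop can be illustrated with a deliberately tiny sketch. This is not the MLGO trainer: the policy is a single threshold rather than a neural network, the reward is a made-up code-size saving, and the update rule is a simple evolution-strategies-flavored hill climb (perturb the policy, keep the perturbation if the compilation reward improves). All names and numbers are assumptions for illustration only.

```python
import random

class ThresholdPolicy:
    """Stand-in for the neural policy: inline if the callee is small."""
    def __init__(self, threshold):
        self.threshold = threshold

    def decide(self, features):
        return features["callee_size"] <= self.threshold

def compile_and_log(policy, decision_points):
    """One 'compilation': consult the policy at every inline decision,
    logging (features, action, reward) as the sequential decision trace."""
    log, total = [], 0.0
    for feats in decision_points:
        act = policy.decide(feats)
        # Toy reward: inlining a small callee saves size, a big one costs.
        r = (5 - feats["callee_size"]) if act else 0.0
        log.append((feats, act, r))
        total += r
    return log, total

def train(decision_points, iters=200, seed=0):
    """Alternate compilation and policy update until reward stops improving:
    a crude evolution-strategies loop standing in for the real trainer."""
    rng = random.Random(seed)
    policy = ThresholdPolicy(threshold=0.0)
    _, best = compile_and_log(policy, decision_points)
    for _ in range(iters):
        candidate = ThresholdPolicy(policy.threshold + rng.gauss(0, 2))
        _, reward = compile_and_log(candidate, decision_points)
        if reward > best:                 # keep only improving perturbations
            policy, best = candidate, reward
    return policy, best

# A fixed corpus of inline decision points (callee sizes 1..12).
corpus = [{"callee_size": s} for s in (1, 2, 3, 8, 12)]
policy, reward = train(corpus)
print(policy.threshold, reward)
```

For this corpus the reward is bounded above by 9.0 (inline only the callees of size 1, 2, and 3), so the loop should converge toward a threshold between 3 and 8. In the deployed setting, this is where the trained policy would be frozen and AOT-compiled into the compiler, with no further logging.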

Register allocation solves the problem of assigning physical registers to live ranges (i.e., variables). Using MLGO as a general framework, the team enhanced the register allocation pass, substantially improving code performance in LLVM. In conclusion, MLGO is a general framework that can be deepened by applying stronger RL algorithms and adding more features, and broadened by applying it to optimization passes beyond regalloc and inlining. The team is excited about the potential contributions MLGO can make to compiler optimization, and Google looks forward to its continued adoption and to upcoming contributions from the research community.

This article is written as a summary by Marktechpost staff based on the research paper 'MLGO: a Machine Learning Guided Compiler Optimizations Framework'. All credit for this research goes to the researchers on this project. Check out the paper, GitHub, demo, and reference article.


Khushboo Gupta is a consulting intern at MarktechPost. She is currently pursuing her B.Tech from the Indian Institute of Technology (IIT), Goa. She is passionate about the fields of Machine Learning, Natural Language Processing, and Web Development. She enjoys learning more about the technical field by participating in several challenges.