Microsoft AI Research Introduces Generalized Instruction Tuning (GLAN): A General and Scalable Artificial Intelligence Method for Instruction Tuning of Large Language Models (LLMs)

Large Language Models (LLMs) have evolved significantly in recent years, especially in text understanding and generation. However, optimizing LLMs to follow human instructions effectively remains difficult. While LLMs have shown progress in next-token prediction and in executing tasks from a handful of demonstrations, this progress does not necessarily translate into better instruction following.

Instruction tuning offers a solution: fine-tuning LLMs on instructions paired with responses that humans prefer. Existing instruction-tuning techniques frequently rely either on Natural Language Processing (NLP) datasets, which are scarce, or on self-instruct approaches that produce synthetic datasets lacking diversity. Evol-Instruct addresses this by augmenting already-existing datasets, but its scope remains bounded by the initial input data.

To overcome these limitations, a team of researchers from Microsoft has introduced GLAN (Generalized Instruction Tuning), a paradigm inspired by the organized structure of the human education system. Starting from a pre-curated taxonomy of human knowledge and capabilities, GLAN spans a range of subjects, levels, and disciplines and systematically generates large-scale instruction data across all of them.

The method first breaks down human knowledge into domains, sub-fields, and disciplines using LLMs with human verification. This taxonomy is then divided into subjects, and a syllabus is created for every subject, detailing the key concepts covered in each class session. GLAN samples from these concepts to produce diverse instructions that closely mirror the design of the human education system.
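To make the pipeline concrete, here is a minimal sketch of the taxonomy-to-instruction flow described above. The taxonomy entries, syllabus contents, and the stubbed `ask_llm` function are all hypothetical illustrations for structure only, not GLAN's actual prompts or implementation.

```python
import random

# Step 1 (hypothetical): a pre-curated taxonomy of human knowledge,
# field -> sub-field -> disciplines, built via LLM prompting plus
# human verification in the paper.
TAXONOMY = {
    "Natural Sciences": {
        "Physics": ["Classical Mechanics", "Thermodynamics"],
        "Mathematics": ["Linear Algebra", "Calculus"],
    },
    "Formal Sciences": {
        "Computer Science": ["Algorithms", "Databases"],
    },
}


def ask_llm(prompt: str) -> list[str]:
    """Stand-in for an LLM call; returns canned items for this demo."""
    return [f"{prompt} :: item {i}" for i in range(1, 4)]


def build_syllabus(discipline: str) -> list[dict]:
    """Step 2: break a discipline into class sessions with key concepts."""
    sessions = ask_llm(f"List class sessions for a course on {discipline}")
    return [
        {"session": s, "concepts": ask_llm(f"Key concepts of {s}")}
        for s in sessions
    ]


def generate_instructions(discipline: str, syllabus: list[dict],
                          n: int = 2, seed: int = 0) -> list[str]:
    """Step 3: sample key concepts and turn them into instructions."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        session = rng.choice(syllabus)
        concept = rng.choice(session["concepts"])
        out.append(f"[{discipline}] Write a question testing: {concept}")
    return out


# Walk the taxonomy end to end, generating instructions per discipline.
dataset = []
for field, subfields in TAXONOMY.items():
    for subfield, disciplines in subfields.items():
        for d in disciplines:
            dataset.extend(generate_instructions(d, build_syllabus(d)))

print(len(dataset))  # 12: two instructions for each of the six disciplines
```

In the actual system, each stage (taxonomy, syllabus, instruction) would be a real LLM generation step rather than a canned list, but the nesting of the loops is the point: coverage comes from enumerating the taxonomy, not from seed datasets.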

The team describes GLAN as a flexible, scalable, and general-purpose approach: it produces instructions at an enormous scale and is task-agnostic, spanning a wide range of disciplines. Its input, a taxonomy, is created with minimal human effort through LLM prompting and verification. GLAN also makes customization simple, since adding new fields or skills does not require recreating the entire dataset.
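That customization property can be sketched as a simple set difference over the taxonomy's leaves: when a node is added, only the new disciplines need fresh instruction generation. The field and discipline names below are illustrative, not from the paper.

```python
def leaf_disciplines(taxonomy: dict) -> set:
    """Flatten a field -> sub-field -> disciplines taxonomy into leaf tuples."""
    return {
        (field, sub, d)
        for field, subs in taxonomy.items()
        for sub, disciplines in subs.items()
        for d in disciplines
    }


# Hypothetical before/after taxonomies: one new discipline is added.
old = {"Formal Sciences": {"Computer Science": ["Algorithms"]}}
new = {"Formal Sciences": {"Computer Science": ["Algorithms", "Cryptography"]}}

# Only the newly added leaves need instruction generation;
# the existing dataset is left untouched.
to_generate = leaf_disciplines(new) - leaf_disciplines(old)
print(to_generate)  # {('Formal Sciences', 'Computer Science', 'Cryptography')}
```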

Using this comprehensive curriculum, GLAN produces a wide range of instructions covering broad combinations of human knowledge and skills. Experiments on LLMs such as Mistral show that GLAN performs strongly across several dimensions, including coding, logical reasoning, mathematical reasoning, academic exams, and general instruction following, all without using task-specific training data for these tasks.

In conclusion, GLAN is a reliable, flexible, and efficient technique for instruction tuning of LLMs. Its adaptability means the dataset can be expanded and modified without starting over from scratch: new domains or skills can be added simply by attaching a new node to the taxonomy.


Check out the Paper. All credit for this research goes to the researchers of this project.


Tanya Malhotra is a final year undergrad from the University of Petroleum & Energy Studies, Dehradun, pursuing BTech in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning.
She is a Data Science enthusiast with good analytical and critical thinking, along with an ardent interest in acquiring new skills, leading groups, and managing work in an organized manner.
