Graph Transformer Networks (GTN) is an open-source framework built around weighted finite-state transducers (WFSTs), a powerful and expressive type of graph. Just as PyTorch provides automatic differentiation over tensors, GTN provides it over WFSTs. GTN is used to train graph-based machine learning models effectively and to combine different sources of information in applications such as handwriting recognition, speech recognition, and natural language processing.
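To make the idea concrete, a WFST can be thought of as a set of arcs, each carrying an input label, an output label, and a weight; a path through the graph transduces an input sequence into an output sequence while accumulating weight. The sketch below is a plain-Python illustration of that idea, not the gtn library's own API, and its labels and weights are made up for the example:

```python
from collections import namedtuple

# A toy WFST: arcs carry (source, destination, input label, output label, weight).
# Illustrative data structure only; the gtn library has its own Graph type.
Arc = namedtuple("Arc", "src dst ilabel olabel weight")

arcs = [
    Arc(0, 1, "a", "A", 0.5),
    Arc(0, 1, "b", "B", 1.0),
    Arc(1, 2, "c", "C", 0.25),
]
start, accept = 0, 2

def transduce(arcs, start, accept, inputs):
    """Return (output labels, total weight) of the path matching `inputs`,
    or None if the WFST does not accept the sequence."""
    state, outputs, weight = start, [], 0.0
    for symbol in inputs:
        match = next((a for a in arcs if a.src == state and a.ilabel == symbol), None)
        if match is None:
            return None
        state, weight = match.dst, weight + match.weight
        outputs.append(match.olabel)
    return (outputs, weight) if state == accept else None

print(transduce(arcs, start, accept, ["a", "c"]))  # (['A', 'C'], 0.75)
```

Because the transducer both maps labels and scores paths, the same machinery can represent lexicons, alignments, or loss functions, which is what makes WFSTs expressive.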
The GTN library makes it possible to train a wider range of model types. More structured graphs allow researchers to encode prior knowledge about a task into the learning algorithm. For example, GTN lets us encode a word's pronunciations into a graph and incorporate that graph into the learning algorithm.
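As a concrete sketch of what "encoding pronunciations into a graph" means, alternative pronunciations become alternative paths through a shared graph. The example below is plain Python rather than the gtn API, and the word and phoneme symbols are illustrative assumptions:

```python
# A toy pronunciation graph for the word "data", encoding two
# pronunciations ("d ey t ax" and "d ae t ax") as alternative paths.
# Plain-Python illustration, not the gtn library's API.
graph = {
    0: {"d": 1},
    1: {"ey": 2, "ae": 2},   # branch: either vowel is allowed
    2: {"t": 3},
    3: {"ax": 4},
}
start, accept = 0, {4}

def accepts(graph, start, accept, phonemes):
    """True if the phoneme sequence traces a path from start to an accept state."""
    state = start
    for p in phonemes:
        if p not in graph.get(state, {}):
            return False
        state = graph[state][p]
    return state in accept

print(accepts(graph, start, accept, ["d", "ey", "t", "ax"]))  # True
print(accepts(graph, start, accept, ["d", "ao", "t", "ax"]))  # False
```

Because both pronunciations share the prefix "d" and suffix "t ax", the graph stores them compactly; a learning algorithm that consumes this graph is told up front which phoneme sequences are valid realizations of the word.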
Graphs have been used in such systems before, so what's new? Previously, the graphs used at training time were implicit, and the graph structure had to be hard-coded in the software. With this new framework, researchers can use WFSTs dynamically at training time, so the whole system can learn and improve from the data more efficiently.
Building ML models around graph-based data structures used to be challenging because easy-to-use frameworks were lacking. By separating the graphs (the data) from the operations on graphs, users now have more freedom to experiment across the larger design space of structured learning algorithms.
With GTN, graph structure is well suited to encoding useful prior knowledge suggestively rather than prescriptively, while the whole system still learns and improves from data. Combining the structure of WFSTs with learning from data can make ML models more modular, more accurate, and more lightweight in the long term.
GTN makes it easy to construct WFSTs, visualize them, and perform operations on them. Gradients can be computed for any graph participating in a computation by simply calling gtn.backward. The team behind the release hopes to encourage researchers in the field to explore this new design space of learning algorithms.
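To give a sense of what such a gradient means, consider the forward (log-sum-exp) score of a graph with two parallel arcs: the derivative of the score with respect to each arc weight is the softmax over the weights. The sketch below derives this by hand in plain Python; in the gtn library itself the score and its gradients come from library calls such as gtn.backward, so treat the function names here as illustrative only:

```python
import math

# Two parallel arcs from the start node to the accept node, with weights w_i.
# The forward score is log(sum_i exp(w_i)); its gradient with respect to each
# arc weight is the softmax over arc weights. Plain-Python sketch, not the gtn API.
def forward_score(weights):
    m = max(weights)  # subtract the max for numerical stability
    return m + math.log(sum(math.exp(w - m) for w in weights))

def forward_grad(weights):
    s = forward_score(weights)
    return [math.exp(w - s) for w in weights]  # softmax: d(score)/d(w_i)

weights = [1.0, 2.0]
print(round(forward_score(weights), 4))              # 2.3133
print([round(g, 4) for g in forward_grad(weights)])  # [0.2689, 0.7311]
```

The gradients sum to one and concentrate on the heavier arc, which is exactly the behavior needed to back-propagate through a WFST-based loss into the arc weights produced by a neural network.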