Using TensorFlow to Reconstruct Thousands of Particles in One Go at the CERN LHC (Large Hadron Collider)

When highly energetic particle beams collide at large colliders such as the CERN LHC (Large Hadron Collider), the collision energy can produce massive and possibly still unknown particles. These newly formed particles are generally unstable and immediately decay into more stable particles. Determining the properties of these decay products is essential for understanding the high-energy collision.

Large detectors surround the collision interaction points, covering all possible directions and energies of the decay products. The detectors are divided into sub-detectors, each collecting complementary information. The innermost sub-detector, the tracker, consists of multiple layers. Each layer detects the spatial position at which a charged particle passed through it, giving access to its trajectory. Combined with a strong magnetic field, this trajectory reveals the particle's charge and momentum. The tracker aims to measure the trajectories while interacting with and scattering the particles as little as possible; the outer sub-detector layers then stop them entirely.

Combining the information from all these sub-detectors to reconstruct the final particles is a challenging task. The upcoming extension of the CERN LHC aims to collect unprecedented amounts of data: reconstruction algorithms must cope with a collision rate of 40 MHz and up to 200 simultaneous interactions per collision, which amount to about a million detector signals per event.


Conventional reconstruction algorithms in high-energy physics factorize the problem into individual steps. However, the assumptions needed to develop these algorithms limit their performance. Many machine-learning (ML) techniques are therefore used to refine the classically reconstructed particles. Accurate simulations of all detector components and physics processes can produce large sets of labeled data in a short amount of time, which makes ML methods such as neural-network-based identification and regression algorithms particularly effective.

TensorFlow has been adopted as the standard inference engine in the software framework of the Compact Muon Solenoid (CMS) experiment.

ML-based reconstruction approaches are automatically optimizable and only need a loss function that quantifies the final reconstruction target. However, extending ML-based algorithms from merely refining already reconstructed particles to the very first reconstruction step raises challenges in how to structure the data and how to phrase reconstruction as a minimization problem.
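
To make this concrete, here is a minimal sketch, not code from the CMS software, of what needing only a loss function means in practice: a small Keras model regresses a per-hit quantity from simulated, labeled hit features, and the mean-squared-error loss alone defines the reconstruction target. All shapes, feature counts, and targets below are illustrative assumptions.

import tensorflow as tf

# Toy stand-in for simulated, labeled detector data: per-hit features and a
# per-hit regression target (e.g. a true energy value from the simulation).
hit_features = tf.random.normal((1024, 8))   # 1024 hits, 8 features each
true_energy = tf.random.uniform((1024, 1))   # labels provided by simulation

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="elu"),
    tf.keras.layers.Dense(64, activation="elu"),
    tf.keras.layers.Dense(1),                # regressed quantity per hit
])

# The loss function is what encodes the reconstruction target; everything
# else is optimized automatically.
model.compile(optimizer="adam", loss="mse")
model.fit(hit_features, true_energy, epochs=2, batch_size=256, verbose=0)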

Because the detector consists of many sub-detectors, the data are highly irregular. Moreover, while the sensors on the individual tracker layers are densely packed, there is a considerable amount of space between the layers. Only a tiny fraction of sensors is active in each event, so the number of inputs changes from event to event. This is why even convolutional neural network architectures are not applicable.
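
The sketch below illustrates the sparsity argument with made-up numbers (the sensor and hit counts are not real CMS figures): a dense, image-like representation of all sensors is almost entirely empty, while keeping only the active hits yields a compact point cloud whose length changes from event to event.

import tensorflow as tf

# Hypothetical counts, for illustration only.
n_sensors = 1_000_000        # total number of sensor cells in a sub-detector
n_active = 4_000             # only a tiny fraction fires in a given event

# Image-like, CNN-friendly representation: one entry per sensor, mostly zeros.
dense_grid = tf.zeros((n_sensors,))

# Point-cloud representation: only the active sensors, with their index and a
# stand-in for the recorded energy. Its length differs from event to event.
active_idx = tf.random.uniform((n_active,), 0, n_sensors, dtype=tf.int32)
point_cloud = tf.stack(
    [tf.cast(active_idx, tf.float32),
     tf.random.uniform((n_active,))],
    axis=1)                  # shape (n_active, 2) for this particular event

print(dense_grid.shape, point_cloud.shape)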

Graph neural networks help to bridge this gap by abstracting away from the detector geometry. However, due to the high dimensionality of the input data, they cannot be used out of the box to reconstruct particles directly from hits. TensorFlow allows custom kernels to be implemented and loaded into the graph, together with custom analytic gradients for fused operations. Combining these custom kernels with the network structure makes it possible to load an entire physics event into GPU memory, train the network on it, and perform inference.
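
The actual CMS kernels are compiled GPU operations (see the repository linked below); the following is only a pure-TensorFlow sketch of the underlying idea, using tf.custom_gradient to treat a small computation as one fused operation with a hand-written analytic gradient. The operation itself is an arbitrary example, not one of the CMS kernels.

import tensorflow as tf

# Sketch: fuse a small computation into a single differentiable op and supply
# its gradient analytically instead of letting autodiff trace every step.
@tf.custom_gradient
def fused_scaled_softplus(x, scale):
    y = scale * tf.math.softplus(x)          # forward pass of the "fused" op

    def grad(dy):
        # analytic gradients with respect to x and scale
        dx = dy * scale * tf.math.sigmoid(x)             # d/dx softplus = sigmoid
        dscale = tf.reduce_sum(dy * tf.math.softplus(x))
        return dx, dscale

    return y, grad

x = tf.random.normal((5,))
scale = tf.constant(2.0)
with tf.GradientTape() as tape:
    tape.watch([x, scale])
    out = tf.reduce_sum(fused_scaled_softplus(x, scale))
grads = tape.gradient(out, [x, scale])       # uses the analytic gradient above

Compiled custom kernels would typically be loaded with tf.load_op_library and registered together with their gradients; the principle of pairing a fused forward operation with an analytic gradient is the same.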


Many reconstruction tasks also rely on an unknown number of inputs. The recently added ragged data structures in TensorFlow are a step toward integrating TensorFlow even deeper into the reconstruction algorithms and making some custom kernels obsolete.
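
As a small illustration of those ragged data structures (the hit values below are arbitrary), a tf.RaggedTensor stores a different number of hits per event without padding and can be fed to per-hit layers directly:

import tensorflow as tf

# Three events with 3, 1 and 2 hits respectively; two features per hit.
event_hits = tf.ragged.constant([
    [[0.1, 2.3], [0.4, 1.1], [0.9, 0.2]],
    [[1.5, 0.7]],
    [[0.3, 0.3], [2.2, 0.1]],
], ragged_rank=1)

print(event_hits.shape)           # (3, None, 2): the hit count is ragged
print(event_hits.row_lengths())   # [3, 1, 2] hits per event

# Per-hit operations can be applied to the flat values without any padding.
dense = tf.keras.layers.Dense(4)
per_hit = tf.ragged.map_flat_values(dense, event_hits)
print(per_hit.shape)              # (3, None, 4)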

Training a network to predict an unknown number of particles from an unknown number of inputs is challenging. Object-detection algorithms designed for dense data such as images rely on the objects having clear boundaries. Particles in the detector, however, often overlap substantially and therefore do not have clear boundaries. In object condensation, the properties of each object are condensed into at least one representative condensation point per object, which the neural network can choose freely through a high confidence score. Experiments show that this approach outperforms conventional particle reconstruction algorithms and provides an alternative to classic reconstruction approaches that works directly from hits.
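
The following is a heavily simplified, eager-mode sketch of the object condensation idea, not the loss implemented in HGCalML: each hit carries clustering coordinates and a confidence score beta, the highest-beta hit of each object serves as its condensation point, and a potential-style loss attracts hits of the same object to that point while repelling the rest. The function name, the noise label of -1, and all constants are illustrative assumptions.

import tensorflow as tf

def object_condensation_sketch(beta, coords, object_id):
    """beta: (N,) scores in (0, 1); coords: (N, D) clustering space;
    object_id: (N,) integer truth labels, with -1 marking noise hits."""
    q = tf.math.atanh(beta) ** 2 + 0.1                   # charge-like weight per hit
    loss = tf.constant(0.0)
    for k in tf.unique(object_id)[0]:
        if k < 0:
            continue                                     # noise has no condensation point
        mask = tf.cast(tf.equal(object_id, k), tf.float32)
        alpha = tf.argmax(beta * mask)                   # highest-beta hit = condensation point
        dist = tf.norm(coords - coords[alpha], axis=1)
        attractive = dist ** 2                           # pulls same-object hits inward
        repulsive = tf.nn.relu(1.0 - dist)               # pushes other hits away
        per_hit = mask * attractive + (1.0 - mask) * repulsive
        loss += tf.reduce_mean(per_hit * q) * q[alpha]
    return loss

# Toy usage with random stand-ins for network outputs and truth labels.
N = 50
beta = tf.random.uniform((N,), 0.01, 0.95)
coords = tf.random.normal((N, 2))
object_id = tf.random.uniform((N,), -1, 3, dtype=tf.int32)
print(object_condensation_sketch(beta, coords, object_id))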

Source: https://blog.tensorflow.org/2021/04/reconstructing-thousands-of-particles-in-one-go-at-cern-lhc.html

GitHub: https://github.com/cms-pepr/HGCalML

Tanushree Shenwai is a consulting intern at MarktechPost. She is currently pursuing her B.Tech from the Indian Institute of Technology (IIT), Bhubaneswar. She is a Data Science enthusiast and has a keen interest in the scope of application of artificial intelligence in various fields. She is passionate about exploring new advancements in technologies and their real-life applications.