Stanford AI Researchers Propose ‘Point2Cyl’: A Supervised Neural Network Transforming A Raw 3D Point Cloud To A Set of Extrusion Cylinders

Reverse engineering 3D CAD models into primitives interpretable and usable by CAD designers

If an object was created before the rise of digital manufacturing, no associated CAD model exists for it. There is therefore a need to reverse engineer point clouds into primitives that CAD designers can interpret and reuse during the modeling process. Reverse engineering from raw geometry to CAD models is required to edit 3D data in shape-editing software and to broaden its use in many downstream applications. This research takes a geometry-based approach to the extrusion cylinder decomposition problem by first learning the underlying geometric proxies with a neural network. The proposed network predicts, for each point, the surface normal, base/barrel membership, and extrusion instance segmentation.
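To make the three per-point predictions concrete, here is a minimal, illustrative sketch of prediction heads on top of a shared point feature. This is not the authors' architecture: the paper uses a PointNet++ backbone, whereas this toy replaces it with a simple shared MLP, and all layer sizes and the instance count `k_instances` are assumptions for illustration.

```python
import torch
import torch.nn as nn

class PerPointHeads(nn.Module):
    """Toy stand-in for the paper's PointNet++ backbone: for each input
    point, predict a unit surface normal, a base/barrel probability, and
    a soft membership over K extrusion instances."""
    def __init__(self, k_instances=8, feat_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(            # shared per-point MLP
            nn.Conv1d(3, feat_dim, 1), nn.ReLU(),
            nn.Conv1d(feat_dim, feat_dim, 1), nn.ReLU(),
        )
        self.normal_head = nn.Conv1d(feat_dim, 3, 1)          # normals
        self.bb_head = nn.Conv1d(feat_dim, 1, 1)              # base/barrel
        self.inst_head = nn.Conv1d(feat_dim, k_instances, 1)  # instances

    def forward(self, pts):                       # pts: (B, N, 3)
        f = self.backbone(pts.transpose(1, 2))    # (B, feat, N)
        normals = nn.functional.normalize(self.normal_head(f), dim=1)
        base_barrel = torch.sigmoid(self.bb_head(f))
        instances = torch.softmax(self.inst_head(f), dim=1)
        return (normals.transpose(1, 2),
                base_barrel.transpose(1, 2),
                instances.transpose(1, 2))

pts = torch.randn(2, 1024, 3)
n, bb, inst = PerPointHeads()(pts)
print(n.shape, bb.shape, inst.shape)
# torch.Size([2, 1024, 3]) torch.Size([2, 1024, 1]) torch.Size([2, 1024, 8])
```

The softmax over the instance dimension gives each point a soft assignment to one of the K candidate extrusion cylinders, which is what makes the downstream matching loss differentiable.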

This study frames the problem as decomposition into Extrusion Cylinders: parameterized primitives that generalize a family of sketch-extrude operations and better characterize CAD models. Figure 1 illustrates the conversion of a point cloud into this flexible 3D CAD representation, which CAD modelers commonly use.

Source: https://arxiv.org/pdf/2112.09329.pdf

This research reframes 3D reconstruction as an extrusion cylinder decomposition problem, making it appropriate for CAD modeling. Two existing CAD datasets, Fusion Gallery and DeepCAD, are used for quantitative and qualitative validation.

Instead of a fixed set of primitives, this study treats the building block as an arbitrary sketch-extrude operation applied to any closed, non-self-intersecting 2D loop. The extrusion cylinder used in this work is a primitive that mirrors the CAD design process: it allows building any structure from arbitrary closed loops combined through a series of boolean operations.

The sketch-extrude decomposition problem is resolved in this research in a geometry-based manner by first predicting geometric features as proxies. These proxies consist of surface normals, base/barrel segmentation, and instance segmentation. A differentiable, closed-form procedure then estimates the remaining extrusion parameters from the predicted geometric proxies. The barrel points of each segment are scaled and projected onto the sketch plane to predict the sketch representation.
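The projection step above can be sketched as follows. This is a simplified illustration under the assumption that the extrusion axis and center have already been estimated; the paper's closed-form procedure additionally derives the scale from the extrusion parameters, whereas here the 2D coordinates are just normalized by their largest magnitude.

```python
import numpy as np

def project_to_sketch_plane(points, axis, center):
    """Project 3D barrel points onto the plane through `center` with
    normal `axis`, returning normalized 2D sketch coordinates."""
    axis = axis / np.linalg.norm(axis)
    # Build an orthonormal 2D basis (u, v) spanning the sketch plane.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(axis @ helper) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, helper)
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    rel = points - center
    rel = rel - np.outer(rel @ axis, axis)   # drop the along-axis component
    uv = np.stack([rel @ u, rel @ v], axis=1)
    scale = max(np.abs(uv).max(), 1e-8)      # crude normalization
    return uv / scale

# Points on a cylinder about the z-axis project to a circle in the plane.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
cyl = np.stack([2 * np.cos(t), 2 * np.sin(t), np.random.rand(64)], axis=1)
uv = project_to_sketch_plane(cyl, np.array([0.0, 0.0, 1.0]), np.zeros(3))
print(np.allclose(np.linalg.norm(uv, axis=1), 1.0))  # True
```

Collapsing the barrel points this way recovers the closed 2D loop that, when re-extruded along the axis, reproduces the cylinder's surface.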


A multi-task, non-convex objective combining segmentation, base/barrel classification, normal, and sketch regularization losses is used to train the parameters. Per-point unoriented normals are predicted for the input point cloud, and the normal loss penalizes the absolute cosine distance between the predicted and ground-truth normals. A set of unordered segments is predicted, since many alternative orderings of the sketch-extrude blocks can produce the same output shape. Hungarian matching over the Relaxed Intersection over Union (RIoU) between the predicted and ground-truth segmentations identifies the extrusion cylinder segments that best match the ground truth. A regularizer is introduced to ensure that the predicted parameters produce meaningful sketches. A global 3D point-cloud feature is learned with a PointNet++ backbone; this feature is then passed through separate fully connected branches to produce the instance, base/barrel, and normal predictions.
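The orientation-invariant normal loss and the RIoU-based Hungarian matching can be sketched as below. This is an illustrative reconstruction from the description above, not the authors' released loss code: the soft membership matrices, the exact RIoU relaxation, and the scoring are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def normal_loss(n_pred, n_gt):
    """Absolute cosine distance between unoriented unit normals:
    1 - |cos(angle)|, so flipped normals incur no penalty."""
    return float((1.0 - np.abs((n_pred * n_gt).sum(-1))).mean())

def relaxed_iou(W_pred, W_gt, eps=1e-8):
    """Relaxed IoU between soft membership matrices (N points x K segs):
    pairwise soft intersection over soft union, per segment pair."""
    inter = W_pred.T @ W_gt                                   # (K, K)
    union = W_pred.sum(0)[:, None] + W_gt.sum(0)[None, :] - inter
    return inter / (union + eps)

def match_segments(W_pred, W_gt):
    """Hungarian matching: the permutation of predicted segments that
    maximizes total RIoU against the ground truth."""
    riou = relaxed_iou(W_pred, W_gt)
    rows, cols = linear_sum_assignment(-riou)                 # maximize
    return cols, float(riou[rows, cols].mean())

# Two predicted segments that are the GT segments in swapped order.
W_gt = np.eye(2)[np.array([0, 0, 1, 1])]                      # (4, 2)
W_pred = W_gt[:, ::-1].copy()
perm, score = match_segments(W_pred, W_gt)
print(perm.tolist(), round(score, 6))  # [1, 0] 1.0
```

Because the matching picks the best permutation before the loss is computed, the network is free to output segments in any order, which is exactly the unordered-segments property the paper relies on.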

The network is first trained with the segmentation, base/barrel classification, and normal losses, while the implicit sketch network is pre-trained in parallel. Both pre-trained networks are then combined to train the full Point2Cyl model with the complete loss function. The networks are trained/tested on 4316/1242 models for Fusion Gallery and 34910/3087 for DeepCAD. Moreover, the approach is evaluated with several metrics: segmentation IoU, normal angle error, base/barrel classification accuracy, extrusion-axis angle error, extrusion center error, per-extrusion-cylinder fitting loss, and global fitting loss.
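As one concrete example of these metrics, the extrusion-axis angle error can be computed as below. This is illustrative metric code, not the authors' evaluation script; the sign-invariance (taking the absolute dot product) is an assumption justified by an extrusion axis having no preferred direction.

```python
import numpy as np

def axis_angle_error(pred_axis, gt_axis):
    """Angle in degrees between predicted and ground-truth extrusion
    axes, invariant to flipping either axis."""
    pred = pred_axis / np.linalg.norm(pred_axis)
    gt = gt_axis / np.linalg.norm(gt_axis)
    cos = np.clip(abs(pred @ gt), 0.0, 1.0)   # |cos| for sign-invariance
    return np.degrees(np.arccos(cos))

# A flipped axis is a perfect prediction; a 45-degree tilt is 45 degrees off.
print(axis_angle_error(np.array([0., 0., 1.]), np.array([0., 0., -1.])))  # 0.0
print(round(axis_angle_error(np.array([1., 1., 0.]),
                             np.array([1., 0., 0.])), 1))                 # 45.0
```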

PyTorch is used to implement the network, and models are trained for 300 epochs. The network has 3.6M parameters; on a single Titan RTX GPU, each training batch takes 0.41 seconds and inference on a single model takes 0.25 seconds. The experiments show that the proposed approach quantitatively outperforms the other baselines on all metrics. The ablation study shows that the network requires the sketch regularization to produce segments that project to a nearly closed-loop sketch, and that without learned normals it cannot adaptively overcome the biases of hand-crafted normal estimation.

Hence, the paper introduces the extrusion cylinder primitive and a method for fitting it to point sets. Furthermore, differentiable methods are proposed for a neural architecture that decomposes a point cloud into a collection of extrusion cylinders. Finally, the output of the proposed Point2Cyl supports shape editing and can be directly imported into modern CAD tools for further reconstruction, viewing, and reuse.

This article is a paper summary written by Marktechpost Research Staff based on the research paper 'Point2Cyl: Reverse Engineering 3D Objects from Point Clouds to Extrusion Cylinders'. All credit for this research goes to the researchers on this project. Check out the paper and project.


Priyanka Israni is currently pursuing a PhD at Gujarat Technological University, Ahmedabad, India. Her interest areas lie in medical image processing, machine learning, deep learning, data analysis, and computer vision. She has 8 years of experience teaching engineering graduates and postgraduates.