Introduction to Intel’s oneAPI Unified Programming Model for Python Machine Learning

Scikit-learn, the popular Python machine learning toolkit, is a simple yet effective framework for classical machine learning. It is the preferred choice for training models based on techniques such as linear regression, logistic regression, decision trees, and random forests.
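As a minimal sketch of that workflow (the toy dataset below is illustrative, not from the article), a logistic-regression fit in Scikit-learn takes only a few lines, assuming scikit-learn is installed:

```python
# Minimal classical-ML sketch: fit a logistic regression on a tiny
# synthetic dataset. The data here is purely illustrative.
try:
    from sklearn.linear_model import LogisticRegression
except ImportError:
    LogisticRegression = None  # scikit-learn is not installed

if LogisticRegression is not None:
    X = [[0.0], [1.0], [2.0], [3.0]]  # one numeric feature per sample
    y = [0, 0, 1, 1]                  # binary labels
    clf = LogisticRegression().fit(X, y)
    pred = clf.predict([[2.5]])[0]    # classify an unseen value
else:
    pred = None
```

The same fit/predict pattern applies to the other classical estimators mentioned above, such as decision trees and random forests.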

Traditional machine learning expects both feature engineering (identifying the attributes that matter) and hand-picking the algorithm that fits the business problem. For most problems involving structured data stored in relational databases, spreadsheets, or flat files, this remains the most effective approach.

Deep learning, on the other hand, is the subset of machine learning that uses large datasets and substantial computing power to discover high-level features and hidden patterns in data. ML engineers and researchers choose deep learning techniques built on well-defined neural network architectures when training models on unstructured input such as photos, video, and audio.

Beyond Scikit-learn, advanced AI frameworks such as TensorFlow, PyTorch, Apache MXNet, and XGBoost can train models on structured or unstructured datasets, covering a wide variety of methods across both deep learning and conventional machine learning workflows. ML researchers and engineers prefer versions of these frameworks that have been tuned for speed; this AI acceleration is achieved through a combination of hardware and software.

Deep learning frameworks such as Apache MXNet, TensorFlow, and PyTorch use NVIDIA's CUDA and cuDNN acceleration libraries to interface with the underlying NVIDIA GPUs. AMD offers a comparable stack through ROCm and the Heterogeneous-compute Interface for Portability (HIP), which provide access to AMD GPUs. In both cases the GPU, software drivers, runtime, and libraries together accelerate AI, and the deep learning frameworks integrate tightly with these acceleration layers to speed up model training and inference on the GPU.

While GPUs are widely used for deep learning training, CPUs are more common across the whole end-to-end AI workflow, which spans data preprocessing/analytics and machine learning modeling/deployment. In fact, Intel Xeon Scalable processors are the most widely used AI server platform from the cloud to the edge.

Intel has been at the vanguard of oneAPI, a cross-industry, open, standards-based unified programming model that targets a variety of architectures, including the aforementioned CPUs and GPUs as well as FPGAs and other AI accelerators. Developers can adopt oneAPI through a collection of toolkits aimed at HPC, AI, IoT, and ray-tracing use cases.

The Intel oneAPI AI Analytics Toolkit (AI Kit) is a Python-based toolkit for data scientists and AI engineers. It ships optimized builds of Scikit-learn, XGBoost, TensorFlow, and PyTorch as part of Intel's end-to-end portfolio of AI development tools.

The Intel Distribution of Modin and the Intel Extension for Scikit-learn, both highly tuned for the CPU and promising a 10-100X speedup, are the most exciting components of the AI Kit for developers and data scientists with a machine learning workflow. Best of all, these frameworks are backward compatible with Pandas and vanilla Scikit-learn, allowing for easy swapping.

The Intel Distribution of Modin is a high-performance, parallel, distributed, Pandas-compatible DataFrame acceleration library aimed at helping data scientists work more efficiently. The package is fully compatible with the Pandas API; its backend is powered by OmniSci, and it delivers faster analytics on Intel hardware.

Modin is Pandas-compatible, and it uses Ray and Dask to enable distributed data processing. It’s a Pandas drop-in replacement that turns single-threaded Pandas into multithreaded Pandas, taking advantage of all CPU cores and instantly speeding up data processing workflows. Modin performs particularly well on large datasets when pandas might otherwise run out of memory or become exceedingly slow.
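Modin's documented way to choose between those backends is the MODIN_ENGINE environment variable, set before the first import; a minimal sketch (assuming Ray is installed alongside Modin):

```python
import os

# Modin selects its execution engine at import time; Ray and Dask are
# the two distributed backends mentioned above. Setting MODIN_ENGINE is
# optional: Modin otherwise picks an installed engine automatically.
os.environ["MODIN_ENGINE"] = "ray"  # or "dask"

try:
    import modin.pandas as pd  # same API as stock pandas
except ImportError:
    pd = None  # Modin is not installed in this environment
```

Because the module exposes the Pandas API, the rest of an existing script needs no changes once the import line is swapped.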

Modin also comes with a powerful interface that supports SQL, spreadsheets, and Jupyter notebooks. Modin allows data scientists to take advantage of parallelized data processing capabilities while still using the familiar Pandas API.

Modin is easy to set up. It's available through the Intel oneAPI AI Analytics Toolkit's Conda package manager.

conda create -n aikit-modin intel-aikit-modin -c intel -c conda-forge
conda activate aikit-modin

A simple Modin implementation is shown in the code snippet below:

import modin.pandas as pd  # drop-in replacement for "import pandas as pd"
df = pd.read_csv('~/trips_data.csv')  # the read is parallelized across cores

The Intel Extension for Scikit-learn provides optimized versions of many scikit-learn algorithms that are compatible with the originals and deliver faster results. For methods or parameters the extension does not support, the package simply falls back to the original Scikit-learn behavior, giving developers a consistent experience. A given ML application will continue to work as before, if not faster, without the need to rewrite any code.

Patching replaces the stock scikit-learn algorithms with the optimized versions provided by the extension, resulting in faster training. The module can be installed with either conda or pip.

pip install scikit-learn-intelex
conda install scikit-learn-intelex -c conda-forge
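Once installed, the documented entry point is patch_sklearn(), which must run before the scikit-learn estimators are imported; a minimal sketch:

```python
# Patch scikit-learn before importing any estimators so that supported
# algorithms are swapped for their Intel-optimized implementations;
# unsupported ones silently fall back to stock scikit-learn.
try:
    from sklearnex import patch_sklearn
    patch_sklearn()
    patched = True
except ImportError:
    patched = False  # scikit-learn-intelex is not installed

# Estimators imported after this point use the optimized versions, e.g.:
# from sklearn.cluster import KMeans
```

The extension also provides unpatch_sklearn() to restore the stock implementations, which is handy when comparing timings.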