

Training versus inference

The deep learning workflow begins with defining the problem at hand and assembling a labeled dataset. Following this, data preprocessing techniques are applied to clean and format the dataset for training. A suitable neural network architecture is then chosen to address the specific problem, and the model is compiled with a defined loss function and optimizer. The training process involves iteratively feeding the prepared data through the network, adjusting parameters to minimize the loss, and continuing until the model performs well on a validation set.
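
To make this concrete, the sketch below shows a minimal training loop in PyTorch; the tiny classifier, the synthetic data, and the hyperparameters are placeholders chosen only so the example runs, not part of the workflow described above.

```python
import torch
from torch import nn

# Hypothetical tiny classifier; sizes and hyperparameters are placeholders.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()                             # defined loss function
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # chosen optimizer

X = torch.randn(512, 20)         # stand-in for a preprocessed, labeled dataset
y = torch.randint(0, 2, (512,))  # stand-in labels

for epoch in range(5):           # iterate until the model performs well on validation data
    for i in range(0, len(X), 32):          # feed the prepared data in batches
        xb, yb = X[i:i + 32], y[i:i + 32]
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)       # forward pass
        loss.backward()                     # backward pass: compute gradients
        optimizer.step()                    # adjust parameters to minimize the loss
```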

Once trained, the model undergoes evaluation to ensure its effectiveness in generalizing to new, unseen data. In the inference phase, the model is applied to make predictions on fresh data, typically in a production environment integrated into an application or service. To enhance the model’s performance over time, optimization may involve adjusting hyperparameters, acquiring more diverse data, or implementing advanced techniques like transfer learning. The final steps include deploying the trained and optimized model to the target environment and maintaining its relevance through ongoing monitoring and updates based on changing conditions or evolving requirements. This cohesive workflow encapsulates the end-to-end process of developing, deploying, and maintaining deep learning models.

Fig. 1. High-level deep learning workflow: training followed by inference [1]

The figure below illustrates the dual phases of the deep learning process: training and inference. During training, large batches of inputs are fed into the deep neural network, initiating both forward and backward passes. The forward pass processes the input data through the network, while the backward pass computes the gradients and adjusts the network’s parameters to minimize the loss. This iterative process is performed in batches, optimizing the model’s performance over time.

In contrast, during the inference phase, new inputs are processed in smaller batches using the pre-trained network. The emphasis shifts to the forward pass only, as the model uses its learned parameters to extract meaningful information from the input data. This distinction highlights the iterative nature of training, which involves both forward and backward passes for parameter optimization, whereas inference relies solely on the forward pass to apply the trained model to new, unseen data efficiently.

Fig. 2. Training uses multiple inputs in large batches to train a deep neural network, while inference extracts information from new inputs in smaller batches using the trained network [2]
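
To make the contrast concrete, the sketch below runs forward-pass-only inference in PyTorch; the small model stands in for a trained network and is re-created here only so the example is self-contained.

```python
import torch
from torch import nn

# Stand-in for a trained network (hypothetical; re-created only so this runs standalone).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
model.eval()                             # disable training-only behaviour (dropout, batch-norm updates)

new_inputs = torch.randn(4, 20)          # small batch of fresh, unseen data
with torch.no_grad():                    # forward pass only: no gradients, no backward pass
    logits = model(new_inputs)
    predictions = logits.argmax(dim=1)   # extract the information we care about
```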

Why we need optimization

A crucial next step is optimizing the trained neural network for deployment in practical scenarios. The trained model, having acquired specific capabilities through training, undergoes optimization processes tailored for efficient inference during deployment. This optimization is particularly essential for adapting the model to diverse platforms, ranging from embedded systems and automotive applications to data center environments. Briefly, there are a few key reasons why optimization is useful:

  • Improved performance: optimized models deliver faster and more responsive predictions (see the latency sketch after this list)
  • Efficient resource utilization: optimization leads to cost savings, since models can run on less powerful hardware or several models can run simultaneously
  • Deployment flexibility: expanding reach and applicability (mobile, edge, embedded systems, etc.)
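
One simple way to check the “improved performance” point in practice is to measure average inference latency before and after applying an optimization; the sketch below shows a minimal timing loop, assuming a hypothetical toy model and CPU execution.

```python
import time
import torch
from torch import nn

# Hypothetical model and input shape; real numbers depend on your model and hardware.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2)).eval()
x = torch.randn(1, 20)

with torch.no_grad():
    for _ in range(10):                  # warm-up iterations
        model(x)
    start = time.perf_counter()
    for _ in range(100):                 # timed iterations
        model(x)
    avg_ms = (time.perf_counter() - start) / 100 * 1e3

print(f"average inference latency: {avg_ms:.3f} ms")
```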

Fig. 3. Inference optimization for deployment [1]

As the optimized model transitions to deployment or runtime, it undergoes platform-specific tuning to ensure seamless integration and optimal performance. The diverse nature of deployment environments necessitates adjustments to enhance efficiency and resource utilization. The figure above underscores the significance of model optimization, illustrating how the same trained neural network can be fine-tuned for various platforms, reinforcing the adaptability and practicality of deep learning models in real-world applications.
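
As one illustrative route (among many, such as TensorRT or ONNX Runtime), a PyTorch model can be traced into TorchScript so that a platform’s C++ or mobile runtime can load it without a Python dependency; the model and file name below are hypothetical placeholders.

```python
import torch
from torch import nn

# Hypothetical trained model; in practice this would be your own network.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2)).eval()
example_input = torch.randn(1, 20)

# Trace the model into TorchScript so a C++/mobile runtime can load it
# without requiring Python on the target platform.
traced = torch.jit.trace(model, example_input)
traced.save("model_traced.pt")

# On the target platform the serialized module can be reloaded and run:
loaded = torch.jit.load("model_traced.pt")
with torch.no_grad():
    out = loaded(example_input)
```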

Existing challenges

However, there are several challenges. The first challenge in optimizing deep neural networks lies in balancing accuracy and efficiency. Achieving higher accuracy often requires complex models that demand more computational resources, while optimizing for efficiency may compromise accuracy. Striking the right balance is crucial for deploying models that meet accuracy requirements without overwhelming resource constraints in real-world applications.

The second challenge lies in managing the rising complexity and size of deep neural network models. As models become more intricate during training, they grow in size, demanding increased computational resources. This poses a deployment challenge, especially in resource-constrained settings.
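
A quick way to see why growing model sizes strain resources is to count parameters and estimate their memory footprint; the sketch below does this for a hypothetical small model, and the same arithmetic applies to billion-parameter networks.

```python
import torch
from torch import nn

# Hypothetical small model; swap in your own network to see its real footprint.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

num_params = sum(p.numel() for p in model.parameters())
size_mb = sum(p.numel() * p.element_size() for p in model.parameters()) / 1e6

print(f"{num_params:,} parameters, ~{size_mb:.2f} MB in float32")
```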

Fig. 4. Timeline of state-of-the-art NN models with the number of parameters (from ImageNet) [4]

Adapting to diverse hardware and platform variations is a key challenge in optimizing deep neural networks. This involves using techniques like model quantization and platform-specific optimizations to ensure efficient performance across different environments. The objective is to create models that can seamlessly handle the variability in computational capacity and memory across a range of devices and architectures.
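
As a small illustration of the quantization technique mentioned above, the sketch below applies PyTorch’s post-training dynamic quantization, which stores the weights of selected layer types in int8 and dequantizes them on the fly; the model is a hypothetical stand-in for an actual trained network.

```python
import torch
from torch import nn

# Hypothetical trained model to be quantized for CPU deployment.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2)).eval()

# Dynamic quantization: weights of nn.Linear layers are stored as int8.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 20)
with torch.no_grad():
    out = quantized(x)   # same interface, smaller weights, often faster on CPU
```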

Fig. 5. Hardware platforms comparison [4]

Last but not least, ensuring compatibility with frameworks and libraries poses an additional challenge in the optimization of deep neural networks. The varied landscape of deep learning tools and libraries requires models to be adaptable and interoperable across different frameworks. This challenge involves addressing differences in syntax, model serialization formats, and optimization techniques specific to each framework. Achieving compatibility streamlines the deployment process, enabling the effective integration of optimized models into diverse environments and applications.
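
One widely used answer to this interoperability problem is a common serialization format such as ONNX: a model exported once can be loaded by runtimes like ONNX Runtime, TensorRT, or OpenVINO. The sketch below exports a hypothetical PyTorch model to ONNX (it assumes the onnx package is installed; the file and tensor names are placeholders).

```python
import torch
from torch import nn

# Hypothetical trained model; exported once, loadable by many runtimes.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2)).eval()
dummy_input = torch.randn(1, 20)

torch.onnx.export(
    model, dummy_input, "model.onnx",
    input_names=["input"], output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},  # allow variable batch size
)
```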

Fig. 6. Deep learning frameworks landscape

Concluding remarks

In summary, the optimization journey in deep learning unfolds through the tandem efforts of training and inference, each with its distinct objectives. While training is geared towards honing the model’s performance, accuracy, and generalization on training data, the inference phase shifts its focus to real-time operational efficiency. Here, priorities encompass minimizing latency, optimizing memory usage, and reducing power consumption, directly impacting the model’s responsiveness in practical, deployed scenarios. Achieving a harmonious equilibrium between these twin facets is key to unleashing the full potential of deep neural networks, where accuracy meets real-time efficiency in a seamlessly optimized deployment.

How to cite

Askaruly, S. (2023). Inference optimization (part 1). Tuttelikz blog: tuttelikz.github.io/blog/2023/09/inference-1

References

[1] Nemire, B. (n.d.). NVIDIA Deep Learning Inference Platform Performance Study. Retrieved from https://developer.nvidia.com/blog/nvidia-deep-learning-inference-platform-performance-study/
[2] Andersch, M. (n.d.-b). Inference: The Next Step in GPU-Accelerated Deep Learning. Retrieved from https://developer.nvidia.com/blog/inference-next-step-gpu-accelerated-deep-learning/
[3] Andersch, M. (n.d.-a). Dev Stack: How to Optimize Your Model for Inference? Retrieved from https://www.devstack.co.kr/inference-optimization-using-tensorrt/
[4] Capra, M. (2020). Hardware and software optimizations for accelerating deep neural networks: Survey of current trends, challenges, and the road ahead. IEEE Access, 8, 225134-225180