Learn What's New in AVCLabs Video Enhancer AI Windows V3.2.0
July 2023 Update for Windows (V3.2.0): TensorRT Acceleration Support
AVCLabs Video Enhancer AI Windows V3.2.0 delivers notably faster video processing on NVIDIA GPUs by utilizing TensorRT models.
You can check out the performance improvements of the TensorRT models on different NVIDIA GPUs below:
Processing-time comparison: TensorRT vs. CUDA ONNX
What is NVIDIA TensorRT?
NVIDIA TensorRT is a high-performance deep learning inference optimizer and runtime library designed to maximize the efficiency of inference on NVIDIA GPUs. TensorRT is commonly used to accelerate inference for models developed with frameworks like TensorFlow and PyTorch, or exported in the ONNX format.
TensorRT takes trained deep learning models and optimizes them for deployment on NVIDIA GPUs, achieving higher inference throughput and lower latency compared to running the models on regular CPU-based systems.
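To illustrate what this optimization step typically looks like, the sketch below uses the TensorRT Python API (TensorRT 8.x) to build a serialized engine from a trained ONNX model. This is a minimal example of the general workflow, not AVCLabs' internal implementation; the file names model.onnx and model.engine are placeholders.

```python
# Minimal sketch: converting an ONNX model into a TensorRT engine (TensorRT 8.x Python API).
# "model.onnx" and "model.engine" are placeholder file names, not AVCLabs assets.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)

# Create a network definition with explicit batch dimensions (required for ONNX parsing).
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)

# Parse the trained ONNX model into the TensorRT network.
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse the ONNX model")

# Build an optimized, serialized engine; FP16 typically speeds up inference on supported GPUs.
config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)
serialized_engine = builder.build_serialized_network(network, config)

# Save the engine so the TensorRT runtime can load it at inference time.
with open("model.engine", "wb") as f:
    f.write(serialized_engine)
```

NVIDIA's bundled trtexec command-line tool can perform the same conversion (for example, trtexec --onnx=model.onnx --saveEngine=model.engine --fp16), which is a convenient way to reproduce the kind of TensorRT vs. CUDA ONNX timing comparison shown above.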
The benefits of using TensorRT models in AVCLabs Video Enhancer AI:
The TensorRT models have been comprehensively trained and optimized for performance, resulting in faster video and image processing and reduced GPU memory usage. By leveraging TensorRT models, AVCLabs Video Enhancer AI achieves substantial speed improvements, typically 1 to 3 times faster than previous versions.
For more release information, please refer to the version history.