No-reference Video Quality Assessment for Noise, Blur, and MPEG2 Natural Videos

Abstract

In this paper, we propose a new no-reference VQA metric, called the Video Hybrid No-reference (VHNR) method. It is based on natural video statistics built from the coefficients of the 3D curvelet and cosine transforms. VHNR blindly predicts the quality of noisy, blurry, or MPEG2-compressed videos and requires no original reference video. The 3D curvelet transform is known to be sensitive to the surface singularities generated by noise and blur artifacts in videos, while the cosine transform is well suited to detecting MPEG2 compression artifacts. No-reference prediction is possible because we studied tens of thousands of distorted videos and obtained a statistical relation between video quality, specific video characteristics in the transformed spaces, and video motion speed. Analyzing tens of thousands of simulated high-resolution videos is computationally intensive: the 3D curvelet transform of a single video requires 6 GB of memory and substantial computation time, so the algorithm is implemented on the FSU High-Performance Computing (HPC) cluster using MPI. The parallelism reduces the computation time of the whole experiment from 118 days to 9 days (a speedup of 13).
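To illustrate the kind of MPI parallelization the abstract refers to, the following is a minimal sketch, not the authors' implementation: each rank independently extracts per-video features (the 3D curvelet and cosine-transform statistics), and the results are gathered on rank 0. The function names and video paths are hypothetical placeholders; this assumes an embarrassingly parallel, one-video-per-task distribution, which is consistent with the reported near-linear speedup.

```python
# Sketch of distributing per-video feature extraction across MPI ranks.
# Assumes mpi4py; extract_vhnr_features() is a hypothetical placeholder
# for the 3D curvelet / cosine-transform statistics described in the paper.
from mpi4py import MPI


def extract_vhnr_features(path):
    # Placeholder: would compute 3D curvelet and DCT coefficient statistics
    # plus motion speed for one video. Returns dummy values here.
    return {"curvelet_stats": None, "dct_stats": None, "motion_speed": None}


def main(video_paths):
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Static round-robin assignment: each video is processed independently,
    # so no inter-rank communication is needed until the final gather.
    local_results = [
        (path, extract_vhnr_features(path))
        for i, path in enumerate(video_paths)
        if i % size == rank
    ]

    # Collect every rank's per-video feature vectors on rank 0,
    # where the statistical quality model would be fitted.
    gathered = comm.gather(local_results, root=0)
    if rank == 0:
        return [item for chunk in gathered for item in chunk]
    return None


if __name__ == "__main__":
    # Illustrative file names only; run with e.g. `mpiexec -n 4 python sketch.py`.
    main(["video_%04d.yuv" % i for i in range(16)])
```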