
VMAF - Video Multi-Method Assessment Fusion


VMAF is an Emmy-winning perceptual video quality assessment algorithm developed by Netflix. This software package includes a stand-alone C library, libvmaf, and a Python library that wraps it. The Python library also provides a set of tools for training and testing a custom VMAF model.

Read this tech blog post for an overview, this post for tips on best practices, and this post for our latest work on speed optimization, a new API design, and the introduction of a codec evaluation-friendly NEG mode.

Also included in libvmaf are implementations of several other metrics: PSNR, PSNR-HVS, SSIM, MS-SSIM and CIEDE2000.


News

  • (2023-12-07) We are releasing libvmaf v3.0.0. It contains several optimizations and bug fixes, and a full removal of the APIs which were deprecated in v2.0.0.
  • (2021-12-15) We have added the full_ref input parameter to CAMBI to allow running it as a full-reference metric that accounts for banding already present in the source. Check out the usage page.
  • (2021-12-1) We have added the max_log_contrast input parameter to CAMBI to allow capturing banding at higher contrasts than the default. We have also sped up CAMBI (e.g., around 4.5x for 4K). Check out the usage page, and see the sketch after this list.
  • (2021-10-7) We are open-sourcing CAMBI (Contrast Aware Multiscale Banding Index) - Netflix's detector for banding (aka contouring) artifacts. Check out the tech blog for an overview and the technical paper published in PCS 2021 (note that the paper describes an initial version of CAMBI that no longer matches the code exactly, but it is still a good introduction). Also check out the usage page.
  • (2020-12-7) Check out our latest tech blog on speed optimization, new API design and the introduction of a codec evaluation-friendly NEG mode.
  • (2020-12-3) We are releasing libvmaf v2.0.0. It has a new fixed-point and x86 SIMD-optimized (AVX2, AVX-512) implementation that achieves a 2x speedup compared to the previous floating-point version. It also has a new API that is more flexible and extensible.
  • (2020-7-13) We have created a memo to share our thoughts on VMAF's behavior in the presence of image enhancement operations, its impact on codec evaluation, and our solutions. Accordingly, we have added a new mode called No Enhancement Gain (NEG).
  • (2020-2-27) We have changed VMAF's license from Apache 2.0 to BSD+Patent, a license more permissive than Apache 2.0 that also includes an express patent grant.
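
As a rough illustration of the CAMBI parameters mentioned in the items above, the sketch below drives the vmaf command-line tool from Python. The flag spellings, the key=value option syntax, and the parameter values are assumptions based on the news items and the usage page, not guarantees of this README; verify them against `vmaf --help` and your installed libvmaf version.

```python
# Hypothetical sketch: running CAMBI through the `vmaf` CLI from Python.
# Assumptions: `vmaf` is on PATH, `--no_prediction` skips the VMAF model so
# only feature scores are computed, and CAMBI options are passed as
# key=value pairs inside the --feature string (syntax per the usage page;
# verify for your version). Parameter values below are illustrative only.
import subprocess

subprocess.run(
    [
        "vmaf",
        "--reference", "ref.y4m",   # pristine source (hypothetical file name)
        "--distorted", "dis.y4m",   # encoded/processed version
        "--no_prediction",          # compute features only, no VMAF model
        # full_ref=true treats CAMBI as a full-reference metric, discounting
        # banding already present in the source; max_log_contrast raises the
        # contrast ceiling for detected banding (value here is illustrative).
        "--feature", "cambi=full_ref=true:max_log_contrast=4",
        "--output", "cambi_out.xml",
    ],
    check=True,
)
```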

Documentation

There is an overview of the documentation with links to specific pages, covering FAQs, available models and features, software usage guides, and a list of resources.

Usage

The software package offers a number of ways to interact with the VMAF implementation.

  • The command-line tool vmaf provides a complete implementation of the algorithm, so one can easily deploy VMAF in a production environment. The vmaf tool also computes a number of auxiliary metrics such as PSNR, SSIM and MS-SSIM. (A hedged usage sketch follows this list.)
  • The C library libvmaf provides an interface to incorporate VMAF into your code, and tools to integrate other feature extractors into the library.
  • The Python library offers a full array of wrapper classes and scripts for software testing, VMAF model training and validation, dataset processing, data visualization, etc.
  • VMAF is now included as a filter in FFmpeg, and can be configured using: ./configure --enable-libvmaf. Refer to the Using VMAF with FFmpeg page.
  • The VMAF Dockerfile generates a Docker image from the Python library. Refer to this document for detailed usage.
  • To build VMAF on Windows, follow these instructions.
  • AOM CTC: AOM has specified the vmaf tool as the standard implementation of metrics under the AOM common test conditions (CTC). Refer to this page for usage compliant with the AOM CTC.
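
As referenced in the command-line bullet above, here is a minimal, hedged sketch of driving the vmaf tool from a Python script. It assumes vmaf is on PATH, that the inputs are .y4m files (whose geometry is read from their headers), and that the flags and JSON output layout match your installed libvmaf version; treat it as an illustration rather than the authoritative interface.

```python
# Minimal sketch: invoking the `vmaf` command-line tool from Python.
# Assumptions (not guaranteed by this README): `vmaf` is on PATH, the inputs
# are .y4m files, and the flag spellings match your libvmaf version -- check
# `vmaf --help` and the usage documentation.
import json
import subprocess

def run_vmaf(reference: str, distorted: str, out_json: str = "vmaf_out.json") -> float:
    """Run the vmaf CLI on a reference/distorted pair and return the pooled VMAF score."""
    cmd = [
        "vmaf",
        "--reference", reference,   # pristine source, e.g. "ref.y4m"
        "--distorted", distorted,   # encoded/processed version, e.g. "dis.y4m"
        "--feature", "psnr",        # also compute an auxiliary metric (optional)
        "--output", out_json,
        "--json",                   # write results as JSON instead of XML
    ]
    subprocess.run(cmd, check=True)

    with open(out_json) as f:
        result = json.load(f)
    # The exact JSON layout depends on the libvmaf version; the pooled VMAF
    # score is typically found under pooled_metrics -> vmaf -> mean.
    return result["pooled_metrics"]["vmaf"]["mean"]

if __name__ == "__main__":
    print("VMAF:", run_vmaf("ref.y4m", "dis.y4m"))
```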

Contribution Guide

Refer to the contribution page. Also refer to this slide deck for an overview of the contribution process.
