# VMAF Python Library

The VMAF Python library offers a full range of functionalities: running basic VMAF command lines, software testing, training and validating new VMAF models on video datasets, data visualization tools, and more. It is the playground for experimenting with VMAF.

## Build

Make sure you have `python3` (Python 3.6 or higher). You can check the version with `python3 --version`.

Follow the steps below to set up a clean virtual environment:
```shell script
python3 -m pip install virtualenv
python3 -m virtualenv .venv
source .venv/bin/activate
```

From this point forward, `python3` and `pip` will be relative to the virtualenv and isolated from the system Python. Returning to the project in subsequent shell sessions will require re-activating the virtualenv with `source .venv/bin/activate`.

Now install the tools required to build VMAF into the virtualenv.

```shell script
pip install meson cython numpy
sudo [package-manager] install nasm ninja doxygen
```

Invoke the `[package-manager]` appropriate for your system: `apt-get` for Ubuntu and Debian, `yum` for older CentOS and RHEL, `dnf` for Fedora and newer CentOS (where the package is `ninja-build` instead of `ninja`), `zypper` for openSUSE, or `brew` for macOS (no `sudo`).

Make sure `nasm` is 2.13.02 or higher (check with `nasm --version`) and `ninja` is 1.7.1 or higher (check with `ninja --version`).

Depending on the system, you may also need to install `python-dev` or equivalent (`python-devel` on CentOS):
```shell script
sudo [package-manager] install [python-dev]
```
where `[python-dev]` is either `python3-dev` or `python3-devel` depending on the system.

Build the binary by running:
```shell script
make
```

Install the rest of the required Python packages:
```shell script
pip install -r python/requirements.txt
```

## Testing

Run the unit tests and make sure they all pass:
```shell script
./unittest
```

## Basic Usage

### Run VMAF Using `run_vmaf`

One can run VMAF from the command line via `run_vmaf`, which takes input videos in the raw `.yuv` format. To run VMAF on a single reference/distorted video pair, run:

```shell script
PYTHONPATH=python ./python/vmaf/script/run_vmaf.py \
    format width height \
    reference_path \
    distorted_path \
    [--out-fmt output_format]
```

The arguments are the following:

- `format` can be one of:
    - `yuv420p`, `yuv422p`, `yuv444p` (8-Bit YUV)
    - `yuv420p10le`, `yuv422p10le`, `yuv444p10le` (10-Bit little-endian YUV)
    - `yuv420p12le`, `yuv422p12le`, `yuv444p12le` (12-Bit little-endian YUV)
    - `yuv420p16le`, `yuv422p16le`, `yuv444p16le` (16-Bit little-endian YUV)
- `width` and `height` are the width and height of the videos, in pixels
- `reference_path` and `distorted_path` are the paths to the reference and distorted video files
- `output_format` can be one of:
    - `text`
    - `xml`
    - `json`

For example, the following command runs VMAF on a pair of `.yuv` inputs ([`src01_hrc00_576x324.yuv`](https://github.com/Netflix/vmaf_resource/blob/master/python/test/resource/yuv/src01_hrc00_576x324.yuv), [`src01_hrc01_576x324.yuv`](https://github.com/Netflix/vmaf_resource/blob/master/python/test/resource/yuv/src01_hrc01_576x324.yuv)):

```shell script
PYTHONPATH=python ./python/vmaf/script/run_vmaf.py \
    yuv420p 576 324 \
    src01_hrc00_576x324.yuv \
    src01_hrc01_576x324.yuv \
    --out-fmt json
```

This will generate JSON output like:

```json
{
    ...
    "aggregate": {
        "VMAF_feature_adm2_score": 0.93458780776205741, 
        "VMAF_feature_motion2_score": 3.8953518541666665, 
        "VMAF_feature_vif_scale0_score": 0.36342081156994926, 
        "VMAF_feature_vif_scale1_score": 0.76664738784617292, 
        "VMAF_feature_vif_scale2_score": 0.86285338927816291, 
        "VMAF_feature_vif_scale3_score": 0.91597186913930484, 
        "VMAF_score": 76.699271371151269, 
        "method": "mean"
    }
}
```

where `VMAF_score` is the final score and the others are the scores for VMAF's elementary metrics:
- `adm2`, `vif_scalex` scores range from 0 (worst) to 1 (best)
- `motion2` score typically ranges from 0 (static) to 20 (high-motion)

### `ffmpeg2vmaf`

Historically, we have provided `ffmpeg2vmaf` as a command-line tool that takes compressed video bitstreams as input. It is now considered deprecated in favor of [using FFmpeg with VMAF](ffmpeg.md) to achieve the same purpose.

`ffmpeg2vmaf` essentially pipes FFmpeg-decoded videos to VMAF. Note that you need a recent version of `ffmpeg` installed (the first time, run the command line and follow the prompted instructions to specify the path of `ffmpeg`).

```shell script
PYTHONPATH=python ./python/vmaf/script/ffmpeg2vmaf.py \
    quality_width quality_height \
    reference_path distorted_path \
    [--model model_path] [--out-fmt out_fmt]
```

Here `quality_width` and `quality_height` are the width and height to which the reference and distorted videos are scaled before the VMAF calculation. This is different from `run_vmaf`'s `width` and `height`, which specify the raw YUV's width and height instead. The inputs to `ffmpeg2vmaf` must already have this information specified in their headers so that they are FFmpeg-decodable.

## Advanced Usage

VMAF follows a machine-learning-based approach: it first extracts a number of quality-relevant features (or elementary metrics) from a distorted video and its reference full-quality video, then fuses them into a final quality score using a non-linear regressor (e.g. an SVM regressor), hence the name “Video Multi-method Assessment Fusion”.
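
To illustrate the fusion idea, here is a toy sketch using scikit-learn (not the library's actual code path; the feature values and quality labels below are made up):

```python
import numpy as np
from sklearn.svm import NuSVR

# one row per clip: hypothetical elementary metrics (e.g. adm2, motion2, vif)
features = np.array([
    [0.93, 3.9, 0.36],
    [0.99, 0.1, 0.95],
    [0.80, 7.2, 0.20],
])
labels = np.array([76.7, 98.0, 55.0])  # made-up subjective quality scores

regressor = NuSVR()  # the non-linear regressor that "fuses" the features
regressor.fit(features, labels)
print(regressor.predict(features[:1]))  # predicted quality for the first clip
```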

In addition to the basic commands, the VMAF package also provides a framework that allows any user to train their own perceptual quality assessment model. For example, the directory [`model`](../../model) contains a number of pre-trained models, which can be loaded by the aforementioned commands:

```shell script
PYTHONPATH=python ./python/vmaf/script/run_vmaf.py \
    format width height \
    reference_path \
    distorted_path \
    [--model model_path]
```

For example:

```shell script
PYTHONPATH=python ./python/vmaf/script/run_vmaf.py \
    yuv420p 576 324 \
    python/test/resource/yuv/src01_hrc00_576x324.yuv \
    python/test/resource/yuv/src01_hrc01_576x324.yuv \
    --model model/other_models/nflxtrain_vmafv3.pkl
```

A user can customize the model based on:

- The video dataset it is trained on
- The list of features used
- The regressor used (and its hyper-parameters)

Once a model is trained, the VMAF package also provides tools to cross-validate it on a different dataset and to visualize the results.

### Create a Dataset

To begin with, create a dataset file following the format in [`example_dataset.py`](../../resource/example/example_dataset.py). A dataset is a collection of distorted videos. Each has a unique asset ID and a corresponding reference video, identified by a unique content ID. Each distorted video is also associated with a subjective quality score, typically a MOS (mean opinion score), obtained through a subjective study. An example code snippet that defines a dataset is as follows:

```python
dataset_name = 'example'
yuv_fmt = 'yuv420p'
width = 1920
height = 1080
ref_videos = [
    {'content_id':0, 'path':'checkerboard.yuv'},
    {'content_id':1, 'path':'flat.yuv'},
]
dis_videos = [
    {'content_id':0, 'asset_id': 0, 'dmos':100, 'path':'checkerboard.yuv'},
    {'content_id':0, 'asset_id': 1, 'dmos':50, 'path':'checkerboard_dis.yuv'},
    {'content_id':1, 'asset_id': 2, 'dmos':100, 'path':'flat.yuv'},
    {'content_id':1, 'asset_id': 3, 'dmos':80, 'path':'flat_dis.yuv'},
]
```

See the directory [`resource/dataset`](../../resource/dataset) for more examples. Also refer to the [Datasets](datasets.md) document regarding publicly available datasets.

### Validate a Dataset

Once a dataset is created, first validate the dataset using existing VMAF or other (PSNR, SSIM or MS-SSIM) metrics. Run:

```shell script
PYTHONPATH=python ./python/vmaf/script/run_testing.py \
    quality_type \
    test_dataset_file \
    [--vmaf-model optional_VMAF_model_path] \
    [--cache-result] \
    [--parallelize]
```

where `quality_type` can be `VMAF`, `PSNR`, `SSIM`, `MS_SSIM`, etc.

Enabling `--cache-result` allows storing/retrieving extracted features (or elementary quality metrics) in a data store (under `workspace/result_store_dir/file_result_store`), since feature extraction is the most expensive operation here.

Enabling `--parallelize` allows execution on multiple reference-distorted video pairs in parallel. Sometimes it is desirable to disable parallelization for debugging purposes (e.g. some error messages can only be displayed when parallel execution is disabled).

For example:

```shell script
PYTHONPATH=python ./python/vmaf/script/run_testing.py \
    VMAF \
    resource/example/example_dataset.py \
    --cache-result \
    --parallelize
```

Make sure `matplotlib` is installed to visualize the MOS-prediction scatter plot and inspect the statistics:

- PCC – Pearson correlation coefficient
- SRCC – Spearman rank order correlation coefficient
- RMSE – root mean squared error
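
As a reference, these statistics can be computed with `scipy` and `numpy` (an illustrative sketch, not the library's internal code; the score arrays are hypothetical):

```python
import numpy as np
from scipy import stats

mos = np.array([100.0, 50.0, 100.0, 80.0])  # hypothetical ground-truth MOS
pred = np.array([95.0, 55.0, 97.0, 75.0])   # hypothetical predicted scores

pcc, _ = stats.pearsonr(mos, pred)          # Pearson correlation coefficient
srcc, _ = stats.spearmanr(mos, pred)        # Spearman rank order correlation
rmse = np.sqrt(np.mean((mos - pred) ** 2))  # root mean squared error
```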

#### Troubleshooting

When creating a dataset file, one may make errors (for example, having a typo in a file path) that could go unnoticed but make the execution of `run_testing` fail. For debugging purposes, it is recommended to disable `--parallelize`.

If the problem persists, one may need to run the script:

```shell script
PYTHONPATH=python ./python/vmaf/script/run_cleaning_cache.py \
    quality_type \
    test_dataset_file
```

to clean up corrupted results in the store before retrying. For example:

```shell script
PYTHONPATH=python ./python/vmaf/script/run_cleaning_cache.py \
    VMAF \
    resource/example/example_dataset.py
```

### Train a New Model

Now that we are confident that the dataset is created correctly and we have some benchmark results on existing metrics, we can proceed to train a new quality assessment model. Run:

```shell script
PYTHONPATH=python ./python/vmaf/script/run_vmaf_training.py \
    train_dataset_filepath \
    feature_param_file \
    model_param_file \
    output_model_file \
    [--cache-result] \
    [--parallelize]
```

For example:

```shell script
PYTHONPATH=python ./python/vmaf/script/run_vmaf_training.py \
    resource/example/example_dataset.py \
    resource/feature_param/vmaf_feature_v2.py \
    resource/model_param/libsvmnusvr_v2.py \
    workspace/model/test_model.pkl \
    --cache-result \
    --parallelize
```

`feature_param_file` defines the set of features used. For example, both dictionaries below:

```python
feature_dict = {'VMAF_feature': 'all', }
```

and

```python
feature_dict = {'VMAF_feature': ['vif', 'adm'], }
```

are valid specifications of selected features. Here `VMAF_feature` is an "aggregate" feature type, and `vif`, `adm` are the "atomic" feature types within the aggregate type. In the first case, `all` specifies that all atomic features of `VMAF_feature` are selected. A `feature_dict` dictionary can also contain more than one aggregate feature type, as sketched below.
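
For instance, a sketch of a `feature_dict` selecting features from two aggregate types (assuming a second feature extractor such as `Moment_feature` is available in the package):

```python
feature_dict = {
    'VMAF_feature': ['vif', 'adm'],  # selected atomic features
    'Moment_feature': 'all',         # all atomic features of a second aggregate type
}
```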

`model_param_file` defines the type and hyper-parameters of the regressor to be used. For details, refer to the self-explanatory examples in directory `resource/model_param`. One example is:

```python
model_type = "LIBSVMNUSVR"
model_param_dict = {
    # ==== preprocess: normalize each feature ==== #
    'norm_type':'clip_0to1',  # rescale to within [0, 1]
    # ==== postprocess: clip final quality score ==== #
    'score_clip':[0.0, 100.0],  # clip to within [0, 100]
    # ==== libsvmnusvr parameters ==== #
    'gamma':0.85, # selected
    'C':1.0,  # default
    'nu':0.5,  # default
    'cache_size':200,  # default
}
```

The trained model is output to `output_model_file`. Once obtained, it can be used by `run_vmaf`, or by `run_testing` to validate another dataset.

![training scatter](/resource/images/scatter_training.png)
![testing scatter](/resource/images/scatter_testing.png)

Above are two example scatter plots obtained from running the `run_vmaf_training` and `run_testing` commands on a training and a testing dataset, respectively.

### Using Custom Subjective Models

The commands `run_vmaf_training` and `run_testing` also support custom subjective models (e.g. MLE_CO_AP2 (default), MOS, DMOS, SR_MOS (i.e. ITU-R BT.500), BR_SR_MOS (i.e. ITU-T P.913) and more), through the [sureal](https://github.com/Netflix/sureal) package.

The subjective model can be specified with the option `--subj-model subjective_model`, for example:

```shell script
PYTHONPATH=python ./python/vmaf/script/run_vmaf_training.py \
    resource/example/example_raw_dataset.py \
    resource/feature_param/vmaf_feature_v2.py \
    resource/model_param/libsvmnusvr_v2.py \
    workspace/model/test_model.pkl \
    --subj-model MLE_CO_AP2 \
    --cache-result \
    --parallelize

PYTHONPATH=python ./python/vmaf/script/run_testing.py \
    VMAF \
    resource/example/example_raw_dataset.py \
    --subj-model MLE_CO_AP2 \
    --cache-result \
    --parallelize
```

Note that for the `--subj-model` option to have an effect, the input dataset file must follow a format similar to [example_raw_dataset.py](../../resource/example/example_raw_dataset.py). Specifically, for each dictionary element in `dis_videos`, instead of having a key named `dmos` or `groundtruth` as in [example_dataset.py](../../resource/example/example_dataset.py), it must have a key named `os` (which stands for opinion score), and the value must be a list of numbers. These are the "raw opinion scores" collected from subjective experiments, which are used as the input to the custom subjective models.
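
For illustration, `dis_videos` entries in the raw-dataset format might look like the following sketch (the per-subject scores are made up):

```python
dis_videos = [
    # 'os' holds the raw opinion scores, one number per subject
    {'content_id': 0, 'asset_id': 0, 'path': 'checkerboard.yuv',
     'os': [100, 95, 91, 97]},
    {'content_id': 0, 'asset_id': 1, 'path': 'checkerboard_dis.yuv',
     'os': [50, 45, 55, 60]},
]
```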

### Cross Validation

[`run_vmaf_cross_validation.py`](../../python/script/run_vmaf_cross_validation.py) provides tools for cross-validation of hyper-parameters and models. `run_vmaf_cv` runs training on a training dataset using hyper-parameters specified in a parameter file, outputs a trained model file, and then tests the trained model on another test dataset and reports testing correlation scores. `run_vmaf_kfold_cv` takes in a dataset file, a parameter file, and a data structure (list of lists) that specifies the folds based on video content IDs, and runs k-fold cross validation on the video dataset (see the sketch below). This can be useful for manually tuning the model parameters.
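
For example, a sketch of such a fold specification over six content IDs (the grouping is arbitrary):

```python
# each inner list holds the content IDs held out together in one fold
kfold = [
    [0, 1],
    [2, 3],
    [4, 5],
]
```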

### Creating New Features And Regressors

You can also customize VMAF by plugging in third-party features or inventing new features, and specifying them in a `feature_param_file`. Essentially, the "aggregate" feature type (for example: `VMAF_feature`) specified in the `feature_dict` corresponds to the `TYPE` field of a `FeatureExtractor` subclass (for example: `VmafFeatureExtractor`). All you need to do is create a new class extending the `FeatureExtractor` base class, as sketched below.
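
A minimal sketch of such a subclass (the class and feature names are hypothetical, and the exact hook methods to override may differ; consult the `FeatureExtractor` source for the real interface):

```python
from vmaf.core.feature_extractor import FeatureExtractor

class MyFeatureExtractor(FeatureExtractor):

    TYPE = 'My_feature'  # the "aggregate" feature type, referenced in feature_dict
    VERSION = '0.1'
    ATOM_FEATURES = ['sharpness', 'contrast']  # hypothetical "atom" feature types

    def _generate_result(self, asset):
        # compute per-frame values for each atom feature and write them to
        # the log file expected by the base class (sketch only)
        raise NotImplementedError
```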

Similarly, you can plug in a third-party regressor or invent a new regressor and specify it in a `model_param_file`. The `model_type` (for example: `LIBSVMNUSVR`) corresponds to the `TYPE` field of a `TrainTestModel` subclass (for example: `LibsvmnusvrTrainTestModel`). All that is needed is to create a new class extending the `TrainTestModel` base class, as sketched below.
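
A corresponding sketch for a regressor (again, the names are hypothetical and the method signatures follow the description in the `TrainTestModel` section below rather than the exact source):

```python
from vmaf.core.train_test_model import TrainTestModel

class MyRegressorTrainTestModel(TrainTestModel):

    TYPE = 'MYREGRESSOR'  # referenced as model_type in a model_param_file
    VERSION = '0.1'

    def train(self, xys):
        # fit the underlying regressor on features and ground-truth labels
        raise NotImplementedError  # sketch only

    def predict(self, xs):
        # return predicted labels for the input features
        raise NotImplementedError  # sketch only
```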

For instructions on how to extend the `FeatureExtractor` and `TrainTestModel` base classes, refer to [`CONTRIBUTING.md`](../../CONTRIBUTING.md).

## Analysis Tools

Over time, a number of helper tools have been incorporated into the package to facilitate training and validating VMAF models. An overview of the available tools can be found in [this slide deck](VQEG_SAM_2018_111_AnalysisToolsInVMAF.pdf).

### BD-Rate Calculator

A Bjøntegaard-Delta (BD) rate [implementation](../../python/vmaf/tools/bd_rate_calculator.py) has been added. Example usage can be found [here](../../python/test/bd_rate_calculator_test.py). The implementation is validated against [MPEG JCTVC-L1100](http://phenix.int-evry.fr/jct/doc_end_user/current_document.php?id=7281).

### LIME (Local Interpretable Model-Agnostic Explanations) Implementation

An implementation of [LIME](https://arxiv.org/pdf/1602.04938.pdf) has also been added to the repository. For more information, refer to our [analysis tools](VQEG_SAM_2018_111_AnalysisToolsInVMAF.pdf) presentation. The main idea is to perform a local linear approximation to any regressor or classifier and then use the coefficients of the linearized model as indicators of feature importance. LIME can be used as part of the VMAF regression framework, for example:

```shell script
PYTHONPATH=python ./python/vmaf/script/run_vmaf.py \
    yuv420p 576 324 \
    src01_hrc00_576x324.yuv \
    src01_hrc00_576x324.yuv \
    --local-explain
```

Naturally, LIME can also be applied to any other regression scheme as long as there exists a pre-trained model. For example, applying to BRISQUE:

```shell script
PYTHONPATH=python ./python/vmaf/script/run_vmaf.py yuv420p 576 324 \
    src01_hrc00_576x324.yuv \
    src01_hrc00_576x324.yuv \
    --local-explain \
    --model model/other_models/nflxall_vmafv1.pkl
```

## Format Tools

### Convert Model File from pickle (pkl) to json

A tool to convert a model file (currently supporting libsvm models) from pickle to json has been added at `python/vmaf/script/convert_model_from_pkl_to_json.py`. Usage:
```text
usage: convert_model_from_pkl_to_json.py [-h] --input-pkl-filepath
                                         INPUT_PKL_FILEPATH
                                         --output-json-filepath
                                         OUTPUT_JSON_FILEPATH

optional arguments:
  -h, --help            show this help message and exit
  --input-pkl-filepath INPUT_PKL_FILEPATH
                        path to the input pkl file, example:
                        model/vmaf_float_v0.6.1.pkl or
                        model/vmaf_float_b_v0.6.3/vmaf_float_b_v0.6.3.pkl
  --output-json-filepath OUTPUT_JSON_FILEPATH
                        path to the output json file, example:
                        model/vmaf_float_v0.6.1.json or model/vmaf_float_b_v0.6.3.json
```

Examples:
```shell script
python/vmaf/script/convert_model_from_pkl_to_json.py \
--input-pkl-filepath model/vmaf_float_b_v0.6.3/vmaf_float_b_v0.6.3.pkl \
--output-json-filepath ./vmaf_float_b_v0.6.3.json

python/vmaf/script/convert_model_from_pkl_to_json.py \
--input-pkl-filepath model/vmaf_float_v0.6.1.pkl \
--output-json-filepath ./vmaf_float_v0.6.1.json
```

## Core Classes

The core classes of the VMAF Python library are depicted in the diagram below:

![UML](../images/uml.png)

### Asset

An `Asset` is the most basic unit with enough information to perform a task on a piece of media. It includes basic information about a distorted video and its undistorted reference counterpart, as well as the auxiliary preprocessing information that can be understood by the `Executor` and its subclasses (see the construction sketch after this list). For example:
  - The frame range on which to perform a task (i.e. `dis_start_end_frame` and `ref_start_end_frame`)
  - At what resolution to perform a task (e.g. a video frame is upscaled with a `resampling_type` method to the resolution specified by `quality_width_height` before feature extraction)
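
A sketch of constructing an `Asset` (the constructor signature and `asset_dict` keys shown are assumptions; see `python/vmaf/core/asset.py` for the authoritative interface):

```python
from vmaf.core.asset import Asset

asset = Asset(
    dataset='cmd', content_id=0, asset_id=0,
    workdir_root='workspace/workdir',
    ref_path='src01_hrc00_576x324.yuv',
    dis_path='src01_hrc01_576x324.yuv',
    asset_dict={
        'width': 576, 'height': 324,  # raw YUV resolution
        'yuv_type': 'yuv420p',
        'quality_width': 576,         # resolution at which to
        'quality_height': 324,        # perform the task
    })
```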

`Asset` extends the `WorkdirEnabled` mixin, which comes with a thread-safe working directory to facilitate parallel execution.

### Executor

An `Executor` takes a list of `Assets` as input, runs computations on them, and returns a list of corresponding `Results`. An `Executor` extends the `TypeVersionEnabled` mixin, and must specify a unique type and version combination (via the `TYPE` and `VERSION` attributes), so that the `Result` generated by it can be uniquely identified. This facilitates a number of shared housekeeping functions, including storing and reusing `Results` (`result_store`), creating FIFO pipes (`fifo_mode`), etc. An `Executor` understands the preprocessing steps specified in its input `Assets`. It relies on FFmpeg to do the processing for it (FFmpeg must be pre-installed and its path specified in the `FFMPEG_PATH` field of the `python/vmaf/externals.py` file).

An `Executor` and its subclasses can take optional parameters during initialization. There are two fields for these optional parameters:
  - `optional_dict`: a dictionary field to specify parameters that will impact numerical result (e.g. which wavelet transform to use).
  - `optional_dict2`: a dictionary field to specify parameters that will NOT impact numerical result (e.g. outputting optional results).

`Executor` is the base class for `FeatureExtractor` and `QualityRunner` (and the sub-subclass `VmafQualityRunner`).

### Result

A `Result` is a key-value store of read-only execution results generated by an `Executor` on an `Asset`. A key corresponds to an "atom" feature type or a type of a quality score, and a value is a list of score values, each corresponding to a computation unit (i.e. in the current implementation, a frame).

The `Result` class also provides a number of tools for aggregating the per-unit scores into an average score. The default aggregation method is the mean, but `Result.set_score_aggregate_method()` allows customizing it (see `test_to_score_str()` in `test/result_test.py` for examples, and the sketch below).
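
For example, assuming `result` was returned by an `Executor` (a sketch; `np.median` is just one possible aggregation):

```python
import numpy as np

result.set_score_aggregate_method(np.median)  # override the default mean
median_vmaf = result['VMAF_score']            # median-aggregated score
```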

### ResultStore

`ResultStore` provides the capability to save and load a `Result`. The current implementation, `FileSystemResultStore`, persists results via a simple file system that saves/loads results in a directory. The directory has multiple subdirectories, each corresponding to an `Executor`. Each subdirectory contains multiple files, each storing the dataframe for an `Asset`.
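
A usage sketch (the constructor argument and method names are assumptions based on the description above):

```python
from vmaf.core.result_store import FileSystemResultStore

result_store = FileSystemResultStore(
    result_store_dir='workspace/result_store_dir/file_result_store')
result_store.save(result)  # persist a Result produced by an Executor
```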

### FeatureExtractor

`FeatureExtractor` subclasses `Executor`, and is specifically for extracting features (aka elementary quality metrics) from `Assets`. Any concrete feature extraction implementation should extend the `FeatureExtractor` base class (e.g. `VmafFeatureExtractor`). The `TYPE` field corresponds to the "aggregate" feature name, and the `ATOM_FEATURES`/`DERIVED_ATOM_FEATURES` field corresponds to the "atom" feature names.

### FeatureAssembler

`FeatureAssembler` assembles features for an input list of `Assets` on an input list of `FeatureExtractor` subclasses. The constructor argument `feature_dict` specifies the list of `FeatureExtractor` subclasses (i.e. the "aggregate" features) and the selected "atom" features. For each asset on a `FeatureExtractor`, it outputs a `BasicResult` object. `FeatureAssembler` is used by a `QualityRunner` to assemble the vector of features to be used by a `TrainTestModel`.

### TrainTestModel

`TrainTestModel` is the base class for any concrete implementation of a regressor, which must provide a `train()` method to perform training on a set of data and their ground-truth labels, a `predict()` method to predict the labels on a set of data, and `to_file()` and `from_file()` methods to save and load trained models.

A `TrainTestModel` constructor must supply a dictionary of parameters (i.e. `param_dict`) that contains the regressor's hyper-parameters. The base class also provides shared functionalities such as input data normalization/output data denormalization, evaluating prediction performance, etc.

Like an `Executor`, a `TrainTestModel` extends `TypeVersionEnabled` and must specify a unique type and version combination (via the `TYPE` and `VERSION` attributes).

### CrossValidation

`CrossValidation` provides a collection of static methods to facilitate validation of a `TrainTestModel` object. As such, it also provides means to search the optimal hyper-parameter set for a `TrainTestModel` object.

### QualityRunner

`QualityRunner` subclasses `Executor`, and is specifically for evaluating the quality score for `Assets`. Any concrete implementation to generate the final quality score should extend the `QualityRunner` base class (e.g. `VmafQualityRunner`, `PsnrQualityRunner`).

There are two ways to extend a `QualityRunner` base class -- either by directly implementing the quality calculation (e.g. by calling a C executable, as in `PsnrQualityRunner`), or by calling a `FeatureAssembler` (which indirectly calls a `FeatureExtractor`) and a `TrainTestModel` subclass (as in `VmafQualityRunner`).