# Age & gender recognition. ## Data preparation The training procedure can be done using data in HDF5 format. Please, prepare images with faces and put it into some folder. Then create a special file () for 'train', 'val' and 'test' phase containing annotations with the following structure: ``` image_1_relative_path ... image_n_relative_path ``` The example images with a corresponding data file can be found in `./data/age_gender` directory and used in evaluation script. Once you have images and a data file, use the provided script to create database in HDF5 format. ### Create HDF5 files 1. Run docker in interactive session with mounted directory of your data ```Shell nvidia-docker run --rm -it --user=$(id -u) -v :/data ttcf bash ``` 2. Run the script to convert data to hdf5 format ```Shell python3 $CAFFE_ROOT/python/gen_hdf5_data.py /data/ images_db_train python3 $CAFFE_ROOT/python/gen_hdf5_data.py /data/ images_db_val python3 $CAFFE_ROOT/python/gen_hdf5_data.py /data/ images_db_test ``` 3. Close docker session by `ctrl+D` and check that you have `images_db_.hd5` and `images_db__list.txt` files in . ## Model training and evaluation ### Age-gender recognition model training On next stage we should train the Age-gender recognition model. To do this follow next steps: ```Shell cd ./models python3 train.py --model age_gender \ # name of model --weights age-gender-recognition-retail-0013.caffemodel \ # initialize weights from 'init_weights' directory --data_dir \ # path to directory with dataset --work_dir \ # directory to collect file from training process --gpu ``` ### Age-gender recognition model evaluation To evaluate the quality of trained Age-gender recognition model on your test data you can use provided scripts. ```Shell python3 evaluate.py --type ag \ --dir /age_gender/ \ --data_dir \ --annotation \ --iter ``` ### Export to IR format ```Shell python3 mo_convert.py --name age_gender --type ag \ --dir /age_gender/ \ --iter \ --data_type FP32 ``` ### Age & gender recognition demo You can use [this demo](https://github.com/opencv/open_model_zoo/tree/master/demos/interactive_face_detection_demo) to view how resulting model performs.