Commit 4e866261 authored by Gines Hidalgo

Simplified output doc and more examples in demo_quick_start

Signed-off-by: Gines Hidalgo <gineshidalgo99@gmail.com>
Parent 8d2f3a88
......@@ -99,7 +99,7 @@ We show an inference time comparison between the 3 available pose estimation lib
- **Hardware compatibility**: CUDA (Nvidia GPU), OpenCL (AMD GPU), and non-GPU (CPU-only) versions.
- **Usage Alternatives**:
- [**Command-line demo**](doc/demo_quick_start.md) for built-in functionality.
- [**C++ API**](examples/tutorial_api_cpp/) and [**Python API**](doc/python_api.md) for custom functionality. E.g., adding your custom inputs, pre-processing, post-processing, and output steps.
For further details, check [all released features](doc/released_features.md) and [release notes](doc/release_notes.md).
......@@ -125,7 +125,8 @@ Most users do not need to know C++ or Python, they can simply use the OpenPose D
```
# Ubuntu
./build/examples/openpose/openpose.bin
```
```
:: Windows - Portable Demo
bin\OpenPoseDemo.exe --video examples\media\video.avi
```
......@@ -134,7 +135,8 @@ You can also add any of the available flags in any order. Do you also want to ad
```
# Ubuntu
./build/examples/openpose/openpose.bin --video examples/media/video.avi --face --hand --write_json output_json_folder/
```
```
:: Windows - Portable Demo
bin\OpenPoseDemo.exe --video examples\media\video.avi --face --hand --write_json output_json_folder/
```
......
OpenPose - Steps to Add a New Module
====================================
## Developing Steps
......
OpenPose - How to Develop OpenPose
====================================
If you intend to extend the functionality of our library:
1. Read the [README.md](../../../README.md) page.
2. Check the basic library overview doc on [doc/advanced/library_structure/library_overview.md](library_overview.md).
3. Read, understand and play with the basic real-time pose demo source code [examples/openpose/openpose.cpp](../../../examples/openpose/openpose.cpp) and [examples/tutorial_api_cpp](../../../examples/tutorial_api_cpp). They include all the functionality of our library and have been properly commented.
4. Read, understand and play with the other tutorials in [examples/](../../../examples/). They include more specific examples.
5. Check the basic UML diagram on the [doc/advanced/library_structure/UML](UML/) to get an idea of the relations between modules.
6. Take a look at the structure of the already existing modules.
7. The C++ header files include documentation in [Doxygen](http://www.doxygen.org/) format. Create this documentation by compiling the [include](../../../include/) folder with Doxygen. This documentation is slowly but continuously improved.
8. You can also take a look at the source code or ask us on GitHub.
OpenPose - Standalone Face Or Hand Keypoint Detector
====================================
If you have camera views in which the hands are visible but not the rest of the body, or if you do not need the body keypoint detector and want to speed up the process, you can use the OpenPose face or hand keypoint detectors with your own face or hand detectors, rather than using the body keypoint detector as the initial detector for those.
......
......@@ -13,9 +13,8 @@ This document is a more detailed continuation of [doc/demo_quick_start.md](demo_
4. [Debugging Information](#debugging-information)
5. [Heat Maps Storing](#heat-maps-storing)
6. [BODY_25 vs. COCO vs. MPI Models](#body-25-vs-coco-vs-mpi-models)
2. [Help Flag](#help-flag)
3. [All Flags](#all-flags)
......@@ -82,32 +81,10 @@ There is an exception, for CPU version, the COCO and MPI models seems to be fast
## Help Flag
We recommend the [All Flags](#all-flags) section of this document, which sorts all the flags by category.
However, you can add the flag `--help` at any point to see all the available OpenPose flags. Check only the flags for `examples/openpose/openpose.cpp` itself (i.e., the ones in the section `Flags from examples/openpose/openpose.cpp:`).
```
# Ubuntu and Mac
./build/examples/openpose/openpose.bin --help
......@@ -122,7 +99,7 @@ bin\OpenPoseDemo.exe --help
## All Flags
Now that you are more familiar with OpenPose, this is a list with all the available flags. Each one is divided into flag name, default value, and description.
1. Debugging/Other
- DEFINE_int32(logging_level, 3, "The logging level. Integer in the range [0, 255]. 0 will output any opLog() message, while 255 will not output any. Current OpenPose library messages are in the range 0-4: 1 for low priority messages and 4 for important ones.");
......
......@@ -7,17 +7,16 @@ Forget about the OpenPose code, just download the portable Windows binaries (or
1. [Mac OSX Additional Step](#mac-osx-additional-step)
2. [Quick Start](#quick-start)
1. [Improving Memory and Speed but Decreasing Accuracy](#improving-memory-and-speed-but-decreasing-accuracy)
2. [Running on Images, Video, or Webcam](#running-on-images-video-or-webcam)
3. [Face and Hands](#face-and-hands)
4. [Different Outputs (JSON, Images, Video, UI)](#different-outputs-json-images-video-ui)
5. [Only Skeleton without Background Image](#only-skeleton-without-background-image)
6. [Not Running All GPUs](#not-running-all-gpus)
7. [Maximum Accuracy Configuration](#maximum-accuracy-configuration)
8. [3-D Reconstruction](#3-d-reconstruction)
9. [Tracking](#tracking)
10. [Kinect 2.0 as Webcam on Windows 10](#kinect-20-as-webcam-on-windows-10)
11. [Main Flags](#main-flags)
3. [Advanced Quick Start](#advanced-quick-start)
......@@ -64,31 +63,59 @@ If these fail with an out of memory error, do not worry, the next example will f
### Improving Memory and Speed but Decreasing Accuracy
**If you have an Nvidia GPU that does not run out of memory when executing the demo, you should skip this step!**
**Use `--net_resolution` at your own risk**: If your GPU runs out of memory or you do not have an Nvidia GPU, you can reduce `--net_resolution` to improve speed and reduce memory requirements, but it will also greatly reduce accuracy! The lower the resolution, the lower the accuracy, but the better the speed and memory footprint.
```
# Ubuntu and Mac
./build/examples/openpose/openpose.bin --video examples/media/video.avi --net_resolution -1x320
./build/examples/openpose/openpose.bin --video examples/media/video.avi --net_resolution -1x256
./build/examples/openpose/openpose.bin --video examples/media/video.avi --net_resolution -1x196
./build/examples/openpose/openpose.bin --video examples/media/video.avi --net_resolution -1x128
```
```
:: Windows - Portable Demo
bin\OpenPoseDemo.exe --video examples\media\video.avi --net_resolution -1x320
bin\OpenPoseDemo.exe --video examples\media\video.avi --net_resolution -1x256
bin\OpenPoseDemo.exe --video examples\media\video.avi --net_resolution -1x196
bin\OpenPoseDemo.exe --video examples\media\video.avi --net_resolution -1x128
```
```
:: Windows - Library - Assuming you copied the DLLs following doc/installation/README.md#windows
build\x64\Release\OpenPoseDemo.exe --video examples\media\video.avi --net_resolution -1x320
build\x64\Release\OpenPoseDemo.exe --video examples\media\video.avi --net_resolution -1x256
build\x64\Release\OpenPoseDemo.exe --video examples\media\video.avi --net_resolution -1x196
build\x64\Release\OpenPoseDemo.exe --video examples\media\video.avi --net_resolution -1x128
```
Additional notes:
- The default resolution is `-1x368`; any smaller resolution will improve speed.
- The `-1` means that the resolution will be adapted to maintain the aspect ratio of the input source. E.g., `-1x368`, `656x-1`, and `656x368` will result in the exact same resolution for 720p and 1080p input images.
- For videos, using `-1` is recommended to let OpenPose find the ideal resolution. For a folder of images of different sizes, keeping the `-1` with images of completely different aspect ratios might result in out-of-memory issues. E.g., if a folder contains two images of resolution `100x11040` and `10000x368`, then the default `-1x368` will result in network resolutions of `3x368` and `10000x368`, an obvious out of memory for the `10000x368` image.
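As an illustration of how the `-1` is resolved, the following Python sketch computes the missing dimension from the input aspect ratio. It is only a hedged approximation: the helper name and the rounding-to-a-multiple-of-16 assumption are ours (suggested by `-1x368` and `656x368` coinciding for 720p/1080p inputs), not OpenPose's exact internal logic.

```python
def adapt_net_resolution(net_w, net_h, img_w, img_h, multiple=16):
    """Resolve a -1 in --net_resolution by matching the input aspect ratio.

    Hypothetical helper for illustration; assumes network dimensions are
    rounded to the nearest multiple of 16.
    """
    if net_w == -1 and net_h == -1:
        raise ValueError("At most one dimension may be -1")
    if net_w == -1:
        net_w = round(net_h * img_w / img_h / multiple) * multiple
    elif net_h == -1:
        net_h = round(net_w * img_h / img_w / multiple) * multiple
    # Never return a degenerate (zero-width/height) resolution
    return max(multiple, net_w), max(multiple, net_h)

# For a 1080p (1920x1080) input with the default -1x368:
print(adapt_net_resolution(-1, 368, 1920, 1080))  # (656, 368)
```

Under this assumption, both 720p and 1080p inputs map `-1x368` to `656x368`, matching the note above.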
### Running on Images, Video, or Webcam
- Directory with images (`--image_dir {DIRECTORY_PATH}`):
```
# Ubuntu and Mac
./build/examples/openpose/openpose.bin --image_dir examples/media/
# With face and hands
./build/examples/openpose/openpose.bin --image_dir examples/media/ --face --hand
```
```
:: Windows - Portable Demo
bin\OpenPoseDemo.exe --image_dir examples\media\
:: With face and hands
bin\OpenPoseDemo.exe --image_dir examples\media\ --face --hand
```
```
:: Windows - Library - Assuming you copied the DLLs following doc/installation/README.md#windows
build\x64\Release\OpenPoseDemo.exe --image_dir examples\media\
:: With face and hands
build\x64\Release\OpenPoseDemo.exe --image_dir examples\media\ --face --hand
```
- Video (`--video {VIDEO_PATH}`):
```
# Ubuntu and Mac
./build/examples/openpose/openpose.bin --video examples/media/video.avi
......@@ -107,55 +134,106 @@ build\x64\Release\OpenPoseDemo.exe --video examples\media\video.avi
:: With face and hands
build\x64\Release\OpenPoseDemo.exe --video examples\media\video.avi --face --hand
```
- Webcam is used by default (i.e., if no `--image_dir` or `--video` flag is used). Optionally, if you have more than one camera, you can use `--camera {CAMERA_NUMBER}` to select the right one:
```
# Ubuntu and Mac
./build/examples/openpose/openpose.bin
./build/examples/openpose/openpose.bin --camera 0
./build/examples/openpose/openpose.bin --camera 1
# With face and hands
./build/examples/openpose/openpose.bin --face --hand
```
```
:: Windows - Portable Demo
bin\OpenPoseDemo.exe
bin\OpenPoseDemo.exe --camera 0
bin\OpenPoseDemo.exe --camera 1
:: With face and hands
bin\OpenPoseDemo.exe --face --hand
```
```
:: Windows - Library - Assuming you copied the DLLs following doc/installation/README.md#windows
build\x64\Release\OpenPoseDemo.exe
build\x64\Release\OpenPoseDemo.exe --camera 0
build\x64\Release\OpenPoseDemo.exe --camera 1
:: With face and hands
build\x64\Release\OpenPoseDemo.exe --face --hand
```
### Face and Hands
Simply add `--face` and/or `--hand` to any command:
```
# Ubuntu and Mac
./build/examples/openpose/openpose.bin --image_dir examples/media/
./build/examples/openpose/openpose.bin --video examples/media/video.avi
./build/examples/openpose/openpose.bin
# With face and hands
./build/examples/openpose/openpose.bin --image_dir examples/media/ --face --hand
./build/examples/openpose/openpose.bin --video examples/media/video.avi --face --hand
./build/examples/openpose/openpose.bin --face --hand
```
```
:: Windows - Portable Demo
bin\OpenPoseDemo.exe --image_dir examples\media\
bin\OpenPoseDemo.exe --video examples\media\video.avi
bin\OpenPoseDemo.exe
:: With face and hands
bin\OpenPoseDemo.exe --image_dir examples\media\ --face --hand
bin\OpenPoseDemo.exe --video examples\media\video.avi --face --hand
bin\OpenPoseDemo.exe --face --hand
```
## Different Outputs (JSON, Images, Video, UI)
All the output options are complementary to each other. E.g., whether you display the images with the skeletons on the UI (or not) is independent of whether you save them on disk (or not).
- Save the skeletons in a set of JSON files with `--write_json {OUTPUT_JSON_FOLDER_PATH}`. Omitting the flag (default) means no JSON saving. See [doc/output.md](output.md) to understand the output format of the JSON files.
```
# Ubuntu and Mac (same flags for Windows)
./build/examples/openpose/openpose.bin --image_dir examples/media/ --write_json output_jsons/
./build/examples/openpose/openpose.bin --video examples/media/video.avi --write_json output_jsons/
./build/examples/openpose/openpose.bin --write_json output_jsons/
```
- Save the visual output of OpenPose (the images with the skeletons overlaid) on disk as an output video (`--write_video {OUTPUT_VIDEO_PATH}`) or as a set of images (`--write_images {OUTPUT_IMAGE_DIRECTORY_PATH}`, where `--write_images_format {FORMAT}` could also come in handy):
```
# Ubuntu and Mac (same flags for Windows)
./build/examples/openpose/openpose.bin --video examples/media/video.avi --write_video output/result.avi
./build/examples/openpose/openpose.bin --image_dir examples/media/ --write_video output/result.avi
./build/examples/openpose/openpose.bin --video examples/media/video.avi --write_images output_images/ --write_images_format png
./build/examples/openpose/openpose.bin --image_dir examples/media/ --write_images output_images/ --write_images_format jpg
```
- You can also disable the UI visualization with `--display 0`. However, OpenPose will check and make sure your application is generating some kind of output, i.e., at least one of `--write_json`, `--write_video`, or `--write_images` must be set if `--display 0` is used.
```
# Ubuntu and Mac (same flags for Windows)
./build/examples/openpose/openpose.bin --video examples/media/video.avi --write_images output_images/ --display 0
```
- To speed up OpenPose even further when using `--display 0`, also add `--render_pose 0` if you are not using `--write_video` or `--write_images`. This way, OpenPose will not waste time overlaying skeletons on the input images.
```
# Ubuntu and Mac (same flags for Windows)
./build/examples/openpose/openpose.bin --video examples/media/video.avi --write_json output_jsons/ --display 0 --render_pose 0
```
## Only Skeleton without Background Image
You can also visualize/save the skeleton without blending it over the original image by adding `--disable_blending`:
```
# Ubuntu and Mac (same flags for Windows)
# Only body
./build/examples/openpose/openpose.bin --video examples/media/video.avi --disable_blending
```
## Not Running All GPUs
By default, OpenPose will use all the GPUs available in your machine. The following example runs the demo video `video.avi`, parallelizes it over 2 GPUs, GPUs 1 and 2 (note that it will skip GPU 0):
```
:: Windows - Portable Demo (same flags for Ubuntu and Mac)
bin\OpenPoseDemo.exe --video examples\media\video.avi --num_gpu 2 --num_gpu_start 1
```
......@@ -163,8 +241,8 @@ Simply add `--face` and/or `--hand` to any command, as seeing in the exmaples ab
This command provides the most accurate results we have been able to achieve for body, hand and face keypoint detection.
However:
- This will only work on Nvidia GPUs with at least 8 GB of memory. It won't work on CPU or OpenCL settings. Your only option to maximize accuracy with those is to manually crop the people to fit the whole area of the image that is fed into OpenPose.
- It will need ~10.5 GB of GPU memory for the body-foot model (`BODY_25`) or ~6.7 GB for the `COCO` model.
- This requires GPUs like Titan X, Titan XP, some Quadro models, P100, V100, etc.
- Including hands and face will require >= 16GB GPUs (so the 12 GB GPUs like Titan X and XPs will no longer work).
- This command runs at ~2 FPS on a Titan X for the body-foot model (~1 FPS for COCO).
......@@ -222,13 +300,13 @@ build\x64\Release\OpenPoseDemo.exe --flir_camera --3d --number_people_max 1 --fa
2. Saving 3-D keypoints and video
```
# Ubuntu and Mac (same flags for Windows)
./build/examples/openpose/openpose.bin --flir_camera --3d --number_people_max 1 --write_json output_folder_path/ --write_video_3d output_folder_path/video_3d.avi
```
3. Fast stereo camera image saving (without keypoint detection) for later post-processing
```
# Ubuntu and Mac (same flags for Windows)
# Saving video
# Note: saving in PNG rather than JPG will improve image quality, but slow down FPS (depending on hard disk writing speed and camera number)
./build/examples/openpose/openpose.bin --flir_camera --num_gpu 0 --write_video output_folder_path/video.avi --write_video_fps 5
......@@ -239,7 +317,7 @@ build\x64\Release\OpenPoseDemo.exe --flir_camera --3d --number_people_max 1 --fa
4. Reading and processing previously saved stereo camera images
```
# Ubuntu and Mac (same flags for Windows)
# Optionally add `--face` and/or `--hand` to include face and/or hands
# Assuming 3 cameras
# Note: We highly recommend to reduce `--output_resolution`. E.g., for 3 cameras recording at 1920x1080, the resulting image is (3x1920)x1080, so we recommend e.g. 640x360 (x3 reduction).
......@@ -251,7 +329,7 @@ build\x64\Release\OpenPoseDemo.exe --flir_camera --3d --number_people_max 1 --fa
5. Reconstruction when the keypoint is visible in at least `x` camera views out of the total `n` cameras
```
# Ubuntu and Mac (same flags for Windows)
# Reconstruction when a keypoint is visible in at least 2 camera views (assuming `n` >= 2)
./build/examples/openpose/openpose.bin --flir_camera --3d --number_people_max 1 --3d_min_views 2 --output_resolution {desired_output_resolution}
# Reconstruction when a keypoint is visible in at least max(2, min(4, n-1)) camera views
......@@ -260,32 +338,26 @@ build\x64\Release\OpenPoseDemo.exe --flir_camera --3d --number_people_max 1 --fa
### Tracking
1. Runtime huge speed up by reducing the accuracy:
```
:: Windows - Portable Demo (same flags for Ubuntu and Mac)
:: Using OpenPose on 1 frame, tracking the following (e.g., 5) frames
bin\OpenPoseDemo.exe --tracking 5 --number_people_max 1
```
2. Runtime speed up while keeping most of the accuracy:
```
:: Windows - Portable Demo (same flags for Ubuntu and Mac)
:: Using OpenPose on 1 frame and tracking another frame
bin\OpenPoseDemo.exe --tracking 1 --number_people_max 1
```
3. Visual smoothness:
```
:: Windows - Portable Demo (same flags for Ubuntu and Mac)
:: Running both OpenPose and tracking on each frame. Note: there is no speed up/slow down
bin\OpenPoseDemo.exe --tracking 0 --number_people_max 1
```
......@@ -295,27 +367,27 @@ Since the Windows 10 Anniversary, Kinect 2.0 can be read as a normal webcam. All
### Main Flags
These are the most common flags, but check [doc/demo_not_quick_start.md](demo_not_quick_start.md) for a full list and description of all of them.
- `--face`: Enables face keypoint detection.
- `--hand`: Enables hand keypoint detection.
- `--video input.mp4`: Read video `input.mp4`.
- `--camera 3`: Read webcam number 3.
- `--image_dir path_with_images/`: Run on the directory `path_with_images/` with images.
- `--ip_camera http://iris.not.iac.es/axis-cgi/mjpg/video.cgi?resolution=320x240?x.mjpeg`: Run on a streamed IP camera. See examples public IP cameras [here](http://www.webcamxp.com/publicipcams.aspx).
- `--write_video path.avi`: Save the processed images as a video.
- `--write_images folder_path`: Save the processed images into a folder.
- `--write_keypoint path/`: Output JSON, XML or YML files with the people pose data into a folder.
- `--process_real_time`: For video, it might skip frames to display in real time.
- `--disable_blending`: If enabled, it will render the results (keypoint skeletons or heatmaps) on a black background, not showing the original image. Related: `part_to_show`, `alpha_pose`, and `alpha_heatmap`.
- `--part_to_show`: Prediction channel to visualize.
- `--display 0`: Do not open a display window. Useful for servers and/or to slightly speed up OpenPose.
- `--num_gpu 2 --num_gpu_start 1`: Parallelize over this number of GPUs, starting at the desired device id. By default it uses all the available GPUs.
- `--model_pose MPI`: Model to use; it affects the number of keypoints, speed, and accuracy.
- `--logging_level 3`: Logging messages threshold, in the range [0,255]: 0 will output any message and 255 will output none. Current messages are in the range [1,4]: 1 for low-priority messages and 4 for important ones.
......
OpenPose - Output
====================================
## Contents
1. [UI and Visual Output](#ui-and-visual-output)
2. [JSON-UI Mapping](#json-ui-mapping)
1. [Pose Output Format (BODY_25)](#pose-output-format-body_25)
2. [Pose Output Format (COCO)](#pose-output-format-coco)
3. [Face Output Format](#face-output-format)
4. [Hand Output Format](#hand-output-format)
3. [JSON Output Format](#output-format)
4. [Keypoints in C++/Python](#body-keypoints-in-c-python)
1. [Keypoint Ordering in C++/Python](#body-keypoint-ordering-in-c-python)
2. [Keypoint Format in Datum (Advanced)](#keypoint-format-in-datum-advanced)
5. [Reading Saved Results](#reading-saved-results)
6. [Advanced](#advanced)
1. [Camera Matrix Output Format](#camera-matrix-output-format)
2. [Heatmaps](#heatmaps)
## UI and Visual Output
The visual GUI should show the original image with the poses blended on it, similarly to the pose of this gif:
<p align="center">
<img src="../.github/media/shake.gif", width="720">
</p>
## JSON-UI Mapping
The output of the JSON files consists of a set of keypoints, whose ordering is related to the UI output as follows:
### Pose Output Format (BODY_25)
<p align="center">
<img src="../.github/media/keypoints_pose_25.png", width="480">
</p>
### Pose Output Format (COCO)
<p align="center">
<img src="../.github/media/keypoints_pose_18.png", width="480">
</p>
### Face Output Format
<p align="center">
<img src="../.github/media/keypoints_face.png", width="480">
</p>
### Hand Output Format
<p align="center">
<img src="../.github/media/keypoints_hand.png", width="480">
</p>
## JSON Output Format
There are 2 alternatives to save the OpenPose output; both follow the keypoint ordering described in the [Keypoint Ordering in C++/Python](#body-keypoints-in-c-python) section (which you should read next).
1. The `--write_json` flag saves the people pose data onto JSON files. Each file represents a frame, it has a `people` array of objects, where each object has:
1. `pose_keypoints_2d`: Body part locations (`x`, `y`) and detection confidence (`c`) formatted as `x0,y0,c0,x1,y1,c1,...`. The coordinates `x` and `y` can be normalized to the range [0,1], [-1,1], [0, source size], [0, output size], etc. (see the flag `--keypoint_scale` for more information), while the confidence score (`c`) is in the range [0,1].
2. `face_keypoints_2d`, `hand_left_keypoints_2d`, and `hand_right_keypoints_2d`: analogous to `pose_keypoints_2d` but applied to the face and hand parts.
3. `body_keypoints_3d`, `face_keypoints_3d`, `hand_left_keypoints_3d`, and `hand_right_keypoints_3d`: analogous but applied to the 3-D parts. They are empty if `--3d` is not enabled. Their format is `x0,y0,z0,c0,x1,y1,z1,c1,...`, where `c` is 1 or 0 depending on whether the 3-D reconstruction was successful or not.
4. `part_candidates` (optional and advanced): The body part candidates before being assembled into people. Empty if `--part_candidates` is not enabled (see that flag for more details).
```
{
"version":1.1,
......@@ -86,14 +115,17 @@ There are 2 alternatives to save the OpenPose output.
}
```
2. (Deprecated) `--write_keypoint` uses the OpenCV `cv::FileStorage` default formats, i.e., JSON (if OpenCV 3 or higher), XML, and YML. It only prints 2D body information (no 3D or face/hands).
## Keypoints in C++/Python
### Keypoint Ordering in C++/Python
The body part mapping order of any body model (e.g., `BODY_25`, `COCO`, `MPI`) can be extracted from the C++ and Python APIs.
- In C++, `getPoseBodyPartMapping(const PoseModel poseModel)` is available in [poseParameters.hpp](../include/openpose/pose/poseParameters.hpp):
```
// C++ API call
#include <openpose/pose/poseParameters.hpp>
const auto& poseBodyPartMappingBody25 = getPoseBodyPartMapping(PoseModel::BODY_25);
const auto& poseBodyPartMappingBody135 = getPoseBodyPartMapping(PoseModel::BODY_135);
```
- You can also check them in Python:
```
poseModel = op.PoseModel.BODY_25
print(op.getPoseBodyPartMapping(poseModel))
print(op.getPoseMapIndex(poseModel))
```
### Keypoint Format in Datum (Advanced)
This section is only for advanced users who plan to use the C++ API. It is not needed for the OpenPose demo or the Python API.
### Face and Hands
The output format is analogous for hand (`hand_left_keypoints`, `hand_right_keypoints`) and face (`face_keypoints`) JSON files.
### Pose Output Format (BODY_25)
<p align="center">
    <img src="../.github/media/keypoints_pose_25.png" width="480">
</p>
### Pose Output Format (COCO)
<p align="center">
    <img src="../.github/media/keypoints_pose_18.png" width="480">
</p>
### Face Output Format
<p align="center">
    <img src="../.github/media/keypoints_face.png" width="480">
</p>
### Hand Output Format
<p align="center">
    <img src="../.github/media/keypoints_hand.png" width="480">
</p>
## Keypoint Format in the C++ API
There are 3 different keypoint `Array<float>` elements in the `Datum` class:
1. `Array<float>` **poseKeypoints**: To access the body part `part` of person `person` (where the part index matches `POSE_COCO_BODY_PARTS` or `POSE_MPI_BODY_PARTS`), you can simply read:
```
const auto x = poseKeypoints[{person, part, 0}];
const auto y = poseKeypoints[{person, part, 1}];
const auto score = poseKeypoints[{person, part, 2}];
```
2. `Array<float>` **faceKeypoints**: analogous to `poseKeypoints`, but for the face of each person.
3. `std::array<Array<float>, 2>` **handKeypoints**: left (element 0) and right (element 1) hand keypoints, each analogous to `poseKeypoints`. E.g., given the flat `baseIndex` of a person and hand part, the right-hand detection confidence is:
```
const auto scoreR = handKeypoints[1][baseIndex + 2];
```
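In the Python API the same data is exposed as NumPy arrays. As a sketch (the stand-in array below mimics the `(num_people, num_parts, 3)` shape of `datum.poseKeypoints`; the coordinate values are invented for illustration):

```python
import numpy as np

# Stand-in for datum.poseKeypoints from the Python API: a numpy array of shape
# (num_people, num_parts, 3), holding (x, y, confidence) for each body part.
pose_keypoints = np.zeros((2, 25, 3), dtype=np.float32)
pose_keypoints[0, 1] = (210.0, 95.0, 0.88)  # person 0, part 1 ("Neck" in BODY_25)

person, part = 0, 1
x, y, score = pose_keypoints[person, part]
```

The same indexing pattern applies to `faceKeypoints` and to each element of `handKeypoints`.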
## Reading Saved Results
We use standard formats (JSON, PNG, JPG, ...) to save our results, so there are many open-source libraries to read them from most programming languages (especially Python). For C++, you might want to check [include/openpose/filestream/fileStream.hpp](../include/openpose/filestream/fileStream.hpp); in particular, `loadData` (for JSON, XML and YML files) and `loadImage` (for image formats such as PNG or JPG) load the data into `cv::Mat` format.
## Advanced
### Camera Matrix Output Format
If you need to use the camera calibration or 3D modules, the camera matrix output format is detailed in [doc/advanced/calibration_module.md#camera-matrix-output-format](advanced/calibration_module.md#camera-matrix-output-format).
### Heatmaps
If you need to use heatmaps, check [doc/output_advanced_heatmaps.md](output_advanced_heatmaps.md).
OpenPose - Heatmap Output (Advanced)
====================================
## Contents
1. [Keypoints](#keypoints)
2. [UI and Visual Heatmap Output](#ui-and-visual-heatmap-output)
3. [Heatmap Ordering](#heatmap-ordering)
4. [Heatmap Saving in Float Format](#heatmap-saving-in-float-format)
5. [Heatmap Scaling](#heatmap-scaling)
## Keypoints
Check [doc/output_keypoints.md](output_keypoints.md) for the basic output information. This document is for users that want to use the heatmaps.
## UI and Visual Heatmap Output
If you choose to visualize a body part or a PAF (Part Affinity Field) heat map with the command option `--part_to_show`, the visual GUI should show something similar to one of the following images:
<p align="center">
    <img src="../.github/media/body_heat_maps.png" width="720">
</p>
<p align="center">
    <img src="../.github/media/paf_heat_maps.png" width="720">
</p>
## Heatmap Ordering
For the **heat map storing format**, instead of saving each of the 57 heatmaps (18 body parts + background + 2 x 19 PAFs) individually, the library concatenates them into a single (width x #heat maps) x (height) matrix (i.e., concatenated by columns). E.g., columns [0, individual heat map width] contain the first heat map, columns [individual heat map width + 1, 2 * individual heat map width] contain the second heat map, etc. Note that some image viewers are not able to display the resulting images due to their size; however, Chrome and Firefox open them properly.
The saving order is body parts + background + PAFs. Any of them can be disabled with program flags. If background is disabled, then the final image will be body parts + PAFs. The body parts and background follow the order of `getPoseBodyPartMapping(const PoseModel poseModel)`.
The PAFs follow the order specified in `getPosePartPairs(const PoseModel poseModel)` together with `getPoseMapIndex(const PoseModel poseModel)`. E.g., assuming COCO (see the example code below), the PAF channels start at 19 (the smallest number in `getPoseMapIndex`, equal to #body parts + 1) and end at 56 (the highest one). We can then match each channel couple to its body-part pair in `getPosePartPairs`. For instance, 19 (x-channel) and 20 (y-channel) in `getPoseMapIndex` correspond to the PAF from body part 1 to 8; 21 and 22 correspond to the x,y channels of the joint from body part 8 to 9, etc. Note that if the smallest channel is odd (19), then all the x-channels are odd and all the y-channels even; if the smallest channel is even, the opposite holds.
```
// C++ API call
#include <openpose/pose/poseParameters.hpp>
const auto& posePartPairsBody25 = getPosePartPairs(PoseModel::BODY_25);
const auto& posePartPairsCoco = getPosePartPairs(PoseModel::COCO_18);
const auto& posePartPairsMpi = getPosePartPairs(PoseModel::MPI_15);
// getPosePartPairs(PoseModel::BODY_25) result
// Each index is the key value corresponding to each body part in `getPoseBodyPartMapping`. E.g., 1 for "Neck", 2 for "RShoulder", etc.
// 1,8, 1,2, 1,5, 2,3, 3,4, 5,6, 6,7, 8,9, 9,10, 10,11, 8,12, 12,13, 13,14, 1,0, 0,15, 15,17, 0,16, 16,18, 2,17, 5,18, 14,19,19,20,14,21, 11,22,22,23,11,24
// getPoseMapIndex(PoseModel::BODY_25) result
// 0,1, 14,15, 22,23, 16,17, 18,19, 24,25, 26,27, 6,7, 2,3, 4,5, 8,9, 10,11, 12,13, 30,31, 32,33, 36,37, 34,35, 38,39, 20,21, 28,29, 40,41,42,43,44,45, 46,47,48,49,50,51
```
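To make the channel-to-limb matching concrete, the two `BODY_25` lists above can be zipped together. A small Python sketch, with the values copied from the commented results above:

```python
# Values copied from the getPosePartPairs / getPoseMapIndex BODY_25 results above.
part_pairs = [1,8, 1,2, 1,5, 2,3, 3,4, 5,6, 6,7, 8,9, 9,10, 10,11, 8,12, 12,13,
              13,14, 1,0, 0,15, 15,17, 0,16, 16,18, 2,17, 5,18, 14,19, 19,20,
              14,21, 11,22, 22,23, 11,24]
map_index = [0,1, 14,15, 22,23, 16,17, 18,19, 24,25, 26,27, 6,7, 2,3, 4,5, 8,9,
             10,11, 12,13, 30,31, 32,33, 36,37, 34,35, 38,39, 20,21, 28,29,
             40,41, 42,43, 44,45, 46,47, 48,49, 50,51]

# Each consecutive (x-channel, y-channel) couple in map_index belongs to the
# body-part pair at the same position in part_pairs.
limbs = list(zip(part_pairs[0::2], part_pairs[1::2]))
channels = list(zip(map_index[0::2], map_index[1::2]))
paf_of_limb = dict(zip(limbs, channels))
# E.g., the PAF linking body parts 1 and 8 lives in channels 0 (x) and 1 (y)
# of the PAF block.
```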
## Heatmap Saving in Float Format
If you save the heatmaps in floating format by using the flag `--write_heatmaps_format float`, you can later read them in Python with:
```
# Load custom float format - Example in Python, assuming a (18 x 300 x 500) size Array
import numpy as np
x = np.fromfile(heatMapFullPath, dtype=np.float32)
assert x[0] == 3           # First value stores the number of dimensions (18x300x500 => 3 dimensions)
shape_x = x[1:1+int(x[0])]
assert len(shape_x) == 3   # Number of dimensions
assert shape_x[0] == 18    # Size of the first dimension
assert shape_x[1] == 300   # Size of the second dimension
assert shape_x[2] == 500   # Size of the third dimension
arrayData = x[1+int(round(x[0])):].reshape(shape_x.astype(int))
```
## Heatmap Scaling
Note that `--net_resolution` sets the size of the network, and thus also the size of the output heatmaps. These heatmaps are resized while keeping the aspect ratio. When the aspect ratios of the input and the network differ, padding is added at the bottom and/or right side of the output heatmaps.
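As a sketch of that resize-and-pad behavior (an illustrative assumption, not the exact OpenPose code), the region of an output heatmap actually covered by the image can be estimated as:

```python
def heatmap_content_area(net_w, net_h, img_w, img_h):
    """Estimate the region of a (net_w x net_h) heatmap covered by an
    (img_w x img_h) image resized keeping its aspect ratio; the remainder
    is right/bottom padding."""
    scale = min(net_w / img_w, net_h / img_h)
    used_w, used_h = round(img_w * scale), round(img_h * scale)
    return used_w, used_h, net_w - used_w, net_h - used_h

# E.g., a 1920x1080 input into a 656x368 network leaves a thin right padding:
print(heatmap_content_area(656, 368, 1920, 1080))  # -> (654, 368, 2, 0)
```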
OpenPose - Python API
====================================
## Contents
1. [Introduction](#introduction)
OpenPose - Release Notes
====================================
OpenPose - All Major Released Features
====================================
- Nov 2020: [**Python API improved and included on Windows portable binaries**](https://github.com/CMU-Perceptual-Computing-Lab/openpose/releases)!
- Nov 2020: [Compatibility with Nvidia 30XX cards, CUDA 11, and Ubuntu 20](installation/README.md)!
- Sep 2019: [**Training code released**](https://github.com/CMU-Perceptual-Computing-Lab/openpose_train)!
- Jan 2019: [**Unity plugin released**](https://github.com/CMU-Perceptual-Computing-Lab/openpose_unity_plugin)!
- Jan 2019: [**Improved Python API**](doc/python_api.md) released! Including body, face, hands, and all the functionality of the C++ API!
- Dec 2018: [**Foot dataset released**](https://cmu-perceptual-computing-lab.github.io/foot_keypoint_dataset) and [**new paper released**](https://arxiv.org/abs/1812.08008)!
- Sep 2018: [**Experimental tracker**](demo_quick_start.md#tracking)!
- Jun 2018: [**Combined body-foot model released! 40% faster and 5% more accurate**](installation/README.md)!
- Jun 2018: [**Python API**](python_api.md) released!
- Jun 2018: [**OpenCL/AMD graphic card version**](installation/README.md) released!
- Jun 2018: [**Calibration toolbox**](advanced/calibration_module.md) released!
- Jun 2018: [**Mac OSX version (CPU)**](installation/README.md) released!