Commit fd566ce9 authored by gineshidalgo99

C++ API & examples improved

Parent fee4fb49
@@ -26,10 +26,28 @@ if (NOT WIN32 AND NOT UNIX AND NOT APPLE)
endif (NOT WIN32 AND NOT UNIX AND NOT APPLE)
### CMAKE_BUILD_TYPE
# Default: Release
# Bug fixed: by default, CMake used a build type other than Release, which made OpenPose about 15% slower than
# it should be.
# Is CMAKE_BUILD_TYPE "Debug" or "MinSizeRel" or "RelWithDebInfo"?
set(CMAKE_BUILD_TYPE_KNOWN FALSE)
if (${CMAKE_BUILD_TYPE} MATCHES "Debug")
set(CMAKE_BUILD_TYPE_KNOWN TRUE)
endif (${CMAKE_BUILD_TYPE} MATCHES "Debug")
if (${CMAKE_BUILD_TYPE} MATCHES "MinSizeRel")
set(CMAKE_BUILD_TYPE_KNOWN TRUE)
endif (${CMAKE_BUILD_TYPE} MATCHES "MinSizeRel")
if (${CMAKE_BUILD_TYPE} MATCHES "RelWithDebInfo")
set(CMAKE_BUILD_TYPE_KNOWN TRUE)
endif (${CMAKE_BUILD_TYPE} MATCHES "RelWithDebInfo")
# Assign proper CMAKE_BUILD_TYPE
if (${CMAKE_BUILD_TYPE_KNOWN})
set(CMAKE_BUILD_TYPE "Release" CACHE STRING "Choose the type of build.")
else (${CMAKE_BUILD_TYPE_KNOWN})
set(CMAKE_BUILD_TYPE "Release" CACHE STRING "Choose the type of build." FORCE)
endif (${CMAKE_BUILD_TYPE_KNOWN})
set_property(CACHE CMAKE_BUILD_TYPE PROPERTY STRINGS "Release" "Debug" "MinSizeRel" "RelWithDebInfo")
### FLAGS
......
@@ -8,7 +8,7 @@
|-------------|
|[![Build Status](https://travis-ci.org/CMU-Perceptual-Computing-Lab/openpose.svg?branch=master)](https://travis-ci.org/CMU-Perceptual-Computing-Lab/openpose)|
[OpenPose](https://github.com/CMU-Perceptual-Computing-Lab/openpose) represents the **first real-time multi-person system to jointly detect human body, hand, facial, and foot keypoints (in total 135 keypoints) on single images**.
<p align="center">
    <img src="doc/media/pose_face_hands.gif", width="480">
@@ -113,9 +113,9 @@ bin\OpenPoseDemo.exe --video examples\media\video.avi
- **Calibration toolbox**: To easily calibrate your cameras for 3-D OpenPose or any other stereo vision task. See [doc/modules/calibration_module.md](doc/modules/calibration_module.md).
- **OpenPose C++ API**: If you want to read a specific input, and/or add your custom post-processing function, and/or implement your own display/saving, check the C++ API tutorial on [examples/tutorial_api_cpp/](examples/tutorial_api_cpp/) and [doc/library_introduction.md](doc/library_introduction.md). You can create your custom code on [examples/user_code/](examples/user_code/) and quickly compile it with CMake when compiling the whole OpenPose project. Quickly **add your custom code**: See [examples/user_code/README.md](examples/user_code/README.md) for further details. A minimal usage sketch is included right after this list.
- **OpenPose Python API**: Analogously to the C++ API, find the tutorial for the Python API on [examples/tutorial_api_python/](examples/tutorial_api_python/).
- **Adding an extra module**: Check [doc/library_add_new_module.md](./doc/library_add_new_module.md).
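For orientation, here is a minimal sketch of the asynchronous C++ API, condensed from `examples/tutorial_api_cpp/1_body_from_image.cpp` in this commit (illustrative only; see the full example for error handling and display):
```cpp
// Minimal sketch: process one image with the asynchronous C++ API and print the body keypoints.
#include <openpose/flags.hpp>    // Common OpenPose command-line flags
#include <openpose/headers.hpp>  // OpenPose C++ API (also pulls in the required OpenCV headers)

int main(int argc, char *argv[])
{
    gflags::ParseCommandLineFlags(&argc, &argv, true);
    // Configure OpenPose with the default (recommended) settings and start its threads
    op::Wrapper<std::vector<op::Datum>> opWrapper{op::ThreadManagerMode::Asynchronous};
    opWrapper.start();
    // Push one image and pop the processed result
    const auto imageToProcess = cv::imread("examples/media/COCO_val2014_000000000192.jpg");
    const auto datumProcessed = opWrapper.emplaceAndPop(imageToProcess);
    if (datumProcessed != nullptr && !datumProcessed->empty())
        op::log("Body keypoints: " + datumProcessed->at(0).poseKeypoints.toString());
    return 0;
}
```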
......
@@ -112,14 +112,20 @@ Any problem installing OpenPose? Check [doc/faq.md](./faq.md) and/or post a GitH
- Windows: download and install the latest CMake win64-x64 msi installer from the [CMake website](https://cmake.org/download/), called `cmake-X.X.X-win64-x64.msi`.
- Mac: `brew cask install cmake`.
3. Windows - **Microsoft Visual Studio (VS) 2015 Enterprise Update 3**:
- **IMPORTANT**: Enable all C++-related flags when selecting the components to install.
- Different VS versions:
- If **Visual Studio 2017 Community** is desired, we do not officially support it, but it might be compiled by first [enabling CUDA 8.0 in VS2017](https://stackoverflow.com/questions/43745099/using-cuda-with-visual-studio-2017?answertab=active#tab-top), or by using **VS2017 with CUDA 9** after checking the `.vcxproj` file and changing the necessary paths from CUDA 8 to 9.
- VS 2015 Enterprise Update 1 will give some compiler errors.
- VS 2015 Community has not been tested.
4. Nvidia GPU version prerequisites:
1. [**CUDA 8**](https://developer.nvidia.com/cuda-80-ga2-download-archive):
- Ubuntu: Run `sudo ubuntu/install_cuda.sh` or alternatively download and install it from their website.
- Windows: Install CUDA 8.0 after Visual Studio 2015 is installed to assure that the CUDA installation will generate all necessary files for VS. If CUDA was already installed, re-install it.
- **Important installation tips**:
- New Nvidia model GPUs (e.g., Nvidia V, GTX 2080, any Nvidia with Volta or Turing architecture, etc.) require at least CUDA 9.
- (Windows issue, reported Sep 2018): If your computer hangs when installing CUDA drivers, try installing the [Nvidia drivers](http://www.nvidia.com/Download/index.aspx) first, and then installing CUDA without the Graphics Driver flag.
- (Windows): If CMake returns an error message similar to `CUDA_TOOLKIT_ROOT_DIR not found or specified`, or reports any other CUDA component as missing, then: 1) re-install Visual Studio 2015; 2) reboot your PC; 3) re-install CUDA.
2. [**cuDNN 5.1**](https://developer.nvidia.com/rdp/cudnn-archive):
- Ubuntu: Run `sudo ubuntu/install_cudnn.sh` or alternatively download and install it from their website.
- Windows (and Ubuntu if manual installation): In order to manually install it, just unzip it and copy (merge) the contents into the CUDA folder, usually `/usr/local/cuda/` in Ubuntu and `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0` in Windows.
5. AMD GPU version prerequisites:
......
@@ -8,7 +8,7 @@ In order to add a new module, these are the recommended steps in order to develo
2. Implement all the functionality in one `Worker` (i.e., inherit from `Worker` and implement all the functionality in that class). A minimal sketch is shown right after this list.
1. The first letter of the class name should be `W` (e.g. `WHairExtractor`).
2. To initially simplify development:
1. Initialize the Worker class with the specific std::shared_ptr<std::vector<op::Datum>> instead of directly using a template class (following the `examples/tutorial_api_cpp` synchronous examples).
2. Use the whole op::Datum as the unique argument of your auxiliary functions.
3. Use the OpenPose Wrapper in ThreadManagerMode::SingleThread mode (e.g. it allows you to directly use cv::imshow).
4. If you are using your own custom Caffe, initially replace the official Caffe with your version. It should directly work.
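A minimal sketch of such a `Worker` (using the hypothetical `WHairExtractor` name from the example above; the auxiliary helper it would call is equally hypothetical), following the structure of the `W` classes shipped with the examples:
```cpp
// Minimal sketch of a custom Worker for a new (hypothetical) module.
#include <openpose/headers.hpp>

class WHairExtractor : public op::Worker<std::shared_ptr<std::vector<op::Datum>>>
{
public:
    void initializationOnThread() {}

    void work(std::shared_ptr<std::vector<op::Datum>>& datumsPtr)
    {
        try
        {
            if (datumsPtr != nullptr && !datumsPtr->empty())
                for (auto& datum : *datumsPtr)
                {
                    // Pass the whole op::Datum to your (hypothetical) auxiliary functions, e.g.:
                    // extractHairFrom(datum);
                    (void)datum; // placeholder so the sketch compiles
                }
        }
        catch (const std::exception& e)
        {
            this->stop();
            op::error(e.what(), __LINE__, __FUNCTION__, __FILE__);
        }
    }
};
```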
@@ -20,6 +20,8 @@ In order to add a new module, these are the recommended steps in order to develo
5. If the Workers need extra data from `Datum`, simply add into `Datum` the new variables required (without removing/modifying any previous variables!).
6. Read also the release steps before starting this developing phase.
## Release Steps
In order to release the new module:
......
@@ -7,7 +7,7 @@ If you intend to extend the functionality of our library:
2. Check the basic library overview doc on [library_overview.md](library_overview.md).
3. Read, understand and play with the basic real-time pose demo source code [examples/openpose/openpose.cpp](../examples/openpose/openpose.cpp) and [examples/tutorial_api_cpp](../examples/tutorial_api_cpp). It includes all the functionality of our library, and it has been properly commented.
4. Read, understand and play with the other tutorials in [examples/](../examples/). It includes more specific examples.
@@ -15,6 +15,6 @@ If you intend to extend the functionality of our library:
6. Take a look at the structure of the already existing modules.
7. The C++ header files include documentation in [Doxygen](http://www.doxygen.org/) format. Create this documentation by compiling the [include](../include/) folder with Doxygen. This documentation is slowly but continuously improved.
8. You can also take a look at the source code or ask us on GitHub.
@@ -109,7 +109,7 @@ It should be similar to the following image.
You can copy and modify the OpenPose 3-D demo to use any camera brand by:
1. You can optionally turn off the `WITH_FLIR_CAMERA` flag while configuring CMake.
2. Copy `examples/tutorial_api_cpp/7_synchronous_custom_input.cpp` (or `9_synchronous_custom_all.cpp`).
3. Modify `WUserInput` and add your custom code there. Your code should fill `Datum::name`, `Datum::cameraMatrix`, `Datum::cvInputData`, and `Datum::cvOutputData` (fill cvOutputData = cvInputData). A rough sketch is included right after this list.
4. Remove `WUserPostProcessing` and `WUserOutput` (unless you want to have your custom post-processing and/or output).
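Rough sketch of step 3, assuming the `WUserInput` skeleton from `7_synchronous_custom_input.cpp` (check that file for the exact base class and constructor; the image-reading call below is a hypothetical placeholder for your camera SDK):
```cpp
// Rough sketch: a custom input worker that fills the Datum fields listed in step 3.
#include <openpose/headers.hpp>

class WUserInput : public op::WorkerProducer<std::shared_ptr<std::vector<op::Datum>>>
{
public:
    void initializationOnThread() {}

    std::shared_ptr<std::vector<op::Datum>> workProducer()
    {
        auto datumsPtr = std::make_shared<std::vector<op::Datum>>(1);
        auto& datum = datumsPtr->at(0);
        datum.name = "frame_0";                                  // Datum::name
        datum.cameraMatrix = cv::Mat::eye(3, 4, CV_64F);         // Datum::cameraMatrix (use your real calibration)
        datum.cvInputData = cv::imread("examples/media/COCO_val2014_000000000192.jpg"); // your camera frame
        datum.cvOutputData = datum.cvInputData;                  // fill cvOutputData = cvInputData
        return datumsPtr;
    }
};
```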
......
@@ -35,21 +35,21 @@ pip install opencv-python
## Testing
Two examples can be found in `build/examples/tutorial_api_python` in your build folder. Navigate directly to this path to run the examples.
- `1_extract_pose` demonstrates a simple use of the API.
- `2_pose_from_heatmaps` demonstrates constructing the pose from the heatmaps of the Caffe network (requires Python Caffe to be installed separately; only tested on Ubuntu).
```
# From command line
cd build/examples/tutorial_api_python
python 1_extract_pose.py
```
## Exporting Python OpenPose
Note: This step is only required if you are moving the `*.py` files outside their original location, or writing new `*.py` scripts outside `build/examples/tutorial_api_python`.
- Option a, installing OpenPose: On an Ubuntu or OSX based system, you could install OpenPose by running `sudo make install`; you could then set the OpenPose path in your python scripts to the OpenPose installation path (default: `/usr/local/python`) and start using OpenPose at any location. Take a look at `build/examples/tutorial_pose/1_extract_pose.py` for an example.
- Option b, not installing OpenPose: To move the OpenPose Python API demos to a different folder, ensure that the line `sys.path.append('{OpenPose_path}/python')` is properly set in your `*.py` files, where `{OpenPose_path}` points to your build folder of OpenPose. Take a look at `build/examples/tutorial_pose/1_extract_pose.py` for an example.
@@ -57,7 +57,7 @@ OpenPose Library - Release Notes
9. WCocoJsonSaver finished and removed its 3599-image limit.
10. Added `--camera_fps` so generated video will use that frame rate.
11. Reduced the number of printed information messages. Default logging priority threshold increased to Priority::Max.
12. GFlags to OpenPose configuration parameters reader moved from each demo to utilities/flagsToOpenPose.
13. Nms classes do not use `numberParts` for `Reshape`; they deduce the value.
14. Improved documentation.
2. Functions or parameters renamed:
@@ -259,15 +259,23 @@ OpenPose Library - Release Notes
## Current version - future OpenPose 1.5.0
1. Main improvements:
1. Added initial single-person tracker for further speed up or visual smoothing (`--tracking` flag).
2. Greedy body part connector implemented in CUDA: +~30% speed-up in the Nvidia (CUDA) version with default flags and +~10% in the maximum accuracy configuration. In addition, it provides a small 0.5% boost in accuracy (default flags).
3. OpenPose can be built as a Unity plugin: Added flag `BUILD_UNITY_SUPPORT` and special Unity code.
4. If the camera is unplugged, the OpenPose GUI and command line will display a warning and try to reconnect it.
5. Wrapper classes simplified and renamed.
6. API and examples improved:
1. New header file `flags.hpp` that includes all OpenPose flags, removing the need to copy them repeatedly in each OpenPose example file (a minimal usage sketch is included after these notes).
2. `tutorial_wrapper` renamed as `tutorial_api_cpp`, and new examples were added.
3. `tutorial_python` renamed as `tutorial_api_python`, and new examples were added.
4. `tutorial_pose` and `tutorial_thread` renamed as `tutorial_developer`, not meant to be used by users, but rather for OpenPose developers.
2. Functions or parameters renamed:
1. By default, the python example `tutorial_developer/python_2_pose_from_heatmaps.py` was using 2 scales starting at -1x736; changed to 1 scale at -1x368.
2. WrapperStructPose default parameters changed to match those of the OpenPose demo binary.
3. Main bugs fixed:
1. CMake-GUI was forcing Release mode; Debug modes are now allowed too.
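For reference, a minimal sketch of the new `flags.hpp` pattern, condensed from the `tutorial_api_cpp` examples in this commit (illustrative only):
```cpp
// Shared OpenPose flags now come from one header instead of being copied into every example file.
#define OPENPOSE_FLAGS_DISABLE_POSE   // Optional: skip the pose-related flags if the example defines its own
#include <openpose/flags.hpp>         // Common DEFINE_* flags (e.g., disable_multi_thread)
#include <openpose/headers.hpp>
// Example-specific flags are still declared locally:
DEFINE_string(image_path, "examples/media/COCO_val2014_000000000192.jpg",
              "Process an image. Read all standard formats (jpg, png, bmp, etc.).");

int main(int argc, char *argv[])
{
    gflags::ParseCommandLineFlags(&argc, &argv, true);
    // ... configure and run OpenPose as in the examples ...
    return 0;
}
```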
......
add_subdirectory(calibration)
add_subdirectory(openpose)
add_subdirectory(tutorial_add_module)
add_subdirectory(tutorial_api_python)
add_subdirectory(tutorial_api_cpp)
add_subdirectory(tutorial_developer)
add_subdirectory(user_code)
if (UNIX OR APPLE)
    add_subdirectory(tests)
......
set(EXAMPLE_FILES
    calibration.cpp)
foreach(EXAMPLE_FILE ${EXAMPLE_FILES})
......
@@ -3,26 +3,12 @@
// Implemented on top of OpenCV.
// It computes and saves the intrinsic parameters of the input images.
// Command-line user interface
#define OPENPOSE_FLAGS_DISABLE_POSE
#include <openpose/flags.hpp>
// OpenPose dependencies
#include <openpose/headers.hpp>
// Calibration
DEFINE_int32(mode, 1, "Select 1 for intrinsic camera parameter calibration, 2 for extrinsic calibration.");
DEFINE_string(calibration_image_dir, "images/intrinsics/", "Directory where the images for camera parameter calibration are placed.");
......
set(EXAMPLE_FILES
    openpose.cpp)
foreach(EXAMPLE_FILE ${EXAMPLE_FILES})
......
This diff is collapsed.
# Low Level API Examples
**Disclaimer**: This folder is meant for internal OpenPose developers. The examples might change substantially, and we will not answer questions about them nor provide official support for them.
**If the OpenPose library does not compile due to an error caused by a file from this folder, notify us**.
// ------------------------- OpenPose Resize Layer Testing -------------------------
// Command-line user interface
#define OPENPOSE_FLAGS_DISABLE_POSE
#include <openpose/flags.hpp>
// OpenPose dependencies
#include <openpose/headers.hpp>
// OpenCL dependencies
#ifdef USE_OPENCL
#include <openpose/gpu/opencl.hcl>
#include <openpose/gpu/cl2.hpp>
#ifdef USE_CAFFE
#include <caffe/net.hpp>
#endif
DEFINE_string(image_path, "examples/media/COCO_val2014_000000000192.jpg", "Process the desired image.");
......
// ------------------------- OpenPose Library Tutorial - Hand Keypoint Detection from JSON Ground-Truth Data -------------------------
// Example to test hands accuracy given ground-truth bounding boxes.
// Command-line user interface
#define OPENPOSE_FLAGS_DISABLE_POSE
#include <openpose/flags.hpp>
// OpenPose dependencies
#include <openpose/headers.hpp>
#include "wrapperHandFromJsonTest.hpp"
// For info about the flags, check `examples/openpose/openpose.bin`.
// Producer
DEFINE_string(image_dir, "", "");
DEFINE_string(hand_ground_truth, "", "");
@@ -44,7 +39,7 @@ int handFromJsonTest()
__LINE__, __FUNCTION__, __FILE__);
op::ConfigureLog::setPriorityThreshold((op::Priority)FLAGS_logging_level);
// Applying user defined configuration - GFlags to program variables
// handNetInputSize
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// producerType
......
// ------------------------- OpenPose Resize Layer Testing -------------------------
// Command-line user interface
#define OPENPOSE_FLAGS_DISABLE_POSE
#include <openpose/flags.hpp>
// OpenPose dependencies
#include <openpose/headers.hpp>
#ifdef USE_CUDA
#ifdef USE_CAFFE
#include <caffe/net.hpp>
#endif
......
// ------------------------- OpenPose API Tutorial - Example 1 - Body from image -------------------------
// It reads an image, processes it, and displays it with the pose keypoints.
// Command-line user interface
#define OPENPOSE_FLAGS_DISABLE_POSE
#include <openpose/flags.hpp>
// OpenPose dependencies
#include <openpose/headers.hpp>
// Custom OpenPose flags
// Producer
DEFINE_string(image_path, "examples/media/COCO_val2014_000000000192.jpg",
"Process an image. Read all standard formats (jpg, png, bmp, etc.).");
// This function displays the rendered output (image with the pose keypoints)
void display(const std::shared_ptr<std::vector<op::Datum>>& datumsPtr)
{
// User's displaying/saving/other processing here
// datum.cvOutputData: rendered frame with pose or heatmaps
// datum.poseKeypoints: Array<float> with the estimated pose
if (datumsPtr != nullptr && !datumsPtr->empty())
{
// Display image
cv::imshow("User worker GUI", datumsPtr->at(0).cvOutputData);
cv::waitKey(0);
}
else
op::log("Nullptr or empty datumsPtr found.", op::Priority::High);
}
void printKeypoints(const std::shared_ptr<std::vector<op::Datum>>& datumsPtr)
{
// Example: How to use the pose keypoints
if (datumsPtr != nullptr && !datumsPtr->empty())
{
// Alternative 1
op::log("Body keypoints: " + datumsPtr->at(0).poseKeypoints.toString());
// // Alternative 2
// op::log(datumsPtr->at(0).poseKeypoints);
// // Alternative 3
// std::cout << datumsPtr->at(0).poseKeypoints << std::endl;
// // Alternative 4 - Accessing each element of the keypoints
// op::log("\nKeypoints:");
// const auto& poseKeypoints = datumsPtr->at(0).poseKeypoints;
// op::log("Person pose keypoints:");
// for (auto person = 0 ; person < poseKeypoints.getSize(0) ; person++)
// {
// op::log("Person " + std::to_string(person) + " (x, y, score):");
// for (auto bodyPart = 0 ; bodyPart < poseKeypoints.getSize(1) ; bodyPart++)
// {
// std::string valueToPrint;
// for (auto xyscore = 0 ; xyscore < poseKeypoints.getSize(2) ; xyscore++)
// valueToPrint += std::to_string( poseKeypoints[{person, bodyPart, xyscore}] ) + " ";
// op::log(valueToPrint);
// }
// }
// op::log(" ");
}
else
op::log("Nullptr or empty datumsPtr found.", op::Priority::High);
}
int bodyFromImage()
{
op::log("Starting OpenPose demo...", op::Priority::High);
// Configuring OpenPose
op::log("Configuring OpenPose...", op::Priority::High);
op::Wrapper<std::vector<op::Datum>> opWrapper{op::ThreadManagerMode::Asynchronous};
// Set to single-thread (for sequential processing and/or debugging and/or reducing latency)
if (FLAGS_disable_multi_thread)
opWrapper.disableMultiThreading();
// Starting OpenPose
op::log("Starting thread(s)...", op::Priority::High);
opWrapper.start();
// Process and display image
const auto imageToProcess = cv::imread(FLAGS_image_path);
auto datumProcessed = opWrapper.emplaceAndPop(imageToProcess);
if (datumProcessed != nullptr)
{
printKeypoints(datumProcessed);
display(datumProcessed);
}
else
op::log("Image could not be processed.", op::Priority::High);
// Return successful message
op::log("Stopping OpenPose...", op::Priority::High);
return 0;
}
int main(int argc, char *argv[])
{
// Parsing command line flags
gflags::ParseCommandLineFlags(&argc, &argv, true);
// Running bodyFromImage
return bodyFromImage();
}
// ------------------------- OpenPose API Tutorial - Example 2 - Whole body from image -------------------------
// It reads an image, processes it, and displays it with the pose, hand, and face keypoints.
// Command-line user interface
#define OPENPOSE_FLAGS_DISABLE_POSE
#include <openpose/flags.hpp>
// OpenPose dependencies
#include <openpose/headers.hpp>
// Custom OpenPose flags
// Producer
DEFINE_string(image_path, "examples/media/COCO_val2014_000000000241.jpg",
"Process an image. Read all standard formats (jpg, png, bmp, etc.).");
// This function displays the rendered output (image with the pose, face, and hand keypoints)
void display(const std::shared_ptr<std::vector<op::Datum>>& datumsPtr)
{
// User's displaying/saving/other processing here
// datum.cvOutputData: rendered frame with pose or heatmaps
// datum.poseKeypoints: Array<float> with the estimated pose
if (datumsPtr != nullptr && !datumsPtr->empty())
{
// Display image
cv::imshow("User worker GUI", datumsPtr->at(0).cvOutputData);
cv::waitKey(0);
}
else
op::log("Nullptr or empty datumsPtr found.", op::Priority::High);
}
void printKeypoints(const std::shared_ptr<std::vector<op::Datum>>& datumsPtr)
{
// Example: How to use the pose keypoints
if (datumsPtr != nullptr && !datumsPtr->empty())
{
op::log("Body keypoints: " + datumsPtr->at(0).poseKeypoints.toString());
op::log("Face keypoints: " + datumsPtr->at(0).faceKeypoints.toString());
op::log("Left hand keypoints: " + datumsPtr->at(0).handKeypoints[0].toString());
op::log("Right hand keypoints: " + datumsPtr->at(0).handKeypoints[1].toString());
}
else
op::log("Nullptr or empty datumsPtr found.", op::Priority::High);
}
int wholeBodyFromImage()
{
op::log("Starting OpenPose demo...", op::Priority::High);
// Configuring OpenPose
op::log("Configuring OpenPose...", op::Priority::High);
op::Wrapper<std::vector<op::Datum>> opWrapper{op::ThreadManagerMode::Asynchronous};
// Add hand and face
opWrapper.configure(op::WrapperStructFace{true});
opWrapper.configure(op::WrapperStructHand{true});
// Set to single-thread (for sequential processing and/or debugging and/or reducing latency)
if (FLAGS_disable_multi_thread)
opWrapper.disableMultiThreading();
// Starting OpenPose
op::log("Starting thread(s)...", op::Priority::High);
opWrapper.start();
// Process and display image
const auto imageToProcess = cv::imread(FLAGS_image_path);
auto datumProcessed = opWrapper.emplaceAndPop(imageToProcess);
if (datumProcessed != nullptr)
{
printKeypoints(datumProcessed);
display(datumProcessed);
}
else
op::log("Image could not be processed.", op::Priority::High);
// Return successful message
op::log("Stopping OpenPose...", op::Priority::High);
return 0;
}
int main(int argc, char *argv[])
{
// Parsing command line flags
gflags::ParseCommandLineFlags(&argc, &argv, true);
// Running wholeBodyFromImage
return wholeBodyFromImage();
}
// ------------------------- OpenPose API Tutorial - Example 3 - Body from image configurable -------------------------
// It reads an image, processes it, and displays it with the pose (and optionally hand and face) keypoints. In addition,
// it includes all the OpenPose configuration flags (enable/disable hand, face, output saving, etc.).
// Command-line user interface
#define OPENPOSE_FLAGS_DISABLE_PRODUCER
#define OPENPOSE_FLAGS_DISABLE_DISPLAY
#include <openpose/flags.hpp>
// OpenPose dependencies
#include <openpose/headers.hpp>
// Custom OpenPose flags
// Producer
DEFINE_string(image_path, "examples/media/COCO_val2014_000000000294.jpg",
"Process an image. Read all standard formats (jpg, png, bmp, etc.).");
// This function displays the rendered output
void display(const std::shared_ptr<std::vector<op::Datum>>& datumsPtr)
{
// User's displaying/saving/other processing here
// datum.cvOutputData: rendered frame with pose or heatmaps
// datum.poseKeypoints: Array<float> with the estimated pose
if (datumsPtr != nullptr && !datumsPtr->empty())
{
// Display image
cv::imshow("User worker GUI", datumsPtr->at(0).cvOutputData);
cv::waitKey(0);
}
else
op::log("Nullptr or empty datumsPtr found.", op::Priority::High);
}
void printKeypoints(const std::shared_ptr<std::vector<op::Datum>>& datumsPtr)
{
// Example: How to use the pose keypoints
if (datumsPtr != nullptr && !datumsPtr->empty())
{
op::log("Body keypoints: " + datumsPtr->at(0).poseKeypoints.toString());
op::log("Face keypoints: " + datumsPtr->at(0).faceKeypoints.toString());
op::log("Left hand keypoints: " + datumsPtr->at(0).handKeypoints[0].toString());
op::log("Right hand keypoints: " + datumsPtr->at(0).handKeypoints[1].toString());
}
else
op::log("Nullptr or empty datumsPtr found.", op::Priority::High);
}
int wholeBodyFromImage()
{
op::log("Starting OpenPose demo...", op::Priority::High);
// logging_level
op::check(0 <= FLAGS_logging_level && FLAGS_logging_level <= 255, "Wrong logging_level value.",
__LINE__, __FUNCTION__, __FILE__);
op::ConfigureLog::setPriorityThreshold((op::Priority)FLAGS_logging_level);
op::Profiler::setDefaultX(FLAGS_profile_speed);
// Applying user defined configuration - GFlags to program variables
// outputSize
const auto outputSize = op::flagsToPoint(FLAGS_output_resolution, "-1x-1");
// netInputSize
const auto netInputSize = op::flagsToPoint(FLAGS_net_resolution, "-1x368");
// faceNetInputSize
const auto faceNetInputSize = op::flagsToPoint(FLAGS_face_net_resolution, "368x368 (multiples of 16)");
// handNetInputSize
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
if (!FLAGS_write_keypoint.empty())
op::log("Flag `write_keypoint` is deprecated and will eventually be removed."
" Please, use `write_json` instead.", op::Priority::Max);
// keypointScale
const auto keypointScale = op::flagsToScaleMode(FLAGS_keypoint_scale);
// heatmaps to add
const auto heatMapTypes = op::flagsToHeatMaps(FLAGS_heatmaps_add_parts, FLAGS_heatmaps_add_bkg,
FLAGS_heatmaps_add_PAFs);
const auto heatMapScale = op::flagsToHeatMapScaleMode(FLAGS_heatmaps_scale);
// >1 camera view?
const auto multipleView = (FLAGS_3d || FLAGS_3d_views > 1);
// Enabling Google Logging
const bool enableGoogleLogging = true;
// Logging
op::log("", op::Priority::Low, __LINE__, __FUNCTION__, __FILE__);
// Configuring OpenPose
op::log("Configuring OpenPose...", op::Priority::High);
op::Wrapper<std::vector<op::Datum>> opWrapper{op::ThreadManagerMode::Asynchronous};
// Pose configuration (use WrapperStructPose{} for default and recommended configuration)
const op::WrapperStructPose wrapperStructPose{
!FLAGS_body_disable, netInputSize, outputSize, keypointScale, FLAGS_num_gpu, FLAGS_num_gpu_start,
FLAGS_scale_number, (float)FLAGS_scale_gap, op::flagsToRenderMode(FLAGS_render_pose, multipleView),
poseModel, !FLAGS_disable_blending, (float)FLAGS_alpha_pose, (float)FLAGS_alpha_heatmap,
FLAGS_part_to_show, FLAGS_model_folder, heatMapTypes, heatMapScale, FLAGS_part_candidates,
(float)FLAGS_render_threshold, FLAGS_number_people_max, enableGoogleLogging};
// Face configuration (use op::WrapperStructFace{} to disable it)
const op::WrapperStructFace wrapperStructFace{
FLAGS_face, faceNetInputSize, op::flagsToRenderMode(FLAGS_face_render, multipleView, FLAGS_render_pose),
(float)FLAGS_face_alpha_pose, (float)FLAGS_face_alpha_heatmap, (float)FLAGS_face_render_threshold};
// Hand configuration (use op::WrapperStructHand{} to disable it)
const op::WrapperStructHand wrapperStructHand{
FLAGS_hand, handNetInputSize, FLAGS_hand_scale_number, (float)FLAGS_hand_scale_range, FLAGS_hand_tracking,
op::flagsToRenderMode(FLAGS_hand_render, multipleView, FLAGS_render_pose), (float)FLAGS_hand_alpha_pose,
(float)FLAGS_hand_alpha_heatmap, (float)FLAGS_hand_render_threshold};
// Extra functionality configuration (use op::WrapperStructExtra{} to disable it)
const op::WrapperStructExtra wrapperStructExtra{
FLAGS_3d, FLAGS_3d_min_views, FLAGS_identification, FLAGS_tracking, FLAGS_ik_threads};
// Consumer (comment or use default argument to disable any output)
const auto displayMode = op::DisplayMode::NoDisplay;
const bool guiVerbose = false;
const bool fullScreen = false;
const op::WrapperStructOutput wrapperStructOutput{
displayMode, guiVerbose, fullScreen, FLAGS_write_keypoint,
op::stringToDataFormat(FLAGS_write_keypoint_format), FLAGS_write_json, FLAGS_write_coco_json,
FLAGS_write_coco_foot_json, FLAGS_write_images, FLAGS_write_images_format, FLAGS_write_video,
FLAGS_camera_fps, FLAGS_write_heatmaps, FLAGS_write_heatmaps_format, FLAGS_write_video_adam,
FLAGS_write_bvh, FLAGS_udp_host, FLAGS_udp_port};
// Configure wrapper
opWrapper.configure(wrapperStructPose, wrapperStructFace, wrapperStructHand, wrapperStructExtra,
op::WrapperStructInput{}, wrapperStructOutput);
// Set to single-thread (for sequential processing and/or debugging and/or reducing latency)
if (FLAGS_disable_multi_thread)
opWrapper.disableMultiThreading();
// Starting OpenPose
op::log("Starting thread(s)...", op::Priority::High);
opWrapper.start();
// Process and display image
const auto imageToProcess = cv::imread(FLAGS_image_path);
auto datumProcessed = opWrapper.emplaceAndPop(imageToProcess);
if (datumProcessed != nullptr)
{
printKeypoints(datumProcessed);
display(datumProcessed);
}
else
op::log("Image could not be processed.", op::Priority::High);
// Return successful message
op::log("Stopping OpenPose...", op::Priority::High);
return 0;
}
int main(int argc, char *argv[])
{
// Parsing command line flags
gflags::ParseCommandLineFlags(&argc, &argv, true);
// Running wholeBodyFromImage
return wholeBodyFromImage();
}
// ------------------------- OpenPose Library Tutorial - Real Time Pose Estimation -------------------------
// If the user wants to learn to use the OpenPose library, we highly recommend starting with the
// examples in `examples/tutorial_api_cpp/`.
// This example summarizes all the functionality of the OpenPose library:
// 1. Read folder of images / video / webcam (`producer` module)
// 2. Extract and render body keypoint / heatmap / PAF of that image (`pose` module)
// 3. Extract and render face keypoint / heatmap / PAF of that image (`face` module)
// 4. Save the results on disk (`filestream` module)
// 5. Display the rendered pose (`gui` module)
// Everything in a multi-thread scenario (`thread` module)
// Points 2 to 5 are included in the `wrapper` module
// In addition to the previous OpenPose modules, we also need to use:
// 1. `core` module:
// For the Array<float> class that the `pose` module needs
// For the Datum struct that the `thread` module sends between the queues
// 2. `utilities` module: for the error & logging functions, i.e. op::error & op::log respectively
// This file is meant only as a reference from which the user can take specific examples.
// Command-line user interface
#include <openpose/flags.hpp>
// OpenPose dependencies
#include <openpose/headers.hpp>
// If the user needs his own variables, he can inherit the op::Datum struct and add them
// UserDatum can be directly used by the OpenPose wrapper because it inherits from op::Datum, just define
// Wrapper<UserDatum> instead of Wrapper<op::Datum>
struct UserDatum : public op::Datum
{
bool boolThatUserNeedsForSomeReason;
UserDatum(const bool boolThatUserNeedsForSomeReason_ = false) :
boolThatUserNeedsForSomeReason{boolThatUserNeedsForSomeReason_}
{}
};
// The W-classes can be implemented either as a template or as simple classes given
// that the user usually knows which kind of data he will move between the queues,
// in this case we assume a std::shared_ptr of a std::vector of UserDatum
// This worker will just invert the image
class WUserPostProcessing : public op::Worker<std::shared_ptr<std::vector<UserDatum>>>
{
public:
WUserPostProcessing()
{
// User's constructor here
}
void initializationOnThread() {}
void work(std::shared_ptr<std::vector<UserDatum>>& datumsPtr)
{
// User's post-processing (after OpenPose processing & before OpenPose outputs) here
// datum.cvOutputData: rendered frame with pose or heatmaps
// datum.poseKeypoints: Array<float> with the estimated pose
try
{
if (datumsPtr != nullptr && !datumsPtr->empty())
for (auto& datum : *datumsPtr)
cv::bitwise_not(datum.cvOutputData, datum.cvOutputData);
}
catch (const std::exception& e)
{
this->stop();
op::error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
}
};
int openPoseDemo()
{
try
{
op::log("Starting OpenPose demo...", op::Priority::High);
const auto timerBegin = std::chrono::high_resolution_clock::now();
// logging_level
op::check(0 <= FLAGS_logging_level && FLAGS_logging_level <= 255, "Wrong logging_level value.",
__LINE__, __FUNCTION__, __FILE__);
op::ConfigureLog::setPriorityThreshold((op::Priority)FLAGS_logging_level);
op::Profiler::setDefaultX(FLAGS_profile_speed);
// // For debugging
// // Print all logging messages
// op::ConfigureLog::setPriorityThreshold(op::Priority::None);
// // Print out speed values faster
// op::Profiler::setDefaultX(100);
// Applying user defined configuration - GFlags to program variables
// outputSize
const auto outputSize = op::flagsToPoint(FLAGS_output_resolution, "-1x-1");
// netInputSize
const auto netInputSize = op::flagsToPoint(FLAGS_net_resolution, "-1x368");
// faceNetInputSize
const auto faceNetInputSize = op::flagsToPoint(FLAGS_face_net_resolution, "368x368 (multiples of 16)");
// handNetInputSize
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// producerType
const auto producerSharedPtr = op::flagsToProducer(FLAGS_image_dir, FLAGS_video, FLAGS_ip_camera, FLAGS_camera,
FLAGS_flir_camera, FLAGS_camera_resolution, FLAGS_camera_fps,
FLAGS_camera_parameter_folder, !FLAGS_frame_keep_distortion,
(unsigned int) FLAGS_3d_views, FLAGS_flir_camera_index);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
if (!FLAGS_write_keypoint.empty())
op::log("Flag `write_keypoint` is deprecated and will eventually be removed."
" Please, use `write_json` instead.", op::Priority::Max);
// keypointScale
const auto keypointScale = op::flagsToScaleMode(FLAGS_keypoint_scale);
// heatmaps to add
const auto heatMapTypes = op::flagsToHeatMaps(FLAGS_heatmaps_add_parts, FLAGS_heatmaps_add_bkg,
FLAGS_heatmaps_add_PAFs);
const auto heatMapScale = op::flagsToHeatMapScaleMode(FLAGS_heatmaps_scale);
// >1 camera view?
const auto multipleView = (FLAGS_3d || FLAGS_3d_views > 1 || FLAGS_flir_camera);
// Enabling Google Logging
const bool enableGoogleLogging = true;
// Logging
op::log("", op::Priority::Low, __LINE__, __FUNCTION__, __FILE__);
// OpenPose wrapper
op::log("Configuring OpenPose wrapper...", op::Priority::Low, __LINE__, __FUNCTION__, __FILE__);
// op::Wrapper<std::vector<op::Datum>> opWrapper;
op::Wrapper<std::vector<UserDatum>> opWrapper;
// Initializing the user custom classes
// Processing
auto wUserPostProcessing = std::make_shared<WUserPostProcessing>();
// Add custom processing
const auto workerProcessingOnNewThread = true;
opWrapper.setWorker(op::WorkerType::PostProcessing, wUserPostProcessing, workerProcessingOnNewThread);
// Pose configuration (use WrapperStructPose{} for default and recommended configuration)
const op::WrapperStructPose wrapperStructPose{
!FLAGS_body_disable, netInputSize, outputSize, keypointScale, FLAGS_num_gpu, FLAGS_num_gpu_start,
FLAGS_scale_number, (float)FLAGS_scale_gap, op::flagsToRenderMode(FLAGS_render_pose, multipleView),
poseModel, !FLAGS_disable_blending, (float)FLAGS_alpha_pose, (float)FLAGS_alpha_heatmap,
FLAGS_part_to_show, FLAGS_model_folder, heatMapTypes, heatMapScale, FLAGS_part_candidates,
(float)FLAGS_render_threshold, FLAGS_number_people_max, enableGoogleLogging};
// Face configuration (use op::WrapperStructFace{} to disable it)
const op::WrapperStructFace wrapperStructFace{
FLAGS_face, faceNetInputSize, op::flagsToRenderMode(FLAGS_face_render, multipleView, FLAGS_render_pose),
(float)FLAGS_face_alpha_pose, (float)FLAGS_face_alpha_heatmap, (float)FLAGS_face_render_threshold};
// Hand configuration (use op::WrapperStructHand{} to disable it)
const op::WrapperStructHand wrapperStructHand{
FLAGS_hand, handNetInputSize, FLAGS_hand_scale_number, (float)FLAGS_hand_scale_range, FLAGS_hand_tracking,
op::flagsToRenderMode(FLAGS_hand_render, multipleView, FLAGS_render_pose), (float)FLAGS_hand_alpha_pose,
(float)FLAGS_hand_alpha_heatmap, (float)FLAGS_hand_render_threshold};
// Producer (use default to disable any input)
const op::WrapperStructInput wrapperStructInput{
producerSharedPtr, FLAGS_frame_first, FLAGS_frame_last, FLAGS_process_real_time, FLAGS_frame_flip,
FLAGS_frame_rotate, FLAGS_frames_repeat};
// Extra functionality configuration (use op::WrapperStructExtra{} to disable it)
const op::WrapperStructExtra wrapperStructExtra{
FLAGS_3d, FLAGS_3d_min_views, FLAGS_identification, FLAGS_tracking, FLAGS_ik_threads};
// Consumer (comment or use default argument to disable any output)
const op::WrapperStructOutput wrapperStructOutput{
op::flagsToDisplayMode(FLAGS_display, FLAGS_3d), !FLAGS_no_gui_verbose, FLAGS_fullscreen,
FLAGS_write_keypoint, op::stringToDataFormat(FLAGS_write_keypoint_format), FLAGS_write_json,
FLAGS_write_coco_json, FLAGS_write_coco_foot_json, FLAGS_write_images, FLAGS_write_images_format,
FLAGS_write_video, FLAGS_camera_fps, FLAGS_write_heatmaps, FLAGS_write_heatmaps_format,
FLAGS_write_video_adam, FLAGS_write_bvh, FLAGS_udp_host, FLAGS_udp_port};
// Configure wrapper
opWrapper.configure(wrapperStructPose, wrapperStructFace, wrapperStructHand, wrapperStructExtra,
wrapperStructInput, wrapperStructOutput);
// Set to single-thread (for sequential processing and/or debugging and/or reducing latency)
if (FLAGS_disable_multi_thread)
opWrapper.disableMultiThreading();
// Start processing
// Two different ways of running the program in a multithreaded environment
op::log("Starting thread(s)...", op::Priority::High);
// Start, run & stop threads - it blocks this thread until all others have finished
opWrapper.exec();
// // Option b) Keeping this thread free in case you want to do something else meanwhile, e.g. profiling the GPU
// // memory
// // VERY IMPORTANT NOTE: if OpenCV is compiled with Qt support, this option will not work. Qt needs the main
// // thread to plot visual results, so the final GUI (which uses OpenCV) would return an exception similar to:
// // `QMetaMethod::invoke: Unable to invoke methods with return values in queued connections`
// // Start threads
// opWrapper.start();
// // Profile used GPU memory
// // 1: wait ~10sec so the memory has been totally loaded on GPU
// // 2: profile the GPU memory
// const auto sleepTimeMs = 10;
// for (auto i = 0 ; i < 10000/sleepTimeMs && opWrapper.isRunning() ; i++)
// std::this_thread::sleep_for(std::chrono::milliseconds{sleepTimeMs});
// op::Profiler::profileGpuMemory(__LINE__, __FUNCTION__, __FILE__);
// // Keep program alive while running threads
// while (opWrapper.isRunning())
// std::this_thread::sleep_for(std::chrono::milliseconds{sleepTimeMs});
// // Stop and join threads
// op::log("Stopping thread(s)", op::Priority::High);
// opWrapper.stop();
// Measuring total time
const auto now = std::chrono::high_resolution_clock::now();
const auto totalTimeSec = (double)std::chrono::duration_cast<std::chrono::nanoseconds>(now-timerBegin).count()
* 1e-9;
const auto message = "OpenPose demo successfully finished. Total time: "
+ std::to_string(totalTimeSec) + " seconds.";
op::log(message, op::Priority::High);
// Return successful message
return 0;
}
catch (const std::exception& e)
{
op::error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return -1;
}
}
int main(int argc, char *argv[])
{
// Parsing command line flags
gflags::ParseCommandLineFlags(&argc, &argv, true);
// Running openPoseDemo
return openPoseDemo();
}
set(EXAMPLE_FILES
    1_body_from_image.cpp
    2_whole_body_from_image.cpp
    3_keypoints_from_image_configurable.cpp
    4_asynchronous_loop_custom_input_and_output.cpp
    5_asynchronous_loop_custom_output.cpp
    6_synchronous_custom_postprocessing.cpp
    7_synchronous_custom_input.cpp
    8_synchronous_custom_output.cpp
    9_synchronous_custom_all.cpp)

include(${CMAKE_SOURCE_DIR}/cmake/Utils.cmake)

foreach(EXAMPLE_FILE ${EXAMPLE_FILES})

    get_filename_component(SOURCE_NAME ${EXAMPLE_FILE} NAME_WE)

    if (UNIX OR APPLE)
        set(EXE_NAME "${SOURCE_NAME}.bin")
    elseif (WIN32)
        set(EXE_NAME "${SOURCE_NAME}")
    endif ()

    message(STATUS "Adding Example ${EXE_NAME}")
    add_executable(${EXE_NAME} ${EXAMPLE_FILE})
    target_link_libraries(${EXE_NAME} openpose ${examples_3rdparty_libraries})

    if (WIN32)
        set_property(TARGET ${EXE_NAME} PROPERTY FOLDER "Examples/Tutorial/C++ API")
        configure_file(${CMAKE_SOURCE_DIR}/cmake/OpenPose${VCXPROJ_FILE_GPU_MODE}.vcxproj.user
                       ${CMAKE_CURRENT_BINARY_DIR}/${EXE_NAME}.vcxproj.user @ONLY)
        # Properties->General->Output Directory
        set_property(TARGET ${EXE_NAME} PROPERTY RUNTIME_OUTPUT_DIRECTORY_RELEASE ${PROJECT_BINARY_DIR}/$(Platform)/$(Configuration))
        set_property(TARGET ${EXE_NAME} PROPERTY RUNTIME_OUTPUT_DIRECTORY_DEBUG ${PROJECT_BINARY_DIR}/$(Platform)/$(Configuration))
    endif (WIN32)

endforeach()
# C++ API Examples
This folder provides examples of the basic OpenPose C++ API. The analogous Python API is exposed in [examples/tutorial_api_python/](../tutorial_api_python/).
### Add Python Test
configure_file(1_extract_pose.py 1_extract_pose.py)
# Python API Examples
This folder provides examples of the basic OpenPose Python API. The analogous C++ API is exposed in [examples/tutorial_api_cpp/](../tutorial_api_cpp/).
set(EXAMPLE_FILES
    pose_1_extract_from_image.cpp
    pose_2_extract_pose_or_heatmat_from_image.cpp
    thread_1_openpose_read_and_display.cpp
    thread_2_user_processing_function.cpp
    thread_3_user_input_processing_and_output.cpp
    thread_4_user_input_processing_output_and_datum.cpp)

foreach(EXAMPLE_FILE ${EXAMPLE_FILES})

    get_filename_component(SOURCE_NAME ${EXAMPLE_FILE} NAME_WE)

    if (UNIX OR APPLE)
        set(EXE_NAME "${SOURCE_NAME}.bin")
    elseif (WIN32)
        set(EXE_NAME "${SOURCE_NAME}")
    endif ()

    message(STATUS "Adding Example ${EXE_NAME}")
    add_executable(${EXE_NAME} ${EXAMPLE_FILE})
    target_link_libraries(${EXE_NAME} openpose ${examples_3rdparty_libraries})

    if (WIN32)
        set_property(TARGET ${EXE_NAME} PROPERTY FOLDER "Examples/Tutorial/Developer Examples")
        configure_file(${CMAKE_SOURCE_DIR}/cmake/OpenPose${VCXPROJ_FILE_GPU_MODE}.vcxproj.user
                       ${CMAKE_CURRENT_BINARY_DIR}/${EXE_NAME}.vcxproj.user @ONLY)
        # Properties->General->Output Directory
        set_property(TARGET ${EXE_NAME} PROPERTY RUNTIME_OUTPUT_DIRECTORY_RELEASE ${PROJECT_BINARY_DIR}/$(Platform)/$(Configuration))
        set_property(TARGET ${EXE_NAME} PROPERTY RUNTIME_OUTPUT_DIRECTORY_DEBUG ${PROJECT_BINARY_DIR}/$(Platform)/$(Configuration))
    endif (WIN32)

endforeach()
### Add Python files
configure_file(python_1_pose_from_heatmaps.py python_1_pose_from_heatmaps.py)
# Developer Examples
**Disclaimer**: This folder is meant for internal OpenPose developers. These examples may change substantially, and we will not answer questions about them nor provide official support for them.
**If the OpenPose library fails to compile because of an error caused by a file in this folder, please notify us**.
...@@ -74,7 +74,7 @@ int openPoseTutorialPose1() ...@@ -74,7 +74,7 @@ int openPoseTutorialPose1()
__LINE__, __FUNCTION__, __FILE__); __LINE__, __FUNCTION__, __FILE__);
op::ConfigureLog::setPriorityThreshold((op::Priority)FLAGS_logging_level); op::ConfigureLog::setPriorityThreshold((op::Priority)FLAGS_logging_level);
op::log("", op::Priority::Low, __LINE__, __FUNCTION__, __FILE__); op::log("", op::Priority::Low, __LINE__, __FUNCTION__, __FILE__);
// Step 2 - Read Google flags (user defined configuration) // Step 2 - Read GFlags (user defined configuration)
// outputSize // outputSize
const auto outputSize = op::flagsToPoint(FLAGS_output_resolution, "-1x-1"); const auto outputSize = op::flagsToPoint(FLAGS_output_resolution, "-1x-1");
// netInputSize // netInputSize
......
...@@ -79,7 +79,7 @@ int openPoseTutorialPose2() ...@@ -79,7 +79,7 @@ int openPoseTutorialPose2()
__LINE__, __FUNCTION__, __FILE__); __LINE__, __FUNCTION__, __FILE__);
op::ConfigureLog::setPriorityThreshold((op::Priority)FLAGS_logging_level); op::ConfigureLog::setPriorityThreshold((op::Priority)FLAGS_logging_level);
op::log("", op::Priority::Low, __LINE__, __FUNCTION__, __FILE__); op::log("", op::Priority::Low, __LINE__, __FUNCTION__, __FILE__);
// Step 2 - Read Google flags (user defined configuration) // Step 2 - Read GFlags (user defined configuration)
// outputSize // outputSize
const auto outputSize = op::flagsToPoint(FLAGS_output_resolution, "-1x-1"); const auto outputSize = op::flagsToPoint(FLAGS_output_resolution, "-1x-1");
// netInputSize // netInputSize
......
...@@ -6,7 +6,7 @@ except ImportError: ...@@ -6,7 +6,7 @@ except ImportError:
print("This sample can only be run if Python Caffe if available on your system") print("This sample can only be run if Python Caffe if available on your system")
print("Currently OpenPose does not compile Python Caffe. This may be supported in the future") print("Currently OpenPose does not compile Python Caffe. This may be supported in the future")
sys.exit(-1) sys.exit(-1)
import os import os
os.environ["GLOG_minloglevel"] = "1" os.environ["GLOG_minloglevel"] = "1"
import caffe import caffe
......
...@@ -2,7 +2,7 @@ ...@@ -2,7 +2,7 @@
// This third example shows the user how to: // This third example shows the user how to:
// 1. Read folder of images / video / webcam (`producer` module) // 1. Read folder of images / video / webcam (`producer` module)
// 2. Display the rendered pose (`gui` module) // 2. Display the rendered pose (`gui` module)
// Everything in a multi-thread scenario (`thread` module) // Everything in a multi-thread scenario (`thread` module)
// In addition to the previous OpenPose modules, we also need to use: // In addition to the previous OpenPose modules, we also need to use:
// 1. `core` module: for the Datum struct that the `thread` module sends between the queues // 1. `core` module: for the Datum struct that the `thread` module sends between the queues
// 2. `utilities` module: for the error & logging functions, i.e. op::error & op::log respectively // 2. `utilities` module: for the error & logging functions, i.e. op::error & op::log respectively
...@@ -75,7 +75,7 @@ int openPoseTutorialThread1() ...@@ -75,7 +75,7 @@ int openPoseTutorialThread1()
op::check(0 <= FLAGS_logging_level && FLAGS_logging_level <= 255, "Wrong logging_level value.", op::check(0 <= FLAGS_logging_level && FLAGS_logging_level <= 255, "Wrong logging_level value.",
__LINE__, __FUNCTION__, __FILE__); __LINE__, __FUNCTION__, __FILE__);
op::ConfigureLog::setPriorityThreshold((op::Priority)FLAGS_logging_level); op::ConfigureLog::setPriorityThreshold((op::Priority)FLAGS_logging_level);
// Step 2 - Read Google flags (user defined configuration) // Step 2 - Read GFlags (user defined configuration)
// outputSize // outputSize
const auto outputSize = op::flagsToPoint(FLAGS_output_resolution, "-1x-1"); const auto outputSize = op::flagsToPoint(FLAGS_output_resolution, "-1x-1");
// producerType // producerType
......
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
// 1. Read folder of images / video / webcam (`producer` module) // 1. Read folder of images / video / webcam (`producer` module)
// 2. Use the processing implemented by the user // 2. Use the processing implemented by the user
// 3. Display the rendered pose (`gui` module) // 3. Display the rendered pose (`gui` module)
// Everything in a multi-thread scenario (`thread` module) // Everything in a multi-thread scenario (`thread` module)
// In addition to the previous OpenPose modules, we also need to use: // In addition to the previous OpenPose modules, we also need to use:
// 1. `core` module: for the Datum struct that the `thread` module sends between the queues // 1. `core` module: for the Datum struct that the `thread` module sends between the queues
// 2. `utilities` module: for the error & logging functions, i.e. op::error & op::log respectively // 2. `utilities` module: for the error & logging functions, i.e. op::error & op::log respectively
...@@ -109,7 +109,7 @@ int openPoseTutorialThread2() ...@@ -109,7 +109,7 @@ int openPoseTutorialThread2()
op::check(0 <= FLAGS_logging_level && FLAGS_logging_level <= 255, "Wrong logging_level value.", op::check(0 <= FLAGS_logging_level && FLAGS_logging_level <= 255, "Wrong logging_level value.",
__LINE__, __FUNCTION__, __FILE__); __LINE__, __FUNCTION__, __FILE__);
op::ConfigureLog::setPriorityThreshold((op::Priority)FLAGS_logging_level); op::ConfigureLog::setPriorityThreshold((op::Priority)FLAGS_logging_level);
// Step 2 - Read Google flags (user defined configuration) // Step 2 - Read GFlags (user defined configuration)
// outputSize // outputSize
const auto outputSize = op::flagsToPoint(FLAGS_output_resolution, "-1x-1"); const auto outputSize = op::flagsToPoint(FLAGS_output_resolution, "-1x-1");
// producerType // producerType
......
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
// 1. Read folder of images / video / webcam (`producer` module) // 1. Read folder of images / video / webcam (`producer` module)
// 2. Use the processing implemented by the user // 2. Use the processing implemented by the user
// 3. Display the rendered pose (`gui` module) // 3. Display the rendered pose (`gui` module)
// Everything in a multi-thread scenario (`thread` module) // Everything in a multi-thread scenario (`thread` module)
// In addition to the previous OpenPose modules, we also need to use: // In addition to the previous OpenPose modules, we also need to use:
// 1. `core` module: for the Datum struct that the `thread` module sends between the queues // 1. `core` module: for the Datum struct that the `thread` module sends between the queues
// 2. `utilities` module: for the error & logging functions, i.e. op::error & op::log respectively // 2. `utilities` module: for the error & logging functions, i.e. op::error & op::log respectively
......
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
// 1. Read folder of images / video / webcam (`producer` module) // 1. Read folder of images / video / webcam (`producer` module)
// 2. Use the processing implemented by the user // 2. Use the processing implemented by the user
// 3. Display the rendered pose (`gui` module) // 3. Display the rendered pose (`gui` module)
// Everything in a multi-thread scenario (`thread` module) // Everything in a multi-thread scenario (`thread` module)
// In addition to the previous OpenPose modules, we also need to use: // In addition to the previous OpenPose modules, we also need to use:
// 1. `core` module: for the Datum struct that the `thread` module sends between the queues // 1. `core` module: for the Datum struct that the `thread` module sends between the queues
// 2. `utilities` module: for the error & logging functions, i.e. op::error & op::log respectively // 2. `utilities` module: for the error & logging functions, i.e. op::error & op::log respectively
......
set(EXAMPLE_FILES
1_extract_from_image.cpp
2_extract_pose_or_heatmat_from_image.cpp)
foreach(EXAMPLE_FILE ${EXAMPLE_FILES})
get_filename_component(SOURCE_NAME ${EXAMPLE_FILE} NAME_WE)
if (UNIX OR APPLE)
set(EXE_NAME "${SOURCE_NAME}.bin")
elseif (WIN32)
set(EXE_NAME "${SOURCE_NAME}")
endif ()
message(STATUS "Adding Example ${EXE_NAME}")
add_executable(${EXE_NAME} ${EXAMPLE_FILE})
target_link_libraries(${EXE_NAME} openpose ${examples_3rdparty_libraries})
if (WIN32)
set_property(TARGET ${EXE_NAME} PROPERTY FOLDER "Examples/Tutorial/Pose")
configure_file(${CMAKE_SOURCE_DIR}/cmake/OpenPose${VCXPROJ_FILE_GPU_MODE}.vcxproj.user
${CMAKE_CURRENT_BINARY_DIR}/${EXE_NAME}.vcxproj.user @ONLY)
# Properties->General->Output Directory
set_property(TARGET ${EXE_NAME} PROPERTY RUNTIME_OUTPUT_DIRECTORY_RELEASE ${PROJECT_BINARY_DIR}/$(Platform)/$(Configuration))
set_property(TARGET ${EXE_NAME} PROPERTY RUNTIME_OUTPUT_DIRECTORY_DEBUG ${PROJECT_BINARY_DIR}/$(Platform)/$(Configuration))
endif (WIN32)
endforeach()
...@@ -10,7 +10,7 @@ You can quickly add your custom code into this folder so that quick prototypes c ...@@ -10,7 +10,7 @@ You can quickly add your custom code into this folder so that quick prototypes c
## How-to ## How-to
1. Install/compile OpenPose as usual. 1. Install/compile OpenPose as usual.
2. Add your custom *.cpp / *.hpp files here. Hint: You might want to start by copying the [OpenPoseDemo](../openpose/openpose.cpp) example or any of the [examples/tutorial_wrapper/](../tutorial_wrapper/) examples. Then, you can simply modify their content. 2. Add your custom *.cpp / *.hpp files here. Hint: You might want to start by copying the [OpenPoseDemo](../openpose/openpose.cpp) example or any of the [examples/tutorial_api_cpp/](../tutorial_api_cpp/) examples. Then, you can simply modify their content (a minimal placeholder file is sketched below).
3. Add the name of your custom *.cpp / *.hpp files at the top of the [examples/user_code/CMakeLists.txt](./CMakeLists.txt) file. 3. Add the name of your custom *.cpp / *.hpp files at the top of the [examples/user_code/CMakeLists.txt](./CMakeLists.txt) file.
4. Re-compile OpenPose. 4. Re-compile OpenPose.
``` ```
......
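For step 2 above, a minimal placeholder is sketched below; the file name `my_custom_code.cpp` is purely illustrative, and per step 3 it would also need to be listed at the top of `examples/user_code/CMakeLists.txt`.

```cpp
// Hypothetical examples/user_code/my_custom_code.cpp: the smallest file that
// compiles and links against the OpenPose library from this folder.
#include <openpose/headers.hpp>

int main()
{
    op::log("Custom user code built against OpenPose.", op::Priority::High);
    return 0;
}
```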
...@@ -106,7 +106,7 @@ namespace op ...@@ -106,7 +106,7 @@ namespace op
* Order heatmaps: body parts + background (as appears in POSE_BODY_PART_MAPPING) + (x,y) channel of each PAF * Order heatmaps: body parts + background (as appears in POSE_BODY_PART_MAPPING) + (x,y) channel of each PAF
* (sorted as appears in POSE_BODY_PART_PAIRS). See `pose/poseParameters.hpp`. * (sorted as appears in POSE_BODY_PART_PAIRS). See `pose/poseParameters.hpp`.
* The user can choose the heatmaps normalization: ranges [0, 1], [-1, 1] or [0, 255]. Check the * The user can choose the heatmaps normalization: ranges [0, 1], [-1, 1] or [0, 255]. Check the
* `heatmaps_scale` flag in the examples/tutorial_wrapper/ for more details. * `heatmaps_scale` flag in {OpenPose_path}doc/demo_overview.md for more details.
* Size: #heatmaps x output_net_height x output_net_width * Size: #heatmaps x output_net_height x output_net_width
*/ */
Array<float> poseHeatMaps; Array<float> poseHeatMaps;
......
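A hedged sketch of how the size description above translates into code, assuming a processed `op::Datum` whose heat maps were enabled (e.g. via the `heatmaps_add_parts` flag) and that `op::Array` is indexed per dimension with `getSize`:

```cpp
#include <string>
#include <openpose/headers.hpp>

// Hedged helper: print the heat-map dimensions described in the comment above.
void printHeatMapInfo(const op::Datum& datum)
{
    const auto& heatMaps = datum.poseHeatMaps;       // op::Array<float>
    const auto numberHeatMaps = heatMaps.getSize(0); // body parts + background + 2 channels per PAF
    const auto netHeight = heatMaps.getSize(1);      // output_net_height
    const auto netWidth = heatMaps.getSize(2);       // output_net_width
    op::log("Heat maps: " + std::to_string(numberHeatMaps) + " x " + std::to_string(netHeight)
            + " x " + std::to_string(netWidth), op::Priority::High);
}
```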
#ifndef OPENPOSE_CORE_MACROS_HPP #ifndef OPENPOSE_CORE_MACROS_HPP
#define OPENPOSE_CORE_MACROS_HPP #define OPENPOSE_CORE_MACROS_HPP
#include <chrono> // std::chrono:: functionaligy, e.g., std::chrono::milliseconds
#include <memory> // std::shared_ptr #include <memory> // std::shared_ptr
#include <ostream> #include <ostream>
#include <string> #include <string>
#include <thread> // std::this_thread
#include <vector> #include <vector>
// OpenPose name and version // OpenPose name and version
......
...@@ -2,7 +2,6 @@ ...@@ -2,7 +2,6 @@
#define OPENPOSE_FACE_FACE_EXTRACTOR_HPP #define OPENPOSE_FACE_FACE_EXTRACTOR_HPP
#include <atomic> #include <atomic>
#include <thread>
#include <opencv2/core/core.hpp> // cv::Mat #include <opencv2/core/core.hpp> // cv::Mat
#include <openpose/core/common.hpp> #include <openpose/core/common.hpp>
#include <openpose/core/enumClasses.hpp> #include <openpose/core/enumClasses.hpp>
......
// ------------------------- OpenPose Library Tutorial - Real Time Pose Estimation ------------------------- #ifndef OPENPOSE_FLAGS_HPP
// If the user wants to learn to use the OpenPose library, we highly recommend starting with the `examples/tutorial_*/` #define OPENPOSE_FLAGS_HPP
// folders.
// This example summarizes all the functionality of the OpenPose library: // Note: This class is not included within the basic OpenPose `headers.hpp` and must be explicitly included. In
// 1. Read folder of images / video / webcam (`producer` module) // addition, Google Flags library must also be linked to the resulting binary or library. OpenPose library does
// 2. Extract and render body keypoint / heatmap / PAF of that image (`pose` module) // not force you to use Google Flags, but the OpenPose examples do so.
// 3. Extract and render face keypoint / heatmap / PAF of that image (`face` module)
// 4. Save the results on disk (`filestream` module)
// 5. Display the rendered pose (`gui` module)
// Everything in a multi-thread scenario (`thread` module)
// Points 2 to 5 are included in the `wrapper` module
// In addition to the previous OpenPose modules, we also need to use:
// 1. `core` module:
// For the Array<float> class that the `pose` module needs
// For the Datum struct that the `thread` module sends between the queues
// 2. `utilities` module: for the error & logging functions, i.e. op::error & op::log respectively
// This file is only meant to serve as a reference from which the user can take specific examples.
// C++ std library dependencies
#include <chrono> // `std::chrono::` functions and classes, e.g. std::chrono::milliseconds
#include <thread> // std::this_thread
// Other 3rdparty dependencies
// GFlags: DEFINE_bool, _int32, _int64, _uint64, _double, _string // GFlags: DEFINE_bool, _int32, _int64, _uint64, _double, _string
#include <gflags/gflags.h> #include <gflags/gflags.h>
// Allow Google Flags in Ubuntu 14 // Allow Google Flags in Ubuntu 14
#ifndef GFLAGS_GFLAGS_H_ #ifndef GFLAGS_GFLAGS_H_
namespace gflags = google; namespace gflags = google;
#endif #endif
// OpenPose dependencies
#include <openpose/headers.hpp>
// See all the available parameter options with the `--help` flag. E.g. `build/examples/openpose/openpose.bin --help` // See all the available parameter options with the `--help` flag. E.g. `build/examples/openpose/openpose.bin --help`
// Note: This command will show you flags for other unnecessary 3rdparty files. Check only the flags for the OpenPose // Note: This command will show you flags for other unnecessary 3rdparty files. Check only the flags for the OpenPose
...@@ -42,14 +25,18 @@ DEFINE_bool(disable_multi_thread, false, "It would slightly reduc ...@@ -42,14 +25,18 @@ DEFINE_bool(disable_multi_thread, false, "It would slightly reduc
" error."); " error.");
DEFINE_int32(profile_speed, 1000, "If PROFILER_ENABLED was set in CMake or Makefile.config files, OpenPose will show some" DEFINE_int32(profile_speed, 1000, "If PROFILER_ENABLED was set in CMake or Makefile.config files, OpenPose will show some"
" runtime statistics at this frame number."); " runtime statistics at this frame number.");
#ifndef OPENPOSE_FLAGS_DISABLE_POSE
#ifndef OPENPOSE_FLAGS_DISABLE_PRODUCER
// Producer // Producer
DEFINE_int32(camera, -1, "The camera index for cv::VideoCapture. Integer in the range [0, 9]. Select a negative" DEFINE_int32(camera, -1, "The camera index for cv::VideoCapture. Integer in the range [0, 9]. Select a negative"
" number (by default), to auto-detect and open the first available camera."); " number (by default), to auto-detect and open the first available camera.");
DEFINE_string(camera_resolution, "-1x-1", "Set the camera resolution (either `--camera` or `--flir_camera`). `-1x-1` will use the" DEFINE_string(camera_resolution, "-1x-1", "Set the camera resolution (either `--camera` or `--flir_camera`). `-1x-1` will use the"
" default 1280x720 for `--camera`, or the maximum flir camera resolution available for" " default 1280x720 for `--camera`, or the maximum flir camera resolution available for"
" `--flir_camera`"); " `--flir_camera`");
#endif // OPENPOSE_FLAGS_DISABLE_PRODUCER
DEFINE_double(camera_fps, 30.0, "Frame rate for the webcam (also used when saving video). Set this value to the minimum" DEFINE_double(camera_fps, 30.0, "Frame rate for the webcam (also used when saving video). Set this value to the minimum"
" value between the OpenPose displayed speed and the webcam real frame rate."); " value between the OpenPose displayed speed and the webcam real frame rate.");
#ifndef OPENPOSE_FLAGS_DISABLE_PRODUCER
DEFINE_string(video, "", "Use a video file instead of the camera. Use `examples/media/video.avi` for our default" DEFINE_string(video, "", "Use a video file instead of the camera. Use `examples/media/video.avi` for our default"
" example video."); " example video.");
DEFINE_string(image_dir, "", "Process a directory of images. Use `examples/media/` for our default example folder with 20" DEFINE_string(image_dir, "", "Process a directory of images. Use `examples/media/` for our default example folder with 20"
...@@ -71,6 +58,7 @@ DEFINE_string(camera_parameter_folder, "models/cameraParameters/flir/", "String ...@@ -71,6 +58,7 @@ DEFINE_string(camera_parameter_folder, "models/cameraParameters/flir/", "String
DEFINE_bool(frame_keep_distortion, false, "If false (default), it will undistort the image based on the" DEFINE_bool(frame_keep_distortion, false, "If false (default), it will undistort the image based on the"
" `camera_parameter_folder` camera parameters; if true, it will not undistort it, i.e.," " `camera_parameter_folder` camera parameters; if true, it will not undistort it, i.e.,"
" it will leave it as it is."); " it will leave it as it is.");
#endif // OPENPOSE_FLAGS_DISABLE_PRODUCER
// OpenPose // OpenPose
DEFINE_string(model_folder, "models/", "Folder path (absolute or relative) where the models (pose, face, ...) are located."); DEFINE_string(model_folder, "models/", "Folder path (absolute or relative) where the models (pose, face, ...) are located.");
DEFINE_string(output_resolution, "-1x-1", "The image resolution (display and output). Use \"-1x-1\" to force the program to use the" DEFINE_string(output_resolution, "-1x-1", "The image resolution (display and output). Use \"-1x-1\" to force the program to use the"
...@@ -203,6 +191,7 @@ DEFINE_int32(hand_render, -1, "Analogous to `render_po ...@@ -203,6 +191,7 @@ DEFINE_int32(hand_render, -1, "Analogous to `render_po
" configuration that `render_pose` is using."); " configuration that `render_pose` is using.");
DEFINE_double(hand_alpha_pose, 0.6, "Analogous to `alpha_pose` but applied to hand."); DEFINE_double(hand_alpha_pose, 0.6, "Analogous to `alpha_pose` but applied to hand.");
DEFINE_double(hand_alpha_heatmap, 0.7, "Analogous to `alpha_heatmap` but applied to hand."); DEFINE_double(hand_alpha_heatmap, 0.7, "Analogous to `alpha_heatmap` but applied to hand.");
#ifndef OPENPOSE_FLAGS_DISABLE_DISPLAY
// Display // Display
DEFINE_bool(fullscreen, false, "Run in full-screen mode (press f during runtime to toggle)."); DEFINE_bool(fullscreen, false, "Run in full-screen mode (press f during runtime to toggle).");
DEFINE_bool(no_gui_verbose, false, "Do not write text on output images on GUI (e.g. number of current frame and people). It" DEFINE_bool(no_gui_verbose, false, "Do not write text on output images on GUI (e.g. number of current frame and people). It"
...@@ -210,6 +199,7 @@ DEFINE_bool(no_gui_verbose, false, "Do not write text on ou ...@@ -210,6 +199,7 @@ DEFINE_bool(no_gui_verbose, false, "Do not write text on ou
DEFINE_int32(display, -1, "Display mode: -1 for automatic selection; 0 for no display (useful if there is no X server" DEFINE_int32(display, -1, "Display mode: -1 for automatic selection; 0 for no display (useful if there is no X server"
" and/or to slightly speed up the processing if visual output is not required); 2 for 2-D" " and/or to slightly speed up the processing if visual output is not required); 2 for 2-D"
" display; 3 for 3-D display (if `--3d` enabled); and 1 for both 2-D and 3-D display."); " display; 3 for 3-D display (if `--3d` enabled); and 1 for both 2-D and 3-D display.");
#endif // OPENPOSE_FLAGS_DISABLE_DISPLAY
// Result Saving // Result Saving
DEFINE_string(write_images, "", "Directory to write rendered frames in `write_images_format` image format."); DEFINE_string(write_images, "", "Directory to write rendered frames in `write_images_format` image format.");
DEFINE_string(write_images_format, "png", "File extension and format for `write_images`, e.g. png, jpg or bmp. Check the OpenCV" DEFINE_string(write_images_format, "png", "File extension and format for `write_images`, e.g. png, jpg or bmp. Check the OpenCV"
...@@ -237,204 +227,6 @@ DEFINE_string(write_bvh, "", "Experimental, not avail ...@@ -237,204 +227,6 @@ DEFINE_string(write_bvh, "", "Experimental, not avail
// UDP communication // UDP communication
DEFINE_string(udp_host, "", "Experimental, not available yet. IP for UDP communication. E.g., `192.168.0.1`."); DEFINE_string(udp_host, "", "Experimental, not available yet. IP for UDP communication. E.g., `192.168.0.1`.");
DEFINE_string(udp_port, "8051", "Experimental, not available yet. Port number for UDP communication."); DEFINE_string(udp_port, "8051", "Experimental, not available yet. Port number for UDP communication.");
#endif // OPENPOSE_FLAGS_DISABLE_POSE
#endif // OPENPOSE_FLAGS_HPP
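Based on the include guards visible above, a hedged sketch of how an example might consume this header; the `<openpose/flags.hpp>` include path is assumed from the `OPENPOSE_FLAGS_HPP` guard:

```cpp
// Optionally strip flag groups before including the header (per the #ifndef guards above):
// #define OPENPOSE_FLAGS_DISABLE_PRODUCER
// #define OPENPOSE_FLAGS_DISABLE_DISPLAY
#include <openpose/flags.hpp>   // GFlags definitions; not pulled in by <openpose/headers.hpp>
#include <openpose/headers.hpp> // OpenPose library

int main(int argc, char* argv[])
{
    gflags::ParseCommandLineFlags(&argc, &argv, true); // fills the FLAGS_* variables
    op::log("model_folder: " + FLAGS_model_folder, op::Priority::High);
    return 0;
}
```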
// If the user needs his own variables, he can inherit the op::Datum struct and add them
// UserDatum can be directly used by the OpenPose wrapper because it inherits from op::Datum; just define
// Wrapper<UserDatum> instead of Wrapper<op::Datum>
struct UserDatum : public op::Datum
{
bool boolThatUserNeedsForSomeReason;
UserDatum(const bool boolThatUserNeedsForSomeReason_ = false) :
boolThatUserNeedsForSomeReason{boolThatUserNeedsForSomeReason_}
{}
};
// The W-classes can be implemented either as a template or as simple classes given
// that the user usually knows which kind of data he will move between the queues;
// in this case we assume a std::shared_ptr of a std::vector of UserDatum
// This worker will just invert the image
class WUserPostProcessing : public op::Worker<std::shared_ptr<std::vector<UserDatum>>>
{
public:
WUserPostProcessing()
{
// User's constructor here
}
void initializationOnThread() {}
void work(std::shared_ptr<std::vector<UserDatum>>& datumsPtr)
{
// User's post-processing (after OpenPose processing & before OpenPose outputs) here
// datum.cvOutputData: rendered frame with pose or heatmaps
// datum.poseKeypoints: Array<float> with the estimated pose
try
{
if (datumsPtr != nullptr && !datumsPtr->empty())
for (auto& datum : *datumsPtr)
cv::bitwise_not(datum.cvOutputData, datum.cvOutputData);
}
catch (const std::exception& e)
{
this->stop();
op::error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
}
};
int openPoseDemo()
{
try
{
op::log("Starting OpenPose demo...", op::Priority::High);
const auto timerBegin = std::chrono::high_resolution_clock::now();
// logging_level
op::check(0 <= FLAGS_logging_level && FLAGS_logging_level <= 255, "Wrong logging_level value.",
__LINE__, __FUNCTION__, __FILE__);
op::ConfigureLog::setPriorityThreshold((op::Priority)FLAGS_logging_level);
op::Profiler::setDefaultX(FLAGS_profile_speed);
// // For debugging
// // Print all logging messages
// op::ConfigureLog::setPriorityThreshold(op::Priority::None);
// // Print out speed values faster
// op::Profiler::setDefaultX(100);
// Applying user defined configuration - Google flags to program variables
// outputSize
const auto outputSize = op::flagsToPoint(FLAGS_output_resolution, "-1x-1");
// netInputSize
const auto netInputSize = op::flagsToPoint(FLAGS_net_resolution, "-1x368");
// faceNetInputSize
const auto faceNetInputSize = op::flagsToPoint(FLAGS_face_net_resolution, "368x368 (multiples of 16)");
// handNetInputSize
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// producerType
const auto producerSharedPtr = op::flagsToProducer(FLAGS_image_dir, FLAGS_video, FLAGS_ip_camera, FLAGS_camera,
FLAGS_flir_camera, FLAGS_camera_resolution, FLAGS_camera_fps,
FLAGS_camera_parameter_folder, !FLAGS_frame_keep_distortion,
(unsigned int) FLAGS_3d_views, FLAGS_flir_camera_index);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
if (!FLAGS_write_keypoint.empty())
op::log("Flag `write_keypoint` is deprecated and will eventually be removed."
" Please, use `write_json` instead.", op::Priority::Max);
// keypointScale
const auto keypointScale = op::flagsToScaleMode(FLAGS_keypoint_scale);
// heatmaps to add
const auto heatMapTypes = op::flagsToHeatMaps(FLAGS_heatmaps_add_parts, FLAGS_heatmaps_add_bkg,
FLAGS_heatmaps_add_PAFs);
const auto heatMapScale = op::flagsToHeatMapScaleMode(FLAGS_heatmaps_scale);
// >1 camera view?
const auto multipleView = (FLAGS_3d || FLAGS_3d_views > 1 || FLAGS_flir_camera);
// Enabling Google Logging
const bool enableGoogleLogging = true;
// Logging
op::log("", op::Priority::Low, __LINE__, __FUNCTION__, __FILE__);
// OpenPose wrapper
op::log("Configuring OpenPose wrapper...", op::Priority::Low, __LINE__, __FUNCTION__, __FILE__);
// op::Wrapper<std::vector<op::Datum>> opWrapper;
op::Wrapper<std::vector<UserDatum>> opWrapper;
// Initializing the user custom classes
// Processing
auto wUserPostProcessing = std::make_shared<WUserPostProcessing>();
// Add custom processing
const auto workerProcessingOnNewThread = true;
opWrapper.setWorkerPostProcessing(wUserPostProcessing, workerProcessingOnNewThread);
// Pose configuration (use WrapperStructPose{} for default and recommended configuration)
const op::WrapperStructPose wrapperStructPose{
!FLAGS_body_disable, netInputSize, outputSize, keypointScale, FLAGS_num_gpu, FLAGS_num_gpu_start,
FLAGS_scale_number, (float)FLAGS_scale_gap, op::flagsToRenderMode(FLAGS_render_pose, multipleView),
poseModel, !FLAGS_disable_blending, (float)FLAGS_alpha_pose, (float)FLAGS_alpha_heatmap,
FLAGS_part_to_show, FLAGS_model_folder, heatMapTypes, heatMapScale, FLAGS_part_candidates,
(float)FLAGS_render_threshold, FLAGS_number_people_max, enableGoogleLogging};
// Face configuration (use op::WrapperStructFace{} to disable it)
const op::WrapperStructFace wrapperStructFace{
FLAGS_face, faceNetInputSize, op::flagsToRenderMode(FLAGS_face_render, multipleView, FLAGS_render_pose),
(float)FLAGS_face_alpha_pose, (float)FLAGS_face_alpha_heatmap, (float)FLAGS_face_render_threshold};
// Hand configuration (use op::WrapperStructHand{} to disable it)
const op::WrapperStructHand wrapperStructHand{
FLAGS_hand, handNetInputSize, FLAGS_hand_scale_number, (float)FLAGS_hand_scale_range, FLAGS_hand_tracking,
op::flagsToRenderMode(FLAGS_hand_render, multipleView, FLAGS_render_pose), (float)FLAGS_hand_alpha_pose,
(float)FLAGS_hand_alpha_heatmap, (float)FLAGS_hand_render_threshold};
// Producer (use default to disable any input)
const op::WrapperStructInput wrapperStructInput{
producerSharedPtr, FLAGS_frame_first, FLAGS_frame_last, FLAGS_process_real_time, FLAGS_frame_flip,
FLAGS_frame_rotate, FLAGS_frames_repeat};
// Extra functionality configuration (use op::WrapperStructExtra{} to disable it)
const op::WrapperStructExtra wrapperStructExtra{
FLAGS_3d, FLAGS_3d_min_views, FLAGS_identification, FLAGS_tracking, FLAGS_ik_threads};
// Consumer (comment or use default argument to disable any output)
const op::WrapperStructOutput wrapperStructOutput{
op::flagsToDisplayMode(FLAGS_display, FLAGS_3d), !FLAGS_no_gui_verbose, FLAGS_fullscreen,
FLAGS_write_keypoint, op::stringToDataFormat(FLAGS_write_keypoint_format), FLAGS_write_json,
FLAGS_write_coco_json, FLAGS_write_coco_foot_json, FLAGS_write_images, FLAGS_write_images_format,
FLAGS_write_video, FLAGS_camera_fps, FLAGS_write_heatmaps, FLAGS_write_heatmaps_format,
FLAGS_write_video_adam, FLAGS_write_bvh, FLAGS_udp_host, FLAGS_udp_port};
// Configure wrapper
opWrapper.configure(wrapperStructPose, wrapperStructFace, wrapperStructHand, wrapperStructExtra,
wrapperStructInput, wrapperStructOutput);
// Set to single-thread running (to debug and/or reduce latency)
if (FLAGS_disable_multi_thread)
opWrapper.disableMultiThreading();
// Start processing
// Two different ways of running the program in a multi-threaded environment
op::log("Starting thread(s)...", op::Priority::High);
// Start, run & stop threads - it blocks this thread until all others have finished
opWrapper.exec();
// // Option b) Keeping this thread free in case you want to do something else meanwhile, e.g. profiling the GPU
// memory
// // VERY IMPORTANT NOTE: if OpenCV is compiled with Qt support, this option will not work. Qt needs the main
// // thread to plot visual results, so the final GUI (which uses OpenCV) would return an exception similar to:
// // `QMetaMethod::invoke: Unable to invoke methods with return values in queued connections`
// // Start threads
// opWrapper.start();
// // Profile used GPU memory
// // 1: wait ~10sec so the memory has been totally loaded on GPU
// // 2: profile the GPU memory
// const auto sleepTimeMs = 10;
// for (auto i = 0 ; i < 10000/sleepTimeMs && opWrapper.isRunning() ; i++)
// std::this_thread::sleep_for(std::chrono::milliseconds{sleepTimeMs});
// op::Profiler::profileGpuMemory(__LINE__, __FUNCTION__, __FILE__);
// // Keep program alive while running threads
// while (opWrapper.isRunning())
// std::this_thread::sleep_for(std::chrono::milliseconds{sleepTimeMs});
// // Stop and join threads
// op::log("Stopping thread(s)", op::Priority::High);
// opWrapper.stop();
// Measuring total time
const auto now = std::chrono::high_resolution_clock::now();
const auto totalTimeSec = (double)std::chrono::duration_cast<std::chrono::nanoseconds>(now-timerBegin).count()
* 1e-9;
const auto message = "OpenPose demo successfully finished. Total time: "
+ std::to_string(totalTimeSec) + " seconds.";
op::log(message, op::Priority::High);
// Return successful message
return 0;
}
catch (const std::exception& e)
{
op::error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return -1;
}
}
int main(int argc, char *argv[])
{
// Parsing command line flags
gflags::ParseCommandLineFlags(&argc, &argv, true);
// Running openPoseDemo
return openPoseDemo();
}
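For reference, the commented-out "Option b" above condenses to the following hedged sketch (same `start`/`isRunning`/`stop` calls on the already-configured `opWrapper`, minus the GPU-memory profiling):

```cpp
// Option b) Keep the main thread free while OpenPose runs on its own threads.
// Note (from the comments above): this does not work if OpenCV was compiled with Qt support.
opWrapper.start();
while (opWrapper.isRunning())
    std::this_thread::sleep_for(std::chrono::milliseconds{10});
// Stop and join threads
opWrapper.stop();
```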
...@@ -2,7 +2,6 @@ ...@@ -2,7 +2,6 @@
#define OPENPOSE_HAND_HAND_EXTRACTOR_HPP #define OPENPOSE_HAND_HAND_EXTRACTOR_HPP
#include <atomic> #include <atomic>
#include <thread>
#include <opencv2/core/core.hpp> // cv::Mat #include <opencv2/core/core.hpp> // cv::Mat
#include <openpose/core/common.hpp> #include <openpose/core/common.hpp>
#include <openpose/core/enumClasses.hpp> #include <openpose/core/enumClasses.hpp>
......
...@@ -2,6 +2,8 @@ ...@@ -2,6 +2,8 @@
#define OPENPOSE_NET_HEADERS_HPP #define OPENPOSE_NET_HEADERS_HPP
// net module // net module
#include <openpose/net/bodyPartConnectorBase.hpp>
#include <openpose/net/bodyPartConnectorCaffe.hpp>
#include <openpose/net/maximumBase.hpp> #include <openpose/net/maximumBase.hpp>
#include <openpose/net/maximumCaffe.hpp> #include <openpose/net/maximumCaffe.hpp>
#include <openpose/net/net.hpp> #include <openpose/net/net.hpp>
......
...@@ -2,8 +2,6 @@ ...@@ -2,8 +2,6 @@
#define OPENPOSE_POSE_HEADERS_HPP #define OPENPOSE_POSE_HEADERS_HPP
// pose module // pose module
#include <openpose/pose/bodyPartConnectorBase.hpp>
#include <openpose/pose/bodyPartConnectorCaffe.hpp>
#include <openpose/pose/enumClasses.hpp> #include <openpose/pose/enumClasses.hpp>
#include <openpose/pose/poseCpuRenderer.hpp> #include <openpose/pose/poseCpuRenderer.hpp>
#include <openpose/pose/poseExtractor.hpp> #include <openpose/pose/poseExtractor.hpp>
......
...@@ -2,7 +2,6 @@ ...@@ -2,7 +2,6 @@
#define OPENPOSE_POSE_POSE_EXTRACTOR_NET_HPP #define OPENPOSE_POSE_POSE_EXTRACTOR_NET_HPP
#include <atomic> #include <atomic>
#include <thread>
#include <openpose/core/common.hpp> #include <openpose/core/common.hpp>
#include <openpose/core/enumClasses.hpp> #include <openpose/core/enumClasses.hpp>
#include <openpose/pose/poseParameters.hpp> #include <openpose/pose/poseParameters.hpp>
......
#ifndef OPENPOSE_PRODUCER_PRODUCER_HPP #ifndef OPENPOSE_PRODUCER_PRODUCER_HPP
#define OPENPOSE_PRODUCER_PRODUCER_HPP #define OPENPOSE_PRODUCER_PRODUCER_HPP
#include <chrono>
#include <opencv2/core/core.hpp> // cv::Mat #include <opencv2/core/core.hpp> // cv::Mat
#include <opencv2/highgui/highgui.hpp> // capProperties of OpenCV #include <opencv2/highgui/highgui.hpp> // capProperties of OpenCV
#include <openpose/core/common.hpp> #include <openpose/core/common.hpp>
......
...@@ -3,7 +3,6 @@ ...@@ -3,7 +3,6 @@
#include <atomic> #include <atomic>
#include <mutex> #include <mutex>
#include <thread>
#include <openpose/core/common.hpp> #include <openpose/core/common.hpp>
#include <openpose/producer/videoCaptureReader.hpp> #include <openpose/producer/videoCaptureReader.hpp>
......
...@@ -2,7 +2,6 @@ ...@@ -2,7 +2,6 @@
#define OPENPOSE_THREAD_THREAD_HPP #define OPENPOSE_THREAD_THREAD_HPP
#include <atomic> #include <atomic>
#include <thread>
#include <openpose/core/common.hpp> #include <openpose/core/common.hpp>
#include <openpose/thread/subThread.hpp> #include <openpose/thread/subThread.hpp>
#include <openpose/thread/worker.hpp> #include <openpose/thread/worker.hpp>
......
...@@ -35,8 +35,6 @@ namespace op ...@@ -35,8 +35,6 @@ namespace op
// Implementation // Implementation
#include <chrono>
#include <thread>
namespace op namespace op
{ {
template<typename TDatums, typename TDatumsNoPtr> template<typename TDatums, typename TDatumsNoPtr>
......
...@@ -36,8 +36,6 @@ namespace op ...@@ -36,8 +36,6 @@ namespace op
// Implementation // Implementation
#include <chrono>
#include <thread>
namespace op namespace op
{ {
template<typename TDatums> template<typename TDatums>
......
#ifndef OPENPOSE_WRAPPER_ENUM_CLASSES_HPP
#define OPENPOSE_WRAPPER_ENUM_CLASSES_HPP
namespace op
{
enum class WorkerType : unsigned char
{
Input = 0,
// PreProcessing,
PostProcessing,
Output,
Size,
};
}
#endif // OPENPOSE_WRAPPER_ENUM_CLASSES_HPP
...@@ -2,6 +2,7 @@ ...@@ -2,6 +2,7 @@
#define OPENPOSE_WRAPPER_HEADERS_HPP #define OPENPOSE_WRAPPER_HEADERS_HPP
// wrapper module // wrapper module
#include <openpose/wrapper/enumClasses.hpp>
#include <openpose/wrapper/wrapper.hpp> #include <openpose/wrapper/wrapper.hpp>
#include <openpose/wrapper/wrapperAuxiliary.hpp> #include <openpose/wrapper/wrapperAuxiliary.hpp>
#include <openpose/wrapper/wrapperStructFace.hpp> #include <openpose/wrapper/wrapperStructFace.hpp>
......
...@@ -57,7 +57,7 @@ namespace op ...@@ -57,7 +57,7 @@ namespace op
* Since all the elements of the struct are public, they can also be manually filled. * Since all the elements of the struct are public, they can also be manually filled.
*/ */
WrapperStructFace(const bool enable = false, const Point<int>& netInputSize = Point<int>{368, 368}, WrapperStructFace(const bool enable = false, const Point<int>& netInputSize = Point<int>{368, 368},
const RenderMode renderMode = RenderMode::None, const RenderMode renderMode = RenderMode::Gpu,
const float alphaKeypoint = FACE_DEFAULT_ALPHA_KEYPOINT, const float alphaKeypoint = FACE_DEFAULT_ALPHA_KEYPOINT,
const float alphaHeatMap = FACE_DEFAULT_ALPHA_HEAT_MAP, const float alphaHeatMap = FACE_DEFAULT_ALPHA_HEAT_MAP,
const float renderThreshold = 0.4f); const float renderThreshold = 0.4f);
......
...@@ -78,7 +78,7 @@ namespace op ...@@ -78,7 +78,7 @@ namespace op
*/ */
WrapperStructHand(const bool enable = false, const Point<int>& netInputSize = Point<int>{368, 368}, WrapperStructHand(const bool enable = false, const Point<int>& netInputSize = Point<int>{368, 368},
const int scalesNumber = 1, const float scaleRange = 0.4f, const int scalesNumber = 1, const float scaleRange = 0.4f,
const bool tracking = false, const RenderMode renderMode = RenderMode::None, const bool tracking = false, const RenderMode renderMode = RenderMode::Gpu,
const float alphaKeypoint = HAND_DEFAULT_ALPHA_KEYPOINT, const float alphaKeypoint = HAND_DEFAULT_ALPHA_KEYPOINT,
const float alphaHeatMap = HAND_DEFAULT_ALPHA_HEAT_MAP, const float alphaHeatMap = HAND_DEFAULT_ALPHA_HEAT_MAP,
const float renderThreshold = 0.2f); const float renderThreshold = 0.2f);
......
...@@ -173,10 +173,10 @@ namespace op ...@@ -173,10 +173,10 @@ namespace op
* Since all the elements of the struct are public, they can also be manually filled. * Since all the elements of the struct are public, they can also be manually filled.
*/ */
WrapperStructPose(const bool enable = true, const Point<int>& netInputSize = Point<int>{656, 368}, WrapperStructPose(const bool enable = true, const Point<int>& netInputSize = Point<int>{656, 368},
const Point<int>& outputSize = Point<int>{1280, 720}, const Point<int>& outputSize = Point<int>{-1, -1},
const ScaleMode keypointScale = ScaleMode::InputResolution, const ScaleMode keypointScale = ScaleMode::InputResolution,
const int gpuNumber = -1, const int gpuNumberStart = 0, const int scalesNumber = 1, const int gpuNumber = -1, const int gpuNumberStart = 0, const int scalesNumber = 1,
const float scaleGap = 0.15f, const RenderMode renderMode = RenderMode::None, const float scaleGap = 0.15f, const RenderMode renderMode = RenderMode::Gpu,
const PoseModel poseModel = PoseModel::BODY_25, const bool blendOriginalFrame = true, const PoseModel poseModel = PoseModel::BODY_25, const bool blendOriginalFrame = true,
const float alphaKeypoint = POSE_DEFAULT_ALPHA_KEYPOINT, const float alphaKeypoint = POSE_DEFAULT_ALPHA_KEYPOINT,
const float alphaHeatMap = POSE_DEFAULT_ALPHA_HEAT_MAP, const float alphaHeatMap = POSE_DEFAULT_ALPHA_HEAT_MAP,
......
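Since these structs expose their fields publicly (as the comments above note), a configuration can also be adjusted after default construction. A hedged sketch, assuming the constructor arguments map to identically named public members:

```cpp
#include <openpose/headers.hpp>

// Hedged sketch: start from the recommended defaults and override a few fields.
// Member names are assumed to match the constructor arguments shown above.
op::WrapperStructPose makePoseConfig()
{
    op::WrapperStructPose wrapperStructPose{};                 // recommended defaults
    wrapperStructPose.netInputSize = op::Point<int>{656, 368}; // matches the default above
    wrapperStructPose.renderMode = op::RenderMode::Gpu;        // the new default in this commit
    // The same pattern applies to op::WrapperStructFace and op::WrapperStructHand.
    return wrapperStructPose;
}
```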
This diff has been collapsed.
// #include <thread>
#include <numeric> // std::accumulate #include <numeric> // std::accumulate
#ifdef USE_CERES #ifdef USE_CERES
#include <ceres/ceres.h> #include <ceres/ceres.h>
......
#include <fstream> #include <fstream>
#include <numeric> // std::accumulate #include <numeric> // std::accumulate
#include <thread>
#include <opencv2/core/core.hpp> #include <opencv2/core/core.hpp>
#ifdef USE_EIGEN #ifdef USE_EIGEN
#include <Eigen/Dense> #include <Eigen/Dense>
......
#include <chrono>
#include <thread>
#include <opencv2/highgui/highgui.hpp> // cv::waitKey #include <opencv2/highgui/highgui.hpp> // cv::waitKey
#include <openpose/filestream/fileStream.hpp> #include <openpose/filestream/fileStream.hpp>
#include <openpose/utilities/check.hpp> #include <openpose/utilities/check.hpp>
......
#include <chrono>
#include <cstdio> // std::snprintf #include <cstdio> // std::snprintf
#include <limits> // std::numeric_limits #include <limits> // std::numeric_limits
#include <openpose/utilities/fastMath.hpp> #include <openpose/utilities/fastMath.hpp>
......
set(CMAKE_CXX_SOURCE_FILE_EXTENSIONS C;M;c++;cc;cpp;cxx;mm;CPP;cl) set(CMAKE_CXX_SOURCE_FILE_EXTENSIONS C;M;c++;cc;cpp;cxx;mm;CPP;cl)
set(SOURCES_OP_NET set(SOURCES_OP_NET
bodyPartConnectorBase.cpp
bodyPartConnectorBase.cu
bodyPartConnectorCaffe.cpp
maximumBase.cpp maximumBase.cpp
maximumBase.cu maximumBase.cu
maximumCaffe.cpp maximumCaffe.cpp
......
#include <openpose/utilities/check.hpp> #include <openpose/utilities/check.hpp>
#include <openpose/utilities/fastMath.hpp> #include <openpose/utilities/fastMath.hpp>
#include <openpose/pose/poseParameters.hpp> #include <openpose/pose/poseParameters.hpp>
#include <openpose/pose/bodyPartConnectorBase.hpp> #include <openpose/net/bodyPartConnectorBase.hpp>
namespace op namespace op
{ {
......
#include <openpose/gpu/cuda.hpp> #include <openpose/gpu/cuda.hpp>
#include <openpose/pose/poseParameters.hpp> #include <openpose/pose/poseParameters.hpp>
#include <openpose/utilities/fastMath.hpp> #include <openpose/utilities/fastMath.hpp>
#include <openpose/pose/bodyPartConnectorBase.hpp> #include <openpose/net/bodyPartConnectorBase.hpp>
namespace op namespace op
{ {
...@@ -31,7 +31,7 @@ namespace op ...@@ -31,7 +31,7 @@ namespace op
const auto vectorAToBNormX = vectorAToBX/vectorNorm; const auto vectorAToBNormX = vectorAToBX/vectorNorm;
const auto vectorAToBNormY = vectorAToBY/vectorNorm; const auto vectorAToBNormY = vectorAToBY/vectorNorm;
auto sum = 0.; auto sum = T(0.);
auto count = 0; auto count = 0;
const auto vectorAToBXInLine = vectorAToBX/numberPointsInLine; const auto vectorAToBXInLine = vectorAToBX/numberPointsInLine;
const auto vectorAToBYInLine = vectorAToBY/numberPointsInLine; const auto vectorAToBYInLine = vectorAToBY/numberPointsInLine;
...@@ -49,7 +49,7 @@ namespace op ...@@ -49,7 +49,7 @@ namespace op
} }
// Return PAF score // Return PAF score
if (count/(float)numberPointsInLine > interMinAboveThreshold) if (count/T(numberPointsInLine) > interMinAboveThreshold)
return sum/count; return sum/count;
else else
{ {
...@@ -141,7 +141,7 @@ namespace op ...@@ -141,7 +141,7 @@ namespace op
maxPeaks, numberBodyPartPairs, heatMapSize.x, heatMapSize.y, interThreshold, maxPeaks, numberBodyPartPairs, heatMapSize.x, heatMapSize.y, interThreshold,
interMinAboveThreshold); interMinAboveThreshold);
// pairScoresCpu <-- pairScoresGpu // pairScoresCpu <-- pairScoresGpu
cudaMemcpy(pairScoresCpu.getPtr(), pairScoresGpuPtr, totalComputations * sizeof(float), cudaMemcpy(pairScoresCpu.getPtr(), pairScoresGpuPtr, totalComputations * sizeof(T),
cudaMemcpyDeviceToHost); cudaMemcpyDeviceToHost);
// New code // New code
......
...@@ -4,9 +4,9 @@ ...@@ -4,9 +4,9 @@
#ifdef USE_CUDA #ifdef USE_CUDA
#include <openpose/gpu/cuda.hpp> #include <openpose/gpu/cuda.hpp>
#endif #endif
#include <openpose/pose/bodyPartConnectorBase.hpp> #include <openpose/net/bodyPartConnectorBase.hpp>
#include <openpose/pose/poseParameters.hpp> #include <openpose/pose/poseParameters.hpp>
#include <openpose/pose/bodyPartConnectorCaffe.hpp> #include <openpose/net/bodyPartConnectorCaffe.hpp>
namespace op namespace op
{ {
......
set(SOURCES_OP_POSE set(SOURCES_OP_POSE
bodyPartConnectorBase.cpp
bodyPartConnectorBase.cu
bodyPartConnectorCaffe.cpp
defineTemplates.cpp defineTemplates.cpp
poseCpuRenderer.cpp poseCpuRenderer.cpp
poseExtractor.cpp poseExtractor.cpp
......
...@@ -2,10 +2,10 @@ ...@@ -2,10 +2,10 @@
#include <caffe/blob.hpp> #include <caffe/blob.hpp>
#endif #endif
#include <openpose/gpu/cuda.hpp> #include <openpose/gpu/cuda.hpp>
#include <openpose/net/bodyPartConnectorCaffe.hpp>
#include <openpose/net/netCaffe.hpp> #include <openpose/net/netCaffe.hpp>
#include <openpose/net/nmsCaffe.hpp> #include <openpose/net/nmsCaffe.hpp>
#include <openpose/net/resizeAndMergeCaffe.hpp> #include <openpose/net/resizeAndMergeCaffe.hpp>
#include <openpose/pose/bodyPartConnectorCaffe.hpp>
#include <openpose/pose/poseParameters.hpp> #include <openpose/pose/poseParameters.hpp>
#include <openpose/utilities/check.hpp> #include <openpose/utilities/check.hpp>
#include <openpose/utilities/fastMath.hpp> #include <openpose/utilities/fastMath.hpp>
......
#include <thread>
#include <openpose/utilities/check.hpp> #include <openpose/utilities/check.hpp>
#include <openpose/utilities/fastMath.hpp> #include <openpose/utilities/fastMath.hpp>
#include <openpose/producer/producer.hpp> #include <openpose/producer/producer.hpp>
......
#ifdef USE_FLIR_CAMERA
#include <thread>
#endif
#include <opencv2/imgproc/imgproc.hpp> // cv::undistort, cv::initUndistortRectifyMap #include <opencv2/imgproc/imgproc.hpp> // cv::undistort, cv::initUndistortRectifyMap
#ifdef USE_FLIR_CAMERA #ifdef USE_FLIR_CAMERA
#include <Spinnaker.h> #include <Spinnaker.h>
......
...@@ -214,7 +214,7 @@ namespace op ...@@ -214,7 +214,7 @@ namespace op
} }
} }
const auto DISCONNETED_THRESHOLD = 15; const auto DISCONNETED_THRESHOLD = 100;
void WebcamReader::bufferingThread() void WebcamReader::bufferingThread()
{ {
try try
......
#include <thread>
#include <openpose/tracking/pyramidalLK.hpp> #include <openpose/tracking/pyramidalLK.hpp>
#include <openpose/utilities/fastMath.hpp> #include <openpose/utilities/fastMath.hpp>
#include <openpose/tracking/personIdExtractor.hpp> #include <openpose/tracking/personIdExtractor.hpp>
......
#include <iostream> #include <iostream>
#include <thread>
#include <opencv2/imgproc/imgproc.hpp> // cv::resize #include <opencv2/imgproc/imgproc.hpp> // cv::resize
#include <openpose/tracking/personTracker.hpp> #include <openpose/tracking/personTracker.hpp>
#include <openpose/utilities/fastMath.hpp> #include <openpose/utilities/fastMath.hpp>
......
#include <chrono>
#include <map> #include <map>
#include <mutex> #include <mutex>
#include <thread>
#include <openpose/utilities/errorAndLog.hpp> #include <openpose/utilities/errorAndLog.hpp>
#include <openpose/utilities/profiler.hpp> #include <openpose/utilities/profiler.hpp>
......
...@@ -180,4 +180,17 @@ namespace op ...@@ -180,4 +180,17 @@ namespace op
error(e.what(), __LINE__, __FUNCTION__, __FILE__); error(e.what(), __LINE__, __FUNCTION__, __FILE__);
} }
} }
void threadIdPP(unsigned long long& threadId, const bool multiThreadEnabled)
{
try
{
if (multiThreadEnabled)
threadId++;
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
}
} }