Commit de117486 authored by G gineshidalgo99

Added flags body+upsampling_scale, new tutorial example

Parent e635967c
......@@ -273,7 +273,9 @@ if (WIN32)
endif (WIN32)
# Unity
option(BUILD_UNITY_SUPPORT "Build OpenPose as a Unity plugin." OFF)
if (WIN32)
option(BUILD_UNITY_SUPPORT "Build OpenPose as a Unity plugin." OFF)
endif (WIN32)
# Build as shared library
option(BUILD_SHARED_LIBS "Build as shared lib." ON)
......
......@@ -15,7 +15,7 @@ Note: Currently using [travis-matrix-badges](https://github.com/bjfish/travis-ma
[**OpenPose**](https://github.com/CMU-Perceptual-Computing-Lab/openpose) represents the **first real-time multi-person system to jointly detect human body, hand, facial, and foot keypoints (in total 135 keypoints) on single images**.
It is **authored by [Gines Hidalgo](https://www.gineshidalgo.com), [Zhe Cao](https://people.eecs.berkeley.edu/~zhecao), [Tomas Simon](http://www.cs.cmu.edu/~tsimon), [Shih-En Wei](https://scholar.google.com/citations?user=sFQD3k4AAAAJ&hl=en), [Hanbyul Joo](https://jhugestar.github.io), and [Yaser Sheikh](http://www.cs.cmu.edu/~yaser)**. Currently, it is being **maintained by [Gines Hidalgo](https://www.gineshidalgo.com) and [Yaadhav Raaj](https://www.linkedin.com/in/yaadhavraaj)**. In addition, OpenPose would not be possible without the [**CMU Panoptic Studio dataset**](http://domedb.perception.cs.cmu.edu). We would also like to thank all the people who helped OpenPose in any way. The main contributors are listed in [doc/contributors.md](doc/contributors.md).
It is **authored by [Gines Hidalgo](https://www.gineshidalgo.com), [Zhe Cao](https://people.eecs.berkeley.edu/~zhecao), [Tomas Simon](http://www.cs.cmu.edu/~tsimon), [Shih-En Wei](https://scholar.google.com/citations?user=sFQD3k4AAAAJ&hl=en), [Hanbyul Joo](https://jhugestar.github.io), and [Yaser Sheikh](http://www.cs.cmu.edu/~yaser)**. Currently, it is being **maintained by [Gines Hidalgo](https://www.gineshidalgo.com) and [Yaadhav Raaj](https://www.raaj.tech)**. In addition, OpenPose would not be possible without the [**CMU Panoptic Studio dataset**](http://domedb.perception.cs.cmu.edu). We would also like to thank all the people who helped OpenPose in any way. The main contributors are listed in [doc/contributors.md](doc/contributors.md).
<!-- The [original CVPR 2017 repo](https://github.com/ZheC/Multi-Person-Pose-Estimation) includes Matlab and Python versions, as well as the training code. The body pose estimation work is based on [the original ECCV 2016 demo](https://github.com/CMU-Perceptual-Computing-Lab/caffe_rtpose). -->
......
......@@ -11,7 +11,7 @@ OpenPose is authored by [Gines Hidalgo](https://www.gineshidalgo.com/), [Zhe Cao
### Contributors
We would also like to thank the following people, who have contributed significantly to OpenPose:
1. [Yaadhav Raaj](https://www.linkedin.com/in/yaadhavraaj): OpenPose maintainer, CPU version, OpenCL version, Mac version, Python API, and person tracker.
1. [Yaadhav Raaj](https://www.raaj.tech): OpenPose maintainer, CPU version, OpenCL version, Mac version, Python API, and person tracker.
2. [Bikramjot Hanzra](https://www.linkedin.com/in/bikz05): Former OpenPose maintainer, CMake (Ubuntu and Windows) version, and initial Travis Build version for Ubuntu.
3. [Donglai Xiang](https://xiangdonglai.github.io): Camera calibration toolbox improvement, including the implementation of its bundle adjustment algorithm.
4. [Luis Fernando Fraga](https://github.com/fragalfernando): Implementation of the Lucas-Kanade algorithm and person ID extractor.
......
......@@ -166,11 +166,12 @@ Each flag is divided into flag name, default value, and description.
- DEFINE_double(fps_max, -1., "Maximum processing frame rate. By default (-1), OpenPose will process frames as fast as possible. Example usage: If OpenPose is displaying images too quickly, this can reduce the speed so the user can analyze better each frame from the GUI.");
4. OpenPose Body Pose
- DEFINE_bool(body_disable, false, "Disable body keypoint detection. Option only possible for faster (but less accurate) face keypoint detection.");
- DEFINE_int32(body, 1, "Select 0 to disable body keypoint detection (e.g., for faster but less accurate face keypoint detection, custom hand detector, etc.), 1 (default) for body keypoint estimation, and 2 to disable its internal body pose estimation network but still run the greedy association parsing algorithm.");
- DEFINE_string(model_pose, "BODY_25", "Model to be used. E.g., `COCO` (18 keypoints), `MPI` (15 keypoints, ~10% faster), `MPI_4_layers` (15 keypoints, even faster but less accurate).");
- DEFINE_string(net_resolution, "-1x368", "Multiples of 16. If it is increased, the accuracy potentially increases. If it is decreased, the speed increases. For maximum speed-accuracy balance, it should keep the closest aspect ratio possible to the images or videos to be processed. Using `-1` in any of the dimensions, OP will choose the optimal aspect ratio depending on the user's input value. E.g., the default `-1x368` is equivalent to `656x368` in 16:9 resolutions, e.g., full HD (1920x1080) and HD (1280x720) resolutions.");
- DEFINE_int32(scale_number, 1, "Number of scales to average.");
- DEFINE_double(scale_gap, 0.25, "Scale gap between scales. No effect unless scale_number > 1. Initial scale is always 1. If you want to change the initial scale, you actually want to multiply the `net_resolution` by your desired initial scale.");
- DEFINE_double(upsampling_ratio, 0., "Upsampling ratio between the `net_resolution` and the output net results. A value less than or equal to 0 (default) will use the network default value (recommended).");
5. OpenPose Body Pose Heatmaps and Part Candidates
- DEFINE_bool(heatmaps_add_parts, false, "If true, it will fill the op::Datum::poseHeatMaps array with the body part heatmaps, and analogously face & hand heatmaps to op::Datum::faceHeatMaps & op::Datum::handHeatMaps. If more than one `add_heatmaps_X` flag is enabled, it will place them in sequential memory order: body parts + bkg + PAFs. It will follow the order of POSE_BODY_PART_MAPPING in `src/openpose/pose/poseParameters.cpp`. Program speed will considerably decrease. Not required for OpenPose, enable it only if you intend to explicitly use this information later.");
......
......@@ -109,7 +109,7 @@ It should be similar to the following image.
You can copy and modify the OpenPose 3-D demo to use any camera brand by:
1. Optionally, turn off the `WITH_FLIR_CAMERA` option when configuring CMake.
2. Copy `examples/tutorial_api_cpp/7_synchronous_custom_input.cpp` (or 9_synchronous_custom_all.cpp).
2. Copy `examples/tutorial_api_cpp/13_synchronous_custom_input.cpp` (or `17_synchronous_custom_all_and_datum.cpp`).
3. Modify `WUserInput` and add your custom code there. Your code should fill `Datum::name`, `Datum::cameraMatrix`, `Datum::cvInputData`, and `Datum::cvOutputData` (fill cvOutputData = cvInputData).
4. Remove `WUserPostProcessing` and `WUserOutput` (unless you want to have your custom post-processing and/or output).
......
......@@ -277,63 +277,67 @@ OpenPose Library - Release Notes
8. Given that display can be disabled in all examples, they all have been added to the Travis build so they can be tested.
7. Added a virtual destructor to almost all classes, so they can be inherited. Exceptions (for performance reasons): Array, Point, Rectangle, CvMatToOpOutput, OpOutputToCvMat.
8. Auxiliary classes in errorAndLog turned into namespaces (Profiler must be kept as class to allow static parameters).
9. Added flag `--frame_step` to allow the user to select the step or gap between processed frames. E.g., `--frame_step 5` would read and process frames 0, 5, 10, etc.
10. Added sanity checks to avoid `--frame_last` to be smaller than `--frame_first` or higher than the number of total frames.
11. Array improvements for Pybind11 compatibility:
9. Added flags:
1. Added flag `--frame_step` to allow the user to select the step or gap between processed frames. E.g., `--frame_step 5` would read and process frames 0, 5, 10, etc.
    2. Previously hardcoded `COCO_CHALLENGE` variable turned into the user-configurable flag `--maximize_positives`.
3. Added flag `--verbose` to plot the progress.
4. Added flag `--fps_max` to limit the maximum processing frame rate of OpenPose (useful to display results at a maximum desired speed).
    5. Added sanity checks to avoid `--frame_last` being smaller than `--frame_first` or higher than the total number of frames.
    6. Added the flags `--face_detector` and `--hand_detector`, which enable the user to select the face/hand rectangle detector used for the subsequent face/hand keypoint detection. It includes OpenCV (for face), and also allows the user to provide their own input. Flag `--hand_tracking` is removed and integrated into this flag too.
    7. Added the flag `--upsampling_ratio`, which controls the upsampling that OpenPose will perform on the frame before the greedy association parsing algorithm.
8. Added the flag `--body` (replacing `--body_disable`), which adds the possibility of disabling the OpenPose pose network but still running the greedy association parsing algorithm (on top of the user heatmaps, see the associated `tutorial_api_cpp` example).
10. Array improvements for Pybind11 compatibility:
1. Array::getStride() to get step size of each dimension of the array.
2. Array::getPybindPtr() to get an editable const pointer.
3. Array::pData as binding of spData.
4. Array::Array that takes as input a pointer, so it does not re-allocate memory.
12. Producer defined inside Wrapper rather than being defined on each example.
13. Reduced many Visual Studio warnings (e.g., uncontrolled conversions between types).
14. Added new keypoint-related auxiliary functions in `utilities/keypoints.hpp`.
15. Function `resizeFixedAspectRatio` can take already allocated memory (e.g., faster if target is an Array<T> object, no intermediate cv::Mat required).
16. Added compatibility for OpenCV 4.0, while preserving 2.4.X and 3.X compatibility.
17. Improved and added several functions to `utilities/keypoints.hpp` and Array to simplify keypoint post-processing.
18. Removed warnings from Spinnaker SDK at compiling time.
19. All bash scripts incorporate `#!/bin/bash` to tell the terminal that they are bash scripts.
20. Added flag `--verbose` to plot the progress.
21. Added find_package(Protobuf) to allow specific versions of Protobuf.
22. Video saving improvements:
11. Producer defined inside Wrapper rather than being defined on each example.
12. Reduced many Visual Studio warnings (e.g., uncontrolled conversions between types).
13. Added new keypoint-related auxiliary functions in `utilities/keypoints.hpp`.
14. Function `resizeFixedAspectRatio` can take already allocated memory (e.g., faster if target is an Array<T> object, no intermediate cv::Mat required).
15. Added compatibility for OpenCV 4.0, while preserving 2.4.X and 3.X compatibility.
16. Improved and added several functions to `utilities/keypoints.hpp` and Array to simplify keypoint post-processing.
17. Removed warnings from Spinnaker SDK at compiling time.
18. All bash scripts incorporate `#!/bin/bash` to tell the terminal that they are bash scripts.
19. Added find_package(Protobuf) to allow specific versions of Protobuf.
20. Video saving improvements:
1. Video (`--write_video`) can be generated from images (`--image_dir`), as long as they maintain the same resolution.
2. Video with the 3D output can be saved with the new `--write_video_3d` flag.
3. Added the capability of saving videos in MP4 format (by using the ffmpeg library).
4. Added the flag `write_video_with_audio` to enable saving these output MP4 videos with audio.
23. Added `--fps_max` flag to limit the maximum processing frame rate of OpenPose (useful to display results at a maximum desired speed).
24. Frame undistortion can be applied not only to FLIR cameras, but also to all other input sources (image, webcam, video, etc.).
25. Calibration improvements:
21. Frame undistortion can be applied not only to FLIR cameras, but also to all other input sources (image, webcam, video, etc.).
22. Calibration improvements:
    1. Improved chessboard orientation detection, now more robust and with fewer errors.
    2. Triangulation functions (triangulate and triangulateWithOptimization) made public, so calibration can use them for bundle adjustment.
3. Added bundle adjustment refinement for camera extrinsic calibration.
4. Added `CameraMatrixInitial` field into the XML calibration files to keep the information of the original camera extrinsic parameters when bundle adjustment is run.
26. Added Mac OpenCL compatibility.
27. Added documentation for Nvidia TX2 with JetPack 3.3.
28. Added Travis build check for several configurations: Ubuntu (14/16)/Mac/Windows, CPU/CUDA/OpenCL, with/without Python, and Release/Debug.
29. Assigned 755 access to all sh scripts (some of them were only 644).
30. Added the flags `--prototxt_path` and `--caffemodel_path` to allow custom ProtoTxt and CaffeModel paths.
31. Replaced the old Python wrapper for an updated Pybind11 wrapper version, that includes all the functionality of the C++ API.
32. Function getFilesOnDirectory() can extra all basic image file types at once without requiring to manually enumerate them.
33. Added the flags `--face_detector` and `--hand_detector`, that enable the user to select the face/hand rectangle detector that is used for the later face/hand keypoint detection. It includes OpenCV (for face), and also allows the user to provide its own input. Flag `--hand_tracking` is removed and integrated into this flag too.
34. Maximum queue size per OpenPose thread is configurable through the Wrapper class.
35. Added pre-processing capabilities to Wrapper (WorkerType::PreProcessing), which will be run right after the image has been read.
36. Removed boost::shared_ptr and caffe::Blob dependencies from the headers. No 3rdparty dependencies left on headers (except dim3 for CUDA).
37. Added `poseNetOutput` to Datum so that user can introduce his custom network output.
23. Added Mac OpenCL compatibility.
24. Added documentation for Nvidia TX2 with JetPack 3.3.
25. Added Travis build check for several configurations: Ubuntu (14/16)/Mac/Windows, CPU/CUDA/OpenCL, with/without Python, and Release/Debug.
26. Assigned 755 access to all sh scripts (some of them were only 644).
27. Replaced the old Python wrapper with an updated Pybind11 wrapper that includes all the functionality of the C++ API.
28. Function getFilesOnDirectory() can extract all basic image file types at once without requiring the user to manually enumerate them.
29. Maximum queue size per OpenPose thread is configurable through the Wrapper class.
30. Added pre-processing capabilities to Wrapper (WorkerType::PreProcessing), which will be run right after the image has been read.
31. Removed boost::shared_ptr and caffe::Blob dependencies from the headers. No 3rdparty dependencies left on headers (except dim3 for CUDA).
32. Added Array `poseNetOutput` to Datum so that the user can introduce a custom network output.
2. Functions or parameters renamed:
1. By default, python example `tutorial_developer/python_2_pose_from_heatmaps.py` was using 2 scales starting at -1x736, changed to 1 scale at -1x368.
2. WrapperStructPose default parameters changed to match those of the OpenPose demo binary.
3. WrapperT.configure() changed from 1 function that requires all arguments to individual functions that take 1 argument each.
4. Added `Forward` to all net classes that automatically selects between CUDA, OpenCL, or CPU-only version depending on the defines.
5. Previously hardcoded `COCO_CHALLENGE` variable turned into user configurable flag `--maximize_positives`.
6. Removed old COCO 2014 validation scripts.
7. WrapperStructOutput split into WrapperStructOutput and WrapperStructGui.
8. Replaced `--camera_fps` flag by `--write_video_fps`, given that it was a confusing name: It did not affect the webcam FPS, but only the FPS of the output video. In addition, default value changed from 30 to -1.
9. Renamed `--frame_keep_distortion` as `--frame_undistort`, which performs the opposite operation (the default value has been also changed to the opposite).
10. Renamed `--camera_parameter_folder` as `--camera_parameter_path` because it could also take a whole XML file path rather than its parent folder.
11. Default value of flag `--scale_gap` changed from 0.3 to 0.25.
12. Moved most sh scripts into the `scripts/` folder. Only models/getModels.sh and the `*.bat` files are kept under `models/` and `3rdparty/windows`.
13. For Python compatibility and scalability increase, template `TDatums` used for `include/openpose/wrapper/wrapper.hpp` has changed from `std::vector<Datum>` to `std::vector<std::shared_ptr<Datum>>`, including the respective changes in all the worker classes. In addition, some template classes have been simplified to only take 1 template parameter for user simplicity.
14. Renamed intRound, charRound, etc. by positiveIntRound, positiveCharRound, etc. so that people can realize it is not safe for negative numbers.
15. Flag `--hand_tracking` is a subcase of `--hand_detector`, so it has been removed and incorporated as `--hand_detector 3`.
5. Removed old COCO 2014 validation scripts.
6. WrapperStructOutput split into WrapperStructOutput and WrapperStructGui.
7. Replaced flags:
    1. Replaced the `--camera_fps` flag with `--write_video_fps`, given that the old name was confusing: it did not affect the webcam FPS, only the FPS of the output video. In addition, the default value changed from 30 to -1.
2. Flag `--hand_tracking` is a subcase of `--hand_detector`, so it has been removed and incorporated as `--hand_detector 3`.
8. Renamed `--frame_keep_distortion` to `--frame_undistort`, which performs the opposite operation (the default value has also been changed to the opposite).
9. Renamed `--camera_parameter_folder` to `--camera_parameter_path` because it can also take a whole XML file path rather than its parent folder.
10. Default value of flag `--scale_gap` changed from 0.3 to 0.25.
11. Moved most sh scripts into the `scripts/` folder. Only models/getModels.sh and the `*.bat` files are kept under `models/` and `3rdparty/windows`.
12. For Python compatibility and increased scalability, the template `TDatums` used for `include/openpose/wrapper/wrapper.hpp` has changed from `std::vector<Datum>` to `std::vector<std::shared_ptr<Datum>>`, including the respective changes in all the worker classes. In addition, some template classes have been simplified to take only 1 template parameter for user simplicity.
13. Renamed intRound, charRound, etc. to positiveIntRound, positiveCharRound, etc. so that people realize they are not safe for negative numbers.
3. Main bugs fixed:
1. CMake-GUI was forcing to Release mode, allowed Debug modes too.
2. NMS returns in index 0 the number of found peaks. However, while the number of peaks was truncated to a maximum of 127, this index 0 was saving the real number instead of the truncated one.
......
......@@ -40,6 +40,8 @@ void configureWrapper(op::Wrapper& opWrapper)
const auto faceNetInputSize = op::flagsToPoint(FLAGS_face_net_resolution, "368x368 (multiples of 16)");
// handNetInputSize
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// poseMode
const auto poseMode = op::flagsToPoseMode(FLAGS_body);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
......@@ -62,12 +64,12 @@ void configureWrapper(op::Wrapper& opWrapper)
// Pose configuration (use WrapperStructPose{} for default and recommended configuration)
const op::WrapperStructPose wrapperStructPose{
!FLAGS_body_disable, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
poseMode, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
FLAGS_scale_number, (float)FLAGS_scale_gap, op::flagsToRenderMode(FLAGS_render_pose, multipleView),
poseModel, !FLAGS_disable_blending, (float)FLAGS_alpha_pose, (float)FLAGS_alpha_heatmap,
FLAGS_part_to_show, FLAGS_model_folder, heatMapTypes, heatMapScaleMode, FLAGS_part_candidates,
(float)FLAGS_render_threshold, FLAGS_number_people_max, FLAGS_maximize_positives, FLAGS_fps_max,
FLAGS_prototxt_path, FLAGS_caffemodel_path, enableGoogleLogging};
FLAGS_prototxt_path, FLAGS_caffemodel_path, (float)FLAGS_upsampling_ratio, enableGoogleLogging};
opWrapper.configure(wrapperStructPose);
// Face configuration (use op::WrapperStructFace{} to disable it)
const op::WrapperStructFace wrapperStructFace{
......
......@@ -43,17 +43,15 @@ int handFromJsonTest()
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// producerType
const auto producerSharedPtr = op::createProducer(op::ProducerType::ImageDirectory, FLAGS_image_dir);
// Enabling Google Logging
const bool enableGoogleLogging = true;
// OpenPose wrapper
op::log("Configuring OpenPose...", op::Priority::High);
op::WrapperHandFromJsonTest<op::Datum> opWrapper;
// Pose configuration (use WrapperStructPose{} for default and recommended configuration)
op::WrapperStructPose wrapperStructPose{
false, op::flagsToPoint("656x368"), op::flagsToPoint("1280x720"), op::ScaleMode::InputResolution,
op::PoseMode::Disabled, op::flagsToPoint("656x368"), op::flagsToPoint("1280x720"), op::ScaleMode::InputResolution,
FLAGS_num_gpu, FLAGS_num_gpu_start, 1, 0.15f, op::RenderMode::None, op::PoseModel::BODY_25, true, 0.f, 0.f,
0, "models/", {}, op::ScaleMode::ZeroToOne, false, 0.05f, -1, false, enableGoogleLogging};
0, "models/", {}, op::ScaleMode::ZeroToOne, false, 0.05f, -1, false};
wrapperStructPose.modelFolder = FLAGS_model_folder;
// Hand configuration (use op::WrapperStructHand{} to disable it)
const op::WrapperStructHand wrapperStructHand{
......
......@@ -20,7 +20,7 @@
// 4. If extra classes and files are required, add those extra files inside the OpenPose include and src folders,
// under a new folder (i.e., `include/newMethod/` and `src/newMethod/`), including `namespace op` on those files.
// This example is a sub-case of `tutorial_api_cpp/6_synchronous_custom_postprocessing.cpp`, where only custom post-processing is
// This example is a sub-case of `tutorial_api_cpp/15_synchronous_custom_postprocessing.cpp`, where only custom post-processing is
// considered.
// Command-line user interface
......@@ -58,6 +58,8 @@ void configureWrapper(op::WrapperT<op::UserDatum>& opWrapperT)
const auto faceNetInputSize = op::flagsToPoint(FLAGS_face_net_resolution, "368x368 (multiples of 16)");
// handNetInputSize
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// poseMode
const auto poseMode = op::flagsToPoseMode(FLAGS_body);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
......@@ -80,12 +82,12 @@ void configureWrapper(op::WrapperT<op::UserDatum>& opWrapperT)
// Pose configuration (use WrapperStructPose{} for default and recommended configuration)
const op::WrapperStructPose wrapperStructPose{
!FLAGS_body_disable, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
poseMode, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
FLAGS_scale_number, (float)FLAGS_scale_gap, op::flagsToRenderMode(FLAGS_render_pose, multipleView),
poseModel, !FLAGS_disable_blending, (float)FLAGS_alpha_pose, (float)FLAGS_alpha_heatmap,
FLAGS_part_to_show, FLAGS_model_folder, heatMapTypes, heatMapScaleMode, FLAGS_part_candidates,
(float)FLAGS_render_threshold, FLAGS_number_people_max, FLAGS_maximize_positives, FLAGS_fps_max,
FLAGS_prototxt_path, FLAGS_caffemodel_path, enableGoogleLogging};
FLAGS_prototxt_path, FLAGS_caffemodel_path, (float)FLAGS_upsampling_ratio, enableGoogleLogging};
opWrapperT.configure(wrapperStructPose);
// Face configuration (use op::WrapperStructFace{} to disable it)
const op::WrapperStructFace wrapperStructFace{
......
......@@ -82,6 +82,8 @@ void configureWrapper(op::Wrapper& opWrapper)
const auto faceNetInputSize = op::flagsToPoint(FLAGS_face_net_resolution, "368x368 (multiples of 16)");
// handNetInputSize
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// poseMode
const auto poseMode = op::flagsToPoseMode(FLAGS_body);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
......@@ -104,12 +106,12 @@ void configureWrapper(op::Wrapper& opWrapper)
// Pose configuration (use WrapperStructPose{} for default and recommended configuration)
const op::WrapperStructPose wrapperStructPose{
!FLAGS_body_disable, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
poseMode, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
FLAGS_scale_number, (float)FLAGS_scale_gap, op::flagsToRenderMode(FLAGS_render_pose, multipleView),
poseModel, !FLAGS_disable_blending, (float)FLAGS_alpha_pose, (float)FLAGS_alpha_heatmap,
FLAGS_part_to_show, FLAGS_model_folder, heatMapTypes, heatMapScaleMode, FLAGS_part_candidates,
(float)FLAGS_render_threshold, FLAGS_number_people_max, FLAGS_maximize_positives, FLAGS_fps_max,
FLAGS_prototxt_path, FLAGS_caffemodel_path, enableGoogleLogging};
FLAGS_prototxt_path, FLAGS_caffemodel_path, (float)FLAGS_upsampling_ratio, enableGoogleLogging};
opWrapper.configure(wrapperStructPose);
// Face configuration (use op::WrapperStructFace{} to disable it)
const op::WrapperStructFace wrapperStructFace{
......
......@@ -84,6 +84,8 @@ void configureWrapper(op::Wrapper& opWrapper)
const auto faceNetInputSize = op::flagsToPoint(FLAGS_face_net_resolution, "368x368 (multiples of 16)");
// handNetInputSize
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// poseMode
const auto poseMode = op::flagsToPoseMode(FLAGS_body);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
......@@ -106,12 +108,12 @@ void configureWrapper(op::Wrapper& opWrapper)
// Pose configuration (use WrapperStructPose{} for default and recommended configuration)
const op::WrapperStructPose wrapperStructPose{
!FLAGS_body_disable, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
poseMode, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
FLAGS_scale_number, (float)FLAGS_scale_gap, op::flagsToRenderMode(FLAGS_render_pose, multipleView),
poseModel, !FLAGS_disable_blending, (float)FLAGS_alpha_pose, (float)FLAGS_alpha_heatmap,
FLAGS_part_to_show, FLAGS_model_folder, heatMapTypes, heatMapScaleMode, FLAGS_part_candidates,
(float)FLAGS_render_threshold, FLAGS_number_people_max, FLAGS_maximize_positives, FLAGS_fps_max,
FLAGS_prototxt_path, FLAGS_caffemodel_path, enableGoogleLogging};
FLAGS_prototxt_path, FLAGS_caffemodel_path, (float)FLAGS_upsampling_ratio, enableGoogleLogging};
opWrapper.configure(wrapperStructPose);
// Face configuration (use op::WrapperStructFace{} to disable it)
const op::WrapperStructFace wrapperStructFace{
......
......@@ -90,6 +90,8 @@ void configureWrapper(op::Wrapper& opWrapper)
const auto faceNetInputSize = op::flagsToPoint(FLAGS_face_net_resolution, "368x368 (multiples of 16)");
// handNetInputSize
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// poseMode
const auto poseMode = op::flagsToPoseMode(FLAGS_body);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
......@@ -112,12 +114,12 @@ void configureWrapper(op::Wrapper& opWrapper)
// Pose configuration (use WrapperStructPose{} for default and recommended configuration)
const op::WrapperStructPose wrapperStructPose{
!FLAGS_body_disable, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
poseMode, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
FLAGS_scale_number, (float)FLAGS_scale_gap, op::flagsToRenderMode(FLAGS_render_pose, multipleView),
poseModel, !FLAGS_disable_blending, (float)FLAGS_alpha_pose, (float)FLAGS_alpha_heatmap,
FLAGS_part_to_show, FLAGS_model_folder, heatMapTypes, heatMapScaleMode, FLAGS_part_candidates,
(float)FLAGS_render_threshold, FLAGS_number_people_max, FLAGS_maximize_positives, FLAGS_fps_max,
FLAGS_prototxt_path, FLAGS_caffemodel_path, enableGoogleLogging};
FLAGS_prototxt_path, FLAGS_caffemodel_path, (float)FLAGS_upsampling_ratio, enableGoogleLogging};
opWrapper.configure(wrapperStructPose);
// Face configuration (use op::WrapperStructFace{} to disable it)
const op::WrapperStructFace wrapperStructFace{
......
......@@ -85,6 +85,8 @@ void configureWrapper(op::Wrapper& opWrapper)
const auto faceNetInputSize = op::flagsToPoint(FLAGS_face_net_resolution, "368x368 (multiples of 16)");
// handNetInputSize
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// poseMode
const auto poseMode = op::flagsToPoseMode(FLAGS_body);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
......@@ -107,12 +109,12 @@ void configureWrapper(op::Wrapper& opWrapper)
// Pose configuration (use WrapperStructPose{} for default and recommended configuration)
const op::WrapperStructPose wrapperStructPose{
!FLAGS_body_disable, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
poseMode, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
FLAGS_scale_number, (float)FLAGS_scale_gap, op::flagsToRenderMode(FLAGS_render_pose, multipleView),
poseModel, !FLAGS_disable_blending, (float)FLAGS_alpha_pose, (float)FLAGS_alpha_heatmap,
FLAGS_part_to_show, FLAGS_model_folder, heatMapTypes, heatMapScaleMode, FLAGS_part_candidates,
(float)FLAGS_render_threshold, FLAGS_number_people_max, FLAGS_maximize_positives, FLAGS_fps_max,
FLAGS_prototxt_path, FLAGS_caffemodel_path, enableGoogleLogging};
FLAGS_prototxt_path, FLAGS_caffemodel_path, (float)FLAGS_upsampling_ratio, enableGoogleLogging};
opWrapper.configure(wrapperStructPose);
// Face configuration (use op::WrapperStructFace{} to disable it)
const op::WrapperStructFace wrapperStructFace{
......@@ -157,7 +159,7 @@ int tutorialApiCpp()
const auto opTimer = op::getTimerInit();
// Required flags to enable heatmaps
FLAGS_body_disable = true;
FLAGS_body = 0;
FLAGS_face = true;
FLAGS_face_detector = 2;
......@@ -200,7 +202,7 @@ int tutorialApiCpp()
// Info
op::log("NOTE: In addition to the user flags, this demo has auto-selected the following flags:\n"
" `--body_disable --face --face_detector 2`", op::Priority::High);
"\t`--body 0 --face --face_detector 2`", op::Priority::High);
// Measuring total time
op::printTime(opTimer, "OpenPose demo successfully finished. Total time: ", " seconds.", op::Priority::High);
......
......@@ -85,6 +85,8 @@ void configureWrapper(op::Wrapper& opWrapper)
const auto faceNetInputSize = op::flagsToPoint(FLAGS_face_net_resolution, "368x368 (multiples of 16)");
// handNetInputSize
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// poseMode
const auto poseMode = op::flagsToPoseMode(FLAGS_body);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
......@@ -107,12 +109,12 @@ void configureWrapper(op::Wrapper& opWrapper)
// Pose configuration (use WrapperStructPose{} for default and recommended configuration)
const op::WrapperStructPose wrapperStructPose{
!FLAGS_body_disable, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
poseMode, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
FLAGS_scale_number, (float)FLAGS_scale_gap, op::flagsToRenderMode(FLAGS_render_pose, multipleView),
poseModel, !FLAGS_disable_blending, (float)FLAGS_alpha_pose, (float)FLAGS_alpha_heatmap,
FLAGS_part_to_show, FLAGS_model_folder, heatMapTypes, heatMapScaleMode, FLAGS_part_candidates,
(float)FLAGS_render_threshold, FLAGS_number_people_max, FLAGS_maximize_positives, FLAGS_fps_max,
FLAGS_prototxt_path, FLAGS_caffemodel_path, enableGoogleLogging};
FLAGS_prototxt_path, FLAGS_caffemodel_path, (float)FLAGS_upsampling_ratio, enableGoogleLogging};
opWrapper.configure(wrapperStructPose);
// Face configuration (use op::WrapperStructFace{} to disable it)
const op::WrapperStructFace wrapperStructFace{
@@ -157,7 +159,7 @@ int tutorialApiCpp()
const auto opTimer = op::getTimerInit();
// Required flags to enable heatmaps
FLAGS_body_disable = true;
FLAGS_body = 0;
FLAGS_hand = true;
FLAGS_hand_detector = 2;
@@ -209,7 +211,7 @@ int tutorialApiCpp()
// Info
op::log("NOTE: In addition to the user flags, this demo has auto-selected the following flags:\n"
" `--body_disable --hand --hand_detector 2`", op::Priority::High);
"\t`--body_disable --hand --hand_detector 2`", op::Priority::High);
// Measuring total time
op::printTime(opTimer, "OpenPose demo successfully finished. Total time: ", " seconds.", op::Priority::High);
......
@@ -114,6 +114,8 @@ void configureWrapper(op::Wrapper& opWrapper)
const auto faceNetInputSize = op::flagsToPoint(FLAGS_face_net_resolution, "368x368 (multiples of 16)");
// handNetInputSize
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// poseMode
const auto poseMode = op::flagsToPoseMode(FLAGS_body);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
@@ -136,12 +138,12 @@ void configureWrapper(op::Wrapper& opWrapper)
// Pose configuration (use WrapperStructPose{} for default and recommended configuration)
const op::WrapperStructPose wrapperStructPose{
!FLAGS_body_disable, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
poseMode, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
FLAGS_scale_number, (float)FLAGS_scale_gap, op::flagsToRenderMode(FLAGS_render_pose, multipleView),
poseModel, !FLAGS_disable_blending, (float)FLAGS_alpha_pose, (float)FLAGS_alpha_heatmap,
FLAGS_part_to_show, FLAGS_model_folder, heatMapTypes, heatMapScaleMode, FLAGS_part_candidates,
(float)FLAGS_render_threshold, FLAGS_number_people_max, FLAGS_maximize_positives, FLAGS_fps_max,
FLAGS_prototxt_path, FLAGS_caffemodel_path, enableGoogleLogging};
FLAGS_prototxt_path, FLAGS_caffemodel_path, (float)FLAGS_upsampling_ratio, enableGoogleLogging};
opWrapper.configure(wrapperStructPose);
// Face configuration (use op::WrapperStructFace{} to disable it)
const op::WrapperStructFace wrapperStructFace{
@@ -219,7 +221,7 @@ int tutorialApiCpp()
// Info
op::log("NOTE: In addition to the user flags, this demo has auto-selected the following flags:\n"
" `--heatmaps_add_parts --heatmaps_add_bkg --heatmaps_add_PAFs`",
"\t`--heatmaps_add_parts --heatmaps_add_bkg --heatmaps_add_PAFs`",
op::Priority::High);
// Measuring total time
......
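The other recurring change is the extra `(float)FLAGS_upsampling_ratio` argument in every `WrapperStructPose` constructor call. Conceptually, the ratio controls how much the raw network heatmaps are upsampled before the parsing/rendering stages; a shape-level sketch using nearest-neighbor upsampling (OpenPose's own resize is interpolated and GPU-side, so this only illustrates the size effect):

```python
import numpy as np

def upsample_heatmap(heatmap, ratio):
    # Repeat every cell `ratio` times along both axes: an (H, W) map
    # becomes (H * ratio, W * ratio) with the same coarse structure.
    return np.kron(heatmap, np.ones((ratio, ratio), dtype=heatmap.dtype))
```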
// ----------------------- OpenPose C++ API Tutorial - Example 9 - Keypoints from heatmaps -----------------------
// It reads a custom set of heatmaps and runs the OpenPose greedy connection algorithm.
// OpenPose will not run its internal body pose estimation network and will instead use
// this data as a substitute for its network output. The size of this element must match the size of the
// output of its internal network, or it will lead to segmentation faults (core dumps). You can modify the
// pose estimation flags to match the dimensions of both elements (e.g., `--net_resolution`, `--scale_number`, etc.).
// Command-line user interface
#define OPENPOSE_FLAGS_DISABLE_PRODUCER
#define OPENPOSE_FLAGS_DISABLE_DISPLAY
#include <openpose/flags.hpp>
// OpenPose dependencies
#include <openpose/headers.hpp>
// Custom OpenPose flags
// Producer
DEFINE_string(image_path, "examples/media/COCO_val2014_000000000294.jpg",
"Process an image. Read all standard formats (jpg, png, bmp, etc.).");
// Display
DEFINE_bool(no_display, false,
"Enable this flag to disable the visual display.");
// This function displays the rendered results (pose keypoints drawn over the input image)
void display(const std::shared_ptr<std::vector<std::shared_ptr<op::Datum>>>& datumsPtr)
{
try
{
// User's displaying/saving/other processing here
// datum.cvOutputData: rendered frame with pose or heatmaps
// datum.poseKeypoints: Array<float> with the estimated pose
if (datumsPtr != nullptr && !datumsPtr->empty())
{
// Display image
cv::imshow(OPEN_POSE_NAME_AND_VERSION + " - Tutorial C++ API", datumsPtr->at(0)->cvOutputData);
cv::waitKey(0);
}
else
op::log("Nullptr or empty datumsPtr found.", op::Priority::High);
}
catch (const std::exception& e)
{
op::error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
}
void printKeypoints(const std::shared_ptr<std::vector<std::shared_ptr<op::Datum>>>& datumsPtr)
{
try
{
// Example: How to use the pose keypoints
if (datumsPtr != nullptr && !datumsPtr->empty())
{
op::log("Body keypoints: " + datumsPtr->at(0)->poseKeypoints.toString(), op::Priority::High);
op::log("Face keypoints: " + datumsPtr->at(0)->faceKeypoints.toString(), op::Priority::High);
op::log("Left hand keypoints: " + datumsPtr->at(0)->handKeypoints[0].toString(), op::Priority::High);
op::log("Right hand keypoints: " + datumsPtr->at(0)->handKeypoints[1].toString(), op::Priority::High);
}
else
op::log("Nullptr or empty datumsPtr found.", op::Priority::High);
}
catch (const std::exception& e)
{
op::error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
}
void configureWrapper(op::Wrapper& opWrapper)
{
try
{
// Configuring OpenPose
// logging_level
op::check(0 <= FLAGS_logging_level && FLAGS_logging_level <= 255, "Wrong logging_level value.",
__LINE__, __FUNCTION__, __FILE__);
op::ConfigureLog::setPriorityThreshold((op::Priority)FLAGS_logging_level);
op::Profiler::setDefaultX(FLAGS_profile_speed);
// Applying user defined configuration - GFlags to program variables
// outputSize
const auto outputSize = op::flagsToPoint(FLAGS_output_resolution, "-1x-1");
// netInputSize
const auto netInputSize = op::flagsToPoint(FLAGS_net_resolution, "-1x368");
// faceNetInputSize
const auto faceNetInputSize = op::flagsToPoint(FLAGS_face_net_resolution, "368x368 (multiples of 16)");
// handNetInputSize
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// poseMode
const auto poseMode = op::flagsToPoseMode(FLAGS_body);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
if (!FLAGS_write_keypoint.empty())
op::log("Flag `write_keypoint` is deprecated and will eventually be removed."
" Please, use `write_json` instead.", op::Priority::Max);
// keypointScaleMode
const auto keypointScaleMode = op::flagsToScaleMode(FLAGS_keypoint_scale);
// heatmaps to add
const auto heatMapTypes = op::flagsToHeatMaps(FLAGS_heatmaps_add_parts, FLAGS_heatmaps_add_bkg,
FLAGS_heatmaps_add_PAFs);
const auto heatMapScaleMode = op::flagsToHeatMapScaleMode(FLAGS_heatmaps_scale);
// >1 camera view?
const auto multipleView = (FLAGS_3d || FLAGS_3d_views > 1);
// Face and hand detectors
const auto faceDetector = op::flagsToDetector(FLAGS_face_detector);
const auto handDetector = op::flagsToDetector(FLAGS_hand_detector);
// Enabling Google Logging
const bool enableGoogleLogging = true;
// Pose configuration (use WrapperStructPose{} for default and recommended configuration)
const op::WrapperStructPose wrapperStructPose{
poseMode, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
FLAGS_scale_number, (float)FLAGS_scale_gap, op::flagsToRenderMode(FLAGS_render_pose, multipleView),
poseModel, !FLAGS_disable_blending, (float)FLAGS_alpha_pose, (float)FLAGS_alpha_heatmap,
FLAGS_part_to_show, FLAGS_model_folder, heatMapTypes, heatMapScaleMode, FLAGS_part_candidates,
(float)FLAGS_render_threshold, FLAGS_number_people_max, FLAGS_maximize_positives, FLAGS_fps_max,
FLAGS_prototxt_path, FLAGS_caffemodel_path, (float)FLAGS_upsampling_ratio, enableGoogleLogging};
opWrapper.configure(wrapperStructPose);
// Face configuration (use op::WrapperStructFace{} to disable it)
const op::WrapperStructFace wrapperStructFace{
FLAGS_face, faceDetector, faceNetInputSize,
op::flagsToRenderMode(FLAGS_face_render, multipleView, FLAGS_render_pose),
(float)FLAGS_face_alpha_pose, (float)FLAGS_face_alpha_heatmap, (float)FLAGS_face_render_threshold};
opWrapper.configure(wrapperStructFace);
// Hand configuration (use op::WrapperStructHand{} to disable it)
const op::WrapperStructHand wrapperStructHand{
FLAGS_hand, handDetector, handNetInputSize, FLAGS_hand_scale_number, (float)FLAGS_hand_scale_range,
op::flagsToRenderMode(FLAGS_hand_render, multipleView, FLAGS_render_pose), (float)FLAGS_hand_alpha_pose,
(float)FLAGS_hand_alpha_heatmap, (float)FLAGS_hand_render_threshold};
opWrapper.configure(wrapperStructHand);
// Extra functionality configuration (use op::WrapperStructExtra{} to disable it)
const op::WrapperStructExtra wrapperStructExtra{
FLAGS_3d, FLAGS_3d_min_views, FLAGS_identification, FLAGS_tracking, FLAGS_ik_threads};
opWrapper.configure(wrapperStructExtra);
// Output (comment or use default argument to disable any output)
const op::WrapperStructOutput wrapperStructOutput{
FLAGS_cli_verbose, FLAGS_write_keypoint, op::stringToDataFormat(FLAGS_write_keypoint_format),
FLAGS_write_json, FLAGS_write_coco_json, FLAGS_write_coco_foot_json, FLAGS_write_coco_json_variant,
FLAGS_write_images, FLAGS_write_images_format, FLAGS_write_video, FLAGS_write_video_fps,
FLAGS_write_video_with_audio, FLAGS_write_heatmaps, FLAGS_write_heatmaps_format, FLAGS_write_video_3d,
FLAGS_write_video_adam, FLAGS_write_bvh, FLAGS_udp_host, FLAGS_udp_port};
opWrapper.configure(wrapperStructOutput);
// No GUI. Equivalent to: opWrapper.configure(op::WrapperStructGui{});
// Set to single-thread (for sequential processing and/or debugging and/or reducing latency)
if (FLAGS_disable_multi_thread)
opWrapper.disableMultiThreading();
}
catch (const std::exception& e)
{
op::error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
}
int tutorialApiCpp()
{
try
{
op::log("Starting OpenPose demo...", op::Priority::High);
const auto opTimer = op::getTimerInit();
// Image to process
const auto imageToProcess = cv::imread(FLAGS_image_path);
// Required flags to disable the OpenPose network
FLAGS_body = 2;
// Configuring OpenPose
op::log("Configuring OpenPose...", op::Priority::High);
op::Wrapper opWrapper{op::ThreadManagerMode::Asynchronous};
configureWrapper(opWrapper);
// Heatmap set selection
std::shared_ptr<std::vector<std::shared_ptr<op::Datum>>> datumHeatmaps;
// Using a random set of heatmaps
// Replace the following lines inside the try-catch block with your custom heatmap generator
try
{
op::log("Temporarily running another OpenPose instance to get the heatmaps...", op::Priority::High);
// Required flags to enable heatmaps
FLAGS_heatmaps_add_parts = true;
FLAGS_heatmaps_add_bkg = true;
FLAGS_heatmaps_add_PAFs = true;
FLAGS_heatmaps_scale = 3;
FLAGS_upsampling_ratio = 1;
FLAGS_body = 1;
// Configuring OpenPose
op::Wrapper opWrapperGetHeatMaps{op::ThreadManagerMode::Asynchronous};
configureWrapper(opWrapperGetHeatMaps);
// Starting OpenPose
opWrapperGetHeatMaps.start();
// Get heatmaps
datumHeatmaps = opWrapperGetHeatMaps.emplaceAndPop(imageToProcess);
if (datumHeatmaps == nullptr)
op::error("Image could not be processed.", __LINE__, __FUNCTION__, __FILE__);
}
catch (const std::exception& e)
{
op::error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
// Starting OpenPose
op::log("Starting thread(s)...", op::Priority::High);
opWrapper.start();
// Create new datum
auto datumProcessed = std::make_shared<std::vector<std::shared_ptr<op::Datum>>>();
datumProcessed->emplace_back();
auto& datumPtr = datumProcessed->at(0);
datumPtr = std::make_shared<op::Datum>();
// Fill datum
datumPtr->cvInputData = imageToProcess;
datumPtr->poseNetOutput = datumHeatmaps->at(0)->poseHeatMaps;
// Display image
if (opWrapper.emplaceAndPop(datumProcessed))
{
printKeypoints(datumProcessed);
if (!FLAGS_no_display)
display(datumProcessed);
}
else
op::log("Image could not be processed.", op::Priority::High);
// Info
op::log("NOTE: In addition to the user flags, this demo has auto-selected the following flags:\n"
"\t`--body 2`", op::Priority::High);
// Measuring total time
op::printTime(opTimer, "OpenPose demo successfully finished. Total time: ", " seconds.", op::Priority::High);
// Return
return 0;
}
catch (const std::exception& e)
{
return -1;
}
}
int main(int argc, char *argv[])
{
// Parsing command line flags
gflags::ParseCommandLineFlags(&argc, &argv, true);
// Running tutorialApiCpp
return tutorialApiCpp();
}
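The comment at the top of this new example warns that a `poseNetOutput` whose dimensions differ from the network's own output causes segmentation faults. Here the heatmaps come from a second OpenPose instance, so the sizes match by construction, but when feeding custom heatmaps a guard along these lines is cheap insurance (the helper name and the source of `expected_shape` are illustrative, not part of the OpenPose API):

```python
import numpy as np

def pose_net_output_matches(heatmaps, expected_shape):
    # Compare the candidate heatmap block against the shape the internal
    # network would produce for the configured `--net_resolution` etc.
    return tuple(np.asarray(heatmaps).shape) == tuple(expected_shape)
```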
// ------------------------- OpenPose C++ API Tutorial - Example 9 - Custom Input -------------------------
// ------------------------- OpenPose C++ API Tutorial - Example 10 - Custom Input -------------------------
// Asynchronous mode: ideal for fast prototyping when performance is not an issue.
// In this function, the user can implement their own way to create frames (e.g., reading their own folder of images)
// and emplace/push the frames to OpenPose.
@@ -96,6 +96,8 @@ void configureWrapper(op::Wrapper& opWrapper)
const auto faceNetInputSize = op::flagsToPoint(FLAGS_face_net_resolution, "368x368 (multiples of 16)");
// handNetInputSize
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// poseMode
const auto poseMode = op::flagsToPoseMode(FLAGS_body);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
@@ -118,12 +120,12 @@ void configureWrapper(op::Wrapper& opWrapper)
// Pose configuration (use WrapperStructPose{} for default and recommended configuration)
const op::WrapperStructPose wrapperStructPose{
!FLAGS_body_disable, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
poseMode, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
FLAGS_scale_number, (float)FLAGS_scale_gap, op::flagsToRenderMode(FLAGS_render_pose, multipleView),
poseModel, !FLAGS_disable_blending, (float)FLAGS_alpha_pose, (float)FLAGS_alpha_heatmap,
FLAGS_part_to_show, FLAGS_model_folder, heatMapTypes, heatMapScaleMode, FLAGS_part_candidates,
(float)FLAGS_render_threshold, FLAGS_number_people_max, FLAGS_maximize_positives, FLAGS_fps_max,
FLAGS_prototxt_path, FLAGS_caffemodel_path, enableGoogleLogging};
FLAGS_prototxt_path, FLAGS_caffemodel_path, (float)FLAGS_upsampling_ratio, enableGoogleLogging};
opWrapper.configure(wrapperStructPose);
// Face configuration (use op::WrapperStructFace{} to disable it)
const op::WrapperStructFace wrapperStructFace{
......
// ------------------------- OpenPose C++ API Tutorial - Example 10 - Custom Output -------------------------
// ------------------------- OpenPose C++ API Tutorial - Example 11 - Custom Output -------------------------
// Asynchronous mode: ideal for fast prototyping when performance is not an issue.
// In this function, the user can implement their own way to render/display/store the results.
@@ -115,6 +115,8 @@ void configureWrapper(op::Wrapper& opWrapper)
const auto faceNetInputSize = op::flagsToPoint(FLAGS_face_net_resolution, "368x368 (multiples of 16)");
// handNetInputSize
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// poseMode
const auto poseMode = op::flagsToPoseMode(FLAGS_body);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
@@ -137,12 +139,12 @@ void configureWrapper(op::Wrapper& opWrapper)
// Pose configuration (use WrapperStructPose{} for default and recommended configuration)
const op::WrapperStructPose wrapperStructPose{
!FLAGS_body_disable, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
poseMode, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
FLAGS_scale_number, (float)FLAGS_scale_gap, op::flagsToRenderMode(FLAGS_render_pose, multipleView),
poseModel, !FLAGS_disable_blending, (float)FLAGS_alpha_pose, (float)FLAGS_alpha_heatmap,
FLAGS_part_to_show, FLAGS_model_folder, heatMapTypes, heatMapScaleMode, FLAGS_part_candidates,
(float)FLAGS_render_threshold, FLAGS_number_people_max, FLAGS_maximize_positives, FLAGS_fps_max,
FLAGS_prototxt_path, FLAGS_caffemodel_path, enableGoogleLogging};
FLAGS_prototxt_path, FLAGS_caffemodel_path, (float)FLAGS_upsampling_ratio, enableGoogleLogging};
opWrapper.configure(wrapperStructPose);
// Face configuration (use op::WrapperStructFace{} to disable it)
const op::WrapperStructFace wrapperStructFace{
......
// --------------------- OpenPose C++ API Tutorial - Example 11 - Custom Input, Output, and Datum ---------------------
// --------------------- OpenPose C++ API Tutorial - Example 12 - Custom Input, Output, and Datum ---------------------
// Asynchronous mode: ideal for fast prototyping when performance is not an issue.
// In this function, the user can implement their own way to create frames (e.g., reading their own folder of images)
// and their own way to render/display them after being processed by OpenPose.
@@ -193,6 +193,8 @@ void configureWrapper(op::WrapperT<UserDatum>& opWrapperT)
const auto faceNetInputSize = op::flagsToPoint(FLAGS_face_net_resolution, "368x368 (multiples of 16)");
// handNetInputSize
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// poseMode
const auto poseMode = op::flagsToPoseMode(FLAGS_body);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
@@ -215,12 +217,12 @@ void configureWrapper(op::WrapperT<UserDatum>& opWrapperT)
// Pose configuration (use WrapperStructPose{} for default and recommended configuration)
const op::WrapperStructPose wrapperStructPose{
!FLAGS_body_disable, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
poseMode, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
FLAGS_scale_number, (float)FLAGS_scale_gap, op::flagsToRenderMode(FLAGS_render_pose, multipleView),
poseModel, !FLAGS_disable_blending, (float)FLAGS_alpha_pose, (float)FLAGS_alpha_heatmap,
FLAGS_part_to_show, FLAGS_model_folder, heatMapTypes, heatMapScaleMode, FLAGS_part_candidates,
(float)FLAGS_render_threshold, FLAGS_number_people_max, FLAGS_maximize_positives, FLAGS_fps_max,
FLAGS_prototxt_path, FLAGS_caffemodel_path, enableGoogleLogging};
FLAGS_prototxt_path, FLAGS_caffemodel_path, (float)FLAGS_upsampling_ratio, enableGoogleLogging};
opWrapperT.configure(wrapperStructPose);
// Face configuration (use op::WrapperStructFace{} to disable it)
const op::WrapperStructFace wrapperStructFace{
......
// ------------------------- OpenPose C++ API Tutorial - Example 12 - Custom Input -------------------------
// ------------------------- OpenPose C++ API Tutorial - Example 13 - Custom Input -------------------------
// Synchronous mode: ideal for production integration. It provides the fastest results with respect to runtime
// performance.
// In this function, the user can implement their own way to create frames (e.g., reading their own folder of images).
@@ -101,6 +101,8 @@ void configureWrapper(op::Wrapper& opWrapper)
const auto faceNetInputSize = op::flagsToPoint(FLAGS_face_net_resolution, "368x368 (multiples of 16)");
// handNetInputSize
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// poseMode
const auto poseMode = op::flagsToPoseMode(FLAGS_body);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
@@ -131,12 +133,12 @@ void configureWrapper(op::Wrapper& opWrapper)
// Pose configuration (use WrapperStructPose{} for default and recommended configuration)
const op::WrapperStructPose wrapperStructPose{
!FLAGS_body_disable, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
poseMode, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
FLAGS_scale_number, (float)FLAGS_scale_gap, op::flagsToRenderMode(FLAGS_render_pose, multipleView),
poseModel, !FLAGS_disable_blending, (float)FLAGS_alpha_pose, (float)FLAGS_alpha_heatmap,
FLAGS_part_to_show, FLAGS_model_folder, heatMapTypes, heatMapScaleMode, FLAGS_part_candidates,
(float)FLAGS_render_threshold, FLAGS_number_people_max, FLAGS_maximize_positives, FLAGS_fps_max,
FLAGS_prototxt_path, FLAGS_caffemodel_path, enableGoogleLogging};
FLAGS_prototxt_path, FLAGS_caffemodel_path, (float)FLAGS_upsampling_ratio, enableGoogleLogging};
opWrapper.configure(wrapperStructPose);
// Face configuration (use op::WrapperStructFace{} to disable it)
const op::WrapperStructFace wrapperStructFace{
......
// ------------------------- OpenPose C++ API Tutorial - Example 13 - Custom Pre-processing -------------------------
// ------------------------- OpenPose C++ API Tutorial - Example 14 - Custom Pre-processing -------------------------
// Synchronous mode: ideal for production integration. It provides the fastest results with respect to runtime
// performance.
// In this function, the user can implement their own pre-processing, i.e., their function will be called after the image
@@ -66,6 +66,8 @@ void configureWrapper(op::Wrapper& opWrapper)
const auto faceNetInputSize = op::flagsToPoint(FLAGS_face_net_resolution, "368x368 (multiples of 16)");
// handNetInputSize
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// poseMode
const auto poseMode = op::flagsToPoseMode(FLAGS_body);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
@@ -95,12 +97,12 @@ void configureWrapper(op::Wrapper& opWrapper)
// Pose configuration (use WrapperStructPose{} for default and recommended configuration)
const op::WrapperStructPose wrapperStructPose{
!FLAGS_body_disable, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
poseMode, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
FLAGS_scale_number, (float)FLAGS_scale_gap, op::flagsToRenderMode(FLAGS_render_pose, multipleView),
poseModel, !FLAGS_disable_blending, (float)FLAGS_alpha_pose, (float)FLAGS_alpha_heatmap,
FLAGS_part_to_show, FLAGS_model_folder, heatMapTypes, heatMapScaleMode, FLAGS_part_candidates,
(float)FLAGS_render_threshold, FLAGS_number_people_max, FLAGS_maximize_positives, FLAGS_fps_max,
FLAGS_prototxt_path, FLAGS_caffemodel_path, enableGoogleLogging};
FLAGS_prototxt_path, FLAGS_caffemodel_path, (float)FLAGS_upsampling_ratio, enableGoogleLogging};
opWrapper.configure(wrapperStructPose);
// Face configuration (use op::WrapperStructFace{} to disable it)
const op::WrapperStructFace wrapperStructFace{
......
// ------------------------- OpenPose C++ API Tutorial - Example 14 - Custom Post-processing -------------------------
// ------------------------- OpenPose C++ API Tutorial - Example 15 - Custom Post-processing -------------------------
// Synchronous mode: ideal for production integration. It provides the fastest results with respect to runtime
// performance.
// In this function, the user can implement their own post-processing, i.e., their function will be called after OpenPose
@@ -67,6 +67,8 @@ void configureWrapper(op::Wrapper& opWrapper)
const auto faceNetInputSize = op::flagsToPoint(FLAGS_face_net_resolution, "368x368 (multiples of 16)");
// handNetInputSize
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// poseMode
const auto poseMode = op::flagsToPoseMode(FLAGS_body);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
@@ -96,12 +98,12 @@ void configureWrapper(op::Wrapper& opWrapper)
// Pose configuration (use WrapperStructPose{} for default and recommended configuration)
const op::WrapperStructPose wrapperStructPose{
!FLAGS_body_disable, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
poseMode, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
FLAGS_scale_number, (float)FLAGS_scale_gap, op::flagsToRenderMode(FLAGS_render_pose, multipleView),
poseModel, !FLAGS_disable_blending, (float)FLAGS_alpha_pose, (float)FLAGS_alpha_heatmap,
FLAGS_part_to_show, FLAGS_model_folder, heatMapTypes, heatMapScaleMode, FLAGS_part_candidates,
(float)FLAGS_render_threshold, FLAGS_number_people_max, FLAGS_maximize_positives, FLAGS_fps_max,
FLAGS_prototxt_path, FLAGS_caffemodel_path, enableGoogleLogging};
FLAGS_prototxt_path, FLAGS_caffemodel_path, (float)FLAGS_upsampling_ratio, enableGoogleLogging};
opWrapper.configure(wrapperStructPose);
// Face configuration (use op::WrapperStructFace{} to disable it)
const op::WrapperStructFace wrapperStructFace{
......
// ------------------------- OpenPose C++ API Tutorial - Example 15 - Custom Output -------------------------
// ------------------------- OpenPose C++ API Tutorial - Example 16 - Custom Output -------------------------
// Synchronous mode: ideal for production integration. It provides the fastest results with respect to runtime
// performance.
// In this function, the user can implement their own way to render/display/store the results.
@@ -123,6 +123,8 @@ void configureWrapper(op::Wrapper& opWrapper)
const auto faceNetInputSize = op::flagsToPoint(FLAGS_face_net_resolution, "368x368 (multiples of 16)");
// handNetInputSize
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// poseMode
const auto poseMode = op::flagsToPoseMode(FLAGS_body);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
@@ -152,12 +154,12 @@ void configureWrapper(op::Wrapper& opWrapper)
// Pose configuration (use WrapperStructPose{} for default and recommended configuration)
const op::WrapperStructPose wrapperStructPose{
!FLAGS_body_disable, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
poseMode, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
FLAGS_scale_number, (float)FLAGS_scale_gap, op::flagsToRenderMode(FLAGS_render_pose, multipleView),
poseModel, !FLAGS_disable_blending, (float)FLAGS_alpha_pose, (float)FLAGS_alpha_heatmap,
FLAGS_part_to_show, FLAGS_model_folder, heatMapTypes, heatMapScaleMode, FLAGS_part_candidates,
(float)FLAGS_render_threshold, FLAGS_number_people_max, FLAGS_maximize_positives, FLAGS_fps_max,
FLAGS_prototxt_path, FLAGS_caffemodel_path, enableGoogleLogging};
FLAGS_prototxt_path, FLAGS_caffemodel_path, (float)FLAGS_upsampling_ratio, enableGoogleLogging};
opWrapper.configure(wrapperStructPose);
// Face configuration (use op::WrapperStructFace{} to disable it)
const op::WrapperStructFace wrapperStructFace{
......
// --- OpenPose C++ API Tutorial - Example 16 - Custom Input, Pre-processing, Post-processing, Output, and Datum ---
// --- OpenPose C++ API Tutorial - Example 17 - Custom Input, Pre-processing, Post-processing, Output, and Datum ---
// Synchronous mode: ideal for production integration. It provides the fastest results with respect to runtime
// performance.
// In this function, the user can implement their own way to read frames, implement their own post-processing (i.e., their
@@ -231,6 +231,8 @@ void configureWrapper(op::WrapperT<UserDatum>& opWrapperT)
const auto faceNetInputSize = op::flagsToPoint(FLAGS_face_net_resolution, "368x368 (multiples of 16)");
// handNetInputSize
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// poseMode
const auto poseMode = op::flagsToPoseMode(FLAGS_body);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
@@ -271,12 +273,12 @@ void configureWrapper(op::WrapperT<UserDatum>& opWrapperT)
// Pose configuration (use WrapperStructPose{} for default and recommended configuration)
const op::WrapperStructPose wrapperStructPose{
!FLAGS_body_disable, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
poseMode, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
FLAGS_scale_number, (float)FLAGS_scale_gap, op::flagsToRenderMode(FLAGS_render_pose, multipleView),
poseModel, !FLAGS_disable_blending, (float)FLAGS_alpha_pose, (float)FLAGS_alpha_heatmap,
FLAGS_part_to_show, FLAGS_model_folder, heatMapTypes, heatMapScaleMode, FLAGS_part_candidates,
(float)FLAGS_render_threshold, FLAGS_number_people_max, FLAGS_maximize_positives, FLAGS_fps_max,
FLAGS_prototxt_path, FLAGS_caffemodel_path, enableGoogleLogging};
FLAGS_prototxt_path, FLAGS_caffemodel_path, (float)FLAGS_upsampling_ratio, enableGoogleLogging};
opWrapperT.configure(wrapperStructPose);
// Face configuration (use op::WrapperStructFace{} to disable it)
const op::WrapperStructFace wrapperStructFace{
......
@@ -7,14 +7,15 @@ set(EXAMPLE_FILES
06_face_from_image.cpp
07_hand_from_image.cpp
08_heatmaps_from_image.cpp
09_asynchronous_custom_input.cpp
10_asynchronous_custom_output.cpp
11_asynchronous_custom_input_output_and_datum.cpp
12_synchronous_custom_input.cpp
13_synchronous_custom_preprocessing.cpp
14_synchronous_custom_postprocessing.cpp
15_synchronous_custom_output.cpp
16_synchronous_custom_all_and_datum.cpp)
09_keypoints_from_heatmaps.cpp
10_asynchronous_custom_input.cpp
11_asynchronous_custom_output.cpp
12_asynchronous_custom_input_output_and_datum.cpp
13_synchronous_custom_input.cpp
14_synchronous_custom_preprocessing.cpp
15_synchronous_custom_postprocessing.cpp
16_synchronous_custom_output.cpp
17_synchronous_custom_all_and_datum.cpp)
include(${CMAKE_SOURCE_DIR}/cmake/Utils.cmake)
......
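The renaming above inserts the new `09_keypoints_from_heatmaps.cpp` and shifts every later example up by one. A throwaway helper reproducing that renumbering (purely illustrative, not part of the build):

```python
import re

def shift_example_name(filename, inserted_at=9):
    # `NN_name.cpp` -> bump NN by one for every example at or after the
    # insertion point, keeping the zero-padded two-digit prefix.
    number, rest = re.match(r"(\d+)_(.*)", filename).groups()
    number = int(number)
    if number >= inserted_at:
        number += 1
    return "%02d_%s" % (number, rest)
```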
@@ -5,7 +5,6 @@ import cv2
import os
from sys import platform
import argparse
import numpy as np
# Import Openpose (Windows/Ubuntu/OSX)
dir_path = os.path.dirname(os.path.realpath(__file__))
......
# From Python
# It requires OpenCV installed for Python
import sys
import cv2
import os
from sys import platform
import argparse
import numpy as np
# Import Openpose (Windows/Ubuntu/OSX)
dir_path = os.path.dirname(os.path.realpath(__file__))
try:
# Windows Import
if platform == "win32":
# Change these variables to point to the correct folder (Release/x64 etc.)
sys.path.append(dir_path + '/../../python/openpose/Release');
os.environ['PATH'] = os.environ['PATH'] + ';' + dir_path + '/../../x64/Release;' + dir_path + '/../../bin;'
import pyopenpose as op
else:
# Change these variables to point to the correct folder (Release/x64 etc.)
sys.path.append('../../python');
# If you run `make install` (default path is `/usr/local/python` for Ubuntu), you can also access the OpenPose/python module from there. This will install OpenPose and the python library at your desired installation path. Ensure that this is in your python path in order to use it.
# sys.path.append('/usr/local/python')
from openpose import pyopenpose as op
except ImportError as e:
print('Error: OpenPose library could not be found. Did you enable `BUILD_PYTHON` in CMake and have this Python script in the right folder?')
raise e
# Flags
parser = argparse.ArgumentParser()
parser.add_argument("--image_path", default="../../../examples/media/COCO_val2014_000000000192.jpg", help="Process an image. Read all standard formats (jpg, png, bmp, etc.).")
args = parser.parse_known_args()
# Custom Params (refer to include/openpose/flags.hpp for more parameters)
params = dict()
params["model_folder"] = "../../../models/"
params["heatmaps_add_parts"] = True
params["heatmaps_add_bkg"] = True
params["heatmaps_add_PAFs"] = True
params["heatmaps_scale"] = 2
# Forward any remaining command-line flags into params
for i in range(0, len(args[1])):
curr_item = args[1][i]
if i != len(args[1])-1: next_item = args[1][i+1]
else: next_item = "1"
if "--" in curr_item and "--" in next_item:
key = curr_item.replace('-','')
if key not in params: params[key] = "1"
elif "--" in curr_item and "--" not in next_item:
key = curr_item.replace('-','')
if key not in params: params[key] = next_item
# Construct it from system arguments
# op.init_argv(args[1])
# oppython = op.OpenposePython()
# Starting OpenPose
opWrapper = op.WrapperPython()
opWrapper.configure(params)
opWrapper.start()
# Process Image
datum = op.Datum()
imageToProcess = cv2.imread(args[0].image_path)
datum.cvInputData = imageToProcess
opWrapper.emplaceAndPop([datum])
# Process outputs
outputImageF = (datum.inputNetData[0].copy())[0,:,:,:] + 0.5
outputImageF = cv2.merge([outputImageF[0,:,:], outputImageF[1,:,:], outputImageF[2,:,:]])
outputImageF = (outputImageF*255.).astype(dtype='uint8')
heatmaps = datum.poseHeatMaps.copy()
heatmaps = (heatmaps).astype(dtype='uint8')
# Display Image
counter = 0
while True:
num_maps = heatmaps.shape[0]
heatmap = heatmaps[counter, :, :].copy()
heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
combined = cv2.addWeighted(outputImageF, 0.5, heatmap, 0.5, 0)
cv2.imshow("OpenPose 1.4.0 - Tutorial Python API", combined)
key = cv2.waitKey(-1)
if key == 27:
break
counter += 1
counter = counter % num_maps
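The `outputImageF` conversion above undoes OpenPose's input preprocessing: the network consumes images centered in [-0.5, 0.5], so adding 0.5 and scaling by 255 recovers a displayable 8-bit image. A pure-Python sketch of the same arithmetic (no OpenPose or OpenCV required; `to_displayable` is a hypothetical helper, not part of the API):

```python
def to_displayable(net_input_rows):
    # Mirror the numpy expression (outputImageF + 0.5) * 255 with 8-bit
    # truncation; int() truncates toward zero just like astype('uint8')
    # does for these non-negative values.
    return [[int((v + 0.5) * 255.) for v in row] for row in net_input_rows]

print(to_displayable([[-0.5, 0.0], [0.25, 0.5]]))  # [[0, 127], [191, 255]]
```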
# From Python
# It requires OpenCV installed for Python
import sys
import cv2
import os
from sys import platform
import argparse
import time
# Import Openpose (Windows/Ubuntu/OSX)
dir_path = os.path.dirname(os.path.realpath(__file__))
try:
# Windows Import
if platform == "win32":
# Change these variables to point to the correct folder (Release/x64 etc.)
sys.path.append(dir_path + '/../../python/openpose/Release')
os.environ['PATH'] = os.environ['PATH'] + ';' + dir_path + '/../../x64/Release;' + dir_path + '/../../bin;'
import pyopenpose as op
else:
# Change these variables to point to the correct folder (Release/x64 etc.)
sys.path.append('../../python')
# If you run `make install` (default path is `/usr/local/python` for Ubuntu), you can also access the OpenPose/python module from there. This will install OpenPose and the python library at your desired installation path. Ensure that this is in your python path in order to use it.
# sys.path.append('/usr/local/python')
from openpose import pyopenpose as op
except ImportError as e:
print('Error: OpenPose library could not be found. Did you enable `BUILD_PYTHON` in CMake and have this Python script in the right folder?')
raise e
# Flags
parser = argparse.ArgumentParser()
parser.add_argument("--image_dir", default="../../../examples/media/", help="Process a directory of images. Read all standard formats (jpg, png, bmp, etc.).")
parser.add_argument("--no_display", default=False, help="Set to disable the visual display.")
args = parser.parse_known_args()
# Custom Params (refer to include/openpose/flags.hpp for more parameters)
params = dict()
params["model_folder"] = "../../../models/"
# Add other flags passed via the command line
for i in range(0, len(args[1])):
curr_item = args[1][i]
if i != len(args[1])-1: next_item = args[1][i+1]
else: next_item = "1"
if "--" in curr_item and "--" in next_item:
key = curr_item.replace('-','')
if key not in params: params[key] = "1"
elif "--" in curr_item and "--" not in next_item:
key = curr_item.replace('-','')
if key not in params: params[key] = next_item
# Construct it from system arguments
# op.init_argv(args[1])
# oppython = op.OpenposePython()
# Starting OpenPose
opWrapper = op.WrapperPython()
opWrapper.configure(params)
opWrapper.start()
# Read frames on directory
imagePaths = op.get_images_on_directory(args[0].image_dir)
start = time.time()
# Process and display images
for imagePath in imagePaths:
datum = op.Datum()
imageToProcess = cv2.imread(imagePath)
datum.cvInputData = imageToProcess
opWrapper.emplaceAndPop([datum])
print("Body keypoints: \n" + str(datum.poseKeypoints))
if not args[0].no_display:
cv2.imshow("OpenPose 1.4.0 - Tutorial Python API", datum.cvOutputData)
key = cv2.waitKey(15)
if key == 27: break
end = time.time()
print("OpenPose demo successfully finished. Total time: " + str(end - start) + " seconds")
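Both scripts share the same loop for forwarding unknown command-line flags into `params`: a `--flag` followed by another `--flag` (or by nothing) becomes `"1"`, otherwise it takes the next token as its value. The heuristic can be factored into a standalone helper (`leftover_args_to_params` is a hypothetical name, not an OpenPose function):

```python
def leftover_args_to_params(leftover):
    # Map e.g. ['--face', '--net_resolution', '-1x368'] to
    # {'face': '1', 'net_resolution': '-1x368'}, as the loop above does.
    params = {}
    for i, curr_item in enumerate(leftover):
        next_item = leftover[i + 1] if i != len(leftover) - 1 else "1"
        if "--" in curr_item and "--" in next_item:
            params.setdefault(curr_item.replace('-', ''), "1")
        elif "--" in curr_item and "--" not in next_item:
            params.setdefault(curr_item.replace('-', ''), next_item)
    return params

print(leftover_args_to_params(["--face", "--net_resolution", "-1x368"]))
```

Note that values which themselves contain `--` cannot be expressed this way, a limitation the original loop shares.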
# From Python
# It requires OpenCV installed for Python
import sys
import cv2
import os
from sys import platform
import argparse
import time
# Import Openpose (Windows/Ubuntu/OSX)
dir_path = os.path.dirname(os.path.realpath(__file__))
try:
# Windows Import
if platform == "win32":
# Change these variables to point to the correct folder (Release/x64 etc.)
sys.path.append(dir_path + '/../../python/openpose/Release')
os.environ['PATH'] = os.environ['PATH'] + ';' + dir_path + '/../../x64/Release;' + dir_path + '/../../bin;'
import pyopenpose as op
else:
# Change these variables to point to the correct folder (Release/x64 etc.)
sys.path.append('../../python')
# If you run `make install` (default path is `/usr/local/python` for Ubuntu), you can also access the OpenPose/python module from there. This will install OpenPose and the python library at your desired installation path. Ensure that this is in your python path in order to use it.
# sys.path.append('/usr/local/python')
from openpose import pyopenpose as op
except ImportError as e:
print('Error: OpenPose library could not be found. Did you enable `BUILD_PYTHON` in CMake and have this Python script in the right folder?')
raise e
# Flags
parser = argparse.ArgumentParser()
parser.add_argument("--image_dir", default="../../../examples/media/", help="Process a directory of images. Read all standard formats (jpg, png, bmp, etc.).")
parser.add_argument("--no_display", default=False, help="Set to disable the visual display.")
args = parser.parse_known_args()
# Custom Params (refer to include/openpose/flags.hpp for more parameters)
params = dict()
params["model_folder"] = "../../../models/"
# Add other flags passed via the command line
for i in range(0, len(args[1])):
curr_item = args[1][i]
if i != len(args[1])-1: next_item = args[1][i+1]
else: next_item = "1"
if "--" in curr_item and "--" in next_item:
key = curr_item.replace('-','')
if key not in params: params[key] = "1"
elif "--" in curr_item and "--" not in next_item:
key = curr_item.replace('-','')
if key not in params: params[key] = next_item
# Construct it from system arguments
# op.init_argv(args[1])
# oppython = op.OpenposePython()
# Starting OpenPose
opWrapper = op.WrapperPython()
opWrapper.configure(params)
opWrapper.start()
# Read frames on directory
imagePaths = op.get_images_on_directory(args[0].image_dir)
# Read number of GPUs in your system
numberGPUs = op.get_gpu_number()
start = time.time()
# Process and display images
for imageBaseId in range(0, len(imagePaths), numberGPUs):
# Create datums
datums = []
# Read and push images into OpenPose wrapper
for gpuId in range(0, numberGPUs):
imageId = imageBaseId+gpuId
if imageId < len(imagePaths):
imagePath = imagePaths[imageBaseId+gpuId]
datum = op.Datum()
imageToProcess = cv2.imread(imagePath)
datum.cvInputData = imageToProcess
datums.append(datum)
opWrapper.waitAndEmplace([datums[-1]])
# Retrieve processed results from OpenPose wrapper
for gpuId in range(0, numberGPUs):
imageId = imageBaseId+gpuId
if imageId < len(imagePaths):
datum = datums[gpuId]
opWrapper.waitAndPop([datum])
print("Body keypoints: \n" + str(datum.poseKeypoints))
if not args[0].no_display:
cv2.imshow("OpenPose 1.4.0 - Tutorial Python API", datum.cvOutputData)
key = cv2.waitKey(15)
if key == 27: break
end = time.time()
print("OpenPose demo successfully finished. Total time: " + str(end - start) + " seconds")
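The multi-GPU example above first emplaces `numberGPUs` images and then pops their results, so one image is in flight per GPU and the final batch may be partial. The index arithmetic of that scheduling can be checked on its own (`gpu_batches` is a hypothetical helper; it reproduces only the loop bounds, not the OpenPose wrapper calls):

```python
def gpu_batches(number_images, number_gpus):
    # Yield the (image_id, gpu_id) pairs processed per batch, mirroring
    # the two inner loops of the multi-GPU example above.
    for image_base_id in range(0, number_images, number_gpus):
        batch = []
        for gpu_id in range(number_gpus):
            image_id = image_base_id + gpu_id
            if image_id < number_images:
                batch.append((image_id, gpu_id))
        yield batch

print(list(gpu_batches(5, 2)))  # [[(0, 0), (1, 1)], [(2, 0), (3, 1)], [(4, 0)]]
```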
### Add Python Test
configure_file(01_body_from_image.py 01_body_from_image.py)
configure_file(02_whole_body_from_image.py 02_whole_body_from_image.py)
configure_file(04_keypoints_from_images.py 04_keypoints_from_images.py)
configure_file(05_keypoints_from_images_multi_gpu.py 05_keypoints_from_images_multi_gpu.py)
configure_file(06_face_from_image.py 06_face_from_image.py)
configure_file(07_hand_from_image.py 07_hand_from_image.py)
configure_file(08_heatmaps_from_image.py 08_heatmaps_from_image.py)
configure_file(openpose_python.py openpose_python.py)
configure_file(1_body_from_image.py 1_body_from_image.py)
configure_file(2_whole_body_from_image.py 2_whole_body_from_image.py)
configure_file(3_keypoints_from_images.py 3_keypoints_from_images.py)
configure_file(4_keypoints_from_images_multi_gpu.py 4_keypoints_from_images_multi_gpu.py)
configure_file(5_heatmaps_from_image.py 5_heatmaps_from_image.py)
configure_file(6_face_from_image.py 6_face_from_image.py)
configure_file(7_hand_from_image.py 7_hand_from_image.py)
......@@ -211,7 +211,7 @@ namespace op
* If it is not empty, OpenPose will not run its internal body pose estimation network and will instead use
* this data as the substitute of its network. The size of this element must match the size of the output of
* its internal network, or it will lead to core dumped (segmentation) errors. You can modify the pose
* estimation flags to match the dimension of both element (e.g., `--net_resolution`, `--scale_number`, etc.).
* estimation flags to match the dimension of both elements (e.g., `--net_resolution`, `--scale_number`, etc.).
*/
Array<float> poseNetOutput;
......
......@@ -89,8 +89,10 @@ DEFINE_double(fps_max, -1., "Maximum processing fram
" possible. Example usage: If OpenPose is displaying images too quickly, this can reduce"
" the speed so the user can analyze better each frame from the GUI.");
// OpenPose Body Pose
DEFINE_bool(body_disable, false, "Disable body keypoint detection. Option only possible for faster (but less accurate) face"
" keypoint detection.");
DEFINE_int32(body, 1, "Select 0 to disable body keypoint detection (e.g., for faster but less accurate face"
" keypoint detection, custom hand detector, etc.), 1 (default) for body keypoint"
" estimation, and 2 to disable its internal body pose estimation network but"
" still run the greedy association parsing algorithm.");
DEFINE_string(model_pose, "BODY_25", "Model to be used. E.g., `COCO` (18 keypoints), `MPI` (15 keypoints, ~10% faster), "
"`MPI_4_layers` (15 keypoints, even faster but less accurate).");
DEFINE_string(net_resolution, "-1x368", "Multiples of 16. If it is increased, the accuracy potentially increases. If it is"
......@@ -123,6 +125,8 @@ DEFINE_bool(part_candidates, false, "Also enable `write_json
" assembled into people). The empty body parts are filled with 0s. Program speed will"
" slightly decrease. Not required for OpenPose, enable it only if you intend to explicitly"
" use this information.");
DEFINE_double(upsampling_ratio, 0., "Upsampling ratio between the `net_resolution` and the output net results. A value"
" less than or equal to 0 (default) will use the network default value (recommended).");
// OpenPose Face
DEFINE_bool(face, false, "Enables face keypoint detection. It will share some parameters from the body pose, e.g."
" `model_folder`. Note that this will considerably slow down the performance and increase"
......
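The new `--upsampling_ratio` flag controls how much the output maps are upsampled relative to `net_resolution`; elsewhere in this commit the effective ratio is computed as `upsamplingRatio / getPoseNetDecreaseFactor(...)`, with values <= 0 keeping the network default. A pure-Python sketch of that sizing rule, assuming the BODY_25 stride of 8 (`output_net_size` is a hypothetical helper used only for illustration):

```python
NET_STRIDE = 8  # assumed decrease factor (stride) of the BODY_25 network

def output_net_size(net_input_wh, upsampling_ratio):
    # A ratio <= 0 keeps the network default output size; otherwise the
    # output maps are scaled by upsampling_ratio / stride.
    ratio = 1.0 if upsampling_ratio <= 0 else upsampling_ratio / NET_STRIDE
    return tuple(int(round(ratio * v)) for v in net_input_wh)

print(output_net_size((656, 368), 0))  # (656, 368): network default
print(output_net_size((656, 368), 4))  # (328, 184): half-resolution maps
```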
......@@ -16,6 +16,7 @@ namespace op
const ScaleMode heatMapScaleMode = ScaleMode::ZeroToOne,
const bool addPartCandidates = false, const bool maximizePositives = false,
const std::string& protoTxtPath = "", const std::string& caffeModelPath = "",
const float upsamplingRatio = 0.f, const bool enableNet = true,
const bool enableGoogleLogging = true);
virtual ~PoseExtractorCaffe();
......@@ -26,7 +27,7 @@ namespace op
* @param poseNetOutput If it is not empty, OpenPose will not run its internal body pose estimation network
* and will instead use this data as the substitute of its network. The size of this element must match the
* size of the output of its internal network, or it will lead to core dumped (segmentation) errors. You can
* modify the pose estimation flags to match the dimension of both element (e.g., `--net_resolution`,
* modify the pose estimation flags to match the dimension of both elements (e.g., `--net_resolution`,
* `--scale_number`, etc.).
*/
void forwardPass(
......
......@@ -10,6 +10,8 @@
namespace op
{
OP_API PoseMode flagsToPoseMode(const int poseModeInt);
OP_API PoseModel flagsToPoseModel(const std::string& poseModeString);
OP_API ScaleMode flagsToScaleMode(const int keypointScaleMode);
......
......@@ -3,6 +3,14 @@
namespace op
{
enum class PoseMode : unsigned char
{
Disabled = 0,
Enabled,
NoNetwork,
Size,
};
enum class Detector : unsigned char
{
Body = 0,
......
......@@ -259,7 +259,7 @@ namespace op
std::vector<TWorker> cpuRenderers;
poseExtractorsWs.clear();
poseExtractorsWs.resize(numberThreads);
if (wrapperStructPose.enable)
if (wrapperStructPose.poseMode != PoseMode::Disabled)
{
// Pose estimators
for (auto gpuId = 0; gpuId < numberThreads; gpuId++)
......@@ -268,6 +268,7 @@ namespace op
wrapperStructPose.heatMapTypes, wrapperStructPose.heatMapScaleMode,
wrapperStructPose.addPartCandidates, wrapperStructPose.maximizePositives,
wrapperStructPose.protoTxtPath, wrapperStructPose.caffeModelPath,
wrapperStructPose.upsamplingRatio, wrapperStructPose.poseMode == PoseMode::Enabled,
wrapperStructPose.enableGoogleLogging
));
......@@ -359,7 +360,7 @@ namespace op
if (wrapperStructFace.detector == Detector::Body)
{
// Sanity check
if (!wrapperStructPose.enable)
if (wrapperStructPose.poseMode == PoseMode::Disabled)
error("Body keypoint detection is disabled but face Detector is set to Body. Either"
" re-enable OpenPose body or select a different face Detector (`--face_detector`).",
__LINE__, __FUNCTION__, __FILE__);
......@@ -414,7 +415,7 @@ namespace op
// Sanity check
if ((wrapperStructHand.detector == Detector::BodyWithTracking
|| wrapperStructHand.detector == Detector::Body)
&& !wrapperStructPose.enable)
&& wrapperStructPose.poseMode == PoseMode::Disabled)
error("Body keypoint detection is disabled but hand Detector is set to Body. Either"
" re-enable OpenPose body or select a different hand Detector (`--hand_detector`).",
__LINE__, __FUNCTION__, __FILE__);
......
......@@ -6,6 +6,7 @@
#include <openpose/pose/enumClasses.hpp>
#include <openpose/pose/poseParameters.hpp>
#include <openpose/pose/poseParametersRender.hpp>
#include <openpose/wrapper/enumClasses.hpp>
namespace op
{
......@@ -18,10 +19,10 @@ namespace op
{
/**
* Whether to extract body.
* It might be optionally disabled if only face keypoint detection is required. Otherwise, it must be always
* true.
* It might be optionally disabled for very few cases (e.g., if only face keypoint detection is desired for
* speedup while reducing its accuracy). Otherwise, it must be always enabled.
*/
bool enable;
PoseMode poseMode;
/**
* CCN (Conv Net) input size.
......@@ -187,6 +188,12 @@ namespace op
*/
std::string caffeModelPath;
/**
* The image upsampling ratio. The network stride is 8, so 8 is the ideal value to maximize the
* speed/accuracy trade-off.
*/
float upsamplingRatio;
/**
* Whether to internally enable Google Logging.
* This option is only applicable if Caffe is used.
......@@ -202,7 +209,7 @@ namespace op
* Since all the elements of the struct are public, they can also be manually filled.
*/
WrapperStructPose(
const bool enable = true, const Point<int>& netInputSize = Point<int>{656, 368},
const PoseMode poseMode = PoseMode::Enabled, const Point<int>& netInputSize = Point<int>{656, 368},
const Point<int>& outputSize = Point<int>{-1, -1},
const ScaleMode keypointScaleMode = ScaleMode::InputResolution, const int gpuNumber = -1,
const int gpuNumberStart = 0, const int scalesNumber = 1, const float scaleGap = 0.15f,
......@@ -212,8 +219,8 @@ namespace op
const std::string& modelFolder = "models/", const std::vector<HeatMapType>& heatMapTypes = {},
const ScaleMode heatMapScaleMode = ScaleMode::ZeroToOne, const bool addPartCandidates = false,
const float renderThreshold = 0.05f, const int numberPeopleMax = -1, const bool maximizePositives = false,
const double fpsMax = -1., const std::string& protoTxtPath = "",
const std::string& caffeModelPath = "", const bool enableGoogleLogging = true);
const double fpsMax = -1., const std::string& protoTxtPath = "", const std::string& caffeModelPath = "",
const float upsamplingRatio = 0.f, const bool enableGoogleLogging = true);
};
}
......
......@@ -79,6 +79,8 @@ public:
const auto faceNetInputSize = op::flagsToPoint(FLAGS_face_net_resolution, "368x368 (multiples of 16)");
// handNetInputSize
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// poseMode
const auto poseMode = op::flagsToPoseMode(FLAGS_body);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
......@@ -101,12 +103,12 @@ public:
// Pose configuration (use WrapperStructPose{} for default and recommended configuration)
const op::WrapperStructPose wrapperStructPose{
!FLAGS_body_disable, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
poseMode, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
FLAGS_scale_number, (float)FLAGS_scale_gap, op::flagsToRenderMode(FLAGS_render_pose, multipleView),
poseModel, !FLAGS_disable_blending, (float)FLAGS_alpha_pose, (float)FLAGS_alpha_heatmap,
FLAGS_part_to_show, FLAGS_model_folder, heatMapTypes, heatMapScaleMode, FLAGS_part_candidates,
(float)FLAGS_render_threshold, FLAGS_number_people_max, FLAGS_maximize_positives, FLAGS_fps_max,
FLAGS_prototxt_path, FLAGS_caffemodel_path, enableGoogleLogging};
FLAGS_prototxt_path, FLAGS_caffemodel_path, (float)FLAGS_upsampling_ratio, enableGoogleLogging};
opWrapper->configure(wrapperStructPose);
// Face configuration (use op::WrapperStructFace{} to disable it)
const op::WrapperStructFace wrapperStructFace{
......@@ -240,6 +242,7 @@ PYBIND11_MODULE(pyopenpose, m) {
.def_readwrite("cameraMatrix", &op::Datum::cameraMatrix)
.def_readwrite("cameraExtrinsics", &op::Datum::cameraExtrinsics)
.def_readwrite("cameraIntrinsics", &op::Datum::cameraIntrinsics)
.def_readwrite("poseNetOutput", &op::Datum::poseNetOutput)
.def_readwrite("scaleInputToNetInputs", &op::Datum::scaleInputToNetInputs)
.def_readwrite("netInputSizes", &op::Datum::netInputSizes)
.def_readwrite("scaleInputToOutput", &op::Datum::scaleInputToOutput)
......@@ -430,4 +433,3 @@ template <> struct type_caster<cv::Mat> {
}} // namespace pybind11::detail
#endif
......@@ -47,39 +47,43 @@ if [[ $RUN_EXAMPLES == true ]] ; then
echo " "
echo "Tutorial API C++: Example 8..."
./build/examples/tutorial_api_cpp/08_heatmaps_from_image.bin --hand_net_resolution 32x32 --write_json output/ --write_images output/ --no_display
./build/examples/tutorial_api_cpp/08_heatmaps_from_image.bin --net_resolution -1x32 --write_json output/ --write_images output/ --no_display
echo " "
echo "Tutorial API C++: Example 9..."
./build/examples/tutorial_api_cpp/09_asynchronous_custom_input.bin --image_dir examples/media/ --net_resolution -1x32 --write_json output/ --write_images output/ --display 0
./build/examples/tutorial_api_cpp/09_keypoints_from_heatmaps.bin --net_resolution -1x32 --write_json output/ --write_images output/ --no_display
echo " "
echo "Tutorial API C++: Example 10..."
./build/examples/tutorial_api_cpp/10_asynchronous_custom_output.bin --image_dir examples/media/ --net_resolution -1x32 --write_json output/ --write_images output/ --no_display
./build/examples/tutorial_api_cpp/10_asynchronous_custom_input.bin --image_dir examples/media/ --net_resolution -1x32 --write_json output/ --write_images output/ --display 0
echo " "
echo "Tutorial API C++: Example 11..."
./build/examples/tutorial_api_cpp/11_asynchronous_custom_input_output_and_datum.bin --image_dir examples/media/ --net_resolution -1x32 --write_json output/ --write_images output/ --no_display
./build/examples/tutorial_api_cpp/11_asynchronous_custom_output.bin --image_dir examples/media/ --net_resolution -1x32 --write_json output/ --write_images output/ --no_display
echo " "
echo "Tutorial API C++: Example 12..."
./build/examples/tutorial_api_cpp/12_synchronous_custom_input.bin --image_dir examples/media/ --net_resolution -1x32 --write_json output/ --write_images output/ --display 0
./build/examples/tutorial_api_cpp/12_asynchronous_custom_input_output_and_datum.bin --image_dir examples/media/ --net_resolution -1x32 --write_json output/ --write_images output/ --no_display
echo " "
echo "Tutorial API C++: Example 13..."
./build/examples/tutorial_api_cpp/13_synchronous_custom_preprocessing.bin --image_dir examples/media/ --net_resolution -1x32 --write_json output/ --write_images output/ --display 0
./build/examples/tutorial_api_cpp/13_synchronous_custom_input.bin --image_dir examples/media/ --net_resolution -1x32 --write_json output/ --write_images output/ --display 0
echo " "
echo "Tutorial API C++: Example 14..."
./build/examples/tutorial_api_cpp/14_synchronous_custom_postprocessing.bin --image_dir examples/media/ --net_resolution -1x32 --write_json output/ --write_images output/ --display 0
./build/examples/tutorial_api_cpp/14_synchronous_custom_preprocessing.bin --image_dir examples/media/ --net_resolution -1x32 --write_json output/ --write_images output/ --display 0
echo " "
echo "Tutorial API C++: Example 15..."
./build/examples/tutorial_api_cpp/15_synchronous_custom_output.bin --image_dir examples/media/ --net_resolution -1x32 --write_json output/ --write_images output/ --no_display
./build/examples/tutorial_api_cpp/15_synchronous_custom_postprocessing.bin --image_dir examples/media/ --net_resolution -1x32 --write_json output/ --write_images output/ --display 0
echo " "
echo "Tutorial API C++: Example 16..."
./build/examples/tutorial_api_cpp/16_synchronous_custom_all_and_datum.bin --image_dir examples/media/ --net_resolution -1x32 --write_json output/ --write_images output/ --no_display
./build/examples/tutorial_api_cpp/16_synchronous_custom_output.bin --image_dir examples/media/ --net_resolution -1x32 --write_json output/ --write_images output/ --no_display
echo " "
echo "Tutorial API C++: Example 17..."
./build/examples/tutorial_api_cpp/17_synchronous_custom_all_and_datum.bin --image_dir examples/media/ --net_resolution -1x32 --write_json output/ --write_images output/ --no_display
echo " "
# Python examples
......
......@@ -63,9 +63,18 @@ namespace op
try
{
#ifdef USE_CAFFE
// Get updated size
std::vector<int> arraySize;
// If batch size = 1 --> E.g., array.getSize() == {78, 368, 368}
if (array.getNumberDimensions() == 3)
// Add 1: arraySize = {1}
arraySize.emplace_back(1);
// Add {78, 368, 368}: arraySize = {1, 78, 368, 368}
for (const auto& sizeI : array.getSize())
arraySize.emplace_back(sizeI);
// Construct spImpl
spImpl.reset(new ImplArrayCpuGpu{});
spImpl->upCaffeBlobT.reset(new caffe::Blob<T>{array.getSize()});
spImpl->upCaffeBlobT.reset(new caffe::Blob<T>{arraySize});
spImpl->pCaffeBlobT = spImpl->upCaffeBlobT.get();
// Copy data
// CPU copy
......
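The `ArrayCpuGpu` change above promotes a 3-D array such as {78, 368, 368} to the 4-D Caffe blob shape {1, 78, 368, 368} by prepending a batch dimension of 1, while 4-D shapes pass through unchanged. The shape fix-up is trivial to state in Python (`to_caffe_blob_shape` is a hypothetical name used only for illustration):

```python
def to_caffe_blob_shape(shape):
    # Prepend a batch dimension of 1 when given a 3-D shape, matching the
    # arraySize construction in the C++ snippet above.
    return [1] + list(shape) if len(shape) == 3 else list(shape)

print(to_caffe_blob_shape([78, 368, 368]))     # [1, 78, 368, 368]
print(to_caffe_blob_shape([2, 78, 368, 368]))  # unchanged
```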
......@@ -47,9 +47,9 @@ namespace op
}
template <typename T>
__global__ void writeResultKernel(T* output, const int length, const int* const kernelPtr,
const T* const sourcePtr, const int width, const int height, const int maxPeaks,
const T offsetX, const T offsetY)
__global__ void writeResultKernel(
T* output, const int length, const int* const kernelPtr, const T* const sourcePtr, const int width,
const int height, const int maxPeaks, const T offsetX, const T offsetY)
{
__shared__ int local[THREADS_PER_BLOCK+1]; // one more
const auto globalIdx = blockIdx.x * blockDim.x + threadIdx.x;
......@@ -226,9 +226,9 @@ namespace op
thrust::exclusive_scan(kernelThrustPtr, kernelThrustPtr + imageOffset, kernelThrustPtr);
// This returns targetPtrOffsetted, with the NMS applied over it
writeResultKernel<<<numBlocks1D, threadsPerBlock1D>>>(targetPtrOffsetted, imageOffset,
kernelPtrOffsetted, sourcePtrOffsetted,
width, height, maxPeaks, offset.x, offset.y);
writeResultKernel<<<numBlocks1D, threadsPerBlock1D>>>(
targetPtrOffsetted, imageOffset, kernelPtrOffsetted, sourcePtrOffsetted, width, height,
maxPeaks, offset.x, offset.y);
}
// // Sort based on score
......
......@@ -34,6 +34,8 @@ namespace op
const std::string mModelFolder;
const std::string mProtoTxtPath;
const std::string mCaffeModelPath;
const float mUpsamplingRatio;
const bool mEnableNet;
const bool mEnableGoogleLogging;
// General parameters
std::vector<std::shared_ptr<Net>> spNets;
......@@ -42,7 +44,6 @@ namespace op
std::shared_ptr<BodyPartConnectorCaffe<float>> spBodyPartConnectorCaffe;
std::shared_ptr<MaximumCaffe<float>> spMaximumCaffe;
std::vector<std::vector<int>> mNetInput4DSizes;
std::vector<double> mScaleInputToNetInputs;
// Init with thread
std::vector<std::shared_ptr<ArrayCpuGpu<float>>> spCaffeNetOutputBlobs;
std::shared_ptr<ArrayCpuGpu<float>> spHeatMapsBlob;
......@@ -52,12 +53,14 @@ namespace op
ImplPoseExtractorCaffe(
const PoseModel poseModel, const int gpuId, const std::string& modelFolder,
const std::string& protoTxtPath, const std::string& caffeModelPath,
const bool enableGoogleLogging) :
const float upsamplingRatio, const bool enableNet, const bool enableGoogleLogging) :
mPoseModel{poseModel},
mGpuId{gpuId},
mModelFolder{modelFolder},
mProtoTxtPath{protoTxtPath},
mCaffeModelPath{caffeModelPath},
mUpsamplingRatio{upsamplingRatio},
mEnableNet{enableNet},
mEnableGoogleLogging{enableGoogleLogging},
spResizeAndMergeCaffe{std::make_shared<ResizeAndMergeCaffe<float>>()},
spNmsCaffe{std::make_shared<NmsCaffe<float>>()},
......@@ -93,26 +96,25 @@ namespace op
std::shared_ptr<BodyPartConnectorCaffe<float>>& bodyPartConnectorCaffe,
std::shared_ptr<MaximumCaffe<float>>& maximumCaffe,
std::vector<std::shared_ptr<ArrayCpuGpu<float>>>& caffeNetOutputBlobsShared,
std::shared_ptr<ArrayCpuGpu<float>>& heatMapsBlob,
std::shared_ptr<ArrayCpuGpu<float>>& peaksBlob,
std::shared_ptr<ArrayCpuGpu<float>>& maximumPeaksBlob,
const float scaleInputToNetInput,
const PoseModel poseModel,
const int gpuID)
std::shared_ptr<ArrayCpuGpu<float>>& heatMapsBlob, std::shared_ptr<ArrayCpuGpu<float>>& peaksBlob,
std::shared_ptr<ArrayCpuGpu<float>>& maximumPeaksBlob, const float scaleInputToNetInput,
const PoseModel poseModel, const int gpuId, const float upsamplingRatio)
{
try
{
const auto netDecreaseFactor = (
upsamplingRatio <= 0.f ? getPoseNetDecreaseFactor(poseModel) : upsamplingRatio);
// HeatMaps extractor blob and layer
// Caffe modifies bottom - Heatmap gets resized
const auto caffeNetOutputBlobs = arraySharedToPtr(caffeNetOutputBlobsShared);
resizeAndMergeCaffe->Reshape(
caffeNetOutputBlobs, {heatMapsBlob.get()},
getPoseNetDecreaseFactor(poseModel), 1.f/scaleInputToNetInput, true, gpuID);
netDecreaseFactor, 1.f/scaleInputToNetInput, true, gpuId);
// Pose extractor blob and layer
nmsCaffe->Reshape({heatMapsBlob.get()}, {peaksBlob.get()}, getPoseMaxPeaks(),
getPoseNumberBodyParts(poseModel), gpuID);
getPoseNumberBodyParts(poseModel), gpuId);
// Pose extractor blob and layer
bodyPartConnectorCaffe->Reshape({heatMapsBlob.get(), peaksBlob.get()}, gpuID);
bodyPartConnectorCaffe->Reshape({heatMapsBlob.get(), peaksBlob.get()}, gpuId);
if (TOP_DOWN_REFINEMENT)
maximumCaffe->Reshape({heatMapsBlob.get()}, {maximumPeaksBlob.get()});
// Cuda check
......@@ -168,11 +170,11 @@ namespace op
const PoseModel poseModel, const std::string& modelFolder, const int gpuId,
const std::vector<HeatMapType>& heatMapTypes, const ScaleMode heatMapScaleMode, const bool addPartCandidates,
const bool maximizePositives, const std::string& protoTxtPath, const std::string& caffeModelPath,
const bool enableGoogleLogging) :
const float upsamplingRatio, const bool enableNet, const bool enableGoogleLogging) :
PoseExtractorNet{poseModel, heatMapTypes, heatMapScaleMode, addPartCandidates, maximizePositives}
#ifdef USE_CAFFE
, upImpl{new ImplPoseExtractorCaffe{poseModel, gpuId, modelFolder, protoTxtPath, caffeModelPath,
enableGoogleLogging}}
upsamplingRatio, enableNet, enableGoogleLogging}}
#endif
{
try
......@@ -211,16 +213,19 @@ namespace op
try
{
#ifdef USE_CAFFE
// Logging
log("Starting initialization on thread.", Priority::Low, __LINE__, __FUNCTION__, __FILE__);
// Initialize Caffe net
addCaffeNetOnThread(
upImpl->spNets, upImpl->spCaffeNetOutputBlobs, upImpl->mPoseModel, upImpl->mGpuId,
upImpl->mModelFolder, upImpl->mProtoTxtPath, upImpl->mCaffeModelPath,
upImpl->mEnableGoogleLogging);
#ifdef USE_CUDA
cudaCheck(__LINE__, __FUNCTION__, __FILE__);
#endif
if (upImpl->mEnableNet)
{
// Logging
log("Starting initialization on thread.", Priority::Low, __LINE__, __FUNCTION__, __FILE__);
// Initialize Caffe net
addCaffeNetOnThread(
upImpl->spNets, upImpl->spCaffeNetOutputBlobs, upImpl->mPoseModel, upImpl->mGpuId,
upImpl->mModelFolder, upImpl->mProtoTxtPath, upImpl->mCaffeModelPath,
upImpl->mEnableGoogleLogging);
#ifdef USE_CUDA
cudaCheck(__LINE__, __FUNCTION__, __FILE__);
#endif
}
// Initialize blobs
upImpl->spHeatMapsBlob = {std::make_shared<ArrayCpuGpu<float>>(1,1,1,1)};
upImpl->spPeaksBlob = {std::make_shared<ArrayCpuGpu<float>>(1,1,1,1)};
......@@ -255,56 +260,72 @@ namespace op
if (inputNetData.size() != scaleInputToNetInputs.size())
error("Size(inputNetData) must be the same as size(scaleInputToNetInputs).",
__LINE__, __FUNCTION__, __FILE__);
if (poseNetOutput.empty() != upImpl->mEnableNet)
{
const std::string errorMsg = ". Either use OpenPose default network (`--body 1`) or fill the"
" `poseNetOutput` argument (only 1 of those 2, not both).";
if (poseNetOutput.empty())
error("The argument poseNetOutput cannot be empty if the internal network is disabled" + errorMsg,
__LINE__, __FUNCTION__, __FILE__);
else
error("The argument poseNetOutput is not empty and you have also explicitly chosen to run"
" the OpenPose network" + errorMsg, __LINE__, __FUNCTION__, __FILE__);
}
// Resize std::vectors if required
const auto numberScales = inputNetData.size();
upImpl->mNetInput4DSizes.resize(numberScales);
while (upImpl->spNets.size() < numberScales)
addCaffeNetOnThread(
upImpl->spNets, upImpl->spCaffeNetOutputBlobs, upImpl->mPoseModel, upImpl->mGpuId,
upImpl->mModelFolder, upImpl->mProtoTxtPath, upImpl->mCaffeModelPath, false);
// Process each image
if (poseNetOutput.empty())
// Process each image - Caffe deep network
if (upImpl->mEnableNet)
{
while (upImpl->spNets.size() < numberScales)
addCaffeNetOnThread(
upImpl->spNets, upImpl->spCaffeNetOutputBlobs, upImpl->mPoseModel, upImpl->mGpuId,
upImpl->mModelFolder, upImpl->mProtoTxtPath, upImpl->mCaffeModelPath, false);
for (auto i = 0u ; i < inputNetData.size(); i++)
{
// 1. Caffe deep network
// ~80ms
upImpl->spNets.at(i)->forwardPass(inputNetData[i]);
// Reshape blobs if required
// Note: In order to resize to input size to have same results as Matlab, uncomment the
// commented lines
// Note: For dynamic sizes (e.g., a folder with images of different aspect ratio)
const auto changedVectors = !vectorsAreEqual(
upImpl->mNetInput4DSizes.at(i), inputNetData[i].getSize());
if (changedVectors)
// || !vectorsAreEqual(upImpl->mScaleInputToNetInputs, scaleInputToNetInputs))
{
upImpl->mNetInput4DSizes.at(i) = inputNetData[i].getSize();
// upImpl->mScaleInputToNetInputs = scaleInputToNetInputs;
reshapePoseExtractorCaffe(upImpl->spResizeAndMergeCaffe, upImpl->spNmsCaffe,
upImpl->spBodyPartConnectorCaffe, upImpl->spMaximumCaffe,
upImpl->spCaffeNetOutputBlobs, upImpl->spHeatMapsBlob,
upImpl->spPeaksBlob, upImpl->spMaximumPeaksBlob,
1.f, upImpl->mPoseModel, upImpl->mGpuId);
// scaleInputToNetInputs[i] vs. 1.f
}
// Get scale net to output (i.e., image input)
if (changedVectors || TOP_DOWN_REFINEMENT)
mNetOutputSize = Point<int>{upImpl->mNetInput4DSizes[0][3],
upImpl->mNetInput4DSizes[0][2]};
}
}
// If custom network output
else
{
// Sanity check
if (inputNetData.size() != 1u)
error("If poseNetOutput is provided, size(inputNetData) must be 1 ("
+ std::to_string(inputNetData.size()) + " vs. 1).", __LINE__, __FUNCTION__, __FILE__);
// Copy heatmap information
upImpl->spCaffeNetOutputBlobs.clear();
const bool copyFromGpu = false;
upImpl->spCaffeNetOutputBlobs.emplace_back(
std::make_shared<ArrayCpuGpu<float>>(poseNetOutput, copyFromGpu));
}
// Reshape blobs if required
for (auto i = 0u ; i < inputNetData.size(); i++)
{
// Reshape blobs if required - For dynamic sizes (e.g., images of different aspect ratio)
const auto changedVectors = !vectorsAreEqual(
upImpl->mNetInput4DSizes.at(i), inputNetData[i].getSize());
if (changedVectors)
{
upImpl->mNetInput4DSizes.at(i) = inputNetData[i].getSize();
reshapePoseExtractorCaffe(
upImpl->spResizeAndMergeCaffe, upImpl->spNmsCaffe, upImpl->spBodyPartConnectorCaffe,
upImpl->spMaximumCaffe, upImpl->spCaffeNetOutputBlobs, upImpl->spHeatMapsBlob,
upImpl->spPeaksBlob, upImpl->spMaximumPeaksBlob, 1.f, upImpl->mPoseModel,
upImpl->mGpuId, upImpl->mUpsamplingRatio);
// In order to resize to input size to have same results as Matlab
// scaleInputToNetInputs[i] vs. 1.f
}
// Get scale net to output (i.e., image input)
const auto ratio = (
upImpl->mUpsamplingRatio <= 0.f
? 1 : upImpl->mUpsamplingRatio / getPoseNetDecreaseFactor(mPoseModel));
if (changedVectors || TOP_DOWN_REFINEMENT)
mNetOutputSize = Point<int>{
positiveIntRound(ratio*upImpl->mNetInput4DSizes[0][3]),
positiveIntRound(ratio*upImpl->mNetInput4DSizes[0][2])};
}
// 2. Resize heat maps + merge different scales
// ~5ms (GPU) / ~20ms (CPU)
const auto caffeNetOutputBlobs = arraySharedToPtr(upImpl->spCaffeNetOutputBlobs);
@@ -315,8 +336,8 @@ namespace op
// Note: In order to resize to input size, (un)comment the following lines
const auto scaleProducerToNetInput = resizeGetScaleFactor(inputDataSize, mNetOutputSize);
const Point<int> netSize{
(int)std::round(scaleProducerToNetInput*inputDataSize.x),
(int)std::round(scaleProducerToNetInput*inputDataSize.y)};
positiveIntRound(scaleProducerToNetInput*inputDataSize.x),
positiveIntRound(scaleProducerToNetInput*inputDataSize.y)};
mScaleNetToOutput = {(float)resizeGetScaleFactor(netSize, inputDataSize)};
// mScaleNetToOutput = 1.f;
// 3. Get peaks by Non-Maximum Suppression
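The scale bookkeeping in this hunk (the `positiveIntRound` calls replacing the earlier `(int)std::round` casts, plus `resizeGetScaleFactor`) can be sketched with stand-in helpers. Their exact behavior here is an assumption inferred from how the surrounding code uses them, not OpenPose's definitive implementation:

```cpp
#include <algorithm>
#include <cassert>

// Stand-in for op::positiveIntRound (assumed): round-half-up for
// non-negative values, replacing the earlier (int)std::round casts.
int positiveIntRound(const float value)
{
    return static_cast<int>(value + 0.5f);
}

struct PointI { int x; int y; };

// Stand-in for op::resizeGetScaleFactor (assumed behavior): the single
// scale that maps `initial` into `target` while preserving aspect ratio.
double resizeGetScaleFactor(const PointI& initial, const PointI& target)
{
    const auto ratioWidth = (target.x - 1) / static_cast<double>(initial.x - 1);
    const auto ratioHeight = (target.y - 1) / static_cast<double>(initial.y - 1);
    return std::min(ratioWidth, ratioHeight);
}
```

With these, `netSize` above is just `positiveIntRound(scale * inputDataSize.{x,y})`, and `mScaleNetToOutput` is the inverse mapping back to the input resolution.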
@@ -435,13 +456,13 @@ namespace op
if (!vectorsAreEqual(upImpl->mNetInput4DSizes.at(0), inputNetDataRoi.getSize()))
{
upImpl->mNetInput4DSizes.at(0) = inputNetDataRoi.getSize();
reshapePoseExtractorCaffe(upImpl->spResizeAndMergeCaffe, upImpl->spNmsCaffe,
upImpl->spBodyPartConnectorCaffe, upImpl->spMaximumCaffe,
// upImpl->spCaffeNetOutputBlobs,
caffeNetOutputBlob,
upImpl->spHeatMapsBlob, upImpl->spPeaksBlob,
upImpl->spMaximumPeaksBlob, 1.f, upImpl->mPoseModel,
upImpl->mGpuId);
reshapePoseExtractorCaffe(
upImpl->spResizeAndMergeCaffe, upImpl->spNmsCaffe,
upImpl->spBodyPartConnectorCaffe, upImpl->spMaximumCaffe,
// upImpl->spCaffeNetOutputBlobs,
caffeNetOutputBlob, upImpl->spHeatMapsBlob, upImpl->spPeaksBlob,
upImpl->spMaximumPeaksBlob, 1.f, upImpl->mPoseModel, upImpl->mGpuId,
upImpl->mUpsamplingRatio);
}
// 2. Resize heat maps + merge different scales
const auto caffeNetOutputBlobs = arraySharedToPtr(caffeNetOutputBlob);
......
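The new `mUpsamplingRatio` member changes how `mNetOutputSize` is derived in the loop above. A minimal sketch of that ratio rule, with an assumed stand-in for `getPoseNetDecreaseFactor` (the CNN output stride, 8 for the default body models):

```cpp
#include <cassert>

int positiveIntRound(const float value) { return static_cast<int>(value + 0.5f); }

// Stand-in (assumed): output stride of the pose CNN, so raw heatmaps are
// 1/8 of the net input resolution for the default body models.
float getPoseNetDecreaseFactor() { return 8.f; }

// Mirrors the ratio rule above: upsamplingRatio <= 0 keeps the net input
// resolution; a positive value rescales the heatmaps by ratio/stride.
int netOutputDimension(const int netInputDim, const float upsamplingRatio)
{
    const auto ratio = (upsamplingRatio <= 0.f
        ? 1.f : upsamplingRatio / getPoseNetDecreaseFactor());
    return positiveIntRound(ratio * netInputDim);
}
```

E.g., a ratio equal to the stride reproduces the net input resolution, while smaller positive values shrink the merged heatmaps (and the memory they use).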
@@ -4,6 +4,27 @@
namespace op
{
PoseMode flagsToPoseMode(const int poseModeInt)
{
try
{
log("", Priority::Low, __LINE__, __FUNCTION__, __FILE__);
if (poseModeInt >= 0 && poseModeInt < (int)PoseMode::Size)
return (PoseMode)poseModeInt;
else
{
error("Value (" + std::to_string(poseModeInt) + ") does not correspond with any PoseMode.",
__LINE__, __FUNCTION__, __FILE__);
return PoseMode::Enabled;
}
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return PoseMode::Enabled;
}
}
PoseModel flagsToPoseModel(const std::string& poseModeString)
{
try
......
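`flagsToPoseMode` follows the usual OpenPose flag-to-enum pattern: range-check against the `Size` sentinel, cast, and report an error otherwise. A self-contained sketch of that pattern (the `PoseMode` enumerator values shown are assumptions based on this diff, and exceptions replace OpenPose's `error()` helper):

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Stand-in enum (assumed values) mirroring the PoseMode this commit adds;
// `Size` is the usual OpenPose sentinel used for range checks.
enum class PoseMode : unsigned char { Disabled = 0, Enabled, NoNetwork, Size };

// Same bounds-check-then-cast pattern as flagsToPoseMode above.
PoseMode flagsToPoseMode(const int poseModeInt)
{
    if (poseModeInt >= 0 && poseModeInt < static_cast<int>(PoseMode::Size))
        return static_cast<PoseMode>(poseModeInt);
    throw std::invalid_argument(
        "Value (" + std::to_string(poseModeInt) + ") does not correspond with any PoseMode.");
}
```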
@@ -106,7 +106,8 @@ namespace op
                error("Writing video is only available if the OpenPose producer is used (i.e."
" producerSharedPtr cannot be a nullptr).",
__LINE__, __FUNCTION__, __FILE__);
if (!wrapperStructPose.enable && !wrapperStructFace.enable && !wrapperStructHand.enable)
if (wrapperStructPose.poseMode == PoseMode::Disabled && !wrapperStructFace.enable
&& !wrapperStructHand.enable)
                error("Body, face, and hand keypoint detectors are disabled. You must enable at least one (i.e.,"
                      " set `--body 1`, select `--face`, or select `--hand`).",
__LINE__, __FUNCTION__, __FILE__);
@@ -122,7 +123,7 @@ namespace op
" `examples/tutorial_api_cpp/` examples, or change the value of `--face_detector` and/or"
" `--hand_detector`.", __LINE__, __FUNCTION__, __FILE__);
// Warning
if (ownDetectorProvided && wrapperStructPose.enable)
if (ownDetectorProvided && wrapperStructPose.poseMode != PoseMode::Disabled)
log("Warning: Body keypoint estimation is enabled while you have also selected to provide your own"
" face and/or hand rectangle detections (`face_detector 2` and/or `hand_detector 2`). Therefore,"
" OpenPose will not detect face and/or hand keypoints based on the body keypoints. Are you sure"
......
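The updated sanity check replaces the old `wrapperStructPose.enable` bool with a `PoseMode` comparison. The predicate it enforces can be sketched as:

```cpp
#include <cassert>

enum class PoseMode { Disabled = 0, Enabled };

// Mirrors the wrapper sanity check above: with body now controlled by
// PoseMode instead of a bool, at least one of body/face/hand must be on,
// otherwise the pipeline has nothing to run.
bool atLeastOneDetectorEnabled(
    const PoseMode poseMode, const bool faceEnabled, const bool handEnabled)
{
    return poseMode != PoseMode::Disabled || faceEnabled || handEnabled;
}
```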
@@ -3,15 +3,16 @@
namespace op
{
WrapperStructPose::WrapperStructPose(
const bool enable_, const Point<int>& netInputSize_, const Point<int>& outputSize_,
const PoseMode poseMode_, const Point<int>& netInputSize_, const Point<int>& outputSize_,
const ScaleMode keypointScaleMode_, const int gpuNumber_, const int gpuNumberStart_, const int scalesNumber_,
const float scaleGap_, const RenderMode renderMode_, const PoseModel poseModel_,
const bool blendOriginalFrame_, const float alphaKeypoint_, const float alphaHeatMap_,
const int defaultPartToRender_, const std::string& modelFolder_, const std::vector<HeatMapType>& heatMapTypes_,
const ScaleMode heatMapScaleMode_, const bool addPartCandidates_, const float renderThreshold_,
const int numberPeopleMax_, const bool maximizePositives_, const double fpsMax_,
const std::string& protoTxtPath_, const std::string& caffeModelPath_, const bool enableGoogleLogging_) :
enable{enable_},
const std::string& protoTxtPath_, const std::string& caffeModelPath_, const float upsamplingRatio_,
const bool enableGoogleLogging_) :
poseMode{poseMode_},
netInputSize{netInputSize_},
outputSize{outputSize_},
keypointScaleMode{keypointScaleMode_},
@@ -35,6 +36,7 @@ namespace op
fpsMax{fpsMax_},
protoTxtPath{protoTxtPath_},
caffeModelPath{caffeModelPath_},
upsamplingRatio{upsamplingRatio_},
enableGoogleLogging{enableGoogleLogging_}
{
}
......
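For API users, the visible change in `WrapperStructPose` is the first parameter (`bool enable_` becomes `PoseMode poseMode_`) and the new `upsamplingRatio_` inserted before `enableGoogleLogging_`. A heavily trimmed stand-in showing just those members (the default values are assumptions; every other parameter of the real constructor is elided):

```cpp
#include <cassert>

enum class PoseMode { Disabled = 0, Enabled, Size };

// Heavily trimmed stand-in for WrapperStructPose: only the members this
// commit changes are kept (poseMode replaces the old bool enable, and
// upsamplingRatio is new).
struct WrapperStructPose
{
    PoseMode poseMode;
    float upsamplingRatio;
    bool enableGoogleLogging;

    explicit WrapperStructPose(
        const PoseMode poseMode_ = PoseMode::Enabled,
        const float upsamplingRatio_ = 0.f, // <= 0 keeps the net input resolution
        const bool enableGoogleLogging_ = true) :
        poseMode{poseMode_},
        upsamplingRatio{upsamplingRatio_},
        enableGoogleLogging{enableGoogleLogging_}
    {
    }
};
```

Callers that previously passed `true`/`false` as the first argument would migrate to `PoseMode::Enabled`/`PoseMode::Disabled`.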