Commit e2995e17 authored by G gineshidalgo99

Added most examples to Travis

Parent 1f13f818
......@@ -19,6 +19,7 @@ OpenPose - Frequently Asked Question (FAQ)
14. [Check Failed for ReadProtoFromBinaryFile (Failed to Parse NetParameter File)](#check-failed-for-readprotofrombinaryfile-failed-to-parse-netparameter-file)
15. [3D OpenPose Returning Wrong Results: 0, NaN, Infinity, etc.](#3d-openpose-returning-wrong-results-0-nan-infinity-etc)
16. [Protobuf Clip Param Caffe Error](#protobuf-clip-param-caffe-error)
17. [The Human Skeleton Looks like Dotted Lines Rather than Solid Lines](#the-human-skeleton-looks-like-dotted-lines-rather-than-solid-lines)
......@@ -143,6 +144,13 @@ COCO model will eventually be removed. BODY_25 model is faster, more accurate, a
F0821 14:26:29.665053 22812 upgrade_proto.cpp:97] Check failed: ReadProtoFromBinaryFile(param_file, param) Failed to parse NetParameter file: models/pose/body_25/pose_iter_584000.caffemodel
```
**A**: This error only happens on some Ubuntu machines. Following #787, compile your own Caffe against an older version of it. The hacky (quick but not recommended) way is to follow [#787#issuecomment-415476837](https://github.com/CMU-Perceptual-Computing-Lab/openpose/issues/787#issuecomment-415476837); the elegant way (compatible with future OpenPose versions) is to build your own Caffe independently, following [doc/installation.md#custom-caffe-ubuntu-only](./installation.md#custom-caffe-ubuntu-only).
**A**: This error has been solved in the latest OpenPose versions. Completely remove OpenPose and re-download the latest version (just cleaning the compilation or removing the `build/` folder will not work).
Note that OpenPose uses a [custom fork of Caffe](https://github.com/CMU-Perceptual-Computing-Lab/caffe) (rather than the official Caffe master), which is only updated if it works on our machines. Currently, this version works on a newly formatted machine (Ubuntu 16.04 LTS) and on all our machines (CUDA 8 and 10 tested). The default GPU version is the master branch, which is also compatible with CUDA 10 without changes (the official Caffe version requires some changes for it). We also use the OpenCL and CPU tags if their CMake flags are selected.
If you want to use your own custom Caffe and it shows this error: this error only happens on some Ubuntu machines. Following #787, compile your own Caffe against an older version of it. The hacky (quick but not recommended) way is to follow [#787#issuecomment-415476837](https://github.com/CMU-Perceptual-Computing-Lab/openpose/issues/787#issuecomment-415476837); the elegant way (compatible with future OpenPose versions) is to build your own Caffe independently, following [doc/installation.md#custom-caffe-ubuntu-only](./installation.md#custom-caffe-ubuntu-only).
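For the standard (non-custom-Caffe) case above, a minimal clean re-install sketch for Ubuntu, assuming the default CMake flow from [doc/installation.md](./installation.md) (adjust paths and build flags to your own setup):
```
# Remove the old copy entirely (cleaning or deleting only build/ is not enough)
rm -rf openpose/
# Re-download the latest version and rebuild from scratch
git clone https://github.com/CMU-Perceptual-Computing-Lab/openpose.git
cd openpose && mkdir build && cd build
cmake .. && make -j"$(nproc)"
```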
### The Human Skeleton Looks like Dotted Lines Rather than Solid Lines
**Q:** When I use the demo to process my images, the skeletons are drawn as dotted lines. How can I make them solid lines?
**A**: Your input image resolution is too small. Either 1) manually upscale your images, or 2) use a bigger `--output_resolution` so that OpenPose upscales them for you (see the hedged example below).
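For instance, assuming the repository's sample images in `examples/media/`, a command along these lines (flag values are illustrative, not prescriptive) renders the output at a larger resolution:
```
# Upscale the rendered output so the skeleton is drawn with solid rather than dotted lines
./build/examples/openpose/openpose.bin --image_dir examples/media/ --output_resolution 1280x720
```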
......@@ -402,6 +402,8 @@ Then, you would have to reduce the `--net_resolution` flag to fit the model into
#### Custom Caffe (Ubuntu Only)
Note that OpenPose uses a [custom fork of Caffe](https://github.com/CMU-Perceptual-Computing-Lab/caffe) (rather than the official Caffe master). Our custom fork is only updated if it works on our machines, but we try to keep it up to date with the latest Caffe version. This version works on a newly formatted machine (Ubuntu 16.04 LTS) and on all our machines (CUDA 8 and 10 tested). The default GPU version is the master branch, which is also compatible with CUDA 10 without changes (the official Caffe version might require some changes for it). We also use the OpenCL and CPU tags if their CMake flags are selected.
We only modified some Caffe compilation flags and minor details. You can use your own Caffe distribution; simply specify the Caffe include path and the library, as shown in the CMake screenshot below, and turn off the `BUILD_CAFFE` variable. Note that cuDNN is required in order to get the maximum possible accuracy in OpenPose.
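For command-line builds, a hedged sketch of the equivalent CMake invocation follows; the Caffe variable names used here are assumptions and should be verified against your CMake cache or the screenshot below:
```
# Sketch only: point OpenPose to an externally built Caffe instead of building its own
cmake .. -DBUILD_CAFFE=OFF \
         -DCaffe_INCLUDE_DIRS=/path/to/your/caffe/include \
         -DCaffe_LIBS=/path/to/your/caffe/build/lib/libcaffe.so
```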
<p align="center">
<img src="media/cmake_installation/im_5.png", width="480">
......
......@@ -38,7 +38,7 @@ bash ./scripts/ubuntu/install_caffe_and_openpose_JetsonTX2_JetPack3.1.sh
## Usage
For now, it is recommended to use an external camera with the demo. To achieve a decent FPS, you need to lower the net resolution:
```
./build/openpose/openpose.bin -camera_resolution 640x480 -net_resolution 128x96
./build/examples/openpose/openpose.bin -camera_resolution 640x480 -net_resolution 128x96
```
To activate hand or face keypoint estimation, add the following options to this command (warning: enabling both simultaneously will cause an out-of-memory error):
......
......@@ -38,7 +38,7 @@ bash ./scripts/ubuntu/install_caffe_and_openpose_JetsonTX2_JetPack3.3.sh
## Usage
For now, it is recommended to use an external camera with the demo. To achieve a decent FPS, you need to lower the net resolution:
```
./build/openpose/openpose.bin -camera_resolution 640x480 -net_resolution 128x96
./build/examples/openpose/openpose.bin -camera_resolution 640x480 -net_resolution 128x96
```
To activate hand or face keypoint estimation, add the following options to this command (warning: enabling both simultaneously will cause an out-of-memory error):
......
......@@ -262,7 +262,7 @@ OpenPose Library - Release Notes
1. Main improvements:
1. Added an initial single-person tracker for further speed-up or visual smoothing (`--tracking` flag).
2. Greedy body-part connector implemented in CUDA: +~30% speed-up in the Nvidia (CUDA) version with default flags and +~10% in the maximum-accuracy configuration. In addition, it provides a small 0.5% accuracy boost (default flags).
3. OpenPose can be built as Unity plugin: Added flag `BUILD_UNITY_SUPPORT` and special Unity code.
3. Unity binding of OpenPose released. OpenPose adds the `BUILD_UNITY_SUPPORT` CMake flag, which enables the special Unity code so OpenPose can be built as a Unity plugin.
4. If the camera is unplugged, the OpenPose GUI and command line will display a warning and try to reconnect it.
5. Wrapper classes simplified and renamed. Wrapper renamed to WrapperT, and Wrapper created as its non-templated equivalent.
6. API and examples improved:
......@@ -270,6 +270,10 @@ OpenPose Library - Release Notes
2. `tutorial_wrapper` renamed to `tutorial_api_cpp`, and new examples were added.
2. `tutorial_python` renamed to `tutorial_api_python`, and new examples were added.
3. `tutorial_pose` and `tutorial_thread` renamed to `tutorial_developer`; not meant to be used by users, but rather for OpenPose developers.
4. Examples no longer end in a core dump if an OpenPose exception occurs during initialization; instead, they are closed and return -1. However, they will still result in a core dump if the exception occurs during multi-threaded execution.
5. Added new examples, including examples to extract face and/or hand keypoints from images.
6. Added the `--no_display` flag for the examples that do not use the OpenPose display.
7. Given that the display can be disabled in all examples, they have all been added to the Travis build so they can be tested.
7. Added a virtual destructor to almost all classes, so they can be inherited. Exceptions (for performance reasons): Array, Point, Rectangle, CvMatToOpOutput, OpOutputToCvMat.
8. Auxiliary classes in errorAndLog turned into namespaces (Profiler must be kept as a class to allow static parameters).
9. Added flag `--frame_step` to allow the user to select the step or gap between processed frames. E.g., `--frame_step 5` would read and process frames 0, 5, 10, etc.
......@@ -289,25 +293,25 @@ OpenPose Library - Release Notes
19. All bash scripts include `#!/bin/bash` to indicate that they are bash scripts.
20. Added flag `--verbose` to plot the progress.
21. Added find_package(Protobuf) to allow specific versions of Protobuf.
22. Examples no longer end in a core dump if an OpenPose exception occurs during initialization; instead, they are closed and return -1. However, they will still result in a core dump if the exception occurs during multi-threaded execution.
23. Video saving improvements:
22. Video saving improvements:
1. A video (`--write_video`) can be generated from images (`--image_dir`), as long as they all keep the same resolution.
2. A video with the 3D output can be saved with the new `--write_video_3d` flag.
3. Added the capability of saving videos in MP4 format (by using the ffmpeg library).
4. Added the `--write_video_with_audio` flag to enable saving these output MP4 videos with audio.
24. Added `--fps_max` flag to limit the maximum processing frame rate of OpenPose (useful to display results at a maximum desired speed).
25. Frame undistortion can be applied not only to FLIR cameras, but also to all other input sources (image, webcam, video, etc.).
26. Calibration improvements:
23. Added `--fps_max` flag to limit the maximum processing frame rate of OpenPose (useful to display results at a maximum desired speed).
24. Frame undistortion can be applied not only to FLIR cameras, but also to all other input sources (image, webcam, video, etc.).
25. Calibration improvements:
1. Improved chessboard orientation detection: more robust and with fewer errors.
2. Triangulation functions (triangulate and triangulateWithOptimization) made public, so calibration can use them for bundle adjustment.
3. Added bundle adjustment refinement for camera extrinsic calibration.
4. Added `CameraMatrixInitial` field into the XML calibration files to keep the information of the original camera extrinsic parameters when bundle adjustment is run.
27. Added Mac OpenCL compatibility.
28. Added documentation for Nvidia TX2 with JetPack 3.3.
29. Added Travis build check for several configurations: Ubuntu (14/16)/Mac/Windows, CPU/CUDA/OpenCL, with/without Python, and Release/Debug.
30. Assigned 755 access to all sh scripts (some of them were only 644).
31. Added the flags `--prototxt_path` and `--caffemodel_path` to allow custom ProtoTxt and CaffeModel paths.
32. Replaced the old Python wrapper with an updated Pybind11 wrapper version that includes all the functionality of the C++ API.
26. Added Mac OpenCL compatibility.
27. Added documentation for Nvidia TX2 with JetPack 3.3.
28. Added Travis build check for several configurations: Ubuntu (14/16)/Mac/Windows, CPU/CUDA/OpenCL, with/without Python, and Release/Debug.
29. Assigned 755 access to all sh scripts (some of them were only 644).
30. Added the flags `--prototxt_path` and `--caffemodel_path` to allow custom ProtoTxt and CaffeModel paths.
31. Replaced the old Python wrapper with an updated Pybind11 wrapper version that includes all the functionality of the C++ API.
32. Function getFilesOnDirectory() can extract all basic image file types at once without requiring the user to manually enumerate them.
2. Functions or parameters renamed:
1. By default, the Python example `tutorial_developer/python_2_pose_from_heatmaps.py` was using 2 scales starting at -1x736; changed to 1 scale at -1x368.
2. WrapperStructPose default parameters changed to match those of the OpenPose demo binary.
......
......@@ -12,23 +12,17 @@
// OpenPose dependencies
#include <openpose/headers.hpp>
int openPoseDemo()
void configureWrapper(op::Wrapper& opWrapper)
{
try
{
op::log("Starting OpenPose demo...", op::Priority::High);
const auto timerBegin = std::chrono::high_resolution_clock::now();
// Configuring OpenPose
// logging_level
op::check(0 <= FLAGS_logging_level && FLAGS_logging_level <= 255, "Wrong logging_level value.",
__LINE__, __FUNCTION__, __FILE__);
op::ConfigureLog::setPriorityThreshold((op::Priority)FLAGS_logging_level);
op::Profiler::setDefaultX(FLAGS_profile_speed);
// // For debugging
// // Print all logging messages
// op::ConfigureLog::setPriorityThreshold(op::Priority::None);
// // Print out speed values faster
// op::Profiler::setDefaultX(100);
// Applying user defined configuration - GFlags to program variables
// cameraSize
......@@ -63,9 +57,6 @@ int openPoseDemo()
// Enabling Google Logging
const bool enableGoogleLogging = true;
// Configuring OpenPose
op::log("Configuring OpenPose...", op::Priority::High);
op::Wrapper opWrapper;
// Pose configuration (use WrapperStructPose{} for default and recommended configuration)
const op::WrapperStructPose wrapperStructPose{
!FLAGS_body_disable, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
......@@ -111,6 +102,24 @@ int openPoseDemo()
// Set to single-thread (for sequential processing and/or debugging and/or reducing latency)
if (FLAGS_disable_multi_thread)
opWrapper.disableMultiThreading();
}
catch (const std::exception& e)
{
op::error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
}
int openPoseDemo()
{
try
{
op::log("Starting OpenPose demo...", op::Priority::High);
const auto timerBegin = std::chrono::high_resolution_clock::now();
// Configure OpenPose
op::log("Configuring OpenPose...", op::Priority::High);
op::Wrapper opWrapper;
configureWrapper(opWrapper);
// Start, run, and stop processing - exec() blocks this thread until OpenPose wrapper has finished
op::log("Starting thread(s)...", op::Priority::High);
......
......@@ -11,56 +11,73 @@
// Producer
DEFINE_string(image_path, "examples/media/COCO_val2014_000000000192.jpg",
"Process an image. Read all standard formats (jpg, png, bmp, etc.).");
// Display
DEFINE_bool(no_display, false,
"Enable to disable the visual display.");
// This function displays the rendered results
void display(const std::shared_ptr<std::vector<std::shared_ptr<op::Datum>>>& datumsPtr)
{
// User's displaying/saving/other processing here
// datum.cvOutputData: rendered frame with pose or heatmaps
// datum.poseKeypoints: Array<float> with the estimated pose
if (datumsPtr != nullptr && !datumsPtr->empty())
try
{
// User's displaying/saving/other processing here
// datum.cvOutputData: rendered frame with pose or heatmaps
// datum.poseKeypoints: Array<float> with the estimated pose
if (datumsPtr != nullptr && !datumsPtr->empty())
{
// Display image
cv::imshow("User worker GUI", datumsPtr->at(0)->cvOutputData);
cv::waitKey(0);
}
else
op::log("Nullptr or empty datumsPtr found.", op::Priority::High);
}
catch (const std::exception& e)
{
// Display image
cv::imshow("User worker GUI", datumsPtr->at(0)->cvOutputData);
cv::waitKey(0);
op::error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
else
op::log("Nullptr or empty datumsPtr found.", op::Priority::High);
}
void printKeypoints(const std::shared_ptr<std::vector<std::shared_ptr<op::Datum>>>& datumsPtr)
{
// Example: How to use the pose keypoints
if (datumsPtr != nullptr && !datumsPtr->empty())
try
{
// Alternative 1
op::log("Body keypoints: " + datumsPtr->at(0)->poseKeypoints.toString());
// Example: How to use the pose keypoints
if (datumsPtr != nullptr && !datumsPtr->empty())
{
// Alternative 1
op::log("Body keypoints: " + datumsPtr->at(0)->poseKeypoints.toString());
// // Alternative 2
// op::log(datumsPtr->at(0).poseKeypoints);
// // Alternative 2
// op::log(datumsPtr->at(0).poseKeypoints);
// // Alternative 3
// std::cout << datumsPtr->at(0).poseKeypoints << std::endl;
// // Alternative 3
// std::cout << datumsPtr->at(0).poseKeypoints << std::endl;
// // Alternative 4 - Accessing each element of the keypoints
// op::log("\nKeypoints:");
// const auto& poseKeypoints = datumsPtr->at(0).poseKeypoints;
// op::log("Person pose keypoints:");
// for (auto person = 0 ; person < poseKeypoints.getSize(0) ; person++)
// {
// op::log("Person " + std::to_string(person) + " (x, y, score):");
// for (auto bodyPart = 0 ; bodyPart < poseKeypoints.getSize(1) ; bodyPart++)
// {
// std::string valueToPrint;
// for (auto xyscore = 0 ; xyscore < poseKeypoints.getSize(2) ; xyscore++)
// valueToPrint += std::to_string( poseKeypoints[{person, bodyPart, xyscore}] ) + " ";
// op::log(valueToPrint);
// }
// }
// op::log(" ");
// // Alternative 4 - Accessing each element of the keypoints
// op::log("\nKeypoints:");
// const auto& poseKeypoints = datumsPtr->at(0).poseKeypoints;
// op::log("Person pose keypoints:");
// for (auto person = 0 ; person < poseKeypoints.getSize(0) ; person++)
// {
// op::log("Person " + std::to_string(person) + " (x, y, score):");
// for (auto bodyPart = 0 ; bodyPart < poseKeypoints.getSize(1) ; bodyPart++)
// {
// std::string valueToPrint;
// for (auto xyscore = 0 ; xyscore < poseKeypoints.getSize(2) ; xyscore++)
// valueToPrint += std::to_string( poseKeypoints[{person, bodyPart, xyscore}] ) + " ";
// op::log(valueToPrint);
// }
// }
// op::log(" ");
}
else
op::log("Nullptr or empty datumsPtr found.", op::Priority::High);
}
catch (const std::exception& e)
{
op::error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
else
op::log("Nullptr or empty datumsPtr found.", op::Priority::High);
}
int tutorialApiCpp1()
......@@ -75,6 +92,7 @@ int tutorialApiCpp1()
// Set to single-thread (for sequential processing and/or debugging and/or reducing latency)
if (FLAGS_disable_multi_thread)
opWrapper.disableMultiThreading();
// Starting OpenPose
op::log("Starting thread(s)...", op::Priority::High);
opWrapper.start();
......@@ -85,7 +103,8 @@ int tutorialApiCpp1()
if (datumProcessed != nullptr)
{
printKeypoints(datumProcessed);
display(datumProcessed);
if (!FLAGS_no_display)
display(datumProcessed);
}
else
op::log("Image could not be processed.", op::Priority::High);
......
......@@ -11,35 +11,52 @@
// Producer
DEFINE_string(image_path, "examples/media/COCO_val2014_000000000241.jpg",
"Process an image. Read all standard formats (jpg, png, bmp, etc.).");
// Display
DEFINE_bool(no_display, false,
"Enable to disable the visual display.");
// This function displays the rendered results
void display(const std::shared_ptr<std::vector<std::shared_ptr<op::Datum>>>& datumsPtr)
{
// User's displaying/saving/other processing here
// datum.cvOutputData: rendered frame with pose or heatmaps
// datum.poseKeypoints: Array<float> with the estimated pose
if (datumsPtr != nullptr && !datumsPtr->empty())
try
{
// User's displaying/saving/other processing here
// datum.cvOutputData: rendered frame with pose or heatmaps
// datum.poseKeypoints: Array<float> with the estimated pose
if (datumsPtr != nullptr && !datumsPtr->empty())
{
// Display image
cv::imshow("User worker GUI", datumsPtr->at(0)->cvOutputData);
cv::waitKey(0);
}
else
op::log("Nullptr or empty datumsPtr found.", op::Priority::High);
}
catch (const std::exception& e)
{
// Display image
cv::imshow("User worker GUI", datumsPtr->at(0)->cvOutputData);
cv::waitKey(0);
op::error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
else
op::log("Nullptr or empty datumsPtr found.", op::Priority::High);
}
void printKeypoints(const std::shared_ptr<std::vector<std::shared_ptr<op::Datum>>>& datumsPtr)
{
// Example: How to use the pose keypoints
if (datumsPtr != nullptr && !datumsPtr->empty())
try
{
op::log("Body keypoints: " + datumsPtr->at(0)->poseKeypoints.toString());
op::log("Face keypoints: " + datumsPtr->at(0)->faceKeypoints.toString());
op::log("Left hand keypoints: " + datumsPtr->at(0)->handKeypoints[0].toString());
op::log("Right hand keypoints: " + datumsPtr->at(0)->handKeypoints[1].toString());
// Example: How to use the pose keypoints
if (datumsPtr != nullptr && !datumsPtr->empty())
{
op::log("Body keypoints: " + datumsPtr->at(0)->poseKeypoints.toString());
op::log("Face keypoints: " + datumsPtr->at(0)->faceKeypoints.toString());
op::log("Left hand keypoints: " + datumsPtr->at(0)->handKeypoints[0].toString());
op::log("Right hand keypoints: " + datumsPtr->at(0)->handKeypoints[1].toString());
}
else
op::log("Nullptr or empty datumsPtr found.", op::Priority::High);
}
catch (const std::exception& e)
{
op::error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
else
op::log("Nullptr or empty datumsPtr found.", op::Priority::High);
}
int tutorialApiCpp2()
......@@ -57,6 +74,7 @@ int tutorialApiCpp2()
// Set to single-thread (for sequential processing and/or debugging and/or reducing latency)
if (FLAGS_disable_multi_thread)
opWrapper.disableMultiThreading();
// Starting OpenPose
op::log("Starting thread(s)...", op::Priority::High);
opWrapper.start();
......@@ -67,7 +85,8 @@ int tutorialApiCpp2()
if (datumProcessed != nullptr)
{
printKeypoints(datumProcessed);
display(datumProcessed);
if (!FLAGS_no_display)
display(datumProcessed);
}
else
op::log("Image could not be processed.", op::Priority::High);
......
......@@ -13,48 +13,59 @@
// Producer
DEFINE_string(image_path, "examples/media/COCO_val2014_000000000294.jpg",
"Process an image. Read all standard formats (jpg, png, bmp, etc.).");
// Display
DEFINE_bool(no_display, false,
"Enable to disable the visual display.");
// This function displays the rendered results
void display(const std::shared_ptr<std::vector<std::shared_ptr<op::Datum>>>& datumsPtr)
{
// User's displaying/saving/other processing here
// datum.cvOutputData: rendered frame with pose or heatmaps
// datum.poseKeypoints: Array<float> with the estimated pose
if (datumsPtr != nullptr && !datumsPtr->empty())
try
{
// User's displaying/saving/other processing here
// datum.cvOutputData: rendered frame with pose or heatmaps
// datum.poseKeypoints: Array<float> with the estimated pose
if (datumsPtr != nullptr && !datumsPtr->empty())
{
// Display image
cv::imshow("User worker GUI", datumsPtr->at(0)->cvOutputData);
cv::waitKey(0);
}
else
op::log("Nullptr or empty datumsPtr found.", op::Priority::High);
}
catch (const std::exception& e)
{
// Display image
cv::imshow("User worker GUI", datumsPtr->at(0)->cvOutputData);
cv::waitKey(0);
op::error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
else
op::log("Nullptr or empty datumsPtr found.", op::Priority::High);
}
void printKeypoints(const std::shared_ptr<std::vector<std::shared_ptr<op::Datum>>>& datumsPtr)
{
// Example: How to use the pose keypoints
if (datumsPtr != nullptr && !datumsPtr->empty())
try
{
// Example: How to use the pose keypoints
if (datumsPtr != nullptr && !datumsPtr->empty())
{
op::log("Body keypoints: " + datumsPtr->at(0)->poseKeypoints.toString());
op::log("Face keypoints: " + datumsPtr->at(0)->faceKeypoints.toString());
op::log("Left hand keypoints: " + datumsPtr->at(0)->handKeypoints[0].toString());
op::log("Right hand keypoints: " + datumsPtr->at(0)->handKeypoints[1].toString());
}
else
op::log("Nullptr or empty datumsPtr found.", op::Priority::High);
}
catch (const std::exception& e)
{
op::log("Body keypoints: " + datumsPtr->at(0)->poseKeypoints.toString());
op::log("Face keypoints: " + datumsPtr->at(0)->faceKeypoints.toString());
op::log("Left hand keypoints: " + datumsPtr->at(0)->handKeypoints[0].toString());
op::log("Right hand keypoints: " + datumsPtr->at(0)->handKeypoints[1].toString());
op::error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
else
op::log("Nullptr or empty datumsPtr found.", op::Priority::High);
}
int tutorialApiCpp3()
void configureWrapper(op::Wrapper& opWrapper)
{
try
{
op::log("Starting OpenPose demo...", op::Priority::High);
// logging_level
op::check(0 <= FLAGS_logging_level && FLAGS_logging_level <= 255, "Wrong logging_level value.",
__LINE__, __FUNCTION__, __FILE__);
op::ConfigureLog::setPriorityThreshold((op::Priority)FLAGS_logging_level);
op::Profiler::setDefaultX(FLAGS_profile_speed);
// Configuring OpenPose
// Applying user defined configuration - GFlags to program variables
// outputSize
......@@ -82,9 +93,6 @@ int tutorialApiCpp3()
// Enabling Google Logging
const bool enableGoogleLogging = true;
// Configuring OpenPose
op::log("Configuring OpenPose...", op::Priority::High);
op::Wrapper opWrapper{op::ThreadManagerMode::Asynchronous};
// Pose configuration (use WrapperStructPose{} for default and recommended configuration)
const op::WrapperStructPose wrapperStructPose{
!FLAGS_body_disable, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
......@@ -121,6 +129,30 @@ int tutorialApiCpp3()
// Set to single-thread (for sequential processing and/or debugging and/or reducing latency)
if (FLAGS_disable_multi_thread)
opWrapper.disableMultiThreading();
}
catch (const std::exception& e)
{
op::error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
}
int tutorialApiCpp3()
{
try
{
op::log("Starting OpenPose demo...", op::Priority::High);
// logging_level
op::check(0 <= FLAGS_logging_level && FLAGS_logging_level <= 255, "Wrong logging_level value.",
__LINE__, __FUNCTION__, __FILE__);
op::ConfigureLog::setPriorityThreshold((op::Priority)FLAGS_logging_level);
op::Profiler::setDefaultX(FLAGS_profile_speed);
// Configuring OpenPose
op::log("Configuring OpenPose...", op::Priority::High);
op::Wrapper opWrapper{op::ThreadManagerMode::Asynchronous};
configureWrapper(opWrapper);
// Starting OpenPose
op::log("Starting thread(s)...", op::Priority::High);
opWrapper.start();
......@@ -131,7 +163,8 @@ int tutorialApiCpp3()
if (datumProcessed != nullptr)
{
printKeypoints(datumProcessed);
display(datumProcessed);
if (!FLAGS_no_display)
display(datumProcessed);
}
else
op::log("Image could not be processed.", op::Priority::High);
......
......@@ -26,6 +26,9 @@
// Producer
DEFINE_string(image_dir, "examples/media/",
"Process a directory of images. Read all standard formats (jpg, png, bmp, etc.).");
// Display
DEFINE_bool(no_display, false,
"Enable to disable the visual display.");
// If the user needs his own variables, he can inherit the op::Datum struct and add them in there.
// UserDatum can be directly used by the OpenPose wrapper because it inherits from op::Datum, just define
......@@ -44,13 +47,13 @@ struct UserDatum : public op::Datum
// that the user usually knows which kind of data he will move between the queues,
// in this case we assume a std::shared_ptr of a std::vector of UserDatum
// This worker will just read and return all the jpg files in a directory
// This worker will just read and return all the basic image file formats in a directory
class UserInputClass
{
public:
UserInputClass(const std::string& directoryPath) :
mImageFiles{op::getFilesOnDirectory(directoryPath, "jpg")},
// If we want "jpg" + "png" images
mImageFiles{op::getFilesOnDirectory(directoryPath, op::Extensions::Images)}, // For all basic image formats
// If we want only e.g., "jpg" + "png" images
// mImageFiles{op::getFilesOnDirectory(directoryPath, std::vector<std::string>{"jpg", "png"})},
mCounter{0},
mClosed{false}
......@@ -276,7 +279,8 @@ int tutorialApiCpp4()
std::shared_ptr<std::vector<std::shared_ptr<UserDatum>>> datumProcessed;
if (successfullyEmplaced && opWrapperT.waitAndPop(datumProcessed))
{
userWantsToExit = userOutputClass.display(datumProcessed);
if (!FLAGS_no_display)
userWantsToExit = userOutputClass.display(datumProcessed);
userOutputClass.printKeypoints(datumProcessed);
}
else
......
......@@ -21,6 +21,11 @@
// OpenPose dependencies
#include <openpose/headers.hpp>
// Custom OpenPose flags
// Display
DEFINE_bool(no_display, false,
"Enable to disable the visual display.");
// If the user needs his own variables, he can inherit the op::Datum struct and add them in there.
// UserDatum can be directly used by the OpenPose wrapper because it inherits from op::Datum, just define
// WrapperT<std::vector<std::shared_ptr<UserDatum>>> instead of Wrapper
......@@ -218,9 +223,14 @@ int tutorialApiCpp5()
std::shared_ptr<std::vector<std::shared_ptr<UserDatum>>> datumProcessed;
if (opWrapperT.waitAndPop(datumProcessed))
{
userWantsToExit = userOutputClass.display(datumProcessed);
if (!FLAGS_no_display)
userWantsToExit = userOutputClass.display(datumProcessed);
userOutputClass.printKeypoints(datumProcessed);
}
// If OpenPose finished reading images
else if (!opWrapperT.isRunning())
break;
// Something else happened
else
op::log("Processed datum could not be emplaced.", op::Priority::High, __LINE__, __FUNCTION__, __FILE__);
}
......
......@@ -44,13 +44,13 @@ struct UserDatum : public op::Datum
// that the user usually knows which kind of data he will move between the queues,
// in this case we assume a std::shared_ptr of a std::vector of UserDatum
// This worker will just read and return all the jpg files in a directory
// This worker will just read and return all the basic image file formats in a directory
class WUserInput : public op::WorkerProducer<std::shared_ptr<std::vector<std::shared_ptr<UserDatum>>>>
{
public:
WUserInput(const std::string& directoryPath) :
mImageFiles{op::getFilesOnDirectory(directoryPath, "jpg")},
// If we want "jpg" + "png" images
mImageFiles{op::getFilesOnDirectory(directoryPath, op::Extensions::Images)}, // For all basic image formats
// If we want only e.g., "jpg" + "png" images
// mImageFiles{op::getFilesOnDirectory(directoryPath, std::vector<std::string>{"jpg", "png"})},
mCounter{0}
{
......
......@@ -22,6 +22,11 @@
// OpenPose dependencies
#include <openpose/headers.hpp>
// Custom OpenPose flags
// Display
DEFINE_bool(no_display, false,
"Enable to disable the visual display.");
// If the user needs his own variables, he can inherit the op::Datum struct and add them in there.
// UserDatum can be directly used by the OpenPose wrapper because it inherits from op::Datum, just define
// WrapperT<std::vector<std::shared_ptr<UserDatum>>> instead of Wrapper
......@@ -100,12 +105,16 @@ public:
+ std::to_string(handHeatMaps[1].getSize(3)) + "]");
}
// Display rendered output image
cv::imshow("User worker GUI", datumsPtr->at(0)->cvOutputData);
// Display image and sleeps at least 1 ms (it usually sleeps ~5-10 msec to display the image)
const char key = (char)cv::waitKey(1);
if (key == 27)
this->stop();
// Display results (if enabled)
if (!FLAGS_no_display)
{
// Display rendered output image
cv::imshow("User worker GUI", datumsPtr->at(0)->cvOutputData);
// Display the image and sleep at least 1 ms (it usually sleeps ~5-10 msec to display the image)
const char key = (char)cv::waitKey(1);
if (key == 27)
this->stop();
}
}
}
catch (const std::exception& e)
......
......@@ -26,6 +26,9 @@
// Producer
DEFINE_string(image_dir, "examples/media/",
"Process a directory of images. Read all standard formats (jpg, png, bmp, etc.).");
// Display
DEFINE_bool(no_display, false,
"Enable to disable the visual display.");
// If the user needs his own variables, he can inherit the op::Datum struct and add them in there.
// UserDatum can be directly used by the OpenPose wrapper because it inherits from op::Datum, just define
......@@ -44,13 +47,13 @@ struct UserDatum : public op::Datum
// that the user usually knows which kind of data he will move between the queues,
// in this case we assume a std::shared_ptr of a std::vector of UserDatum
// This worker will just read and return all the jpg files in a directory
// This worker will just read and return all the basic image file formats in a directory
class WUserInput : public op::WorkerProducer<std::shared_ptr<std::vector<std::shared_ptr<UserDatum>>>>
{
public:
WUserInput(const std::string& directoryPath) :
mImageFiles{op::getFilesOnDirectory(directoryPath, "jpg")},
// If we want "jpg" + "png" images
mImageFiles{op::getFilesOnDirectory(directoryPath, op::Extensions::Images)}, // For all basic image formats
// If we want only e.g., "jpg" + "png" images
// mImageFiles{op::getFilesOnDirectory(directoryPath, std::vector<std::string>{"jpg", "png"})},
mCounter{0}
{
......@@ -201,12 +204,16 @@ public:
+ std::to_string(handHeatMaps[1].getSize(3)) + "]");
}
// Display rendered output image
cv::imshow("User worker GUI", datumsPtr->at(0)->cvOutputData);
// Display image and sleeps at least 1 ms (it usually sleeps ~5-10 msec to display the image)
const char key = (char)cv::waitKey(1);
if (key == 27)
this->stop();
// Display results (if enabled)
if (!FLAGS_no_display)
{
// Display rendered output image
cv::imshow("User worker GUI", datumsPtr->at(0)->cvOutputData);
// Display the image and sleep at least 1 ms (it usually sleeps ~5-10 msec to display the image)
const char key = (char)cv::waitKey(1);
if (key == 27)
this->stop();
}
}
}
catch (const std::exception& e)
......
......@@ -43,13 +43,13 @@ DEFINE_bool(fullscreen, false, "Run in full-screen mode
// that the user usually knows which kind of data he will move between the queues,
// in this case we assume a std::shared_ptr of a std::vector of op::Datum
// This worker will just read and return all the jpg files in a directory
// This worker will just read and return all the basic image file formats in a directory
class WUserInput : public op::WorkerProducer<std::shared_ptr<std::vector<std::shared_ptr<op::Datum>>>>
{
public:
WUserInput(const std::string& directoryPath) :
mImageFiles{op::getFilesOnDirectory(directoryPath, "jpg")},
// If we want "jpg" + "png" images
mImageFiles{op::getFilesOnDirectory(directoryPath, op::Extensions::Images)}, // For all basic image formats
// If we want only e.g., "jpg" + "png" images
// mImageFiles{op::getFilesOnDirectory(directoryPath, std::vector<std::string>{"jpg", "png"})},
mCounter{0}
{
......
......@@ -57,13 +57,13 @@ struct UserDatum : public op::Datum
// that the user usually knows which kind of data he will move between the queues,
// in this case we assume a std::shared_ptr of a std::vector of UserDatum
// This worker will just read and return all the jpg files in a directory
// This worker will just read and return all the basic image file formats in a directory
class WUserInput : public op::WorkerProducer<std::shared_ptr<std::vector<std::shared_ptr<UserDatum>>>>
{
public:
WUserInput(const std::string& directoryPath) :
mImageFiles{op::getFilesOnDirectory(directoryPath, "jpg")},
// If we want "jpg" + "png" images
mImageFiles{op::getFilesOnDirectory(directoryPath, op::Extensions::Images)}, // For all basic image formats
// If we want only e.g., "jpg" + "png" images
// mImageFiles{op::getFilesOnDirectory(directoryPath, std::vector<std::string>{"jpg", "png"})},
mCounter{0}
{
......
......@@ -27,6 +27,12 @@ namespace op
Max = 4,
NoOutput = 255,
};
enum class Extensions : unsigned char
{
Images, // jpg, png, ...
Size
};
}
#endif // OPENPOSE_UTILITIES_ENUM_CLASSES_HPP
......@@ -62,8 +62,8 @@ namespace op
* @param extensions std::vector<std::string> with the extensions of the desired files.
* @return std::vector<std::string> with the existing file names.
*/
OP_API std::vector<std::string> getFilesOnDirectory(const std::string& directoryPath,
const std::vector<std::string>& extensions = {});
OP_API std::vector<std::string> getFilesOnDirectory(
const std::string& directoryPath, const std::vector<std::string>& extensions = {});
/**
* Analogous to getFilesOnDirectory(const std::string& directoryPath, const std::vector<std::string>& extensions)
......@@ -72,8 +72,18 @@ namespace op
* @param extension std::string with the extension of the desired files.
* @return std::vector<std::string> with the existing file names.
*/
OP_API std::vector<std::string> getFilesOnDirectory(const std::string& directoryPath,
const std::string& extension);
OP_API std::vector<std::string> getFilesOnDirectory(
const std::string& directoryPath, const std::string& extension);
/**
* This function extracts all the files in a directory path with the desired
* group of extensions (e.g., Extensions::Images).
* @param directoryPath std::string with the directory path.
* @param extensions Extensions value with the kind of extensions desired (e.g., Extensions::Images).
* @return std::vector<std::string> with the existing file names.
*/
OP_API std::vector<std::string> getFilesOnDirectory(
const std::string& directoryPath, const Extensions extensions);
OP_API std::string removeSpecialsCharacters(const std::string& stringToVariate);
......
......@@ -18,16 +18,24 @@ if [[ $RUN_EXAMPLES == true ]] ; then
./build/examples/tutorial_add_module/1_custom_post_processing.bin --net_resolution -1x32 --image_dir examples/media/ --write_json output/ --display 0 --render_pose 0
echo " "
# # Note: Examples 1-5 and 8-9 require GUI
# echo "Tutorial API C++: Examples 1-5 and 8-9..."
# # Note: Examples 1-2 require the whole OpenPose resolution (too much RAM memory)
# echo "Tutorial API C++: Examples 1-2..."
# ./build/examples/tutorial_api_cpp/1_body_from_image.bin
# ./build/examples/tutorial_api_cpp/2_whole_body_from_image.bin
# ./build/examples/tutorial_api_cpp/3_keypoints_from_image_configurable.bin --net_resolution -1x32
# ./build/examples/tutorial_api_cpp/4_asynchronous_loop_custom_input_and_output.bin --net_resolution -1x32 --image_dir examples/media/
# ./build/examples/tutorial_api_cpp/8_synchronous_custom_output.bin --net_resolution -1x32 --image_dir examples/media/
# ./build/examples/tutorial_api_cpp/9_synchronous_custom_all.bin --net_resolution -1x32 --image_dir examples/media/
# echo " "
echo "Tutorial API C++: Example 3..."
./build/examples/tutorial_api_cpp/3_keypoints_from_image_configurable.bin --no_display --net_resolution -1x32 --write_json output/
echo " "
echo "Tutorial API C++: Example 4..."
./build/examples/tutorial_api_cpp/4_asynchronous_loop_custom_input_and_output.bin --no_display --net_resolution -1x32 --image_dir examples/media/
echo " "
echo "Tutorial API C++: Example 5..."
./build/examples/tutorial_api_cpp/5_asynchronous_loop_custom_output.bin --no_display --net_resolution -1x32 --image_dir examples/media/
echo " "
echo "Tutorial API C++: Example 6..."
./build/examples/tutorial_api_cpp/6_synchronous_custom_postprocessing.bin --net_resolution -1x32 --image_dir examples/media/ --write_json output/ --display 0 --render_pose 0
echo " "
......@@ -36,9 +44,17 @@ if [[ $RUN_EXAMPLES == true ]] ; then
./build/examples/tutorial_api_cpp/7_synchronous_custom_input.bin --net_resolution -1x32 --image_dir examples/media/ --write_json output/ --display 0 --render_pose 0
echo " "
echo "Tutorial API C++: Example 8..."
./build/examples/tutorial_api_cpp/8_synchronous_custom_output.bin --no_display --net_resolution -1x32 --image_dir examples/media/
echo " "
echo "Tutorial API C++: Example 9..."
./build/examples/tutorial_api_cpp/9_synchronous_custom_all.bin --no_display --net_resolution -1x32 --image_dir examples/media/
echo " "
# Python examples
if [[ $WITH_PYTHON == true ]] ; then
echo "Python API C++: OpenPose demo..."
echo "Tutorial API Python: OpenPose demo..."
cd build/examples/tutorial_api_python
python openpose_python.py --net_resolution -1x32 --image_dir ../../../examples/media/ --write_json output/ --display 0 --render_pose 0
echo " "
......
......@@ -67,12 +67,7 @@ namespace op
try
{
// Get files on directory with the desired extensions
const std::vector<std::string> extensions{
// Completely supported by OpenCV
"bmp", "dib", "pbm", "pgm", "ppm", "sr", "ras",
// Most of them supported by OpenCV
"jpg", "jpeg", "png"};
const auto imagePaths = getFilesOnDirectory(imageDirectoryPath, extensions);
const auto imagePaths = getFilesOnDirectory(imageDirectoryPath, Extensions::Images);
// Check #files > 0
if (imagePaths.empty())
error("No images were found on `" + imageDirectoryPath + "`.", __LINE__, __FUNCTION__, __FILE__);
......
......@@ -10,17 +10,11 @@ namespace op
try
{
// Get files on directory with the desired extensions
const std::vector<std::string> extensions{
// Completely supported by OpenCV
"bmp", "dib", "pbm", "pgm", "ppm", "sr", "ras",
// Most of them supported by OpenCV
"jpg", "jpeg", "png"};
const auto imagePaths = getFilesOnDirectory(imageDirectoryPath, extensions);
const auto imagePaths = getFilesOnDirectory(imageDirectoryPath, Extensions::Images);
// Check #files > 0
if (imagePaths.empty())
error("No images were found on " + imageDirectoryPath, __LINE__, __FUNCTION__, __FILE__);
// Return result
return imagePaths;
}
catch (const std::exception& e)
......
......@@ -346,6 +346,35 @@ namespace op
}
}
std::vector<std::string> getFilesOnDirectory(const std::string& directoryPath, const Extensions extensions)
{
try
{
// Get files on directory with the desired extensions
if (extensions == Extensions::Images)
{
const std::vector<std::string> extensions{
// Completely supported by OpenCV
"bmp", "dib", "pbm", "pgm", "ppm", "sr", "ras",
// Most of them supported by OpenCV
"jpg", "jpeg", "png"};
return getFilesOnDirectory(directoryPath, extensions);
}
// Unknown kind of extensions
else
{
error("Unknown kind of extensions (id = " + std::to_string(int(extensions))
+ "). Notify us of this error.", __LINE__, __FUNCTION__, __FILE__);
return {};
}
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return {};
}
}
std::string removeSpecialsCharacters(const std::string& stringToVariate)
{
try
......