@@ -1064,9 +1064,10 @@ Generating docs for compound op::FaceExtractorCaffe...
/home/travis/build/CMU-Perceptual-Computing-Lab/openpose/include/openpose/face/faceExtractorNet.hpp:18: warning: The following parameters of op::FaceExtractorNet::FaceExtractorNet(const Point< int > &netInputSize, const Point< int > &netOutputSize, const std::vector< HeatMapType > &heatMapTypes={}, const ScaleMode heatMapScale=ScaleMode::ZeroToOne) are not documented:
parameter 'heatMapTypes'
parameter 'heatMapScale'
/home/travis/build/CMU-Perceptual-Computing-Lab/openpose/include/openpose/filestream/cocoJsonSaver.hpp:18: warning: The following parameters of op::CocoJsonSaver::CocoJsonSaver(const std::string &filePathToSave, const bool humanReadable=true, const CocoJsonFormat cocoJsonFormat=CocoJsonFormat::Body, const int mCocoJsonVariant=0) are not documented:
parameter 'humanReadable'
parameter 'cocoJsonFormat'
parameter 'mCocoJsonVariant'
/home/travis/build/CMU-Perceptual-Computing-Lab/openpose/include/openpose/hand/handExtractorCaffe.hpp:18: warning: The following parameters of op::HandExtractorCaffe::HandExtractorCaffe(const Point< int > &netInputSize, const Point< int > &netOutputSize, const std::string &modelFolder, const int gpuId, const unsigned short numberScales=1, const float rangeScales=0.4f, const std::vector< HeatMapType > &heatMapTypes={}, const ScaleMode heatMapScale=ScaleMode::ZeroToOne, const bool enableGoogleLogging=true) are not documented:
parameter 'heatMapTypes'
parameter 'heatMapScale'
...
...
@@ -1225,5 +1226,5 @@ Generating file index...
Generating file member index...
Generating example index...
finalizing index lists...
lookup cache used 4983/65536 hits=38475 misses=5346
<p>It returns a string with the whole array data. Useful for debugging. The format is: values separated by a space, and a newline for each dimension. E.g., for the <aclass="el"href="classop_1_1_array.html">Array</a>{2, 2, 3}, it will print: <aclass="el"href="classop_1_1_array.html#ae3ec6553128d77b0c26b848c0a0f81ca">Array<T>::toString()</a>: x1 x2 x3 x4 x5 x6</p>
<p>x7 x8 x9 x10 x11 x12 </p>
<dlclass="section return"><dt>Returns</dt><dd>A string with the array values in the above format. </dd></dl>
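The layout described above can be sketched with a small stand-alone helper (a hypothetical re-implementation for illustration, not the real `op::Array<T>::toString()`): values are space-separated, with one output line per slice of the leading dimension, so an Array{2, 2, 3} yields two lines of six values.

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical sketch of the documented format: values separated by spaces,
// one line per slice of the first dimension. For size {2, 2, 3} holding
// 1..12 it yields "1 2 3 4 5 6\n7 8 9 10 11 12\n".
std::string toStringLike(const std::vector<int>& size, const std::vector<int>& data)
{
    std::ostringstream oss;
    // Number of values in each slice of the leading dimension
    std::size_t sliceSize = 1;
    for (std::size_t i = 1; i < size.size(); ++i)
        sliceSize *= size[i];
    for (std::size_t i = 0; i < data.size(); ++i)
        oss << data[i] << ((i + 1) % sliceSize == 0 ? '\n' : ' ');
    return oss.str();
}
```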
<p>This is the complete list of members for <aclass="el"href="classop_1_1_coco_json_saver.html">op::CocoJsonSaver</a>, including all inherited members.</p>
<divclass="textblock"><p>The <aclass="el"href="classop_1_1_coco_json_saver.html">CocoJsonSaver</a> class creates a COCO validation json file with details about the processed images. It inherits from Recorder. </p>
</div><h2class="groupheader">Constructor & Destructor Documentation</h2>
<p>This function extracts the face keypoints for each detected face in the image. </p>
<dlclass="params"><dt>Parameters</dt><dd>
<tableclass="params">
<tr><tdclass="paramname">faceRectangles</td><td>Location of the faces in the image. It is a length-variable std::vector, where each index corresponds to a different person in the image. Internally, an op::Rectangle<float> (similar to cv::Rect for floating values) with the position of that face (or 0,0,0,0 if some face is missing, e.g., if a specific person has only half of the body inside the image). </td></tr>
<tr><tdclass="paramname">cvInputData</td><td>Original image in cv::Mat format and BGR format. </td></tr>
<p>This function extracts the face keypoints for each detected face in the image. </p>
<dlclass="params"><dt>Parameters</dt><dd>
<tableclass="params">
<tr><tdclass="paramname">faceRectangles</td><td>Location of the faces in the image. It is a length-variable std::vector, where each index corresponds to a different person in the image. Internally, an op::Rectangle<float> (similar to cv::Rect for floating values) with the position of that face (or 0,0,0,0 if some face is missing, e.g., if a specific person has only half of the body inside the image). </td></tr>
<tr><tdclass="paramname">cvInputData</td><td>Original image in cv::Mat format and BGR format. </td></tr>
<p>This function returns a unique frame name (e.g., the frame number for video, the frame counter for webcam, the image name for image directory reader, etc.). </p>
<dlclass="section return"><dt>Returns</dt><dd>std::string with a unique frame name. </dd></dl>
<p>This function extracts the hand keypoints for each detected hand in the image. </p>
<dlclass="params"><dt>Parameters</dt><dd>
<tableclass="params">
<tr><tdclass="paramname">handRectangles</td><td>Location of the hands in the image. It is a length-variable std::vector, where each index corresponds to a different person in the image. Internally, each std::vector element is a std::array of 2 elements: indexes 0 and 1 for the left and right hand, respectively. Inside each array element, an op::Rectangle<float> (similar to cv::Rect for floating values) with the position of that hand (or 0,0,0,0 if some hand is missing, e.g., if a specific person has only half of the body inside the image). </td></tr>
<tr><tdclass="paramname">cvInputData</td><td>Original image in cv::Mat format and BGR format. </td></tr>
<p>This function extracts the hand keypoints for each detected hand in the image. </p>
<dlclass="params"><dt>Parameters</dt><dd>
<tableclass="params">
<tr><tdclass="paramname">handRectangles</td><td>Location of the hands in the image. It is a length-variable std::vector, where each index corresponds to a different person in the image. Internally, each std::vector element is a std::array of 2 elements: indexes 0 and 1 for the left and right hand, respectively. Inside each array element, an op::Rectangle<float> (similar to cv::Rect for floating values) with the position of that hand (or 0,0,0,0 if some hand is missing, e.g., if a specific person has only half of the body inside the image). </td></tr>
<tr><tdclass="paramname">cvInputData</td><td>Original image in cv::Mat format and BGR format. </td></tr>
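The nested shape of the handRectangles parameter described above can be sketched as follows; RectangleF is a hypothetical stand-in for op::Rectangle<float>, used so the example stays self-contained.

```cpp
#include <array>
#include <cassert>
#include <vector>

// Minimal stand-in for op::Rectangle<float> (assumed member layout).
struct RectangleF { float x, y, width, height; };

// One entry per detected person; each entry holds {left hand, right hand}.
using HandPair = std::array<RectangleF, 2>;
using HandRectangles = std::vector<HandPair>;

// A missing hand is encoded as the all-zero rectangle.
bool isMissing(const RectangleF& r)
{
    return r.x == 0.f && r.y == 0.f && r.width == 0.f && r.height == 0.f;
}
```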
<p>This function returns a unique frame name (e.g., the frame number for video, the frame counter for webcam, the image name for image directory reader, etc.). </p>
<dlclass="section return"><dt>Returns</dt><dd>std::string with a unique frame name. </dd></dl>
<p>This function returns a unique frame name (e.g., the frame number for video, the frame counter for webcam, the image name for image directory reader, etc.). </p>
<dlclass="section return"><dt>Returns</dt><dd>std::string with a unique frame name. </dd></dl>
<divclass="textblock"><p><aclass="el"href="classop_1_1_producer.html">Producer</a> is an abstract class to extract frames from a source (image directory, video file, webcam stream, etc.). It has the basic and common functions (e.g., getFrame, release & isOpened). </p>
</div><h2class="groupheader">Constructor & Destructor Documentation</h2>
@@ -481,7 +481,7 @@ Protected Member Functions</h2></td></tr>
</tr>
</table>
</div><divclass="memdoc">
<p>This function returns a unique frame name (e.g., the frame number for video, the frame counter for webcam, the image name for image directory reader, etc.). </p>
<dlclass="section return"><dt>Returns</dt><dd>std::string with a unique frame name. </dd></dl>
<p>Implemented in <aclass="el"href="classop_1_1_image_directory_reader.html#a46ce23209afe6d3ca90db545b69cd04a">op::ImageDirectoryReader</a>, <aclass="el"href="classop_1_1_video_capture_reader.html#a06348fd9a290fc2ece2f3c2e4dc9bc70">op::VideoCaptureReader</a>, <aclass="el"href="classop_1_1_video_reader.html#a508eed918fbe3bfe3eff4c1ebacb3463">op::VideoReader</a>, <aclass="el"href="classop_1_1_webcam_reader.html#a58c315e577c12486e5ab1b941d4cce04">op::WebcamReader</a>, <aclass="el"href="classop_1_1_flir_reader.html#a711db0919bd7516fde3e641c13259637">op::FlirReader</a>, and <aclass="el"href="classop_1_1_ip_camera_reader.html#a0c1582090cc7c54dd9cb752207b52986">op::IpCameraReader</a>.</p>
<divclass="textblock"><p><aclass="el"href="classop_1_1_video_capture_reader.html">VideoCaptureReader</a> is an abstract class to extract frames from a cv::VideoCapture source (video file, webcam stream, etc.). It has the basic and common functions of the cv::VideoCapture class (e.g., get, set, etc.). </p>
</div><h2class="groupheader">Constructor & Destructor Documentation</h2>
@@ -347,7 +347,7 @@ Protected Member Functions</h2></td></tr>
</tr>
</table>
</div><divclass="memdoc">
<p>This function returns a unique frame name (e.g., the frame number for video, the frame counter for webcam, the image name for image directory reader, etc.). </p>
<dlclass="section return"><dt>Returns</dt><dd>std::string with a unique frame name. </dd></dl>
<divclass="textblock"><p><aclass="el"href="classop_1_1_video_reader.html">VideoReader</a> is a wrapper of the cv::VideoCapture class for video. It allows controlling a video (e.g., extracting frames, setting resolution & fps, etc). </p>
</div><h2class="groupheader">Constructor & Destructor Documentation</h2>
<p>This function returns a unique frame name (e.g., the frame number for video, the frame counter for webcam, the image name for image directory reader, etc.). </p>
<dlclass="section return"><dt>Returns</dt><dd>std::string with a unique frame name. </dd></dl>
<p>This function returns a unique frame name (e.g., the frame number for video, the frame counter for webcam, the image name for image directory reader, etc.). </p>
<dlclass="section return"><dt>Returns</dt><dd>std::string with a unique frame name. </dd></dl>
<divclass="line"><aname="l00006"></a><spanclass="lineno"> 6</span> <spanclass="comment">// Use op::round/max/min for basic types (int, char, long, float, double, etc). Never with classes!</span></div>
<divclass="line"><aname="l00007"></a><spanclass="lineno"> 7</span> <spanclass="comment">// `std::` alternatives uses 'const T&' instead of 'const T' as argument.</span></div>
<divclass="line"><aname="l00008"></a><spanclass="lineno"> 8</span> <spanclass="comment">// E.g., std::round is really slow (~300 ms vs ~10 ms when I individually apply it to each element of a whole</span></div>
<p>Most users do not need the OpenPose C++/Python API, but can simply use the OpenPose Demo:</p>
<ul>
<li><b>OpenPose Demo</b>: To easily process images/video/webcam and display/save the results. See doc/demo_overview.md. E.g., run OpenPose in a video with: ``` <h1>Ubuntu</h1>
<li><b>Adding an extra module</b>: Check doc/library_add_new_module.md.</li>
<li><b>Standalone face or hand detector</b>:<ul>
<li><b>Face</b> keypoint detection <b>without body</b> keypoint detection: If you want to speed it up (but also reduce amount of detected faces), check the OpenCV-face-detector approach in doc/standalone_face_or_hand_keypoint_detector.md.</li>
<li><b>Use your own face/hand detector</b>: You can use the hand and/or face keypoint detectors with your own face or hand detectors, rather than using the body detector. E.g., useful for camera views at which the hands are visible but not the body (OpenPose detector would fail). See doc/standalone_face_or_hand_keypoint_detector.md.</li>
@@ -709,7 +709,7 @@ Public Attributes</h2></td></tr>
</tr>
</table>
</div><divclass="memdoc">
<p>Rendered image in cv::Mat uchar format. It has been resized to the desired output resolution (e.g., <code>resolution</code> flag in the demo). If outputData is empty, cvOutputData will also be empty. Size: (output_height x output_width) x 3 channels </p>
</div>
</div>
...
...
@@ -902,7 +902,7 @@ Public Attributes</h2></td></tr>
</tr>
</table>
</div><divclass="memdoc">
<p>Name used when saving the data to disk (e.g., <code>write_images</code> or <code>write_keypoint</code> flags in the demo). </p>
</div>
</div>
...
...
@@ -941,7 +941,7 @@ Public Attributes</h2></td></tr>
</tr>
</table>
</div><divclass="memdoc">
<p>Rendered image in <aclass="el"href="classop_1_1_array.html">Array<float></a> format. It consists of a blending of the cvInputData and the pose/body part(s) heatmap/PAF(s). If rendering is disabled (e.g., <code>no_render_pose</code> flag in the demo), outputData will be empty. Size: 3 x output_net_height x output_net_width </p>
</div>
</div>
...
...
@@ -993,7 +993,7 @@ Public Attributes</h2></td></tr>
</tr>
</table>
</div><divclass="memdoc">
<p>Body pose (x,y,score) locations for each person in the image. It has been resized to the desired output resolution (e.g., <code>resolution</code> flag in the demo). Size: #people x #body parts (e.g., 18 for COCO or 15 for MPI) x 3 ((x,y) coordinates + score) </p>
</div>
</div>
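The #people x #body parts x 3 layout described above maps to a flat buffer with a simple row-major offset. This is an illustrative assumption about storage order (op::Array provides its own accessors), not the actual OpenPose implementation:

```cpp
#include <cassert>
#include <cstddef>

// Offset of component c (0: x, 1: y, 2: score) of body part `part` for
// person `person` in a row-major #people x numberBodyParts x 3 buffer.
std::size_t keypointIndex(std::size_t person, std::size_t part, std::size_t c,
                          std::size_t numberBodyParts)
{
    return (person * numberBodyParts + part) * 3 + c;
}
```

For the COCO model (18 body parts), person 1 starts at offset 54 = 18 x 3.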
...
...
@@ -1006,7 +1006,7 @@ Public Attributes</h2></td></tr>
</tr>
</table>
</div><divclass="memdoc">
<p>Body pose (x,y,z,score) locations for each person in the image. Size: #people x #body parts (e.g., 18 for COCO or 15 for MPI) x 4 ((x,y,z) coordinates + score) </p>
</div>
</div>
...
...
@@ -1019,7 +1019,7 @@ Public Attributes</h2></td></tr>
</tr>
</table>
</div><divclass="memdoc">
<p>Body pose global confidence/score for each person in the image. It considers not only the score of each body keypoint, but also the score of each PAF association. Optimized for the COCO evaluation metric. It will highly penalize people with missing body parts (e.g., cropped people on the borders of the image). If poseKeypoints is empty, poseScores will also be empty. Size: #people </p>
@@ -302,7 +302,7 @@ Public Attributes</h2></td></tr>
</tr>
</table>
</div><divclass="memdoc">
<p>Total range between smallest and biggest scale. The scales will be centered in ratio 1. E.g., if scaleRange = 0.4 and scalesNumber = 2, then there will be 2 scales, 0.8 and 1.2. </p>
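The centering-around-1 behavior described above can be sketched as follows (an illustrative reading of the documented semantics, not the actual OpenPose code): the scales are spaced evenly across scaleRange, symmetric about 1.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Evenly spaced scales centered at ratio 1, spanning scaleRange in total.
// E.g., scaleRange = 0.4 and scalesNumber = 2 yields {0.8, 1.2}.
std::vector<float> makeScales(float scaleRange, int scalesNumber)
{
    if (scalesNumber < 2)
        return {1.f};
    std::vector<float> scales;
    const float step = scaleRange / float(scalesNumber - 1);
    const float first = 1.f - scaleRange / 2.f;
    for (int i = 0; i < scalesNumber; ++i)
        scales.push_back(first + step * float(i));
    return scales;
}
```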
<divclass="textblock"><p><aclass="el"href="structop_1_1_wrapper_struct_output.html">WrapperStructOutput</a>: Output (small GUI, writing rendered results and/or pose data, etc.) configuration struct. <aclass="el"href="structop_1_1_wrapper_struct_output.html">WrapperStructOutput</a> allows the user to set up the input frames generator. </p>
</div><h2class="groupheader">Constructor & Destructor Documentation</h2>
@@ -439,7 +460,7 @@ Public Attributes</h2></td></tr>
</tr>
</table>
</div><divclass="memdoc">
<p>Rendered image saving folder format. Check your OpenCV version documentation for a list of compatible formats. E.g., png, jpg, etc. If writeImages is empty (default), it has no effect. </p>
<divclass="line"><aname="l00030"></a><spanclass="lineno"> 30</span> <spanclass="comment">// Virtual in case some function needs special stopping (e.g., buffers might not stop immediately and need a</span></div>
<divclass="line"><aname="l00031"></a><spanclass="lineno"> 31</span> <spanclass="comment">// few iterations)</span></div>