Commit 1f13f818 authored by: gineshidalgo99

Renamed intRound to avoid bugs

Parent f0664bcd
......@@ -4,11 +4,11 @@
-----------------
| | Python (CUDA GPU) | Python (CPU)| CUDA GPU | CPU | Debug mode |
| :---: | :---: | :---: | :---: |:---: | :---: |
| **Linux** | [![Status](https://travis-matrix-badges.herokuapp.com/repos/CMU-Perceptual-Computing-Lab/openpose/branches/master/1)](https://travis-ci.org/CMU-Perceptual-Computing-Lab/openpose) | [![Status](https://travis-matrix-badges.herokuapp.com/repos/CMU-Perceptual-Computing-Lab/openpose/branches/master/2)](https://travis-ci.org/CMU-Perceptual-Computing-Lab/openpose) | [![Status](https://travis-matrix-badges.herokuapp.com/repos/CMU-Perceptual-Computing-Lab/openpose/branches/master/3)](https://travis-ci.org/CMU-Perceptual-Computing-Lab/openpose) | [![Status](https://travis-matrix-badges.herokuapp.com/repos/CMU-Perceptual-Computing-Lab/openpose/branches/master/4)](https://travis-ci.org/CMU-Perceptual-Computing-Lab/openpose) | [![Status](https://travis-matrix-badges.herokuapp.com/repos/CMU-Perceptual-Computing-Lab/openpose/branches/master/5)](https://travis-ci.org/CMU-Perceptual-Computing-Lab/openpose) |
| **MacOS** | | [![Status](https://travis-matrix-badges.herokuapp.com/repos/CMU-Perceptual-Computing-Lab/openpose/branches/master/6)](https://travis-ci.org/CMU-Perceptual-Computing-Lab/openpose) | | [![Status](https://travis-matrix-badges.herokuapp.com/repos/CMU-Perceptual-Computing-Lab/openpose/branches/master/7)](https://travis-ci.org/CMU-Perceptual-Computing-Lab/openpose) | [![Status](https://travis-matrix-badges.herokuapp.com/repos/CMU-Perceptual-Computing-Lab/openpose/branches/master/8)](https://travis-ci.org/CMU-Perceptual-Computing-Lab/openpose) |
<!-- | **Windows** | | | | | | -->
| | `Python (CUDA GPU)` | `Python (CPU)` | `CUDA GPU` | `CPU` | `Debug mode` |
| :---: | :---: | :---: | :---: |:---: | :---: |
| **`Linux`** | [![Status](https://travis-matrix-badges.herokuapp.com/repos/CMU-Perceptual-Computing-Lab/openpose/branches/master/1)](https://travis-ci.org/CMU-Perceptual-Computing-Lab/openpose) | [![Status](https://travis-matrix-badges.herokuapp.com/repos/CMU-Perceptual-Computing-Lab/openpose/branches/master/2)](https://travis-ci.org/CMU-Perceptual-Computing-Lab/openpose) | [![Status](https://travis-matrix-badges.herokuapp.com/repos/CMU-Perceptual-Computing-Lab/openpose/branches/master/3)](https://travis-ci.org/CMU-Perceptual-Computing-Lab/openpose) | [![Status](https://travis-matrix-badges.herokuapp.com/repos/CMU-Perceptual-Computing-Lab/openpose/branches/master/4)](https://travis-ci.org/CMU-Perceptual-Computing-Lab/openpose) | [![Status](https://travis-matrix-badges.herokuapp.com/repos/CMU-Perceptual-Computing-Lab/openpose/branches/master/5)](https://travis-ci.org/CMU-Perceptual-Computing-Lab/openpose) |
| **`MacOS`** | | [![Status](https://travis-matrix-badges.herokuapp.com/repos/CMU-Perceptual-Computing-Lab/openpose/branches/master/6)](https://travis-ci.org/CMU-Perceptual-Computing-Lab/openpose) | | [![Status](https://travis-matrix-badges.herokuapp.com/repos/CMU-Perceptual-Computing-Lab/openpose/branches/master/7)](https://travis-ci.org/CMU-Perceptual-Computing-Lab/openpose) | [![Status](https://travis-matrix-badges.herokuapp.com/repos/CMU-Perceptual-Computing-Lab/openpose/branches/master/8)](https://travis-ci.org/CMU-Perceptual-Computing-Lab/openpose) |
<!-- | **`Windows`** | | | | | | -->
<!--
Note: Currently using [travis-matrix-badges](https://github.com/bjfish/travis-matrix-badges) vs. traditional [![Build Status](https://travis-ci.org/CMU-Perceptual-Computing-Lab/openpose.svg?branch=master)](https://travis-ci.org/CMU-Perceptual-Computing-Lab/openpose)
-->
......
......@@ -145,7 +145,7 @@ OpenPose Library - Release Notes
1. Changed several functions on `core/`, `pose/`, `face/`, and `hand/` modules.
3. `CPU_ONLY` replaced by `USE_CUDA` to keep the naming format consistent.
3. Main bugs fixed:
1. Scaling resize issue fixed: ~1-pixel offset due to not considering 0-based indexes.
1. Scaling resize issue fixed: approximately 1-pixel offset due to not considering 0-based indexes.
2. Ubuntu installer script now works even if Python pip was not installed previously.
3. Flags to set the first and last frame, as well as jumping frames backward and forward, now work on the image directory reader.
......@@ -203,7 +203,7 @@ OpenPose Library - Release Notes
4. CvMatToOpInput requires PoseModel to know the normalization to be performed.
5. Created `net/` module in order to reduce `core/` number of classes and files and for future scalability.
3. Main bugs fixed:
1. Slight speed up (~1%) for performing the non-maximum suppression stage only in the body part heatmaps channels, and not also in the PAF channels.
1. Slight speed up (around 1%) for performing the non-maximum suppression stage only in the body part heatmaps channels, and not also in the PAF channels.
2. Fixed core dump in PoseRenderer with GUI when the element to be rendered was changed to something other than the skeleton.
3. 3-D visualizer does not crash on exit anymore.
4. Fake pause ('m' key pressed) works again.
......@@ -320,8 +320,9 @@ OpenPose Library - Release Notes
9. Renamed `--frame_keep_distortion` as `--frame_undistort`, which performs the opposite operation (the default value has been also changed to the opposite).
10. Renamed `--camera_parameter_folder` as `--camera_parameter_path` because it could also take a whole XML file path rather than its parent folder.
11. Default value of flag `--scale_gap` changed from 0.3 to 0.25.
12. Moved most sh scripts into the `scripts/` folder. Only models/getModels.sh and the *.bat files are kept under `models/` and `3rdparty/windows`.
12. Moved most sh scripts into the `scripts/` folder. Only models/getModels.sh and the `*.bat` files are kept under `models/` and `3rdparty/windows`.
13. For Python compatibility and scalability increase, template `TDatums` used for `include/openpose/wrapper/wrapper.hpp` has changed from `std::vector<Datum>` to `std::vector<std::shared_ptr<Datum>>`, including the respective changes in all the worker classes. In addition, some template classes have been simplified to only take 1 template parameter for user simplicity.
14. Renamed intRound, charRound, etc. to positiveIntRound, positiveCharRound, etc. so that users realize they are not safe for negative numbers.
3. Main bugs fixed:
1. CMake-GUI was forcing Release mode; Debug modes are now allowed too.
2. NMS returns the number of found peaks in index 0. However, while the number of peaks was truncated to a maximum of 127, index 0 was storing the real number instead of the truncated one.
......
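The NMS counter fix above can be sketched as follows. This is an illustrative helper, not OpenPose's actual NMS kernel: index 0 of each part's peak buffer must store the count actually written, i.e. the count truncated to the maximum (127), not the raw number of peaks found.

```cpp
#include <algorithm>

// Illustrative sketch of the fixed behavior: the stored peak count must
// match the truncated number of peaks actually written to the buffer.
inline int truncatedPeakCount(const int foundPeaks, const int maxPeaks = 127)
{
    return std::min(foundPeaks, maxPeaks);
}
```

With the bug, a frame with 300 detected peaks would report 300 in index 0 while only 127 entries were valid; the fix stores 127.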
......@@ -3,34 +3,37 @@
namespace op
{
// VERY IMPORTANT: These fast functions do NOT work for negative integer numbers.
// E.g., positiveIntRound(-180.f) = -179.
// Round functions
// Signed
template<typename T>
inline __device__ char charRound(const T a)
inline __device__ char positiveCharRound(const T a)
{
return char(a+0.5f);
}
template<typename T>
inline __device__ signed char sCharRound(const T a)
inline __device__ signed char positiveSCharRound(const T a)
{
return (signed char)(a+0.5f);
}
template<typename T>
inline __device__ int intRound(const T a)
inline __device__ int positiveIntRound(const T a)
{
return int(a+0.5f);
}
template<typename T>
inline __device__ long longRound(const T a)
inline __device__ long positiveLongRound(const T a)
{
return long(a+0.5f);
}
template<typename T>
inline __device__ long long longLongRound(const T a)
inline __device__ long long positiveLongLongRound(const T a)
{
return (long long)(a+0.5f);
}
......
......@@ -74,7 +74,7 @@ namespace op
&& frameLast > spProducer->get(CV_CAP_PROP_FRAME_COUNT)-1)
error("The desired last frame must be lower than the length of the video or the number of images."
" Current: " + std::to_string(frameLast) + " vs. "
+ std::to_string(intRound(spProducer->get(CV_CAP_PROP_FRAME_COUNT))-1) + ".",
+ std::to_string(positiveIntRound(spProducer->get(CV_CAP_PROP_FRAME_COUNT))-1) + ".",
__LINE__, __FUNCTION__, __FILE__);
// Set frame first and step
if (spProducer->getType() != ProducerType::FlirCamera && spProducer->getType() != ProducerType::IPCamera
......
......@@ -42,7 +42,7 @@ namespace op
private:
SpinnakerWrapper mSpinnakerWrapper;
Point<int> mResolution;
long long mFrameNameCounter;
unsigned long long mFrameNameCounter;
cv::Mat getRawFrame();
......
......@@ -137,12 +137,6 @@ namespace op
*/
void checkFrameIntegrity(cv::Mat& frame);
/**
* It performs flipping and rotation over the desired cv::Mat.
* @param cvMat cv::Mat with the frame matrix to be flipped and/or rotated.
*/
void flipAndRotate(cv::Mat& cvMat) const;
/**
* Protected function which checks that the frame producer has ended. If so, if resets
* or releases the producer according to mRepeatWhenFinished.
......
......@@ -8,34 +8,37 @@ namespace op
// E.g., std::round is really slow (~300 ms vs. ~10 ms when I individually apply it to each element of a whole
// image array)
// VERY IMPORTANT: These fast functions do NOT work for negative integer numbers.
// E.g., positiveIntRound(-180.f) = -179.
// Round functions
// Signed
template<typename T>
inline char charRound(const T a)
inline char positiveCharRound(const T a)
{
return char(a+0.5f);
}
template<typename T>
inline signed char sCharRound(const T a)
inline signed char positiveSCharRound(const T a)
{
return (signed char)(a+0.5f);
}
template<typename T>
inline int intRound(const T a)
inline int positiveIntRound(const T a)
{
return int(a+0.5f);
}
template<typename T>
inline long longRound(const T a)
inline long positiveLongRound(const T a)
{
return long(a+0.5f);
}
template<typename T>
inline long long longLongRound(const T a)
inline long long positiveLongLongRound(const T a)
{
return (long long)(a+0.5f);
}
......
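The pitfall motivating the rename can be checked with a stand-alone copy of the CPU helper above (for illustration only; the real one lives in `include/openpose/utilities/fastMath.hpp`):

```cpp
// Stand-alone copy of the renamed CPU rounding helper, for illustration.
template<typename T>
inline int positiveIntRound(const T a)
{
    // The cast truncates toward zero, so for negative input the +0.5f
    // trick rounds the wrong way: -180.f + 0.5f = -179.5f -> -179.
    return int(a + 0.5f);
}
```

For non-negative input the helper rounds correctly (e.g., 180.4f → 180), but `positiveIntRound(-180.f)` yields -179, exactly the example in the comment above.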
......@@ -21,6 +21,14 @@ namespace op
const int borderMode = cv::BORDER_CONSTANT, const cv::Scalar& borderValue = cv::Scalar{0,0,0});
OP_API void keepRoiInside(cv::Rect& roi, const int imageWidth, const int imageHeight);
/**
* It performs rotation and flipping over the desired cv::Mat.
* @param cvMat cv::Mat with the frame matrix to be rotated and/or flipped.
* @param rotationAngle How much the cvMat element should be rotated. 0 would mean no rotation.
* @param flipFrame Whether to flip the cvMat element. Set to false to disable it.
*/
OP_API void rotateAndFlipFrame(cv::Mat& cvMat, const double rotationAngle, const bool flipFrame = false);
}
#endif // OPENPOSE_UTILITIES_OPEN_CV_HPP
......@@ -765,7 +765,7 @@ namespace op
renderers.emplace_back(std::static_pointer_cast<Renderer>(poseGpuRenderer));
// Display
const auto numberViews = (producerSharedPtr != nullptr
? intRound(producerSharedPtr->get(ProducerProperty::NumberViews)) : 1);
? positiveIntRound(producerSharedPtr->get(ProducerProperty::NumberViews)) : 1);
auto finalOutputSizeGui = finalOutputSize;
if (numberViews > 1 && finalOutputSizeGui.x > 0)
finalOutputSizeGui.x *= numberViews;
......
......@@ -448,9 +448,11 @@ namespace op
for (auto lm = 0; lm < numberPointsInLine; lm++)
{
const auto mX = fastMax(
0, fastMin(imageSize.width-1, intRound(fourPointsVector[i].x + lm*pointDirection[i].x)));
0, fastMin(
imageSize.width-1, positiveIntRound(fourPointsVector[i].x + lm*pointDirection[i].x)));
const auto mY = fastMax(
0, fastMin(imageSize.height-1, intRound(fourPointsVector[i].y + lm*pointDirection[i].y)));
0, fastMin(
imageSize.height-1, positiveIntRound(fourPointsVector[i].y + lm*pointDirection[i].y)));
const cv::Vec3b bgrValue = image.at<cv::Vec3b>(mY, mX);
sum += (bgrValue.val[0] + bgrValue.val[1] + bgrValue.val[2])/3;
count++;
......
......@@ -50,11 +50,11 @@ namespace op
error("Only 1 of the dimensions of net input resolution can be <= 0.",
__LINE__, __FUNCTION__, __FILE__);
if (poseNetInputSize.x <= 0)
poseNetInputSize.x = 16 * intRound(
poseNetInputSize.x = 16 * positiveIntRound(
poseNetInputSize.y * inputResolution.x / (float) inputResolution.y / 16.f
);
else // if (poseNetInputSize.y <= 0)
poseNetInputSize.y = 16 * intRound(
poseNetInputSize.y = 16 * positiveIntRound(
poseNetInputSize.x * inputResolution.y / (float) inputResolution.x / 16.f
);
}
......@@ -68,10 +68,10 @@ namespace op
error("All scales must be in the range [0, 1], i.e., 0 <= 1-scale_number*scale_gap <= 1",
__LINE__, __FUNCTION__, __FILE__);
const auto targetWidth = fastTruncate(intRound(poseNetInputSize.x * currentScale) / 16 * 16, 1,
poseNetInputSize.x);
const auto targetHeight = fastTruncate(intRound(poseNetInputSize.y * currentScale) / 16 * 16, 1,
poseNetInputSize.y);
const auto targetWidth = fastTruncate(
positiveIntRound(poseNetInputSize.x * currentScale) / 16 * 16, 1, poseNetInputSize.x);
const auto targetHeight = fastTruncate(
positiveIntRound(poseNetInputSize.y * currentScale) / 16 * 16, 1, poseNetInputSize.y);
const Point<int> targetSize{targetWidth, targetHeight};
scaleInputToNetInputs[i] = resizeGetScaleFactor(inputResolution, targetSize);
netInputSizes[i] = targetSize;
......
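The scaled net input dimensions above are rounded, snapped down to a multiple of 16, and clamped. A minimal sketch, with `positiveIntRound` and `fastTruncate` reimplemented here for illustration (the real signatures are in `fastMath.hpp`):

```cpp
#include <algorithm>

// Illustrative reimplementations of the OpenPose helpers used above.
inline int positiveIntRound(const float a) { return int(a + 0.5f); }
inline int fastTruncate(const int value, const int min, const int max)
{
    return std::max(min, std::min(max, value));
}

// Scaled target dimension: round, snap down to a multiple of 16 via
// integer division, then clamp to [1, netInputSize].
inline int targetDimension(const int netInputSize, const float currentScale)
{
    return fastTruncate(
        positiveIntRound(netInputSize * currentScale) / 16 * 16, 1, netInputSize);
}
```

E.g., a 368-pixel net input at scale 0.5 gives round(184) / 16 * 16 = 176, never exceeding the original 368.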
......@@ -63,7 +63,7 @@ namespace op
// [0, 255]
else if (heatMapScaleMode == ScaleMode::UnsignedChar)
for (auto i = 0u ; i < volumeBodyParts ; i++)
heatMapsPtr[i] = (float)intRound(fastTruncate(heatMapsPtr[i]) * 255.f);
heatMapsPtr[i] = (float)positiveIntRound(fastTruncate(heatMapsPtr[i]) * 255.f);
// Avoid values outside original range
else
for (auto i = 0u ; i < volumeBodyParts ; i++)
......
......@@ -52,7 +52,7 @@ namespace op
{
// GPU rendering
#ifdef USE_CUDA
// I prefer std::round(T&) over intRound(T) for std::atomic
// Prefer std::round(T&) over positiveIntRound(T) for std::atomic
const auto elementRendered = spElementToRender->load();
const auto numberPeople = faceKeypoints.getSize(0);
const Point<int> frameSize{outputData.getSize(1), outputData.getSize(0)};
......
......@@ -51,8 +51,9 @@ namespace op
}
}
void addPeopleIds(cv::Mat& cvOutputData, const Array<long long>& poseIds, const Array<float>& poseKeypoints,
const int borderMargin)
void addPeopleIds(
cv::Mat& cvOutputData, const Array<long long>& poseIds, const Array<float>& poseKeypoints,
const int borderMargin)
{
try
{
......@@ -68,27 +69,28 @@ namespace op
const auto indexSecondary = i * poseKeypointsArea + poseKeypoints.getSize(2);
if (poseKeypoints[indexMain+2] > isVisible || poseKeypoints[indexSecondary+2] > isVisible)
{
const auto xA = intRound(poseKeypoints[indexMain]);
const auto yA = intRound(poseKeypoints[indexMain+1]);
const auto xB = intRound(poseKeypoints[indexSecondary]);
const auto yB = intRound(poseKeypoints[indexSecondary+1]);
const auto xA = positiveIntRound(poseKeypoints[indexMain]);
const auto yA = positiveIntRound(poseKeypoints[indexMain+1]);
const auto xB = positiveIntRound(poseKeypoints[indexSecondary]);
const auto yB = positiveIntRound(poseKeypoints[indexSecondary+1]);
int x;
int y;
if (poseKeypoints[indexMain+2] > isVisible && poseKeypoints[indexSecondary+2] > isVisible)
{
const auto keypointRatio = intRound(0.15f * std::sqrt((xA-xB)*(xA-xB) + (yA-yB)*(yA-yB)));
const auto keypointRatio = positiveIntRound(
0.15f * std::sqrt((xA-xB)*(xA-xB) + (yA-yB)*(yA-yB)));
x = xA + 3*keypointRatio;
y = yA - 3*keypointRatio;
}
else if (poseKeypoints[indexMain+2] > isVisible)
{
x = xA + intRound(0.25f*borderMargin);
y = yA - intRound(0.25f*borderMargin);
x = xA + positiveIntRound(0.25f*borderMargin);
y = yA - positiveIntRound(0.25f*borderMargin);
}
else //if (poseKeypoints[indexSecondary+2] > isVisible)
{
x = xB + intRound(0.25f*borderMargin);
y = yB - intRound(0.5f*borderMargin);
x = xB + positiveIntRound(0.25f*borderMargin);
y = yB - positiveIntRound(0.5f*borderMargin);
}
putTextOnCvMat(cvOutputData, std::to_string(poseIds[i]), {x, y}, WHITE_SCALAR, false, cvOutputData.cols);
}
......@@ -125,7 +127,7 @@ namespace op
if (cvOutputData.empty())
error("Wrong input element (empty cvOutputData).", __LINE__, __FUNCTION__, __FILE__);
// Size
const auto borderMargin = intRound(fastMax(cvOutputData.cols, cvOutputData.rows) * 0.025);
const auto borderMargin = positiveIntRound(fastMax(cvOutputData.cols, cvOutputData.rows) * 0.025);
// Update fps
updateFps(mLastId, mFps, mFpsCounter, mFpsQueue, id, mNumberGpus);
// Fps or s/gpu
......@@ -133,8 +135,9 @@ namespace op
std::snprintf(charArrayAux, 15, "%4.1f fps", mFps);
// Recording inverse: sec/gpu
// std::snprintf(charArrayAux, 15, "%4.2f s/gpu", (mFps != 0. ? mNumberGpus/mFps : 0.));
putTextOnCvMat(cvOutputData, charArrayAux, {intRound(cvOutputData.cols - borderMargin), borderMargin},
WHITE_SCALAR, true, cvOutputData.cols);
putTextOnCvMat(
cvOutputData, charArrayAux, {positiveIntRound(cvOutputData.cols - borderMargin), borderMargin},
WHITE_SCALAR, true, cvOutputData.cols);
// Part to show
// Allowing some buffer when changing the part to show (if >= 2 GPUs)
// I.e. one GPU might return a previous part after the other GPU returns the new desired part, it looks
......
......@@ -147,7 +147,7 @@ namespace op
// [0, 255]
else if (heatMapScaleMode == ScaleMode::UnsignedChar)
for (auto i = 0u ; i < volumeBodyParts ; i++)
heatMapsPtr[i] = (float)intRound(fastTruncate(heatMapsPtr[i]) * 255.f);
heatMapsPtr[i] = (float)positiveIntRound(fastTruncate(heatMapsPtr[i]) * 255.f);
// Avoid values outside original range
else
for (auto i = 0u ; i < volumeBodyParts ; i++)
......@@ -295,11 +295,13 @@ namespace op
const auto minHandSize = fastMin(handRectangle.width, handRectangle.height);
// // Debugging -> red rectangle
// if (handRectangle.width > 0)
// cv::rectangle(cvInputDataCopied,
// cv::Point{intRound(handRectangle.x), intRound(handRectangle.y)},
// cv::Point{intRound(handRectangle.x + handRectangle.width),
// intRound(handRectangle.y + handRectangle.height)},
// cv::Scalar{(hand * 255.f),0.f,255.f}, 2);
// cv::rectangle(
// cvInputDataCopied,
// cv::Point{positiveIntRound(handRectangle.x),
// positiveIntRound(handRectangle.y)},
// cv::Point{positiveIntRound(handRectangle.x + handRectangle.width),
// positiveIntRound(handRectangle.y + handRectangle.height)},
// cv::Scalar{(hand * 255.f),0.f,255.f}, 2);
// Get parts
if (minHandSize > 1 && handRectangle.area() > 10)
{
......@@ -308,12 +310,13 @@ namespace op
{
// // Debugging -> green rectangle overwriting red one
// if (handRectangle.width > 0)
// cv::rectangle(cvInputDataCopied,
// cv::Point{intRound(handRectangle.x),
// intRound(handRectangle.y)},
// cv::Point{intRound(handRectangle.x + handRectangle.width),
// intRound(handRectangle.y + handRectangle.height)},
// cv::Scalar{(hand * 255.f),255.f,0.f}, 2);
// cv::rectangle(
// cvInputDataCopied,
// cv::Point{positiveIntRound(handRectangle.x),
// positiveIntRound(handRectangle.y)},
// cv::Point{positiveIntRound(handRectangle.x + handRectangle.width),
// positiveIntRound(handRectangle.y + handRectangle.height)},
// cv::Scalar{(hand * 255.f),255.f,0.f}, 2);
// Parameters
cv::Mat affineMatrix;
// Resize image to hands positions + cv::Mat -> float*
......@@ -339,16 +342,16 @@ namespace op
{1, handCurrent.getSize(1), handCurrent.getSize(2)}, 0.f);
const auto handRectangleScale = recenter(
handRectangle,
(float)(intRound(handRectangle.width * scale) / 2 * 2),
(float)(intRound(handRectangle.height * scale) / 2 * 2)
(float)(positiveIntRound(handRectangle.width * scale) / 2 * 2),
(float)(positiveIntRound(handRectangle.height * scale) / 2 * 2)
);
// // Debugging -> blue rectangle
// cv::rectangle(cvInputDataCopied,
// cv::Point{intRound(handRectangleScale.x),
// intRound(handRectangleScale.y)},
// cv::Point{intRound(handRectangleScale.x
// cv::Point{positiveIntRound(handRectangleScale.x),
// positiveIntRound(handRectangleScale.y)},
// cv::Point{positiveIntRound(handRectangleScale.x
// + handRectangleScale.width),
// intRound(handRectangleScale.y
// positiveIntRound(handRectangleScale.y
// + handRectangleScale.height)},
// cv::Scalar{255,0,0}, 2);
// Parameters
......
......@@ -46,14 +46,14 @@ namespace op
}
}
void HandGpuRenderer::renderHandInherited(Array<float>& outputData,
const std::array<Array<float>, 2>& handKeypoints)
void HandGpuRenderer::renderHandInherited(
Array<float>& outputData, const std::array<Array<float>, 2>& handKeypoints)
{
try
{
// GPU rendering
#ifdef USE_CUDA
// I prefer std::round(T&) over intRound(T) for std::atomic
// I prefer std::round(T&) over positiveIntRound(T) for std::atomic
const auto elementRendered = spElementToRender->load();
const auto numberPeople = handKeypoints[0].getSize(0);
const Point<int> frameSize{outputData.getSize(1), outputData.getSize(0)};
......
......@@ -16,7 +16,7 @@ namespace op
const auto vectorAToBY = candidateBPtr[3*j+1] - candidateAPtr[3*i+1];
const auto vectorAToBMax = fastMax(std::abs(vectorAToBX), std::abs(vectorAToBY));
const auto numberPointsInLine = fastMax(
5, fastMin(25, intRound(std::sqrt(5*vectorAToBMax))));
5, fastMin(25, positiveIntRound(std::sqrt(5*vectorAToBMax))));
const auto vectorNorm = T(std::sqrt( vectorAToBX*vectorAToBX + vectorAToBY*vectorAToBY ));
// If the peaksPtr are coincident. Don't connect them.
if (vectorNorm > 1e-6)
......@@ -33,9 +33,9 @@ namespace op
for (auto lm = 0; lm < numberPointsInLine; lm++)
{
const auto mX = fastMax(
0, fastMin(heatMapSize.x-1, intRound(sX + lm*vectorAToBXInLine)));
0, fastMin(heatMapSize.x-1, positiveIntRound(sX + lm*vectorAToBXInLine)));
const auto mY = fastMax(
0, fastMin(heatMapSize.y-1, intRound(sY + lm*vectorAToBYInLine)));
0, fastMin(heatMapSize.y-1, positiveIntRound(sY + lm*vectorAToBYInLine)));
const auto idx = mY * heatMapSize.x + mX;
const auto score = (vectorAToBNormX*mapX[idx] + vectorAToBNormY*mapY[idx]);
if (score > interThreshold)
......@@ -85,8 +85,8 @@ namespace op
const auto bodyPartB = bodyPartPairs[2*pairIndex+1];
const auto* candidateAPtr = peaksPtr + bodyPartA*peaksOffset;
const auto* candidateBPtr = peaksPtr + bodyPartB*peaksOffset;
const auto numberPeaksA = intRound(candidateAPtr[0]);
const auto numberPeaksB = intRound(candidateBPtr[0]);
const auto numberPeaksA = positiveIntRound(candidateAPtr[0]);
const auto numberPeaksB = positiveIntRound(candidateBPtr[0]);
// E.g., neck-nose connection. If one of them is empty (e.g., no noses detected)
// Add the non-empty elements into the peopleVector
......@@ -394,8 +394,8 @@ namespace op
const auto bodyPartB = bodyPartPairs[2*pairIndex+1];
const auto* candidateAPtr = peaksPtr + bodyPartA*peaksOffset;
const auto* candidateBPtr = peaksPtr + bodyPartB*peaksOffset;
const auto numberPeaksA = intRound(candidateAPtr[0]);
const auto numberPeaksB = intRound(candidateBPtr[0]);
const auto numberPeaksA = positiveIntRound(candidateAPtr[0]);
const auto numberPeaksB = positiveIntRound(candidateBPtr[0]);
const auto firstIndex = (int)pairIndex*pairScores.getSize(1)*pairScores.getSize(2);
// E.g., neck-nose connection. For each neck
for (auto indexA = 0; indexA < numberPeaksA; indexA++)
......@@ -725,7 +725,7 @@ namespace op
// Array<T> poseKeypoints2 = poseKeypoints.clone();
// const auto rootIndex = 1;
// const auto rootNumberIndex = rootIndex*(maxPeaks+1)*3;
// const auto numberPeople = intRound(peaksPtr[rootNumberIndex]);
// const auto numberPeople = positiveIntRound(peaksPtr[rootNumberIndex]);
// poseKeypoints.reset({numberPeople, (int)numberBodyParts, 3}, 0);
// poseScores.reset(numberPeople, 0);
// // // 48 channels
......@@ -800,8 +800,8 @@ namespace op
// // Set (x,y) coordinates from the distance
// const auto indexChannel = 2*bpChannel;
// // // Not refined method
// // const auto index = intRound(
// // rootY/scaleFactor)*heatMapSize.x + intRound(rootX/scaleFactor);
// // const auto index = positiveIntRound(rootY/scaleFactor)*heatMapSize.x
// + positiveIntRound(rootX/scaleFactor);
// // const Point<T> neckPartDist{
// // increaseRatio*(mapX[index]*SIGMA[indexChannel]+AVERAGE[indexChannel]),
// // increaseRatio*(mapY[index]*SIGMA[indexChannel+1]+AVERAGE[indexChannel+1])};
......@@ -812,11 +812,11 @@ namespace op
// Point<T> neckPartDistRefined{0, 0};
// auto counterRefinements = 0;
// // We must keep it inside the image size
// for (auto y = fastMax(0, intRound(rootY/scaleFactor) - constant);
// y < fastMin(heatMapSize.y, intRound(rootY/scaleFactor) + constant+1) ; y++)
// for (auto y = fastMax(0, positiveIntRound(rootY/scaleFactor) - constant);
// y < fastMin(heatMapSize.y, positiveIntRound(rootY/scaleFactor) + constant+1) ; y++)
// {
// for (auto x = fastMax(0, intRound(rootX/scaleFactor) - constant);
// x < fastMin(heatMapSize.x, intRound(rootX/scaleFactor) + constant+1) ; x++)
// for (auto x = fastMax(0, positiveIntRound(rootX/scaleFactor) - constant);
// x < fastMin(heatMapSize.x, positiveIntRound(rootX/scaleFactor) + constant+1) ; x++)
// {
// const auto index = y*heatMapSize.x + x;
// neckPartDistRefined.x += mapX[index];
......@@ -836,15 +836,17 @@ namespace op
// // Set (temporary) body part score
// poseKeypoints[{p,bpOrig,2}] = T(0.0501);
// // Associate estimated keypoint with closest one
// const auto xCleaned = fastMax(0, fastMin(heatMapSize.x-1, intRound(partX/scaleFactor)));
// const auto yCleaned = fastMax(0, fastMin(heatMapSize.y-1, intRound(partY/scaleFactor)));
// const auto xCleaned = fastMax(
// 0, fastMin(heatMapSize.x-1, positiveIntRound(partX/scaleFactor)));
// const auto yCleaned = fastMax(
// 0, fastMin(heatMapSize.y-1, positiveIntRound(partY/scaleFactor)));
// const auto partConfidence = heatMapPtr[
// bpOrig * heatMapOffset + yCleaned*heatMapSize.x + xCleaned];
// // If partConfidence is big enough, it means we are close to a keypoint
// if (partConfidence > T(0.05))
// {
// const auto candidateNumberIndex = bpOrig*(maxPeaks+1)*3;
// const auto numberCandidates = intRound(peaksPtr[candidateNumberIndex]);
// const auto numberCandidates = positiveIntRound(peaksPtr[candidateNumberIndex]);
// int closestIndex = -1;
// T closetValue = std::numeric_limits<T>::max();
// for (auto i = 0 ; i < numberCandidates ; i++)
......@@ -923,7 +925,8 @@ namespace op
// const auto* mapY = heatMapPtr + (offsetIndex+1) * heatMapOffset;
// const auto increaseRatio = scaleFactor*scaleDownFactor;
// // // Not refined method
// // const auto index = intRound(rootY/scaleFactor)*heatMapSize.x + intRound(rootX/scaleFactor);
// // const auto index = positiveIntRound(rootY/scaleFactor)*heatMapSize.x
// + positiveIntRound(rootX/scaleFactor);
// // const Point<T> neckPartDist{
// // increaseRatio*(mapX[index]*SIGMA[indexChannel]+AVERAGE[indexChannel]),
// // increaseRatio*(mapY[index]*SIGMA[indexChannel+1]+AVERAGE[indexChannel+1])};
......@@ -934,11 +937,11 @@ namespace op
// Point<T> neckPartDistRefined{0, 0};
// auto counterRefinements = 0;
// // We must keep it inside the image size
// for (auto y = fastMax(0, intRound(rootY/scaleFactor) - constant);
// y < fastMin(heatMapSize.y, intRound(rootY/scaleFactor) + constant+1) ; y++)
// for (auto y = fastMax(0, positiveIntRound(rootY/scaleFactor) - constant);
// y < fastMin(heatMapSize.y, positiveIntRound(rootY/scaleFactor) + constant+1) ; y++)
// {
// for (auto x = fastMax(0, intRound(rootX/scaleFactor) - constant);
// x < fastMin(heatMapSize.x, intRound(rootX/scaleFactor) + constant+1) ; x++)
// for (auto x = fastMax(0, positiveIntRound(rootX/scaleFactor) - constant);
// x < fastMin(heatMapSize.x, positiveIntRound(rootX/scaleFactor) + constant+1) ; x++)
// {
// const auto index = y*heatMapSize.x + x;
// neckPartDistRefined.x += mapX[index];
......@@ -958,15 +961,15 @@ namespace op
// // Set (temporary) body part score
// result[2] = T(0.0501);
// // Associate estimated keypoint with closest one
// const auto xCleaned = fastMax(0, fastMin(heatMapSize.x-1, intRound(partX/scaleFactor)));
// const auto yCleaned = fastMax(0, fastMin(heatMapSize.y-1, intRound(partY/scaleFactor)));
// const auto xCleaned = fastMax(0, fastMin(heatMapSize.x-1, positiveIntRound(partX/scaleFactor)));
// const auto yCleaned = fastMax(0, fastMin(heatMapSize.y-1, positiveIntRound(partY/scaleFactor)));
// const auto partConfidence = heatMapPtr[
// targetIndex * heatMapOffset + yCleaned*heatMapSize.x + xCleaned];
// // If partConfidence is big enough, it means we are close to a keypoint
// if (partConfidence > T(0.05))
// {
// const auto candidateNumberIndex = targetIndex*(maxPeaks+1)*3;
// const auto numberCandidates = intRound(peaksPtr[candidateNumberIndex]);
// const auto numberCandidates = positiveIntRound(peaksPtr[candidateNumberIndex]);
// int closestIndex = -1;
// T closetValue = std::numeric_limits<T>::max();
// for (auto i = 0 ; i < numberCandidates ; i++)
......@@ -1025,7 +1028,7 @@ namespace op
// const auto targetIndex = MAPPING[index];
// // Get all candidate keypoints
// const auto partNumberIndex = targetIndex*(maxPeaks+1)*3;
// const auto numberPartParts = intRound(peaksPtr[partNumberIndex]);
// const auto numberPartParts = positiveIntRound(peaksPtr[partNumberIndex]);
// std::vector<std::array<T, 3>> currentPartCandidates(numberPartParts);
// for (auto i = 0u ; i < currentPartCandidates.size() ; i++)
// {
......
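The PAF scoring hunks above sample points along the segment between candidate keypoints A and B; the sample count grows with the segment length and is clamped to [5, 25]. A hypothetical stand-alone version of that computation:

```cpp
#include <algorithm>
#include <cmath>

// Illustrative reimplementation of the rounding helper used above.
inline int positiveIntRound(const float a) { return int(a + 0.5f); }

// Number of points sampled along the A->B segment for the PAF line
// integral: proportional to sqrt(5 * max(|dx|, |dy|)), clamped to [5, 25].
inline int numberPointsInLine(const float vectorAToBX, const float vectorAToBY)
{
    const auto vectorAToBMax =
        std::max(std::abs(vectorAToBX), std::abs(vectorAToBY));
    return std::max(5, std::min(25, positiveIntRound(std::sqrt(5.f * vectorAToBMax))));
}
```

Short segments are still sampled at least 5 times, while very long ones are capped at 25 samples to bound the per-pair cost.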
......@@ -6,7 +6,6 @@
#include <openpose/gpu/cl2.hpp>
#endif
#include <openpose/net/resizeAndMergeBase.hpp>
#include <openpose/utilities/fastMath.hpp>
#include <openpose/net/resizeAndMergeCaffe.hpp>
namespace op
......@@ -56,19 +55,16 @@ namespace op
}
template <typename T>
void ResizeAndMergeCaffe<T>::Reshape(const std::vector<caffe::Blob<T>*>& bottom,
const std::vector<caffe::Blob<T>*>& top,
const T netFactor,
const T scaleFactor,
const bool mergeFirstDimension,
const int gpuID)
void ResizeAndMergeCaffe<T>::Reshape(
const std::vector<caffe::Blob<T>*>& bottom, const std::vector<caffe::Blob<T>*>& top, const T netFactor,
const T scaleFactor, const bool mergeFirstDimension, const int gpuID)
{
try
{
#ifdef USE_CAFFE
// Sanity checks
if (top.size() != 1)
error("top.size() != 1", __LINE__, __FUNCTION__, __FILE__);
error("top.size() != 1.", __LINE__, __FUNCTION__, __FILE__);
if (bottom.empty())
error("bottom cannot be empty.", __LINE__, __FUNCTION__, __FILE__);
// Data
......@@ -81,16 +77,16 @@ namespace op
// E.g., 100x100 image --> 200x200 --> 0-99 to 0-199 --> scale = 199/99 (not 2!)
// E.g., 101x101 image --> 201x201 --> scale = 2
// Test: pixel 0 --> 0, pixel 99 (ex 1) --> 199, pixel 100 (ex 2) --> 200
topShape[2] = intRound((topShape[2]*netFactor - 1.f) * scaleFactor) + 1;
topShape[3] = intRound((topShape[3]*netFactor - 1.f) * scaleFactor) + 1;
topShape[2] = (int)std::round((topShape[2]*netFactor - 1.f) * scaleFactor) + 1;
topShape[3] = (int)std::round((topShape[3]*netFactor - 1.f) * scaleFactor) + 1;
topBlob->Reshape(topShape);
// Array sizes
mTopSize = std::array<int, 4>{topBlob->shape(0), topBlob->shape(1), topBlob->shape(2),
topBlob->shape(3)};
mTopSize = std::array<int, 4>{
topBlob->shape(0), topBlob->shape(1), topBlob->shape(2), topBlob->shape(3)};
mBottomSizes.resize(bottom.size());
for (auto i = 0u ; i < mBottomSizes.size() ; i++)
mBottomSizes[i] = std::array<int, 4>{bottom[i]->shape(0), bottom[i]->shape(1),
bottom[i]->shape(2), bottom[i]->shape(3)};
mBottomSizes[i] = std::array<int, 4>{
bottom[i]->shape(0), bottom[i]->shape(1), bottom[i]->shape(2), bottom[i]->shape(3)};
#ifdef USE_OPENCL
// GPU ID
mGpuID = gpuID;
......
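The Reshape hunk above uses 0-based pixel indexing: indices 0..n-1 must map onto 0..out-1, hence the `- 1.f` / `+ 1` bracketing of the scale. A small check of the comment's examples (helper names here are illustrative):

```cpp
#include <cmath>

// 0-based resize: out = round((n*netFactor - 1) * scaleFactor) + 1, so a
// 100x100 image doubled with scaleFactor = 199/99 maps pixel 99 -> 199.
inline int resizedDimension(const int n, const float netFactor, const float scaleFactor)
{
    return (int)std::round((n * netFactor - 1.f) * scaleFactor) + 1;
}
```

With netFactor 1: 100 pixels at scaleFactor 199/99 give 200, and 101 pixels at scaleFactor 2 give 201, matching the "(not 2!)" comment in the code.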
......@@ -302,8 +302,9 @@ namespace op
// Get scale net to output (i.e., image input)
// Note: In order to resize to input size, (un)comment the following lines
const auto scaleProducerToNetInput = resizeGetScaleFactor(inputDataSize, mNetOutputSize);
const Point<int> netSize{intRound(scaleProducerToNetInput*inputDataSize.x),
intRound(scaleProducerToNetInput*inputDataSize.y)};
const Point<int> netSize{
(int)std::round(scaleProducerToNetInput*inputDataSize.x),
(int)std::round(scaleProducerToNetInput*inputDataSize.y)};
mScaleNetToOutput = {(float)resizeGetScaleFactor(netSize, inputDataSize)};
// mScaleNetToOutput = 1.f;
// 3. Get peaks by Non-Maximum Suppression
......@@ -334,10 +335,10 @@ namespace op
/ mScaleNetToOutput;
// Make rectangle bigger to make sure the whole body is inside
cv::Rect cvRectangle{
intRound(rectangleF.x - 0.2*rectangleF.width),
intRound(rectangleF.y - 0.2*rectangleF.height),
intRound(rectangleF.width*1.4),
intRound(rectangleF.height*1.4)
positiveIntRound(rectangleF.x - 0.2*rectangleF.width),
positiveIntRound(rectangleF.y - 0.2*rectangleF.height),
positiveIntRound(rectangleF.width*1.4),
positiveIntRound(rectangleF.height*1.4)
};
keepRoiInside(cvRectangle, inputNetData[0].getSize(3), inputNetData[0].getSize(2));
// Input size
......@@ -372,8 +373,8 @@ namespace op
/*const*/ auto scaleNetToRoi = resizeGetScaleFactor(inputSizeInit, targetSize);
// Update rectangle to avoid black padding and instead take full advantage of the network area
const auto padding = Point<int>{
intRound((targetSize.x-1) / scaleNetToRoi + 1 - inputSizeInit.x),
intRound((targetSize.y-1) / scaleNetToRoi + 1 - inputSizeInit.y)
(int)std::round((targetSize.x-1) / scaleNetToRoi + 1 - inputSizeInit.x),
(int)std::round((targetSize.y-1) / scaleNetToRoi + 1 - inputSizeInit.y)
};
// Width requires padding
if (padding.x > 2 || padding.y > 2) // 2 pixels as threshold
......
......@@ -141,7 +141,7 @@ namespace op
// [0, 255]
else if (mHeatMapScaleMode == ScaleMode::UnsignedChar)
for (auto i = 0u ; i < volumeBodyParts ; i++)
heatMaps[i] = (float)intRound(fastTruncate(heatMaps[i]) * 255.f);
heatMaps[i] = (float)positiveIntRound(fastTruncate(heatMaps[i]) * 255.f);
// Avoid values outside original range
else
for (auto i = 0u ; i < volumeBodyParts ; i++)
......@@ -170,7 +170,7 @@ namespace op
// [0, 255]
else if (mHeatMapScaleMode == ScaleMode::UnsignedChar)
for (auto i = 0u ; i < channelOffset ; i++)
heatMapsPtr[i] = (float)intRound(fastTruncate(heatMapsPtr[i]) * 255.f);
heatMapsPtr[i] = (float)positiveIntRound(fastTruncate(heatMapsPtr[i]) * 255.f);
// Avoid values outside original range
else
for (auto i = 0u ; i < channelOffset ; i++)
......@@ -201,7 +201,7 @@ namespace op
// [0, 255]
else if (mHeatMapScaleMode == ScaleMode::UnsignedChar)
for (auto i = 0u ; i < volumePAFs ; i++)
heatMapsPtr[i] = (float)intRound(
heatMapsPtr[i] = (float)positiveIntRound(
fastTruncate(heatMapsPtr[i], -1.f) * 128.5f + 128.5f
);
// Avoid values outside original range
......@@ -245,7 +245,7 @@ namespace op
const auto* candidatesCpuPtr = getCandidatesCpuConstPtr();
for (auto part = 0u ; part < numberBodyParts ; part++)
{
const auto numberPartCandidates = intRound(candidatesCpuPtr[part*peaksArea]);
const auto numberPartCandidates = (int)std::round(candidatesCpuPtr[part*peaksArea]);
candidates[part].resize(numberPartCandidates);
const auto* partCandidatesPtr = &candidatesCpuPtr[part*peaksArea+3];
for (auto candidate = 0 ; candidate < numberPartCandidates ; candidate++)
......
......@@ -8,7 +8,7 @@ namespace op
const bool undistortImage, const int cameraIndex) :
Producer{ProducerType::FlirCamera, cameraParameterPath, undistortImage, -1},
mSpinnakerWrapper{cameraParameterPath, cameraResolution, undistortImage, cameraIndex},
mFrameNameCounter{0}
mFrameNameCounter{0ull}
{
try
{
......@@ -80,7 +80,7 @@ namespace op
try
{
const auto stringLength = 12u;
return toFixedLengthString( fastMax(0ll, longLongRound(mFrameNameCounter)), stringLength);
return toFixedLengthString(mFrameNameCounter, stringLength);
}
catch (const std::exception& e)
{
......
......@@ -88,7 +88,7 @@ namespace op
try
{
std::vector<cv::Mat> rawFrames;
for (auto i = 0 ; i < intRound(Producer::get(ProducerProperty::NumberViews)) ; i++)
for (auto i = 0 ; i < positiveIntRound(Producer::get(ProducerProperty::NumberViews)) ; i++)
rawFrames.emplace_back(getRawFrame());
return rawFrames;
}
......
......@@ -2,6 +2,7 @@
#include <openpose/utilities/check.hpp>
#include <openpose/utilities/fastMath.hpp>
#include <openpose/utilities/fileSystem.hpp>
#include <openpose/utilities/openCv.hpp>
#include <openpose/producer/producer.hpp>
namespace op
......@@ -119,7 +120,9 @@ namespace op
for (auto& frame : frames)
{
// Flip + rotate frame
flipAndRotate(frame);
const auto rotationAngle = mProperties[(unsigned char)ProducerProperty::Rotation];
const auto flipFrame = (mProperties[(unsigned char)ProducerProperty::Flip] == 1.);
rotateAndFlipFrame(frame, rotationAngle, flipFrame);
// Check frame integrity
checkFrameIntegrity(frame);
// If any frame invalid --> exit
......@@ -291,8 +294,8 @@ namespace op
|| (frame.rows != get(CV_CAP_PROP_FRAME_HEIGHT) && get(CV_CAP_PROP_FRAME_HEIGHT) > 0)))
{
log("Frame size changed. Returning empty frame.\nExpected vs. received sizes: "
+ std::to_string(intRound(get(CV_CAP_PROP_FRAME_WIDTH)))
+ "x" + std::to_string(intRound(get(CV_CAP_PROP_FRAME_HEIGHT)))
+ std::to_string(positiveIntRound(get(CV_CAP_PROP_FRAME_WIDTH)))
+ "x" + std::to_string(positiveIntRound(get(CV_CAP_PROP_FRAME_HEIGHT)))
+ " vs. " + std::to_string(frame.cols) + "x" + std::to_string(frame.rows),
Priority::Max, __LINE__, __FUNCTION__, __FILE__);
frame = cv::Mat();
......@@ -305,51 +308,6 @@ namespace op
}
}
void Producer::flipAndRotate(cv::Mat& frame) const
{
try
{
if (!frame.empty())
{
// Rotate it if desired
const auto rotationAngle = mProperties[(unsigned char)ProducerProperty::Rotation];
const auto flipFrame = (mProperties[(unsigned char)ProducerProperty::Flip] == 1.);
if (rotationAngle == 0.)
{
if (flipFrame)
cv::flip(frame, frame, 1);
}
else if (rotationAngle == 90.)
{
cv::transpose(frame, frame);
if (!flipFrame)
cv::flip(frame, frame, 0);
}
else if (rotationAngle == 180.)
{
if (flipFrame)
cv::flip(frame, frame, 0);
else
cv::flip(frame, frame, -1);
}
else if (rotationAngle == 270.)
{
cv::transpose(frame, frame);
if (flipFrame)
cv::flip(frame, frame, -1);
else
cv::flip(frame, frame, 1);
}
else
error("Rotation angle != {0, 90, 180, 270} degrees.", __LINE__, __FUNCTION__, __FILE__);
}
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
}
void Producer::ifEndedResetOrRelease()
{
try
......@@ -424,7 +382,7 @@ namespace op
// set(frames, X) sets to frame X+delta, due to codecs issues)
else if (difference < -0.45 && mNumberSetPositionTrackingFps < numberSetPositionThreshold)
{
const auto sleepMs = intRound( (-difference*nsPerFrame*1e-6)*0.99 );
const auto sleepMs = positiveIntRound( (-difference*nsPerFrame*1e-6)*0.99 );
std::this_thread::sleep_for(std::chrono::milliseconds{sleepMs});
}
}
......
......@@ -60,7 +60,7 @@ namespace op
try
{
const auto stringLength = 12u;
return toFixedLengthString( fastMax(0ll, longLongRound(get(CV_CAP_PROP_POS_FRAMES))), stringLength);
return toFixedLengthString( fastMax(0ull, uLongLongRound(get(CV_CAP_PROP_POS_FRAMES))), stringLength);
}
catch (const std::exception& e)
{
......
......@@ -33,7 +33,8 @@ namespace op
try
{
if (capProperty == CV_CAP_PROP_FRAME_WIDTH)
return VideoCaptureReader::get(capProperty) / intRound(Producer::get(ProducerProperty::NumberViews));
return VideoCaptureReader::get(capProperty)
/ positiveIntRound(Producer::get(ProducerProperty::NumberViews));
else
return VideoCaptureReader::get(capProperty);
}
......@@ -73,7 +74,7 @@ namespace op
{
try
{
const auto numberViews = intRound(Producer::get(ProducerProperty::NumberViews));
const auto numberViews = positiveIntRound(Producer::get(ProducerProperty::NumberViews));
auto cvMats = VideoCaptureReader::getRawFrames();
// Split image
if (cvMats.size() == 1 && numberViews > 1)
......
......@@ -29,15 +29,15 @@ namespace op
const std::string logMessage{
"Desired webcam resolution " + std::to_string(mResolution.x) + "x"
+ std::to_string(mResolution.y) + " could not be set. Final resolution: "
+ std::to_string(intRound(get(CV_CAP_PROP_FRAME_WIDTH))) + "x"
+ std::to_string(intRound(get(CV_CAP_PROP_FRAME_HEIGHT))) };
+ std::to_string(positiveIntRound(get(CV_CAP_PROP_FRAME_WIDTH))) + "x"
+ std::to_string(positiveIntRound(get(CV_CAP_PROP_FRAME_HEIGHT))) };
log(logMessage, Priority::Max, __LINE__, __FUNCTION__, __FILE__);
}
}
// Set resolution
mResolution = Point<int>{
intRound(get(CV_CAP_PROP_FRAME_WIDTH)),
intRound(get(CV_CAP_PROP_FRAME_HEIGHT))};
positiveIntRound(get(CV_CAP_PROP_FRAME_WIDTH)),
positiveIntRound(get(CV_CAP_PROP_FRAME_HEIGHT))};
// Start buffering thread
mThreadOpened = true;
mThread = std::thread{&WebcamReader::bufferingThread, this};
......@@ -188,18 +188,30 @@ namespace op
const auto newNorm = (
cvMat.empty() ? mLastNorm : cv::norm(cvMat.row(cvMat.rows/2)));
if (mLastNorm == newNorm)
{
mDisconnectedCounter++;
if (mDisconnectedCounter > 1 && cvMat.empty())
log("Camera frame empty (it has occurred for the last " + std::to_string(mDisconnectedCounter)
+ " consecutive frames).", Priority::Max);
}
else
{
mLastNorm = newNorm;
mDisconnectedCounter = 0;
}
// Camera disconnected: black image
if (!cameraConnected || cvMat.empty())
// If camera disconnected: black image
if (!cameraConnected)
{
cvMat = cv::Mat(mResolution.y, mResolution.x, CV_8UC3, cv::Scalar{0,0,0});
putTextOnCvMat(cvMat, "Camera disconnected, reconnecting...", {cvMat.cols/16, cvMat.rows/2},
cv::Scalar{255, 255, 255}, false, intRound(2.3*cvMat.cols));
cv::Scalar{255, 255, 255}, false, positiveIntRound(2.3*cvMat.cols));
// Pre-apply the inverse flip + rotation, so the final flip + rotate leaves this placeholder unchanged
auto rotationAngle = -Producer::get(ProducerProperty::Rotation);
// Angles that are not multiples of 180 would swap rows/cols and trigger an OP size error, so reset them to 0
if (int(std::round(rotationAngle)) % 180 != 0)
rotationAngle = 0;
const auto flipFrame = (Producer::get(ProducerProperty::Flip) == 1.);
rotateAndFlipFrame(cvMat, rotationAngle, flipFrame);
}
// Move to buffer
if (!cvMat.empty())
......@@ -220,8 +232,7 @@ namespace op
try
{
// If unplugged
log("Webcam was unplugged, trying to reconnect it.", Priority::Max,
__LINE__, __FUNCTION__, __FUNCTION__);
log("Webcam was unplugged, trying to reconnect it.", Priority::Max);
// Sleep
std::this_thread::sleep_for(std::chrono::milliseconds{1000});
// Reset camera
......@@ -234,8 +245,8 @@ namespace op
}
// Camera replugged?
return (!isOpened()
&& (mResolution.x != intRound(get(CV_CAP_PROP_FRAME_WIDTH))
|| mResolution.y != intRound(get(CV_CAP_PROP_FRAME_HEIGHT))));
&& (mResolution.x != positiveIntRound(get(CV_CAP_PROP_FRAME_WIDTH))
|| mResolution.y != positiveIntRound(get(CV_CAP_PROP_FRAME_HEIGHT))));
}
catch (const std::exception& e)
{
......
......@@ -221,7 +221,7 @@ namespace op
i*poseKeypoints.getSize(1)*poseKeypoints.getSize(2) +
j*poseKeypoints.getSize(2) + 2];
const cv::Point lkPoint = personEntry.keypoints[j];
const cv::Point opPoint{intRound(x), intRound(y)};
const cv::Point opPoint{positiveIntRound(x), positiveIntRound(y)};
if (prob < confidenceThreshold)
personEntries[id].status[j] = 0;
......@@ -232,8 +232,9 @@ namespace op
if (distance < 5)
personEntries[id].keypoints[j] = lkPoint;
else if (distance < 10)
personEntries[id].keypoints[j] = cv::Point{intRound((lkPoint.x+opPoint.x)/2.),
intRound((lkPoint.y+opPoint.y)/2.)};
personEntries[id].keypoints[j] = cv::Point{
positiveIntRound((lkPoint.x+opPoint.x)/2.),
positiveIntRound((lkPoint.y+opPoint.y)/2.)};
else
personEntries[id].keypoints[j] = opPoint;
}
......@@ -420,7 +421,8 @@ namespace op
if (mRescale)
{
cv::Size rescaleSize{
intRound(mRescale), intRound(mImagePrevious.size().height/(mImagePrevious.size().width/mRescale))};
positiveIntRound(mRescale),
positiveIntRound(mImagePrevious.size().height/(mImagePrevious.size().width/mRescale))};
cv::resize(mImagePrevious, mImagePrevious, rescaleSize, 0, 0, cv::INTER_CUBIC);
}
// Save Last Ids
......@@ -440,7 +442,8 @@ namespace op
if (mRescale)
{
cv::Size rescaleSize{
intRound(mRescale), intRound(imageCurrent.size().height/(imageCurrent.size().width/mRescale))};
positiveIntRound(mRescale),
positiveIntRound(imageCurrent.size().height/(imageCurrent.size().width/mRescale))};
xScale = imageCurrent.size().width / (float)rescaleSize.width;
yScale = imageCurrent.size().height / (float)rescaleSize.height;
cv::resize(imageCurrent, imageCurrent, rescaleSize, 0, 0, cv::INTER_CUBIC);
......
......@@ -212,11 +212,12 @@ namespace op
const auto ratioAreas = fastMin(T(1), fastMax(personRectangle.width/(T)width,
personRectangle.height/(T)height));
// Size-dependent variables
const auto thicknessRatio = fastMax(intRound(std::sqrt(area)
* thicknessCircleRatio * ratioAreas), 2);
const auto thicknessRatio = fastMax(
positiveIntRound(std::sqrt(area) * thicknessCircleRatio * ratioAreas), 2);
// Negative thickness in cv::circle means that a filled circle is to be drawn.
const auto thicknessCircle = fastMax(1, (ratioAreas > T(0.05) ? thicknessRatio : -1));
const auto thicknessLine = fastMax(1, intRound(thicknessRatio * thicknessLineRatioWRTCircle));
const auto thicknessLine = fastMax(
1, positiveIntRound(thicknessRatio * thicknessLineRatioWRTCircle));
const auto radius = thicknessRatio / 2;
// Draw lines
......@@ -226,7 +227,7 @@ namespace op
const auto index2 = (person * numberKeypoints + pairs[pair+1]) * keypoints.getSize(2);
if (keypoints[index1+2] > threshold && keypoints[index2+2] > threshold)
{
const auto thicknessLineScaled = intRound(
const auto thicknessLineScaled = positiveIntRound(
thicknessLine * poseScales[pairs[pair+1] % numberScales]);
const auto colorIndex = pairs[pair+1]*3; // Before: colorIndex = pair/2*3;
const cv::Scalar color{
......@@ -234,8 +235,10 @@ namespace op
colors[(colorIndex+1) % numberColors],
colors[colorIndex % numberColors]
};
const cv::Point keypoint1{intRound(keypoints[index1]), intRound(keypoints[index1+1])};
const cv::Point keypoint2{intRound(keypoints[index2]), intRound(keypoints[index2+1])};
const cv::Point keypoint1{
positiveIntRound(keypoints[index1]), positiveIntRound(keypoints[index1+1])};
const cv::Point keypoint2{
positiveIntRound(keypoints[index2]), positiveIntRound(keypoints[index2+1])};
cv::line(frameBGR, keypoint1, keypoint2, color, thicknessLineScaled, lineType, shift);
}
}
......@@ -246,16 +249,17 @@ namespace op
const auto faceIndex = (person * numberKeypoints + part) * keypoints.getSize(2);
if (keypoints[faceIndex+2] > threshold)
{
const auto radiusScaled = intRound(radius * poseScales[part % numberScales]);
const auto thicknessCircleScaled = intRound(thicknessCircle * poseScales[part % numberScales]);
const auto radiusScaled = positiveIntRound(radius * poseScales[part % numberScales]);
const auto thicknessCircleScaled = positiveIntRound(
thicknessCircle * poseScales[part % numberScales]);
const auto colorIndex = part*3;
const cv::Scalar color{
colors[(colorIndex+2) % numberColors],
colors[(colorIndex+1) % numberColors],
colors[colorIndex % numberColors]
};
const cv::Point center{intRound(keypoints[faceIndex]),
intRound(keypoints[faceIndex+1])};
const cv::Point center{positiveIntRound(keypoints[faceIndex]),
positiveIntRound(keypoints[faceIndex+1])};
cv::circle(frameBGR, center, radiusScaled, color, thicknessCircleScaled, lineType,
shift);
}
......
......@@ -12,8 +12,8 @@ namespace op
const auto ratio = imageWidth/1280.;
// const auto fontScale = 0.75;
const auto fontScale = 0.8 * ratio;
const auto fontThickness = std::max(1, intRound(2*ratio));
const auto shadowOffset = std::max(1, intRound(2*ratio));
const auto fontThickness = std::max(1, positiveIntRound(2*ratio));
const auto shadowOffset = std::max(1, positiveIntRound(2*ratio));
int baseline = 0;
const auto textSize = cv::getTextSize(textToDisplay, font, fontScale, fontThickness, &baseline);
const cv::Size finalPosition{position.x - (normalizeWidth ? textSize.width : 0),
......@@ -60,7 +60,8 @@ namespace op
const auto offsetHeight = y * width;
for (auto x = 0 ; x < width ; x++)
{
const auto value = uchar( fastTruncate(intRound(arrayPtr[offsetHeight + x]), 0, 255) );
const auto value = uchar(
fastTruncate(positiveIntRound(arrayPtr[offsetHeight + x]), 0, 255));
cvMatROIPtr[x] = (unsigned char)(value);
}
}
......@@ -208,4 +209,48 @@ namespace op
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
}
void rotateAndFlipFrame(cv::Mat& frame, const double rotationAngle, const bool flipFrame)
{
try
{
if (!frame.empty())
{
const auto rotationAngleInt = (int)std::round(rotationAngle) % 360;
if (rotationAngleInt == 0 || rotationAngleInt == 360)
{
if (flipFrame)
cv::flip(frame, frame, 1);
}
else if (rotationAngleInt == 90 || rotationAngleInt == -270)
{
cv::transpose(frame, frame);
if (!flipFrame)
cv::flip(frame, frame, 0);
}
else if (rotationAngleInt == 180 || rotationAngleInt == -180)
{
if (flipFrame)
cv::flip(frame, frame, 0);
else
cv::flip(frame, frame, -1);
}
else if (rotationAngleInt == 270 || rotationAngleInt == -90)
{
cv::transpose(frame, frame);
if (flipFrame)
cv::flip(frame, frame, -1);
else
cv::flip(frame, frame, 1);
}
else
error("Rotation angle = " + std::to_string(rotationAngleInt)
+ " != {0, 90, 180, 270} degrees.", __LINE__, __FUNCTION__, __FILE__);
}
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
}
}