Note: Currently using [travis-matrix-badges](https://github.com/bjfish/travis-matrix-badges) vs. traditional [![Build Status](https://travis-ci.org/CMU-Perceptual-Computing-Lab/openpose.svg?branch=master)](https://travis-ci.org/CMU-Perceptual-Computing-Lab/openpose)
1. **VERY IMPORTANT NOTE**: If you want to re-run the extrinsic parameter calibration over the same intrinsic XML files (e.g., if you move the camera location but know the intrinsics have not changed), you must manually reset the camera matrix of each XML file that will be used for `--combine_cam0_extrinsics` to `1 0 0 0 0 1 0 0 0 0 1 0`.
2. After intrinsic calibration, save undistorted images for all the camera views:
3. Run the extrinsic calibration tool between each pair of close cameras. In this example:
- We assume camera 0 on the right, camera 1 in the middle-right, camera 2 in the middle-left, and camera 3 on the left.
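The reset value mentioned in the note of step 1 is simply the row-major flattening of the 3×4 matrix `[I | 0]` (identity rotation, zero translation). A quick sanity check in plain Python:

```python
# Build the 3x4 matrix [I | 0]: identity rotation, zero translation column
reset = [[1 if r == c else 0 for c in range(4)] for r in range(3)]

# Flatten it row-major into the 12-number string used in the XML files
flat = " ".join(str(v) for row in reset for v in row)
print(flat)  # 1 0 0 0 0 1 0 0 0 0 1 0
```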
...
...
If you plan to use the calibration tool without using OpenPose, you can manually save a video sequence of your desired camera into each of the camera image folders (i.e., in the above example, the `~/Desktop/intrinsics_0`, `~/Desktop/intrinsics_1`, etc. folders).
If you eventually want to run that camera with OpenPose, check [doc/modules/3d_reconstruction_module.md#using-a-different-camera-brand](./modules/3d_reconstruction_module.md#using-a-different-camera-brand).
## Naming Convention for the Output Images
The naming convention for the saved images is the following: `[%12d]_rendered[CAMERA_NUMBER_MINUS_1].png`, where `[CAMERA_NUMBER_MINUS_1]` is nothing for camera 0, `_1` for camera 1, `_2` for camera 2, etc. E.g., for 4 cameras:
```
000000000000_rendered.png
000000000000_rendered_1.png
000000000000_rendered_2.png
000000000000_rendered_3.png
000000000001_rendered.png
000000000001_rendered_1.png
000000000001_rendered_2.png
000000000001_rendered_3.png
[...]
```
OpenPose generates them with the base name `[%12d]_rendered`. Any other base name should also work, as long as the `[CAMERA_NUMBER_MINUS_1]` termination is kept consistent across all the camera views. E.g., you could also name them as follows (assuming 4 cameras):
```
a.png, a_1.png, a_2.png, a_3.png,
b.png, b_1.png, b_2.png, b_3.png,
etc.
```
Again, the critical step is to keep the file termination fixed as `_1`, `_2`, etc.
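The naming rule above can be sketched as a small helper (`rendered_image_name` is a hypothetical function written here for illustration; it is not part of OpenPose):

```python
def rendered_image_name(frame_index, camera_index, base="rendered"):
    # Camera 0 has no suffix; camera N >= 1 gets the "_N" termination
    suffix = "" if camera_index == 0 else f"_{camera_index}"
    # Frame index is zero-padded to 12 digits, matching OpenPose's [%12d]
    return f"{frame_index:012d}_{base}{suffix}.png"

print(rendered_image_name(0, 0))  # 000000000000_rendered.png
print(rendered_image_name(1, 3))  # 000000000001_rendered_3.png
```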
This module exposes a Python API for OpenPose. It is effectively a wrapper that replicates most of the functionality of the [op::Wrapper class](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/include/openpose/wrapper/wrapper.hpp) and allows you to populate and retrieve data from the [op::Datum class](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/include/openpose/core/datum.hpp) using standard Python and Numpy constructs.
The Python API is analogous to the C++ function calls. You may find them in [python/openpose/openpose_python.cpp#L194](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/python/openpose/openpose_python.cpp#L194).
The Python API is rather simple: `op::Array<float>` and `cv::Mat` objects are automatically cast to NumPy arrays. Every other data structure based on the standard library is automatically converted into Python objects; for example, an `std::vector<std::vector<float>>` becomes `[[item, item], [item, item]]`, etc. We also provide castings of `op::Rectangle` and `op::Point`, which simply expose setters and getters for x, y, width, and height.
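As an illustration of what reaches Python, `datum.poseKeypoints` arrives as a NumPy array of shape `(num_people, num_body_parts, 3)` holding `[x, y, confidence]` per keypoint (25 parts for the BODY_25 model). The snippet below slices a dummy stand-in array rather than real OpenPose output:

```python
import numpy as np

# Dummy stand-in for datum.poseKeypoints: 1 person, BODY_25 (25 keypoints),
# each keypoint stored as [x, y, confidence]
pose_keypoints = np.zeros((1, 25, 3), dtype=np.float32)
pose_keypoints[0, 0] = [320.0, 240.0, 0.93]  # keypoint 0 (the nose in BODY_25)

# Slice with standard NumPy indexing, no OpenPose-specific accessors needed
x, y, score = pose_keypoints[0, 0]
```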