Skeleton Tracking

I’ve been working with Python, TensorFlow, and OpenCV along with a couple of RealSense cameras for a project. As a break from it all, I wanted to test out the skeleton tracking SDK from Cubemos. Their SDK tracks 18 joints per person for up to 5 people in a given frame.

Using their trial license and one of my RealSense cameras, the process was rather painless.

Lately, most of my development revolves around Python, using Conda as my package manager and Docker to contain specific environments from which to deploy the resulting solutions.

I do revert to C++ at times, and on Windows I use Microsoft Visual Studio Community 2019 to compile things such as OpenCV from source. The Cubemos installer generates sample solutions for VS17. In my case, that step failed and I had to resort to generating them by hand.

The error occurs because the OpenCV CMake file bundled with Cubemos does not recognize a VS19 environment. Fortunately, I already had OpenCV 4.3 and had built a version specific to my needs.

Alternatively, one can download OpenCV 4.3.0 and copy the required files to the Cubemos samples folder.

OpenCV build folder contents.
Cubemos OpenCV dependency folder contents.

Configure and Generate then work as expected. My OpenCV was built using VS19, so the OpenCV runtime is vc16. If you downloaded the prebuilt OpenCV 4.3.0, the runtime would be vc15.

Although the binaries for the demos come pre-installed, I felt that compiling for my specific environment was a must, even though most of my development is in Python.

Sample Outputs

Each estimated joint location has a corresponding x, y, z coordinate, which one can use to build specific solutions.
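
The depth camera is what turns a 2D joint estimate into a 3D coordinate. As a rough illustration (not the Cubemos API), here is a minimal pyrealsense2 sketch that deprojects a hypothetical joint pixel into a 3D point; the joint_px value stands in for whatever the skeleton SDK returns.

# Minimal sketch (not the Cubemos API): turn a 2D joint pixel into a 3D point
# using the RealSense depth frame and the camera intrinsics.
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()

# Hypothetical joint location in pixel coordinates, e.g. from the skeleton SDK.
joint_px = (320, 240)

depth_m = depth_frame.get_distance(joint_px[0], joint_px[1])
intrin = depth_frame.profile.as_video_stream_profile().intrinsics
x, y, z = rs.rs2_deproject_pixel_to_point(intrin, [joint_px[0], joint_px[1]], depth_m)
print("joint at ({:.3f}, {:.3f}, {:.3f}) metres".format(x, y, z))

pipeline.stop()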

The demo app does not perform any smarts, such as verifying whether a person is actually standing there or it is just a picture of a person. In this example, it got confused by the tools hanging on the pegboard. The potential applications are plenty and limited only by your imagination. A permanent license costs $75 US.

Visual Analytics using OpenCV and RealSense Camera

Context

The thought of using computer vision on a couple of projects has been bouncing around in the back of my mind for a few years. I wanted to expand things and include more deep learning elements, as well as evolve my use of OpenCV from simple projects like counting objects to something more challenging.

Why not add some smarts to hydroponics to monitor plant characteristics such as height and root health?

Camera

I purchased Intel’s RealSense D435 depth camera, choosing the D435 over the D415 because of its global shutter. Stereo and IR capabilities became must-have features to future-proof the development. Completing a survey earned a $25 US coupon, which provided enough incentive to purchase it from Intel’s site.

The camera does not take up much space, though one may want to use something other than the tripod it comes with, as it is hard to keep stable.

Environment

The Windows Docker installation uses Hyper-V, and somewhere along the way the Ubuntu VM got corrupted. So for this exercise, Windows remains the dev OS with Python as the dev language.

Python Steps

  1. Download and install Anaconda. I use it to create Python sandboxes and prefer it to virtualenv. I did not have Python 3.7 on this PC, so I let Anaconda set it all up. If you open a plain old command prompt, conda will not be found; use the Anaconda command prompt, as it sets all the paths.
  2. Optional but recommended – create a conda environment. e.g. conda create -n opencvdev
  3. Optional – activate opencvdev
  4. Install OpenCV using conda install -c conda-forge opencv
  5. Download and install the Intel RealSense SDK 2.0
  6. Test the camera with the standard utilities. It needs USB 3.0 and won’t work under USB 2.0.
  7. Under the conda opencvdev env, run pip install pyrealsense2
  8. run python from the command line
(opencvdev) C:\Dev\source\python\vision>python
Python 3.6.6 | packaged by conda-forge | (default, Jul 26 2018, 11:48:23) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2 as cv
>>> cv.__version__
'3.4.3'
>>> import pyrealsense2 as rs
>>> rs.intrinsics()
width: 0, height: 0, ppx: 0, ppy: 0, fx: 0, fy: 0, model: None, coeffs: [0, 0, 0, 0, 0]
  • cv.__version__ returns a string with the OpenCV version
  • rs.intrinsics() returns the device info such as focal lengths and distortion coefficients. Nothing has been set up yet, but the test is to see if the libraries are installed correctly.

Run one of the Python wrapper examples, e.g. opencv_viewer_example.py, to get a live colour-mapped depth view.
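
If you are curious what that example boils down to, here is a condensed sketch of the same idea under the setup above: start a pipeline, grab depth frames, and colour-map them with OpenCV. It is not a copy of the bundled example, just the gist.

# Condensed sketch of the opencv_viewer_example.py idea: stream depth frames
# and show a colour-mapped view with OpenCV. Press 'q' to quit.
import numpy as np
import cv2 as cv
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

try:
    while True:
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        if not depth_frame:
            continue

        # Scale 16-bit depth to 8 bits, then apply a colour map for viewing.
        depth_image = np.asanyarray(depth_frame.get_data())
        depth_colormap = cv.applyColorMap(
            cv.convertScaleAbs(depth_image, alpha=0.03), cv.COLORMAP_JET)

        cv.imshow('RealSense depth', depth_colormap)
        if cv.waitKey(1) & 0xFF == ord('q'):
            break
finally:
    pipeline.stop()
    cv.destroyAllWindows()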

Finally, install Jupyter for some interactive what-if development.

# matplotlib works well and is a good substitute for imshow from OpenCV.
conda install -c conda-forge matplotlib
conda install -c anaconda jupyter 
#Jupyter did not see my conda env and the following fixed it
python -m ipykernel install --user --name visionenv

#run jupyter
jupyter notebook

And Jupyter does the job of running opencv_viewer_example.py as a notebook.
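
For reference, displaying a frame with matplotlib inside a notebook only takes a couple of lines; the snapshot.png filename below is just a placeholder for any saved frame. Remember that OpenCV stores images as BGR, so convert to RGB before plotting.

# Inside a notebook cell: matplotlib stands in for cv.imshow.
import cv2 as cv
import matplotlib.pyplot as plt

frame = cv.imread('snapshot.png')  # placeholder: any saved BGR image, e.g. a camera frame
plt.imshow(cv.cvtColor(frame, cv.COLOR_BGR2RGB))
plt.axis('off')
plt.show()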

Problems Encountered

The dreadful Intel MKL FATAL ERROR: Cannot load mkl_intel_thread.dll kept rearing its ugly head. I tried, to no avail, to mix and match package versions, and eventually abandoned the troubleshooting ship.

What worked was one of the following two changes; they were performed at the same time, before re-configuring the conda env, so I cannot say which one did it.

  • Removing the paths to Anaconda. Initially the paths were set and everything was run from a plain old command prompt; I reverted to using the Anaconda prompt.
  • Removing a number of apps, including Visual Studio 2017 and several soft-synth plugins, assuming something on the system path conflicted with the newer conda DLL version.

Back to Room One

Time to explore something outside my comfort zone: image processing. I’ve gone through a few geek books during hard-to-find spare time. There are so many apps in the iOS world that I needed to dig a little deeper to get a better understanding of what is under the hood. iOS Programming: The Big Nerd Ranch Guide and Objective-C Programming: The Big Nerd Ranch Guide are good introductions. If you know C/C++, the Objective-C book can be read rather quickly. I liked going through the exercises in the iOS programming book (well, the Kindle version) to force me to navigate the Xcode/iOS documentation. My real motivation is to do some image processing, so I opted to read the OpenCV 2 Computer Vision Application Programming Cookbook.
