In this edition of our HumanTech technologies series, our colleague Markus Miezal, CEO and Co-Founder of Sci-track, explains the key ideas behind the inertial tracking technology we are developing, a collaboration between our partners at Sci-track and RICOH.

Body tracking and its problems

The term inertial measurement unit (IMU) usually refers to a sensor package containing an accelerometer, a gyroscope and a magnetometer. These sensors measure 3D acceleration (including gravity), 3D rotational velocity and the 3D magnetic field (i.e. a compass), usually at a high frequency (e.g. 100 Hz).

Every smartphone has an IMU. When the display flips from portrait to landscape, it is the accelerometer that has detected a new “down” direction from gravity. A common application for these sensors is orientation estimation. By fusing the information from the accelerometer and the gyroscope, gravity and linear acceleration can be separated so that a clean downward direction is measured. The magnetometer adds the yaw (heading) information.
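To give an idea of what such a fusion looks like, here is a minimal sketch of a complementary filter for a single (pitch) axis in Python. The filter gain and the single-axis simplification are illustrative assumptions, not the actual algorithm used in our system.

```python
import numpy as np

def complementary_pitch(acc, gyro_pitch_rate, dt, alpha=0.98):
    """Fuse accelerometer and gyroscope readings into a pitch angle.

    acc             : (N, 3) array of accelerations in m/s^2 (gravity included)
    gyro_pitch_rate : (N,) array of pitch rates in rad/s
    dt              : sample period in seconds (e.g. 0.01 for 100 Hz)
    alpha           : how much we trust the integrated gyroscope over the
                      gravity direction measured by the accelerometer
    """
    pitch = 0.0
    estimates = []
    for a, w in zip(acc, gyro_pitch_rate):
        # Gyroscope: integrating the rotational velocity is smooth but drifts.
        gyro_estimate = pitch + w * dt
        # Accelerometer: the gravity direction gives an absolute but noisy tilt.
        acc_estimate = np.arctan2(a[0], np.hypot(a[1], a[2]))
        # Complementary filter: combine both into a clean, drift-corrected tilt.
        pitch = alpha * gyro_estimate + (1.0 - alpha) * acc_estimate
        estimates.append(pitch)
    return np.array(estimates)
```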

When these sensors are placed on the body, the orientation of every limb can be calculated. With additional information, such as a biomechanical model, one can obtain accurate orientations across the whole body or, in the most common use case, the relative segment orientations, i.e. the joint angles.
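To make “relative segment orientations” concrete, the short sketch below derives a joint angle from the orientations of two adjacent segments. The segment names and the example rotations are made up for illustration.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Hypothetical orientations of two adjacent segments (e.g. thigh and shank),
# both expressed in the same world frame; the values are made up.
R_thigh = R.from_euler("xyz", [0, 0, 10], degrees=True)
R_shank = R.from_euler("xyz", [0, 0, 55], degrees=True)

# The joint orientation is the rotation of the shank relative to the thigh.
R_knee = R_thigh.inv() * R_shank

# For a mostly planar joint, the flexion angle is the magnitude of that rotation.
flexion_deg = np.degrees(R_knee.magnitude())
print(f"Knee flexion: {flexion_deg:.1f} degrees")  # 45.0 degrees
```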

The dimensions of the body allow us to better predict the measured accelerations, mechanical limitations of certain joints can be exploited, and in our case, the biomechanical foot model allows us to estimate positional information through ground contact estimation*.
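As a rough illustration of how ground contact can provide positional information, the sketch below flags a foot-mounted IMU as “on the ground” when it barely moves. The thresholds are illustrative assumptions, not the parameters of our biomechanical foot model.

```python
import numpy as np

def foot_on_ground(gyro, acc, g=9.81, gyro_thresh=0.3, acc_thresh=0.5):
    """Very simple ground-contact test for a foot-mounted IMU.

    gyro : (3,) rotational velocity in rad/s
    acc  : (3,) acceleration in m/s^2 (gravity included)

    The foot is assumed to be on the ground when it barely rotates and the
    accelerometer measures roughly gravity alone (thresholds are illustrative).
    """
    still = np.linalg.norm(gyro) < gyro_thresh
    only_gravity = abs(np.linalg.norm(acc) - g) < acc_thresh
    return still and only_gravity

# While contact is detected, the foot position can be pinned in place (its
# velocity set to zero), which bounds the integration drift explained below
# and lets the rest of the body be positioned relative to the grounded foot.
```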

Using a model, however, requires knowing where the sensors are placed on the body, which is estimated in a short calibration step. In addition, the model dimensions, i.e. the segment lengths, have to be known and may be wrong.

Magnetometers are another error source. Especially indoors, any ferromagnetic metal or current-carrying electrical wires locally disturb the earth’s magnetic field, so the measured “north” direction can differ from sensor to sensor. We therefore usually omit the magnetometer completely.

By exploiting the relations between segments, we are able to maintain the yaw direction during motion, but not in static situations. In addition, a global yaw drift is introduced over time.

Adding a camera

By integrating a camera into the body-worn setup, in particular a dual fish-eye camera capable of capturing almost a full sphere, we can obtain information about the surroundings and about the body with respect to those surroundings.

By detecting the body in the image, the segment lengths and intrinsic segment orientations can be corrected (e.g. when drift occurs in static conditions).

By monitoring the surroundings, the position information can be corrected and the yaw drift removed. Furthermore, the camera enables localisation, and through object detection, context can be added to the estimate.

*Why can’t we get positions out of accelerometers?

Let’s say an IMU is lying static on a table. Neither accelerations nor rotations occur, so the IMU will measure gravity and zero rotational velocity. If I place the IMU in my car, drive on the highway at a constant 180 km/h and then start measuring, the accelerometer will only show gravity as long as I don’t brake or accelerate, and the measured rotational velocity will also be zero as long as I don’t turn. As you see, we cannot distinguish a static IMU on a table from an IMU travelling at a constant speed. To obtain velocity, we have to integrate the acceleration, and to obtain position, we have to integrate once more.
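In code, this is exactly a double integration. The sketch below assumes an ideal, noise-free accelerometer with gravity already removed, purely to show the two integration steps.

```python
import numpy as np

dt = 0.01                     # 100 Hz sampling
t = np.arange(0, 2, dt)       # two seconds of data

# Ideal accelerometer signal after gravity has been removed:
# accelerate at 1 m/s^2 for one second, then cruise.
acc = np.where(t < 1.0, 1.0, 0.0)

vel = np.cumsum(acc) * dt     # first integration:  acceleration -> velocity
pos = np.cumsum(vel) * dt     # second integration: velocity -> position

print(f"Velocity after 2 s: {vel[-1]:.2f} m/s")  # ~1.0 m/s
print(f"Position after 2 s: {pos[-1]:.2f} m")    # ~1.5 m
```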

This will, however, always result in errors, since the measurements are digital and usually biased. Digitisation implies information loss, since the measured value is quantised. For example, 16 bits correspond to 65536 different numbers, which are mapped to a sensor range of ±60 m/s². The smallest detectable change is, therefore, about 1.83 mm/s². Biased means that the conversion to a digital number comes with a shifted zero point: instead of measuring zero, a small non-zero value is measured, which we call the bias. Because the IMU samples at a high rate, these small errors accumulate over many integration steps, so the integrated values quickly diverge, which we call drift.
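Both effects are easy to reproduce numerically; the bias value below is made up for illustration.

```python
import numpy as np

# Quantisation: 16 bits span 2**16 = 65536 levels over a +/- 60 m/s^2 range.
step = (2 * 60) / 2**16
print(f"Smallest resolvable change: {step * 1000:.2f} mm/s^2")  # ~1.83

# Bias: a small constant offset on a static sensor, doubly integrated,
# grows quadratically into an apparent position (bias value made up here).
dt = 0.01                      # 100 Hz
t = np.arange(0, 60, dt)       # one minute of data
bias = 0.002                   # m/s^2
vel = np.cumsum(np.full_like(t, bias)) * dt
pos = np.cumsum(vel) * dt
print(f"Position drift after 60 s: {pos[-1]:.1f} m")  # roughly 3.6 m
```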


Learn about another of our HumanTech technologies, which promises improved efficiency and accuracy for the architecture, engineering, and construction (AEC) industry: scan to BIM, explained by Mahdi Chamseddine, an M.Sc. researcher at the German Research Center for Artificial Intelligence (DFKI).

Stay tuned to our news, newsletter, and social media channels (LinkedIn, Twitter and YouTube) to follow our journey toward accelerating the digital transformation of construction!