We are developing cutting-edge wearable technologies to enhance safety and efficiency in construction environments. In this edition of our HumanTech technologies series, our colleague Markus Miezal, CEO and Co-Founder of Sci-track, explains one of our key innovations in this area, a magnetometer-free visual-inertial tracking system which offers precise motion tracking even in magnetically disturbed environments — a collaboration between our partners at Sci-track and RICOH.

Body tracking and its problems

The term inertial measurement unit (IMU) usually refers to a sensor package containing an accelerometer, a gyroscope and a magnetometer. These sensors measure 3D acceleration (including gravity), 3D rotational velocity and the 3D magnetic field (i.e. a compass), usually at a high frequency (e.g. 100 Hz).

Every smartphone has an IMU: when the display flips between portrait and landscape, it is because the accelerometer has detected a new “down” direction from gravity. A common application of these sensors is orientation estimation. By fusing the information from the accelerometer and the gyroscope, gravity and linear acceleration can be separated so that a clean downward direction is measured. The magnetometer adds yaw information.
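
As a concrete illustration, here is a minimal complementary-filter sketch in Python. It is a deliberate simplification and not the estimator used by Sci-track: the gyroscope is integrated for short-term orientation changes, while the accelerometer’s gravity measurement slowly corrects pitch and roll. Yaw is left out on purpose, since without a magnetometer the accelerometer cannot observe it.

```python
import numpy as np

def complementary_filter(pitch, roll, gyro, accel, dt, alpha=0.98):
    """One update step of a simple complementary filter (illustrative only).

    gyro  -- angular velocity in rad/s (x, y, z)
    accel -- measured acceleration in m/s^2, including gravity (x, y, z)
    """
    # Propagate the orientation by integrating the gyroscope.
    roll += gyro[0] * dt
    pitch += gyro[1] * dt

    # The accelerometer sees gravity, so it provides an absolute (but noisy)
    # reference for pitch and roll -- never for yaw.
    accel_pitch = np.arctan2(-accel[0], np.sqrt(accel[1]**2 + accel[2]**2))
    accel_roll = np.arctan2(accel[1], accel[2])

    # Blend: trust the gyro at short time scales, the accelerometer over long ones.
    pitch = alpha * pitch + (1 - alpha) * accel_pitch
    roll = alpha * roll + (1 - alpha) * accel_roll
    return pitch, roll
```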

When these sensors are placed on the body, the orientation of every limb can be calculated. With additional information, such as a biomechanical model, more accurate orientations across the body and — in the most common use case — the relative segment orientations, i.e. the joint angles, can be obtained.
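
To make the joint-angle idea tangible, here is a small Python example using SciPy’s rotation utilities; the segment orientations are hypothetical values standing in for the output of the per-sensor fusion. A joint angle is simply the relative orientation between two adjacent segments.

```python
from scipy.spatial.transform import Rotation as R

# Hypothetical global orientations of two adjacent segments, e.g. the upper
# arm and the forearm, as they would come out of the orientation estimation.
q_upper_arm = R.from_quat([0.0, 0.0, 0.0, 1.0])    # identity, arm hanging still
q_forearm = R.from_euler("y", 45, degrees=True)    # forearm pitched by 45 degrees

# The joint rotation is the relative orientation between the two segments.
q_elbow = q_upper_arm.inv() * q_forearm
print(q_elbow.as_euler("xyz", degrees=True))       # -> roughly [0, 45, 0]
```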

The dimensions of the body allow us to better predict the measured accelerations, and the mechanical limitations of certain joints can be exploited. In our case, the biomechanical foot model allows us to estimate positional information through ground contact estimation*.
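
A rough sketch of the ground-contact idea in Python (a deliberately simplified heuristic with made-up thresholds, not the actual HumanTech foot model): whenever the foot sensor barely rotates and measures little besides gravity, the foot is assumed to be on the ground, so its velocity can be reset to zero and the position drift stays bounded.

```python
import numpy as np

GRAVITY = 9.81  # m/s^2

def ground_contact(gyro_norm, accel_norm, gyro_thresh=0.3, accel_thresh=0.5):
    """Heuristic ground-contact detection (illustrative thresholds only):
    the foot hardly rotates and the accelerometer sees little but gravity."""
    return gyro_norm < gyro_thresh and abs(accel_norm - GRAVITY) < accel_thresh

def zero_velocity_update(velocity):
    """While the foot is on the ground its velocity is known to be zero,
    which keeps the doubly integrated position from drifting away."""
    return np.zeros_like(velocity)
```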

Using a model, however, requires knowing where the sensors are placed on the body, which is estimated in a short calibration step. In addition, the model dimensions, i.e. the segment lengths, must be known and may be inaccurate.

Magnetometers are another error source. Especially indoors, any ferromagnetic metal or current-carrying electrical wires locally disturb the earth’s magnetic field, so each sensor’s measured “north” direction can be different. Therefore, we usually completely omit the use of the magnetometer.

By exploiting the relations between the segments, we are able to maintain the yaw direction during motion, but not in static situations. In addition, a global yaw drift is introduced.

Adding a camera

By integrating a camera into the body and, in particular, a dual fish-eye camera, which is capable of capturing almost a full sphere, information about the surroundings and the body with respect to the surroundings can be obtained.

HumanTech team: Markus Miezal, CEO and Co-Founder of Sci-track


By detecting the body inside the image, the segment lengths and intrinsic segment orientations can be corrected (e.g. when they drift in static conditions).

By monitoring the surroundings, position information can be corrected and yaw drift can be eliminated. Furthermore, the camera enables localisation, and context can be added to the estimate through object detection.

*Why can’t we get positions out of accelerometers?

Let’s say an IMU is lying static on a table. Neither accelerations nor rotations occur, so the IMU will measure gravity and zero rotational velocity. If I place the IMU in my car, drive on the highway at a constant 180 km/h and then start measuring, the accelerometer will only show gravity as long as I don’t brake or accelerate, and the rotational velocity will also measure zero as long as I don’t turn. We therefore cannot distinguish a static IMU on a table from an IMU travelling at a constant speed. To obtain velocity, we have to integrate the acceleration, and to obtain position, we have to integrate it again.
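
The following few lines of Python show how quickly this double integration diverges, using a hypothetical stationary IMU sampled at 100 Hz with nothing but a small residual bias on the accelerometer:

```python
import numpy as np

# A stationary IMU sampled at 100 Hz for 60 s: the true acceleration is zero,
# but a small constant bias of 0.01 m/s^2 remains after removing gravity.
dt, duration, bias = 0.01, 60.0, 0.01
n = int(duration / dt)
accel = np.full(n, bias)

velocity = np.cumsum(accel) * dt      # first integration:  velocity
position = np.cumsum(velocity) * dt   # second integration: position

print(f"velocity error after 60 s: {velocity[-1]:.2f} m/s")  # ~0.6 m/s
print(f"position error after 60 s: {position[-1]:.1f} m")    # ~18 m
```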

This will, however, always result in errors, since the measurements are digital and usually biased. Digitisation implies information loss, since the measured value is quantised. For example, 16 bits correspond to 65,536 different numbers, which are mapped to a sensor range of ±60 m/s², so the smallest representable change is about 1.83 mm/s². Biased means that the process of converting to a digital number shifts the zero point: instead of measuring zero, a small other value is measured, which we call the bias. The high sampling frequency of the IMUs adds to the problem, so the integrations quickly diverge, which we call drift.
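
For the numbers used above (a hypothetical 16-bit sensor with a ±60 m/s² range), the quantisation step and its effect on a single reading can be written out directly:

```python
# Quantisation step of a hypothetical 16-bit accelerometer with a +/-60 m/s^2
# range, i.e. the numbers used in the text above.
full_range = 120.0            # m/s^2, from -60 to +60
levels = 2 ** 16              # 65,536 representable values
step = full_range / levels
print(f"smallest change: {step * 1000:.2f} mm/s^2")   # -> 1.83 mm/s^2

def measure(true_value, bias=0.005):
    """A reading is shifted by a small bias and rounded to the nearest level;
    both errors survive the double integration and cause drift."""
    return round((true_value + bias) / step) * step
```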


Learn about another of our HumanTech technologies, which promises improved efficiency and accuracy for the architecture, engineering, and construction (AEC) industries: scan to BIM, explained by Mahdi Chamseddine, an M.Sc. researcher at the German Research Center for Artificial Intelligence (DFKI).

Stay tuned to our news, newsletter, and social media channels (LinkedIn, Twitter and YouTube) to follow our journey toward accelerating the digital transformation of construction!