As we approach the halfway point of HumanTech, we have taken the chance to ask our team to share the highlights of our latest developments and what our next steps will be.

Here is what ten of the partners involved in our project have told us.

HumanTech's progress update in month 16

Jason Rambach, HumanTech project coordinator, DFKI

At HumanTech, we are headed for the project's halfway point and our first review in January. In recent months, many of our technologies, such as robotic perception and human interaction, the BIMxD platform and the full Scan2BIM pipeline, have started to take shape, as shown in the exciting demonstrator session at our last General Assembly meeting in Oslo at the end of August. We are now very excited to start work on integrating these technologies into our robotic platform in February 2024.

Meanwhile, we have demonstrated the scientific excellence of our results at ICCV 2023, the premier international conference for computer vision, where we presented a publication on Scan2CAD with retrieval and deformation of objects and received three awards at the BOP Object Pose Estimation challenge. Our next stop is WACV 2024 in January, where we will present our work on semantic segmentation of panoramic images with depth information. In addition, we have started using the Horizon Results Booster service to improve our exploitation perspectives and will organise more user evaluation workshops in the coming months.

Gabor Sziebig, SINTEF

Our work in recent months has been twofold. On the one hand, we have made progress with human-robot collaboration scenarios and are getting the first results on how construction professionals can use this system, which we will later test on our pilot sites. On the other hand, we have advanced the overall system integration, where the simulator developed by RPTU is coming to life and more and more functionalities are becoming accessible. A smaller contribution towards BIMxD generation is also worth mentioning: we have been able to refresh and update algorithms we developed in previous R&D projects.

Arantxa Renteria, Tecnalia

Regarding our work on wearables, we have defined our use case. We are acquiring signals from our Body Sensor Network from several volunteers to build a training database and detect movements and gestures. We will use this data to develop an algorithm that predicts the user's intention and triggers the activation of our exoskeleton; the plan is to have a first prototype of this algorithm by January 2024. Future plans include developing the controller for the exoskeleton, integrating components (kinematic information from SciTrack) and testing.
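
To illustrate the general idea, here is a minimal sketch of how windowed Body Sensor Network signals could feed a movement classifier. The window length, feature set, channel count and gesture labels are hypothetical placeholders, not the project's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

WINDOW = 100  # samples per window (hypothetical: 1 s at 100 Hz)

def extract_features(window: np.ndarray) -> np.ndarray:
    """Per-channel mean and standard deviation over one (samples, channels) window."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

# Placeholder training data: 200 random windows of 9-channel IMU signals,
# labelled with invented gesture ids (0 = idle, 1 = lift, 2 = lower).
rng = np.random.default_rng(0)
X = np.stack([extract_features(rng.normal(size=(WINDOW, 9))) for _ in range(200)])
y = rng.integers(0, 3, size=200)

clf = RandomForestClassifier(n_estimators=100).fit(X, y)

# At run time, the predicted gesture would gate exoskeleton activation.
live = rng.normal(size=(WINDOW, 9))
print("predicted gesture:", clf.predict([extract_features(live)])[0])
```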

Regarding our work on robot teleoperation and learning by demonstration, we have developed drivers for different elements of the teleoperation console (haptic device, clutch) and a constraint-based admittance teleoperation controller (with force feedback and clutchless operation). We have defined the architecture of the control middleware (ROS-control) and started software development. A first version of the robot platform simulator exists (UDP-based, with control of track speeds), and we have integrated the control middleware with the simulator. Our next steps include developing the means to enable and disable robot tools, a CAN driver and ROS node for the real robot platform, and completing the control middleware.
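
As a rough illustration of the admittance principle behind such a controller, the following single-axis sketch turns a measured operator force into a velocity command. The virtual mass, damping and control period are hypothetical tuning values; the real controller is multi-DOF and constraint-based.

```python
# Single-axis admittance control loop: the virtual dynamics
#   M * dv/dt + D * v = F_ext
# turn a measured operator force into a commanded velocity.
M = 2.0     # virtual mass [kg]        (hypothetical tuning value)
D = 20.0    # virtual damping [N*s/m]  (hypothetical tuning value)
DT = 0.001  # control period [s]

def admittance_step(v: float, f_ext: float) -> float:
    """Integrate the virtual dynamics one step; returns the next velocity command."""
    return v + DT * (f_ext - D * v) / M

# A constant 5 N push settles at v = F/D = 0.25 m/s.
v = 0.0
for _ in range(5000):
    v = admittance_step(v, 5.0)
print(f"steady-state velocity ~ {v:.3f} m/s")
```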

On robotic learning from demonstration, we have implemented an algorithm in a robot simulator with clean signals and a 6D mouse, a learning algorithm on a real robot, and a first version of the user-friendly interface; we have also created a signal processing block to homogenize teleoperated data and developed a robot-independent GUI. Our future work involves the 3D manufacturing of the mastic applicator, stabilizing the learning algorithm according to the sensed signals, implementing artificial potential fields to adapt the generated trajectories to new situations, and extending the learning algorithms to handle forces.
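
For the potential-field step, a minimal sketch of the classic repulsive field (in the style of Khatib's formulation) shows how demonstrated waypoints could be pushed away from a newly detected obstacle. The gain, influence radius and geometry are invented for illustration, and pushing each waypoint independently is a deliberate simplification.

```python
import numpy as np

ETA = 0.001  # repulsion gain (illustrative)
RHO0 = 0.5   # influence radius [m] (illustrative)

def repulsion(p: np.ndarray, obstacle: np.ndarray) -> np.ndarray:
    """Repulsive force of the classic potential-field formulation."""
    diff = p - obstacle
    rho = float(np.linalg.norm(diff))
    if rho == 0.0 or rho >= RHO0:
        return np.zeros_like(p)
    return ETA * (1.0 / rho - 1.0 / RHO0) / rho**3 * diff

# A demonstrated straight-line trajectory, and an obstacle near its middle.
demo = np.linspace([0.0, 0.0], [1.0, 0.0], 50)
obstacle = np.array([0.5, 0.2])

adapted = np.array([p + repulsion(p, obstacle) for p in demo])
print("max deviation from the demonstration [m]:", np.abs(adapted - demo).max())
```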

Finally, we worked on human factors at the beginning of the project by defining questionnaires. We will resume this task when the project validation phase starts.

Anurag Bansal, ACCIONA

In the last period, we visited our partners from DFKI and BAUBOT to discuss our progress and preparations for the Hackathon we have planned at ACCIONA for next year. In addition, we have provided samples of bricks, which will be used during next year's SINTEF and DFKI demonstrations.

We are also planning two poster presentations at international conferences focused on innovations in building and construction in 2024. We would like to present one poster on cobots (ACCIONA's use case of human-robot collaboration for handing bricks over to a masonry worker) and another on exoskeletons (ACCIONA's use case of exoskeletons assisting workers during lintel placement and other construction activities).

Lastly, we spoke with different ACCIONA work-site teams and explained what HumanTech is about, to see whether they would be available for next year's workshop/focus group.

Patricia Rosen, BAuA

Together with our partners Tecnalia, TUS and ACCIONA, we collected our first user insights about some of our HumanTech technologies. We collected data from potential users in the construction industry at three different sites and gained first comparative results on interactive robots, exoskeletons and extended reality (XR) glasses, for example regarding their perceived organisational relevance. Potential users also shared with us their expectations about the different technologies, as well as the risks and opportunities they foresee. The description of our procedure, analysis and results is part of our deliverable, the 'Worker Assessment Report'.

We are currently planning additional user assessments with specific target groups to evaluate our technologies in more depth, and we look forward to completing the human perspective with valuable information provided by different employees.

Based on the results we gathered, we submitted an abstract to the annual German Human Factors conference taking place in spring 2024.

Hideaki Kanayama, Ricoh Europe

We are currently advancing in two areas:

Spherical camera prototype
We have been developing a compact spherical camera with a 360-degree field of view and a hardware trigger input for seamless integration with the body sensor network.

Mounted on the worker's helmet, the camera captures an egocentric view, and its pose is estimated with an image-based machine learning algorithm. The estimated pose compensates for the drift of the IMU-based pose caused by electromagnetic fields in the environment, yielding more accurate pose estimation overall. Furthermore, this camera will be used in forthcoming tasks with other partners: for example, cameras mounted on the UGV in a stereo configuration will be exploited for real-time advanced perception and human safety, such as estimating workers' 3D poses and positions with a wide field of view.
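
A minimal sketch of the fusion idea, assuming a simple complementary filter on a single yaw axis: the fast but drifting IMU integration is continuously pulled back towards the absolute, image-based estimate. The blend factor, rates and bias are illustrative, not Ricoh's actual algorithm.

```python
ALPHA = 0.98  # trust in the fast, drifting IMU integration (illustrative)
DT = 0.01     # IMU sample period [s]

def fuse(yaw: float, gyro_z: float, yaw_camera: float) -> float:
    """Blend the integrated gyro rate with the camera's absolute yaw estimate."""
    yaw_imu = yaw + gyro_z * DT
    return ALPHA * yaw_imu + (1.0 - ALPHA) * yaw_camera

# Simulation: the true yaw stays at 0, but the gyro carries a 0.05 rad/s bias.
yaw = 0.0
for _ in range(10_000):
    yaw = fuse(yaw, 0.05, 0.0)  # the camera keeps pulling the estimate back
print(f"residual yaw error ~ {yaw:.4f} rad (bounded instead of growing)")
```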

Development of 3D integration algorithm for multi-sensor data
At Ricoh, we are developing a spherical RGB-depth scanner that can capture a 5 m space with a 360° field of view within 1 s. The device is handheld and portable, allowing it to scan areas that automated mobile platforms cannot enter.

Within HumanTech, our scans are aligned and integrated with wide-area 3D data acquired by UGVs/UAVs to create a complete digital twin. We have developed an image-based localization method that combines SfM (Structure from Motion) and RGB-D SLAM (Simultaneous Localization and Mapping), enabling alignment across all sensors that can acquire a spherical RGB image. While this technology has been developed for static scenes, the next step is an algorithm for daily updates of changing scenes on actual construction sites.
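
As a hedged illustration of the alignment step, the sketch below refines a handheld scan against a UGV map with point-to-plane ICP in Open3D. The file names, thresholds and identity initialisation are placeholders; in the actual pipeline, the image-based localization would supply the initial transform.

```python
import numpy as np
import open3d as o3d

# Hypothetical input files standing in for a handheld spherical scan
# and a wide-area map captured by the UGV.
handheld = o3d.io.read_point_cloud("handheld_scan.ply")
ugv_map = o3d.io.read_point_cloud("ugv_map.ply")

# Point-to-plane ICP needs surface normals (estimated here on both clouds).
for pcd in (handheld, ugv_map):
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# In the actual pipeline, the image-based localization (SfM + RGB-D SLAM)
# would provide this initial transform; identity is only a placeholder.
init = np.eye(4)

result = o3d.pipelines.registration.registration_icp(
    handheld, ugv_map, 0.05, init,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())
print("refined transform:\n", result.transformation)
```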

Sebastian Mattes, Implenia

Implenia, as a construction company, represents the end user's point of view: How can the solutions we are developing be used, which requirements and framework conditions should be observed, and what does a real construction site look like? Our approach is to go directly on-site as early as possible to check whether our assumptions are correct.

In recent months, we have been involved in preparations for our bridge inspection pilot. For progress management, continuous scan recordings are being made on a site in Germany. Training the AI algorithms also requires data from a different project, so we are evaluating which project would fit best and is reachable enough for our partners to collect weekly or biweekly scans. Meanwhile, the first project has raised several aspects to take care of:

  • Data privacy: How to avoid capturing persons? How do we remove them from our data?
  • Darkness: As the days get shorter in winter, fewer daylight hours remain after the normal working day ends. Sometimes we would like to have a coloured scan or use the data in other ways, but because the colours are based on photos captured by the scanner, they are not useful in dark environments, even though darkness does not affect the point cloud and the captured geometry.

Our next steps are mainly:

  • To organize access to the next scanning project
  • To give feedback on the realistic use of the developed use cases
  • (Perhaps) To evaluate and give input on training data for worker

Michael König, STRUCINSPECT

We are currently working on finalising our process for the bridge inspection pilot and are running the first tests with data we received from ZHAW. A big challenge remains the integration of standards into BIM, which will be handled together with the University of Padova and Catenda. We are also involved in the point cloud segmentation and BIM reconstruction pipeline.

Florendia Fourli, Hypercliq

At Hypercliq, we have collaborated closely with our partners to examine the components of the HumanTech system and how they interact. We have pinpointed the necessary interfaces between humans, software, robots, and wearables and started defining the interoperability modules to facilitate their operation.

Additionally, to align our efforts with the EU's data strategy, we have conducted a comprehensive review of the current state of relevant European-level initiatives. These include initiatives that have produced reference architectures, as well as completed or ongoing European R&D projects addressing the construction sector's needs. We focused specifically on their approaches to system architecture and the frameworks they have established or followed.

Building on the insights gained from these activities, we have developed the initial version of the HumanTech System Architecture. This serves as the foundation for further technical advancements, with the goal of providing HumanTech systems as components that can function independently or as part of an integrated solution. Finally, we have supported and documented the definition of the HumanTech use cases to be used as the basis for implementing the HumanTech pilots.
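
Purely as an illustration of that "independently or integrated" goal, the sketch below shows one way such components could share a minimal common interface. All names and methods are hypothetical; the actual interoperability modules are defined in the HumanTech System Architecture.

```python
from typing import Any, Protocol

class Component(Protocol):
    """A HumanTech-style unit that can run standalone or inside a pipeline."""
    def start(self) -> None: ...
    def publish(self) -> dict[str, Any]: ...

class Scan2BIMStub:
    """Hypothetical stand-in for a scan-to-BIM component."""
    def start(self) -> None:
        print("running scan-to-BIM conversion")
    def publish(self) -> dict[str, Any]:
        return {"type": "bim_model", "payload": "..."}

def run_pipeline(components: list[Component]) -> None:
    """Integrated use: an orchestrator treats every component uniformly."""
    for component in components:
        component.start()
        print("published:", component.publish())

# Standalone use of one component, then the same component in a pipeline.
Scan2BIMStub().start()
run_pipeline([Scan2BIMStub()])
```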

Elena Petrich, European Builders Confederation (EBC)

During the project's first year, we provided feedback on the usability of the technologies, started discussing an SME-friendly training programme and supported the project's developments with effective dissemination across our network.

EBC is currently envisaging, in cooperation with its French member CAPEB, a workshop targeting women in construction with the primary objective of gathering comments and observations from construction professionals on the human-centred technologies currently being developed by HumanTech.

As HumanTech progresses, EBC will contribute to the objectives of training, marketing, and sharing information on innovation among construction SMEs, the whole construction value chain and policymakers.


Learn more about our project and subscribe to our newsletter to keep updated with our progress!