The HumanTech resources
HumanTech deliverables
D1.1
The HumanTech vision and research requirements
The HumanTech project comprises research on technologies to obtain a digital twin of a construction site or an existing building, and on the application of wearable devices and advanced robots in construction. The HumanTech partners envision that applying these technologies together will move construction towards a safer and greener industry; this document describes that unified HumanTech vision. The main challenges identified for the construction industry are a growing demand in construction driven by energy-efficient renovations, infrastructure investments and urbanisation, as well as climate change, resource scarcity and a shortage of labour. The main research requirements are extending openBIM standards for data exchange and interoperability, methods to obtain Dynamic Semantic Digital Twins of construction sites and other assets, unobtrusive wearables with intelligent transparency, robotic interfaces and learning from demonstration, safe on-site human-robot collaboration, and workflow capturing with XR visualization. The impact of HumanTech is expected in the fields of workers' health and safety and in a transition towards green, climate-friendly, and resource-efficient construction through reduced errors, increased accuracy and efficient robotic task automation.
D1.3
HumanTech user requirements and ethics
This deliverable summarizes essential user requirements for interactive robots, exoskeletons and smart glasses. Likewise, an overview of the working conditions in the European construction industry is given. Furthermore, recommendations for the ethical handling of technologies and data are presented. The work package serves as a basis for the application scenarios planned in the HumanTech project.
D2.1
BIMxD IDM with classes and attributes according to ISO 29481 and respective standards
This deliverable presents the IDM (Information Delivery Manual) for the HumanTech BIMxD platform specification. It represents the outcome of task T2.1, which focuses on BIMxD Formats and Specifications.
D2.2
Open-source BIM authoring tools
This deliverable presents open-source tools for creating BIMxD objects from reconstructed semantics and geometry, for object filtering and information extraction, for inheriting semantic labels from IFC objects, and for updating the BIMxD.
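For illustration, a minimal sketch of the kind of object filtering and information extraction such tools can perform, here using the open-source ifcopenshell library; the file path is a placeholder, not a HumanTech artefact.

# Sketch of IFC object filtering and property extraction with ifcopenshell;
# the file path is illustrative only.
import ifcopenshell
import ifcopenshell.util.element

model = ifcopenshell.open("site_model.ifc")  # hypothetical BIMxD export

# Filter all wall objects and extract their global ID, name and property sets.
for wall in model.by_type("IfcWall"):
    psets = ifcopenshell.util.element.get_psets(wall)
    print(wall.GlobalId, wall.Name, list(psets.keys()))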
D2.4
bSDD specification from BIMxD
This document, Deliverable 2.4, presents the first results of the analysis of openBIM standards for the proposal of an IFC extension using bSDD. It analyses advancements in interoperability and openBIM standards in the construction industry within the scope of the HumanTech project. It explores the critical role of interoperability in integrating diverse systems and the transformative impact of openBIM over traditional closed BIM methodologies. The document delves into buildingSMART's openBIM standards, emphasizing the importance of standards such as IFC and bSDD in facilitating information exchange. It also examines the standardization process for interoperability and highlights the integration of innovative elements like robotics and safety protocols through the HumanTech project. The conclusion addresses the need for ongoing adoption and adaptation of evolving openBIM standards and the future steps of task T2.4.
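As an illustration of how extension attributes can be attached to IFC elements (the mechanism a bSDD-aligned extension ultimately feeds into), the following sketch adds a custom property set with ifcopenshell; the property set and property names are illustrative assumptions, not the actual HumanTech bSDD definitions.

# Sketch of attaching extension attributes to an IFC element via a custom
# property set. Names and values below are illustrative placeholders.
import ifcopenshell
import ifcopenshell.api

model = ifcopenshell.open("site_model.ifc")      # hypothetical model
element = model.by_type("IfcWall")[0]

pset = ifcopenshell.api.run("pset.add_pset", model,
                            product=element, name="Pset_HumanTechDemolition")
ifcopenshell.api.run("pset.edit_pset", model, pset=pset,
                     properties={"RobotAccessible": True, "HazardClass": "A2"})
model.write("site_model_extended.ifc")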
D3.3
Hyperspectral material
This report encapsulates an exploration of hyperspectral imaging technologies for material identification in construction settings. The initial focus on sensor selection involved an in-depth investigation of the MicaSense RedEdge multispectral sensor. Unforeseen challenges led to a strategic shift towards the available FX10 hyperspectral camera by Specim. The subsequent data acquisition phase covered the acquisition setup, the software used, and the systematic creation of a dataset featuring diverse construction materials. Data analysis encompassed detailed preprocessing steps, including calibration and the selection of specific bands to replicate the data that would have been acquired with the initially considered RedEdge multispectral sensor. A Principal Component Analysis unveiled underlying patterns within the dataset, highlighting distinctive spectral signatures of different construction materials.
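For illustration, a minimal sketch of this preprocessing chain (reflectance calibration with white and dark references, band selection, PCA); the array shapes and band indices are illustrative assumptions, not the deliverable's actual configuration.

# Sketch of reflectance calibration, band selection and PCA on a
# hyperspectral cube. File names and band indices are placeholders.
import numpy as np
from sklearn.decomposition import PCA

cube = np.load("scan.npy")          # hypothetical (rows, cols, bands) cube
white = np.load("white_ref.npy")    # white reference frame
dark = np.load("dark_ref.npy")      # dark current frame

# Radiance-to-reflectance calibration.
reflectance = (cube - dark) / np.clip(white - dark, 1e-6, None)

# Keep only bands approximating the RedEdge channels (indices are examples).
bands = [34, 67, 101, 140, 180]
subset = reflectance[:, :, bands].reshape(-1, len(bands))

# Project pixels onto the first principal components to expose material clusters.
scores = PCA(n_components=2).fit_transform(subset)
print(scores.shape)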
D3.4
Domain-adapted synthetic dataset for 3D semantic segmentation
In this report, we review the benefits of artificial data in semantic point cloud segmentation for improving segmentation on real-life data. Artificial data is also a response to the lack of annotated data in some fields, especially for BIM models: annotating data by hand is a very tedious task and has to be supervised. The main focus is on 3D interior scans obtained with a LiDAR sensor. Our work is divided into two main parts. The first part targets artificial data generation using a simulated LiDAR sensor inside Unreal Engine 5. Since more annotated data is needed, generating artificial datasets is shown to be a viable alternative. Our reference is the Stanford dataset, which consists of real-world interior scans (alongside their annotations, meshes, semantics, surface normals, materials and textures). The second part consists of the evaluation of such artificially generated data. To measure the benefits of using artificial data, we apply domain adaptation and fine-tuning to two pretrained models for semantic segmentation. We also discuss the need for "good" artificial data, especially for complex tasks and regardless of the model, as well as methods to generate it.
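For illustration, a minimal sketch of the fine-tuning step on real scans, using random tensors and a toy per-point classifier in place of the actual networks and data pipelines evaluated in the deliverable.

# Sketch of fine-tuning a (pretrained) per-point segmentation model on real
# scans. The model and data here are stand-ins, not the deliverable's setup.
import torch
from torch import nn

num_classes = 13
model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, num_classes))
# In practice the weights would come from pretraining on the synthetic scans:
# model.load_state_dict(torch.load("pretrained_on_synthetic.pt"))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for step in range(100):
    points = torch.randn(4, 2048, 3)                  # (batch, points, xyz)
    labels = torch.randint(0, num_classes, (4, 2048))  # per-point labels
    logits = model(points)                             # per-point class scores
    loss = criterion(logits.reshape(-1, num_classes), labels.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()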
D4.1
Body sensor network with integrated camera approach
This document describes the visual-inertial sensor network worn by workers in some use cases within the HumanTech project. The focus of this deliverable lies on the hardware properties as well as its calibration. As part of this hardware, the local processing device is also introduced. It is planned that this powerful device will also host other applications in the context of the worker, such as the localization of Task 4.3 and the exoskeleton controller with intention prediction of Task 4.2, and will exchange data with them. The output of D4.1 is mainly used to predict the wearer's intention in T4.2, but it may also interact with T4.3. It will be demonstrated within Pilot 1.
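For illustration, a minimal sketch of how a camera-IMU extrinsic and time-offset calibration is applied to raw IMU samples; the rotation, lever arm and offset values are placeholders, not the calibrated parameters of the HumanTech sensor network.

# Sketch of applying camera-IMU calibration results to raw samples.
# All numerical values are illustrative placeholders.
import numpy as np

R_cam_imu = np.eye(3)                     # rotation IMU -> camera (placeholder)
t_cam_imu = np.array([0.02, 0.0, 0.05])   # lever arm in metres (placeholder)

accel_imu = np.array([0.1, 9.81, 0.0])    # one IMU acceleration sample
accel_cam = R_cam_imu @ accel_imu         # expressed in the camera frame

# Timestamp alignment: shift IMU timestamps by the estimated time offset.
imu_t = np.array([0.000, 0.005, 0.010])
time_offset = 0.0023                      # camera-IMU offset from calibration
imu_t_cam = imu_t + time_offset
print(accel_cam, imu_t_cam)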
D5.1
Development of the ontology for demolition task planning
This report comprehensively outlines the efforts undertaken during Task 5.1, focusing on the development of a task planner for the automatic execution of demolition activities. At its core, it leverages a demolition ontology, an extension of ifcOWL, to establish a cognitive foundation. This ontology delineates the demolition environment, encompassing representations of walls, openings, and robots. Demolition task planning focuses in detail on marking, drilling, and cutting operations. These tasks unfold through the task planner, which assesses their feasibility based on the available resources.
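For illustration, a minimal sketch of how a demolition extension of ifcOWL could be expressed and queried with rdflib; the namespace URIs and class names are illustrative assumptions, not the actual HumanTech ontology.

# Sketch of a demolition extension of ifcOWL expressed with rdflib.
# Namespace URIs and class names are illustrative assumptions.
from rdflib import Graph, Namespace, RDF, RDFS

IFC = Namespace("https://standards.buildingsmart.org/IFC/DEV/IFC4/ADD2_TC1/OWL#")  # URI may differ per ifcOWL version
DEMO = Namespace("http://example.org/humantech/demolition#")   # hypothetical

g = Graph()
g.bind("demo", DEMO)

# A demolition-specific class derived from the ifcOWL wall class.
g.add((DEMO.WallToCut, RDFS.subClassOf, IFC.IfcWall))
g.add((DEMO.wall_01, RDF.type, DEMO.WallToCut))
g.add((DEMO.wall_01, DEMO.plannedOperation, DEMO.Cutting))

# Query all walls scheduled for a cutting operation.
for wall in g.subjects(DEMO.plannedOperation, DEMO.Cutting):
    print(wall)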
D5.2
Scientific report on hybrid haptic-visual feedback mechanisms for efficient teleoperation of demolition robots
The aim of Task 5.2, "Remote interfaces for demolition", is the development of a user-centric control console for the remote operation of robots in the hazardous context of demolition. This console will integrate advanced viewing and haptic features that increase the sense of telepresence while augmenting the precision and dexterity of the operator. This deliverable contains a review of studies and techniques aimed at improving performance and safety in teleoperation tasks, with a special focus on visual and force feedback. Some of them have been integrated into the teleoperation framework developed in HumanTech, which is described in the last section of this document.
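For illustration, a minimal sketch of one scaled position-forward / force-feedback teleoperation step of the kind reviewed here; the gains, limits and device interfaces are placeholders, not the parameters of the HumanTech console.

# Sketch of a single bilateral teleoperation cycle with position and force
# scaling. Gains and limits are illustrative placeholders.
import numpy as np

POSITION_SCALE = 0.25    # master motion scaled down for fine slave motion
FORCE_SCALE = 0.5        # measured contact force scaled back to the operator
FORCE_LIMIT = 15.0       # N, clamp to keep the haptic device in a safe range

def teleop_step(master_pos, slave_force):
    """One control cycle: command the slave, feed force back to the master."""
    slave_cmd = POSITION_SCALE * master_pos
    feedback = np.clip(FORCE_SCALE * slave_force, -FORCE_LIMIT, FORCE_LIMIT)
    return slave_cmd, feedback

cmd, fb = teleop_step(np.array([0.10, 0.02, -0.05]), np.array([4.0, 0.0, 20.0]))
print(cmd, fb)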
D5.4
Scientific report on unseen object class and shape estimation for robotic grasping
This deliverable describes the development of robotic perception algorithms for object pose estimation in HumanTech. Object pose estimation from camera images is a key task for localizing objects for robotic grasping. We first provide an overview of the object pose estimation problem and the overall context of the task in the HumanTech project with respect to robotic grasping of construction material. Subsequently, we describe the object pose estimation framework selected for the task, the state-of-the-art algorithm ZebraPose developed at DFKI. Finally, we describe the object pose estimation task on the HumanTech object category of interest, construction bricks. We detail the approach for generating and training our machine learning models exclusively on synthetic data, and conclude with an evaluation of the brick grasping pose accuracy and the next steps for integrating the method on the HumanTech robotic platform for real-time functionality.
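For illustration, a minimal sketch of how an estimated 6D object pose can be turned into a grasp pose by composing it with a fixed grasp offset defined in the brick frame; all numerical values are placeholders, not outputs of the actual estimator.

# Sketch of composing an estimated brick pose with a grasp offset.
# Rotations, translations and offsets are illustrative placeholders.
import numpy as np

def to_homogeneous(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Pose of the brick in the camera frame, e.g. as output by a ZebraPose-style
# estimator (values are placeholders).
T_cam_brick = to_homogeneous(np.eye(3), np.array([0.30, -0.05, 0.80]))

# Grasp pose defined relative to the brick: approach from 10 cm above its top face.
T_brick_grasp = to_homogeneous(np.eye(3), np.array([0.0, 0.0, 0.10]))

# Grasp pose in the camera frame, ready to be transformed into the robot base frame.
T_cam_grasp = T_cam_brick @ T_brick_grasp
print(T_cam_grasp)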
D6.1
HumanTech micro-learning unit descriptors
This deliverable presents the HumanTech micro-learning unit descriptors and the steps taken to achieve these results. Based on the work carried out in WP6 of the project, the descriptors of 12 modules, their content and the corresponding ULOs are shared in this document. These Micro-Learning Units will be developed further through the remainder of the HumanTech project, and training will then be carried out with HEI, VET and industry professionals. This is a collaborative process, and the HumanTech partners and project results will heavily influence the resulting training material.
D6.3
HumanTech Worker Assessment Report
This deliverable summarises the sequential and ongoing evaluation of the HumanTech wearables system and human-robot interactions. To identify user needs at an early stage and incorporate them into the design process, the evaluation will be performed in a three-step approach. In this first deliverable, the findings of a series of workshops held with workers and other types of stakeholders are presented. Based on the findings of Task 1.4, a lab-based approach was first carried out to evaluate usability, trust in automation and discomfort. At a further stage, an objective assessment will be carried out by evaluating physiological sensor data collected in dedicated training sessions.
D8.1
HumanTech Impact Master Plan
Document detailing the project dissemination, exploitation and communication plans, outlining the target groups and their segments.
D8.2
HumanTech Dissemination & Communication Interim Report
This document details the activities of ecosystem building, dissemination, and communication carried out during the first year and a half of the HumanTech project as part of the master plan to maximise the project’s impact, outlining the schedule for the next period (M19-M36).