IoT-based multi-sensor fusion for adaptive cooperative assistive mobility devices


Authors

Oladele, Daniel Ayo


Publisher

Central University of Technology

Abstract

The increasing prevalence of mobility impairments in the global population has underscored the need for innovative assistive technologies that enhance the lives of individuals with disabilities. IoT-based Adaptive Cooperative Assistive Mobility Devices (IoT-ACAMDs) represent a transformative solution, integrating IoT capabilities to provide users with tailored support, promote safety, and foster independence. However, integrating multiple sensors within ACAMDs poses challenges for accuracy, reliability, and performance. This study addresses these challenges through the development of an IoT-based multi-sensor fusion architecture that combines LiDAR and camera data into a unified bird's-eye view (BEV) map of the environment. The study introduces FastSeg3D, a novel algorithm for real-time segmentation of LiDAR 3D points, including the segmentation of drivable points in complex terrains. Using the FastSeg3D algorithm, deep learning techniques, Inverse Perspective Mapping (IPM), and Bayesian occupancy grid techniques, the study fuses LiDAR data with images from six monocular cameras captured simultaneously from different angles. The fusion process yields a BEV map of the environment that can be used for simultaneous localisation and mapping (SLAM) of a mobile device (the robot on which the sensors are mounted). The study outcomes demonstrate the effectiveness of FastSeg3D, which achieves real-time ground and non-ground point segmentation across varying terrain complexities at an average speed of approximately 25 frames per second (fps) on a single core of a 7th-generation Intel i5 processor. Moreover, the multi-sensor fusion architecture achieves an inference fusion speed of 17.8 fps without augmentation and 11.3 fps with augmentation on a Ryzen 9 5950 CPU, while maintaining an acceptable level of accuracy.
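The fusion step described above accumulates per-sensor evidence on a shared BEV grid. As an illustration of the Bayesian occupancy grid component only, the sketch below performs a standard log-odds cell update; the function names, masks, and parameter values are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def update_grid(log_odds, occ_mask, free_mask,
                l_occ=0.85, l_free=-0.4, l_clip=4.0):
    """One Bayesian log-odds update of a BEV occupancy grid.

    log_odds  : 2-D float array of per-cell log-odds (0.0 = unknown).
    occ_mask  : boolean mask of cells with obstacle evidence this frame
                (e.g. non-ground LiDAR returns projected onto the grid).
    free_mask : boolean mask of cells observed free this frame
                (e.g. drivable area from IPM-warped camera images).
    """
    # Evidence is additive in log-odds space; clamping keeps the grid
    # responsive when the environment later changes.
    log_odds = log_odds + l_occ * occ_mask + l_free * free_mask
    return np.clip(log_odds, -l_clip, l_clip)

def to_probability(log_odds):
    # Recover P(occupied) from log-odds via the logistic function.
    return 1.0 / (1.0 + np.exp(-log_odds))
```

Calling `update_grid` once per frame with a LiDAR-derived occupied mask and a camera-derived free mask fuses both modalities onto one BEV map; cells never observed remain at probability 0.5.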
The FastSeg3D algorithm also outperforms existing methods on recall and intersection-over-union (IoU) metrics, achieving the highest recall for ground points (0.9758-0.998) and non-ground points (0.8975-0.9582) in 5 of 10 scenarios, and the second-highest recall for non-ground points in the remaining 5 scenarios. It also achieves the highest mean IoU (0.8711-0.944) in 9 of 10 scenarios, and the second-highest mean IoU in the tenth, with a difference of only 0.221. This study contributes a comprehensive literature review of sensor data preprocessing, automatic extrinsic calibration, and multi-sensor fusion techniques. A novel ground removal algorithm, an innovative multi-sensor fusion architecture, a view transformation algorithm, and an IoT framework tailored specifically to ACAMDs were developed during the study. By addressing the urgent demand for adaptive and robust navigation assistive devices for individuals with mobility impairments, this work represents a significant advancement in the field of multi-sensor fusion for assistive mobility devices.
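Recall and mean IoU figures like those reported above follow directly from the four confusion counts of a binary ground/non-ground labelling. The sketch below is a minimal reference computation under that assumption, not the evaluation code used in the study.

```python
import numpy as np

def ground_segmentation_metrics(pred_ground, true_ground):
    """Per-class recall and mean IoU for binary ground/non-ground labels.

    pred_ground, true_ground : boolean arrays where True marks a
    point labelled (or annotated) as ground.
    """
    pred = np.asarray(pred_ground, dtype=bool)
    true = np.asarray(true_ground, dtype=bool)
    tp = np.sum(pred & true)     # ground correctly kept as ground
    fn = np.sum(~pred & true)    # ground missed
    fp = np.sum(pred & ~true)    # non-ground mislabelled as ground
    tn = np.sum(~pred & ~true)   # non-ground correctly rejected
    recall_ground = tp / (tp + fn)
    recall_nonground = tn / (tn + fp)
    # IoU per class: intersection over union of the class's masks.
    iou_ground = tp / (tp + fn + fp)
    iou_nonground = tn / (tn + fp + fn)
    mean_iou = (iou_ground + iou_nonground) / 2.0
    return recall_ground, recall_nonground, mean_iou
```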

Description

PhD (Engineering)--Electrical Engineering
