Structural health monitoring (SHM) is the automation of the condition assessment process of an engineered system. When applied to geometrically large components or structures, such as those found in civil and aerospace infrastructure and systems, a critical challenge lies in designing a sensing solution that can yield actionable information. This is a difficult task to conduct cost-effectively because of the large surfaces under consideration and the localized nature of typical defects and damage. There have been significant research efforts in empowering conventional measurement technologies for SHM applications in order to improve the performance of the condition assessment process. Yet the field implementation of these SHM solutions is still in its infancy, attributable to various economic and technical challenges. The objective of this Roadmap publication is to discuss modern measurement technologies that were developed for SHM purposes, along with their associated challenges and opportunities, and to chart a path for research and development efforts that could yield impactful field applications. The Roadmap is organized into four sections: distributed embedded sensing systems, distributed surface sensing systems, multifunctional materials, and remote sensing. Recognizing that many measurement technologies may overlap between sections, we define distributed sensing solutions as those that involve or imply the use of numbers of sensors geometrically organized within (embedded) or over (surface) the monitored component or system. Multifunctional materials are sensing solutions that combine multiple capabilities, for example those also serving structural functions. Remote sensing solutions are contactless, for example cell phones, drones, and satellites; they also include remotely controlled robots.
ISSN: 1361-6501
Launched in 1923, Measurement Science and Technology was the world's first scientific instrumentation and measurement journal and the first research journal produced by the Institute of Physics. It covers all aspects of the theory, practice and application of measurement, instrumentation and sensing across science and engineering.
Simon Laflamme et al 2023 Meas. Sci. Technol. 34 093001
Adam Thompson et al 2021 Meas. Sci. Technol. 32 105013
Maximum permissible errors (MPEs) are an important measurement system specification and form the basis of periodic verification of a measurement system's performance. However, there is no standard methodology for determining MPEs, so when they are not provided, or not suitable for the measurement procedure performed, it is unclear how to generate an appropriate value with which to verify the system. Whilst a simple approach might be to take many measurements of a calibrated artefact and then use the maximum observed error as the MPE, this method requires a large number of repeat measurements for high confidence in the calculated MPE. Here, we present a statistical method of MPE determination, capable of providing MPEs with high confidence and minimum data collection. The method is presented with 1000 synthetic experiments and is shown to determine an overestimated MPE within 10% of an analytically true value in 99.2% of experiments, while underestimating the MPE with respect to the analytically true value in 0.8% of experiments (overestimating the value, on average, by 1.24%). The method is then applied to a real test case (probing form error for a commercial fringe projection system), where the efficiently determined MPE is overestimated by 0.3% with respect to an MPE determined using an arbitrarily chosen large number of measurements.
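The confidence argument above can be made concrete with elementary order statistics. As an illustrative sketch only (not the statistical method of the paper), the probability that n independent repeat measurements all fall below the q-th quantile of the error distribution is q**n, which fixes the number of repeats needed for a given confidence in a max-observed-error MPE:

```python
import math

def samples_for_confidence(coverage, confidence):
    """Number of repeat measurements needed so that, with the given
    confidence, the maximum observed error exceeds the `coverage`
    quantile of the (unknown) error distribution.

    Uses the standard order-statistics bound P(all below q-quantile) = q**n,
    i.e. solve coverage**n <= 1 - confidence for n.
    """
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

def naive_mpe(errors):
    """Naive MPE: largest absolute error seen in the repeat measurements."""
    return max(abs(e) for e in errors)
```

For example, to be 95% confident that the naive MPE bounds 99% of errors, roughly 300 repeats are needed, which illustrates why the paper's method aims at high confidence with far less data collection.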
Mohammadmahdi Abedi et al 2024 Meas. Sci. Technol. 35 065601
In this study, a self-sensing and self-heating natural-fibre-reinforced cementitious composite for the shotcrete technique was developed using Kenaf fibres. For this purpose, a series of Kenaf fibre concentrations were subjected to initial chemical treatment, followed by integration into the cement-based composite containing hybrid carbon nanotubes (CNTs) and graphene nanoplatelets (GNPs). The investigation encompassed an examination of the mechanical, microstructural, sensing, and joule-heating performance of the environmentally friendly shotcrete mixture, with subsequent comparisons drawn against a counterpart blend featuring a conventionally synthesized polypropylene (PP) fibre. Following the experimental phase, a comprehensive 3D nonlinear finite difference (3D NLFD) model of an urban twin road tunnel, complete with all relevant components, was formulated using the FLAC3D (fast Lagrangian analysis of continua in 3 dimensions) code and subjected to rigorous validation procedures. The performance of this green shotcrete mixture as the lining of the inner shell of the tunnel was assessed comparatively using this 3D numerical model under static and dynamic loading. The twin tunnel was subjected to a harmonic seismic load as a dynamic load with a duration of 15 s. The laboratory findings showed a reduction in the composite's sensing and heating potentials in both the Kenaf and PP fibre-reinforced cases. Incorporating a specific quantity of fibre yields a substantial enhancement in both the mechanical characteristics and microstructural attributes of the composite. An analysis of digital image correlation demonstrated that Kenaf fibres were highly effective in controlling cracks in cement-based composites. Furthermore, based on the static and dynamic 3D NLFD analyses, this green cement-based composite demonstrated its potential for shotcrete applications as the lining of the inner shell of the tunnel.
This study offers a promising perspective on the extensive contribution of natural fibres to multifunctional, sustainable, reliable and affordable cement-based composite development for today's world.
Martin Kögler and Bryan Heilala 2020 Meas. Sci. Technol. 32 012002
Time-gated (TG) Raman spectroscopy (RS) has been shown to be an effective technical solution for the major problem whereby sample-induced fluorescence masks the Raman signal during spectral detection. Technical methods of fluorescence rejection have come a long way since the early implementations using large and expensive laboratory equipment, such as the optical Kerr gate. Today, more affordable, small-sized options are available. These improvements are largely due to advances in the production of spectroscopic and electronic components, leading to the reduction of device complexity and costs. An integral part of TG Raman spectroscopy is the temporally precise synchronization (picosecond range) between the pulsed laser excitation source and the sensitive and fast detector. The detector is able to collect the Raman signal during the short laser pulses, while fluorescence emission, which has a longer delay, is rejected during the detector dead-time. TG Raman is also resistant against ambient light as well as thermal emissions, due to its short measurement duty cycle.
In recent years, the focus in the study of ultra-sensitive and fast detectors has been on gated and intensified charge coupled devices (ICCDs), or on CMOS single-photon avalanche diode (SPAD) arrays, which are also suitable for performing TG RS. SPAD arrays have the advantage of being even more sensitive, with better temporal resolution compared to gated CCDs, and without the requirement for excessive detector cooling. This review aims to provide an overview of TG Raman from early to recent developments, its applications and extensions.
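The gating principle described above can be illustrated with a toy Monte Carlo: prompt Raman photons arrive within the short laser pulse, while fluorescence follows a much slower exponential decay, so a picosecond gate keeps most of the former and rejects most of the latter. All numerical values below (pulse width, lifetime, counts) are illustrative assumptions, not instrument specifications:

```python
import random

def gated_counts(gate_ps, n_raman=1000, n_fluor=1000,
                 pulse_ps=50.0, lifetime_ps=3000.0, seed=0):
    """Toy Monte Carlo of time gating. Raman photons arrive uniformly
    during the laser pulse; fluorescence photons follow an exponential
    decay with a much longer lifetime. Returns the counts of each kind
    that pass a gate closing at `gate_ps` after the pulse start.
    """
    rng = random.Random(seed)
    raman = [rng.uniform(0.0, pulse_ps) for _ in range(n_raman)]
    fluor = [rng.expovariate(1.0 / lifetime_ps) for _ in range(n_fluor)]
    kept_raman = sum(1 for t in raman if t <= gate_ps)
    kept_fluor = sum(1 for t in fluor if t <= gate_ps)
    return kept_raman, kept_fluor
```

With a 100 ps gate and a 3 ns fluorescence lifetime, essentially all Raman photons are kept while only a few percent of the fluorescence survives, which is the rejection mechanism exploited by gated ICCDs and SPAD arrays.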
Louise Wright and Stuart Davidson 2024 Meas. Sci. Technol. 35 051001
Digital twinning is a rapidly growing area of research. Digital twins combine models and data to provide up-to-date information about the state of a system. They support reliable decision-making in fields such as structural monitoring and advanced manufacturing. The use of metrology data to update models in this way offers benefits in many areas, including metrology itself. The recent activities in digitalisation of metrology offer a great opportunity to make metrology data 'twin-friendly' and to incorporate digital twins into metrological processes. This paper discusses key features of digital twins that will inform their use in metrology and measurement, highlights the links between digital twins and virtual metrology, outlines what use metrology can make of digital twins and how metrology and measured data can support the use of digital twins, and suggests potential future developments that will maximise the benefits achieved.
Liisa M Hirvonen and Klaus Suhling 2017 Meas. Sci. Technol. 28 012003
Time-correlated single photon counting (TCSPC) is a widely used, robust and mature technique to measure the photon arrival time in applications such as fluorescence spectroscopy and microscopy, LIDAR and optical tomography. In the past few years there have been significant developments with wide-field TCSPC detectors, which can record the position as well as the arrival time of the photon simultaneously. In this review, we summarise different approaches used in wide-field TCSPC detection, and discuss their merits for different applications, with emphasis on fluorescence lifetime imaging.
Gustavo Quino et al 2021 Meas. Sci. Technol. 32 015203
Digital image correlation (DIC) is a widely used technique in experimental mechanics for full field measurement of displacements and strains. The subset matching based DIC requires surfaces containing a random pattern. Even though there are several techniques to create random speckle patterns, their applicability is still limited. For instance, traditional methods such as airbrush painting are not suitable in the following challenging scenarios: (i) when time available to produce the speckle pattern is limited and (ii) when dynamic loading conditions trigger peeling of the pattern. The development and application of some novel techniques to address these situations is presented in this paper. The developed techniques make use of commercially available materials such as temporary tattoo paper, adhesives and stamp kits. The presented techniques are shown to be quick, repeatable, consistent and stable even under impact loads and large deformations. Additionally, they offer the possibility to optimise and customise the speckle pattern. The speckling techniques presented in the paper are also versatile and can be quickly applied in a variety of materials.
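The subset matching that underlies DIC can be sketched in a few lines: a reference subset is compared against shifted candidates in the deformed image using zero-normalized cross-correlation (ZNCC). This is a generic integer-pixel illustration, not the authors' implementation; practical DIC codes add sub-pixel interpolation and subset shape functions:

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equally sized subsets."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

def match_subset(ref_img, def_img, top_left, size, search=5):
    """Integer-pixel subset matching: slide the reference subset over a
    small search window in the deformed image and return the displacement
    (dr, dc) that maximises the ZNCC score.
    """
    r, c = top_left
    subset = ref_img[r:r + size, c:c + size]
    best = (-2.0, (0, 0))  # ZNCC scores lie in [-1, 1]
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = def_img[r + dr:r + dr + size, c + dc:c + dc + size]
            if cand.shape != subset.shape:
                continue  # candidate window fell outside the image
            score = zncc(subset, cand)
            if score > best[0]:
                best = (score, (dr, dc))
    return best[1]
```

Applied to a random speckle image and a rigidly shifted copy, the routine recovers the imposed displacement exactly, which is why a high-contrast random pattern (the subject of the paper) is essential: a featureless surface gives flat correlation scores and no unique match.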
A Sciacchitano 2019 Meas. Sci. Technol. 30 092001
Particle image velocimetry (PIV) has become the chief experimental technique for velocity field measurements in fluid flows. The technique yields quantitative visualizations of the instantaneous flow patterns, which are typically used to support the development of phenomenological models for complex flows or for validation of numerical simulations. However, due to the complex relationship between measurement errors and experimental parameters, the quantification of the PIV uncertainty is far from being a trivial task and has often relied upon subjective considerations. Recognizing the importance of methodologies for the objective and reliable uncertainty quantification (UQ) of experimental data, several PIV-UQ approaches have been proposed in recent years that aim at the determination of objective uncertainty bounds in PIV measurements.
This topical review on PIV uncertainty quantification aims to provide the reader with an overview of error sources in PIV measurements and to inform them of the most up-to-date approaches for PIV uncertainty quantification and propagation. The paper first introduces the general definitions and classifications of measurement errors and uncertainties, following the guidelines of the International Organization for Standardization (ISO) and of renowned books on the topic. Details on the main PIV error sources are given, considering the entire measurement chain from timing and synchronization of the data acquisition system, to illumination, mechanical properties of the tracer particles, imaging of the particles, analysis of the particle motion, data validation and reduction. The focus is on planar PIV experiments for the measurement of two- or three-component velocity fields.
Approaches for the quantification of the uncertainty of PIV data are discussed. Those are divided into a-priori UQ approaches, which provide a general figure for the uncertainty of PIV measurements, and a-posteriori UQ approaches, which are data-based and aim at quantifying the uncertainty of specific sets of data. The findings of a-priori PIV-UQ based on theoretical modelling of the measurement chain as well as on numerical or experimental assessments are discussed. The most up-to-date approaches for a-posteriori PIV-UQ are introduced, highlighting their capabilities and limitations.
As many PIV experiments aim at determining flow properties derived from the velocity fields (e.g. vorticity, time-average velocity, Reynolds stresses, pressure), the topic of PIV uncertainty propagation is tackled considering the recent investigations based on Taylor series and Monte Carlo methods. Finally, the uncertainty quantification of 3D velocity measurements by volumetric approaches (tomographic PIV and Lagrangian particle tracking) is discussed.
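The Monte Carlo route to uncertainty propagation mentioned above can be sketched for a single derived quantity: perturb the velocity fields with their estimated uncertainty, recompute the central-difference vorticity, and take the spread of the results. This is a generic textbook sketch, not one of the reviewed schemes; for independent errors sigma on each neighbouring sample it reproduces the Taylor-series result sigma_omega = sigma/h:

```python
import numpy as np

def vorticity_uncertainty_mc(u, v, sigma_u, sigma_v, h, i, j,
                             n_samples=5000, seed=0):
    """Monte Carlo propagation of velocity uncertainty to the
    central-difference vorticity omega = dv/dx - du/dy at grid
    point (i, j), for a uniform grid of spacing h.
    """
    rng = np.random.default_rng(seed)
    omegas = np.empty(n_samples)
    for k in range(n_samples):
        up = u + rng.normal(0.0, sigma_u, u.shape)  # perturbed fields
        vp = v + rng.normal(0.0, sigma_v, v.shape)
        dvdx = (vp[i, j + 1] - vp[i, j - 1]) / (2.0 * h)
        dudy = (up[i + 1, j] - up[i - 1, j]) / (2.0 * h)
        omegas[k] = dvdx - dudy
    return omegas.std()
```

The same pattern (perturb, recompute, collect statistics) extends directly to Reynolds stresses or pressure, at the cost of repeated evaluation, which is the trade-off against the cheaper but linearized Taylor-series propagation.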
Fernando Zigunov and John J Charonko 2024 Meas. Sci. Technol. 35 065302
Experimentally measured pressure fields play an important role in understanding many fluid dynamics problems. Unfortunately, pressure fields are difficult to measure directly with non-invasive, spatially resolved diagnostics, and calculations of pressure from velocity have proven sensitive to error in the data. Omnidirectional line integration methods are usually more accurate and robust to these effects as compared to implicit Poisson equations, but have seen slower uptake due to the higher computational and memory costs, particularly in 3D domains. This paper demonstrates how omnidirectional line integration approaches can be converted to a matrix inversion problem. This novel formulation uses an iterative approach so that the boundary conditions are updated each step, preserving the convergence behavior of omnidirectional schemes while also keeping the computational efficiency of Poisson solvers. This method is implemented in MATLAB and also as a GPU-accelerated code in CUDA-C++. The behavior of the new method is demonstrated on 2D and 3D synthetic and experimental data. Three-dimensional grid sizes of up to 125 million grid points are tractable with this method, opening exciting opportunities to perform volumetric pressure field estimation from 3D PIV measurements.
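For context, the implicit Poisson baseline that omnidirectional schemes are compared against can be sketched as a plain Jacobi iteration on laplacian(p) = f with Dirichlet boundary values; the authors' omnidirectional matrix formulation is considerably more involved and this sketch is not their method:

```python
import numpy as np

def solve_pressure_poisson(f, p_boundary, h, iters=5000):
    """Jacobi iteration for the 2D Poisson equation laplacian(p) = f
    on a uniform grid of spacing h. `p_boundary` carries the Dirichlet
    boundary values on its edge; the interior is used as the initial guess.
    """
    p = p_boundary.copy()
    for _ in range(iters):
        # RHS is evaluated fully before assignment, so this is a Jacobi sweep.
        p[1:-1, 1:-1] = 0.25 * (p[1:-1, :-2] + p[1:-1, 2:]
                                + p[:-2, 1:-1] + p[2:, 1:-1]
                                - h * h * f[1:-1, 1:-1])
    return p
```

The known weakness of this baseline, motivating the paper, is that noise entering through the source term and boundary conditions contaminates the whole solution, whereas omnidirectional integration averages the error over many paths.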
Bora O Cakir et al 2024 Meas. Sci. Technol. 35 075201
The degraded resolution and sensitivity characteristics of background-oriented schlieren (BOS) can be recovered by utilizing an optical flow (OF)-based image processing scheme. However, the background patterns conventionally employed in BOS setups suit the needs of the cross-correlation approach, whereas OF is based on a completely different mathematical background. Thus, in order to characterize the resolution and sensitivity response of OF-based BOS to the background generation configurations, a parametric study is performed. First, a synthetic assessment based on an analytical solution of a one-dimensional shock tube problem is conducted. Then, a numerical assessment utilizing direct numerical simulation data of density-driven turbulence is performed. Finally, the applicability of the documented conclusions in realistic scenarios is tested through an experimental assessment over a plume of a swirling heated jet.
Arash Nemati et al 2024 Meas. Sci. Technol. 35 075405
Neutron imaging has gained increasing attention in recent years. A notable domain is the in-situ study of flow and concentration of hydrogen-rich materials. This demands precise quantification of the evolving concentrations. Several implementations deviate from the ideal conditions that allow the direct applicability of the Beer–Lambert law to assess this concentration. The objective of this work is to address these deviations by applying both calibration and correction procedures to ensure and validate accurate quantitative measurements during 2D and 3D neutron imaging conducted at the cold neutron source at the NeXT instrument of the Institut Laue–Langevin, Grenoble, France. Linear attenuation coefficients and non-linear correlations have been proposed to measure the water concentration based on the sample-to-detector distance. Furthermore, the effectiveness of the black body grid correction method, introduced by Boillat et al (2018 Opt. Express 26 15769), is evaluated, which accounts for spurious deviations arising from the scattering of neutrons from the sample and the surrounding environment. The applicability of the Beer–Lambert law without any data correction is found to be reasonable within limited equivalent thickness (e.g. below 4 mm of water), beyond which the correction algorithm proves highly effective in eliminating spurious effects. Notably, this correction method maintains its effectiveness even with transmissions below 1%. We examine here the impact of grid location and resolution with respect to sample heterogeneity.
Hu Wang et al 2024 Meas. Sci. Technol. 35 076129
As one of the key components in rotating machinery, the rolling element bearing has been widely used in actual production, such as in wind turbines, vehicles and machine tools. A bearing's remaining useful life (RUL) is an important indicator for its performance assessment, which is related to maintenance and production safety. To overcome the insensitivity of conventional health indicators (HIs) to bearing degradation, this study proposes a subspace clustering method based on manifold learning to evaluate the evolution of health status, which describes the degradation distribution via a two-class model and realizes the identification of each degradation stage. Motivated by the inconsistent degradation processes of bearings in actual applications, this study proposes an adaptive multi-stage degradation identification criterion, which can effectively identify different degradation rates of bearings. Based on the different degradation states, a multi-stage exponential degradation model is established to accurately predict the RUL. The effectiveness of the proposed method is validated through open datasets. The experimental results prove that the proposed method can effectively identify different degradation rates and accurately give the boundary times of the multi-stage degradation. The RUL prediction accuracy is significantly improved compared with traditional HIs.
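The exponential degradation modelling mentioned above can be sketched generically: fit HI(t) = a * exp(b * t) by log-linear least squares and extrapolate to a failure threshold. This single-stage sketch is an illustrative assumption; the paper fits stage-wise models after its criterion identifies changes in degradation rate:

```python
import math

def fit_exponential_hi(times, hi):
    """Least-squares fit of HI(t) = a * exp(b * t) via log-linear
    regression: fit log(HI) = log(a) + b * t with ordinary least squares.
    """
    n = len(times)
    ys = [math.log(v) for v in hi]
    t_mean = sum(times) / n
    y_mean = sum(ys) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, ys)) \
        / sum((t - t_mean) ** 2 for t in times)
    a = math.exp(y_mean - b * t_mean)
    return a, b

def remaining_useful_life(a, b, threshold, t_now):
    """Time until the fitted HI reaches the failure threshold."""
    t_fail = math.log(threshold / a) / b
    return t_fail - t_now
```

On clean synthetic data the fit recovers the model parameters exactly; on real HIs the quality of the RUL estimate hinges on fitting within a single degradation stage, which is precisely what the paper's stage-identification criterion provides.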
Yun Wang et al 2024 Meas. Sci. Technol. 35 075209
In traditional stitching measurements, the central sub-aperture is usually used as the reference, an approach that is not suitable for incomplete spherical surfaces lacking a central sub-aperture. Measuring each sub-aperture requires manually readjusting the position and attitude of the measured component, resulting in low measurement accuracy and efficiency. In response to this issue, this study proposes a method based on confocal focusing for stitching measurement of large-aperture-angle incomplete spherical surfaces. By automatically and accurately determining the position of the common focus through confocal focusing, the positioning accuracy of the measured component is improved. A sub-aperture stitching model was built, and a coordinate-mapping rotation algorithm and an overlapping-area error-compensation algorithm were applied to the surface shape of each sub-aperture, achieving stitching measurement of large-aperture-angle incomplete spherical surfaces. Finally, a confocal interference stitching measurement system was built to carry out stitching experiments on a sphere and a large-aperture-angle incomplete spherical surface. The experimental results indicate that the method increases the PV measurement accuracy by a factor of 2.2, the RMS measurement accuracy by a factor of 2.3, and the measurement efficiency by a factor of 1.6. This method therefore provides a high-precision approach for measuring large-aperture-angle spherical surface shapes.
Hamidreza Eivazi et al 2024 Meas. Sci. Technol. 35 075303
High-resolution reconstruction of flow-field data from low-resolution and noisy measurements is of interest due to the prevalence of such problems in experimental fluid mechanics, where the measurement data are in general sparse, incomplete and noisy. Deep-learning approaches have been shown suitable for such super-resolution tasks. However, a high number of high-resolution examples is needed, which may not be available for many cases. Moreover, the obtained predictions may fail to comply with physical principles, e.g. mass and momentum conservation. Physics-informed deep learning provides frameworks for integrating data and physical laws for learning. In this study, we apply physics-informed neural networks (PINNs) for super-resolution of flow-field data both in time and space from a limited set of noisy measurements without having any high-resolution reference data. Our objective is to obtain a continuous solution of the problem, providing a physically-consistent prediction at any point in the solution domain. We demonstrate the applicability of PINNs for the super-resolution of flow-field data in time and space through three canonical cases: Burgers' equation, two-dimensional vortex shedding behind a circular cylinder and the minimal turbulent channel flow. The robustness of the models is also investigated by adding synthetic Gaussian noise. Furthermore, we show the capabilities of PINNs to improve the resolution and reduce the noise in a real experimental dataset consisting of hot-wire-anemometry measurements. Our results show the adequate capabilities of PINNs in the context of data augmentation for experiments in fluid mechanics.
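The physics term of a PINN loss penalises the PDE residual at collocation points. As a compact stand-in (PINNs evaluate the residual by automatic differentiation of the network output, not by finite differences), the residual of the viscous Burgers equation on a sampled (time, space) grid can be written as:

```python
import numpy as np

def burgers_residual(u, dx, dt, nu):
    """Finite-difference residual of u_t + u * u_x - nu * u_xx on a
    (time, space) grid `u` with steps dt and dx. Interior points only.
    """
    u_t = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2.0 * dt)
    u_x = (u[1:-1, 2:] - u[1:-1, :-2]) / (2.0 * dx)
    u_xx = (u[1:-1, 2:] - 2.0 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dx ** 2
    return u_t + u[1:-1, 1:-1] * u_x - nu * u_xx

def physics_loss(u, dx, dt, nu):
    """Mean-squared PDE residual, the physics term of a PINN-style loss."""
    r = burgers_residual(u, dx, dt, nu)
    return float(np.mean(r * r))
```

A field that satisfies the PDE (e.g. a constant field) has zero physics loss; during PINN training this term is summed with a data-misfit term on the sparse noisy measurements, which is how the method remains anchored to both physics and data without high-resolution references.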
Zipeng Li et al 2024 Meas. Sci. Technol. 35 076128
Utilizing unsupervised domain adaptation for intelligent fault diagnosis (IFD) has demonstrated significant potential for ensuring the security of machinery systems. Nonetheless, the inherently imbalanced nature of collected data affects the performance of the diagnostic model. In particular, for machines working under varied conditions, the acquired unlabeled data frequently exhibit diverse degrees of distributional deviation, further undermining the transferable model's generalization capability. To address this challenge, we introduce a method termed Dynamic Unsupervised Imbalanced Domain Adaptation (DUIDA) for IFD. The employment of class rebalancing and label-dependent margin regularization strategies optimizes the selection of decision boundaries, counteracting the distributional deviations introduced by the imbalance. In addition, by integrating a dynamic weighting mechanism encompassing both adversarial-based and MMD-based domain adaptation, our model becomes versatile across varied UIDA tasks, assigning higher weights to fundamental faulty features. Finally, our empirical analyses on two faulty-bearing datasets substantiate the efficacy and superior performance of the proposed framework across diverse operational scenarios.
Xin Li et al 2024 Meas. Sci. Technol. 35 072002
The health condition of rolling bearings has a direct impact on the safe operation of rotating machinery. Their working environment is harsh and their working conditions are complex, which brings challenges to fault diagnosis. With the development of computer technology, deep learning has been applied in the field of fault diagnosis and has developed rapidly. Among deep-learning methods, the convolutional neural network (CNN) has received great attention from researchers due to its powerful data-mining ability and adaptive feature-learning ability. Based on recent research hotspots, the development history and trends of CNNs are summarized and analyzed. Firstly, the basic structure of the CNN is introduced, the important progress of classical CNN models for rolling bearing fault diagnosis in recent years is reviewed, and the problems with classical CNN algorithms are pointed out. Secondly, to solve these problems, and drawing on recent research achievements, various methods and principles for optimizing CNNs are introduced and compared from the perspectives of deep feature extraction, hyperparameter optimization, and network structure optimization. Although significant progress has been made in CNN-based fault diagnosis of rolling bearings, there is still room for improvement in addressing issues such as low accuracy on imbalanced data, weak model generalization, and poor network interpretability. Therefore, the future development trends of CNNs are discussed finally: transfer-learning models are introduced to improve the generalization ability of CNNs, and interpretable CNNs are used to increase their interpretability.
Victor H R Cardoso et al 2024 Meas. Sci. Technol. 35 072001
This work addresses the historical development of techniques and methodologies for measuring the internal diameter of transparent tubes since the original contributions of Anderson and Barr, published in 1923 in the first issue of Measurement Science and Technology. Progress in this field is summarized, highlighting the emergence and significance of measurement approaches supported by optical fibers.
Weiqing Liao et al 2024 Meas. Sci. Technol. 35 062002
Mechanical fault diagnosis is crucial for ensuring the normal operation of mechanical equipment. With the rapid development of deep learning technology, big-data-driven methods provide a new perspective on the fault diagnosis of machinery. However, mechanical equipment operates in the normal condition most of the time, so the collected data are imbalanced, which affects the performance of mechanical fault diagnosis. As a new approach for generating data, the generative adversarial network (GAN) can effectively address the issues of limited and imbalanced data in practical engineering applications. This paper provides a comprehensive review of GANs for mechanical fault diagnosis. Firstly, the development of GAN-based mechanical fault diagnosis, the basic theory of the GAN and various GAN variants are briefly introduced. Subsequently, GAN variants are summarized and categorized from the perspectives of labels and models, and the corresponding applications are outlined. Lastly, the limitations of current research, future challenges and trends, and guidance on selecting a GAN for practical applications are discussed.
Jianghong Zhou et al 2024 Meas. Sci. Technol. 35 062001
Predictive maintenance (PdM) is currently the most cost-effective maintenance method for industrial equipment, offering improved safety and availability of mechanical assets. A crucial component of PdM is the remaining useful life (RUL) prediction for machines, which has garnered increasing attention. With the rapid advancements in industrial internet of things and artificial intelligence technologies, RUL prediction methods, particularly those based on pattern recognition (PR) technology, have made significant progress. However, a comprehensive review that systematically analyzes and summarizes these state-of-the-art PR-based prognostic methods is currently lacking. To address this gap, this paper presents a comprehensive review of PR-based RUL prediction methods. Firstly, it summarizes commonly used evaluation indicators based on accuracy metrics, prediction confidence metrics, and prediction stability metrics. Secondly, it provides a comprehensive analysis of typical machine learning methods and deep learning networks employed in RUL prediction. Furthermore, it delves into cutting-edge techniques, including advanced network models and frontier learning theories in RUL prediction. Finally, the paper concludes by discussing the current main challenges and prospects in the field. The intended audience of this article includes practitioners and researchers involved in machinery PdM, aiming to provide them with essential foundational knowledge and a technical overview of the subject matter.
Zheyu Wang et al 2024 Meas. Sci. Technol. 35 052003
The market for service robots is expanding as labor costs continue to rise. Faced with intricate working environments, fault detection and diagnosis are crucial to ensure the proper functioning of service robots. The objective of this review is to systematically investigate the realm of service robots' fault diagnosis through the application of Structural Topic Modeling. A total of 289 papers were included, culminating in ten topics, including advanced algorithm application, data learning-based evaluation, automated equipment maintenance, actuator diagnosis for manipulator, non-parametric method, distributed diagnosis in multi-agent systems, signal-based anomaly analysis, integrating complex control framework, event knowledge assistance, mobile robot particle filtering method. These topics spanned service robot hardware and software failures, diverse service robot systems, and a range of advanced algorithms for fault detection in service robots. Asia-Pacific, Europe, and the Americas, recognized as three pivotal regions propelling the advancement of service robots, were employed as covariates in this review to investigate regional disparities. The review found that current research tends to favor the use of artificial intelligence (AI) algorithms to address service robots' complex system faults and vast volumes of data. The topics of algorithms, data learning, automated maintenance, and signal analysis are advancing with the support of AI, gaining increasing popularity as a burgeoning trend. Additionally, variations in research focus across different regions were found. The Asia-Pacific region tends to prioritize algorithm-related studies, while Europe and the Americas show a greater emphasis on robot safety issues. 
The integration of diverse technologies holds the potential to bring forth new opportunities for future service robot fault diagnosis. Simultaneously, regional standards on data, communication, and other aspects can streamline the development of methods for service robots' fault diagnosis.
Zhao et al
Infrared absorptiometry is a widely used non-intrusive method for measuring the thickness of liquid films. The accuracy of that measurement depends crucially on having high-accuracy data of the absorption coefficient of the laser light used, which is, however, not easily available, especially for the wavelength range where the absorption is strong. Here we propose a method to calibrate the absorption coefficients in such cases. By measuring the light intensity reduction while scanning through a liquid film formed in a wedge, whose angle can be adjusted and determined a priori from interferometry, the absorption coefficient of the liquid can be accurately obtained without the need to create a flat liquid film with exact known thickness. The method is verified by calibrating the absorption coefficient of pure water at an infrared wavelength and the result agrees very well with the values found in the literature. As a demonstration of the application of the method, the absorption coefficients of soap solutions with different compositions were calibrated and used to measure the thicknesses of draining soap films. The results from the absorptiometry are in good agreement with the film thickness measured simultaneously from interferometry.
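Both the thickness retrieval and the wedge calibration reduce to inverting the Beer-Lambert law I = I0 * exp(-alpha * d). The sketch below assumes an idealized wedge geometry d = x * tan(angle); the paper's actual optical arrangement and interferometric angle determination are more involved:

```python
import math

def film_thickness(i_transmitted, i_incident, absorption_coeff):
    """Beer-Lambert inversion: thickness d from I = I0 * exp(-alpha * d),
    with alpha the (calibrated) absorption coefficient in 1/m.
    """
    return -math.log(i_transmitted / i_incident) / absorption_coeff

def absorption_coeff_from_wedge(i_ratio, wedge_angle_rad, scan_distance):
    """Wedge calibration in the spirit of the paper: at distance x along
    the wedge the local film thickness is d = x * tan(angle), so alpha
    follows from a single measured intensity ratio I/I0 at that position.
    The exact geometry here is an illustrative assumption.
    """
    d = scan_distance * math.tan(wedge_angle_rad)
    return -math.log(i_ratio) / d
```

The round trip (calibrate alpha on the wedge, then invert intensities on an unknown film) recovers the thickness exactly in this idealized setting; in practice reflection losses and the interferometric determination of the wedge angle set the achievable accuracy.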
Zhu et al
Reliable and precise information on position, velocity, and attitude is essential for automated driving. This paper proposes FGO-MFI, a cost-effective and robust multi-sensor fusion and integration localization framework that utilizes factor graph optimization. Firstly, a tightly coupled GNSS (Global Navigation Satellite Systems)/on-board sensor fusion localization framework is established to estimate vehicle states, including position, velocity, and attitude. To address the large drift rate of the IMU (Inertial Measurement Unit), this study introduces a novel IMU/Dynamics pre-integration method based on the vehicle dynamics model. We establish a two-degree-of-freedom vehicle dynamics model utilizing measurements from the wheel speed sensor and steering wheel angle sensor. The IMU/Dynamics factor is devised through a close integration of the model output and IMU pre-integration, enabling the construction of precise odometry with low-cost on-board sensors. Then, to address the non-Gaussian distribution of GNSS pseudorange error, this paper employs a GMM (Gaussian Mixture Model) to characterize the pseudorange noise, which is then applied to further sensor fusion. Given the time-varying nature of pseudorange noise, the EM (expectation maximization) algorithm is utilized to estimate GMM parameters online, leveraging the pseudorange residuals within a sliding window. Comprehensive experiments, inclusive of challenging scenarios such as urban canyons, tunnels, and wooded areas, have been carried out. They affirm the superior performance of the proposed method. Experiments have shown that our method demonstrates reliable localization under different GNSS signal conditions, exhibiting a 39.4% improvement in the root-mean-square position error when compared to the state-of-the-art.
Additionally, this FGO-MFI is a general sensor data fusion framework and is able to incorporate diverse sensor measurements, for example, from cameras and LiDARs to provide more reliable and accurate localization information.
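The EM estimation of the GMM parameters can be sketched for the one-dimensional pseudorange-residual case. The initialisation, the fixed two-component mixture, and the batch (rather than online, sliding-window) formulation below are simplifying assumptions, not the paper's implementation:

```python
import numpy as np

def fit_gmm_em(residuals, n_iter=200):
    """Fit a two-component 1-D Gaussian mixture to pseudorange residuals
    with expectation maximization. Returns (weights, means, variances)."""
    r = np.asarray(residuals, dtype=float)
    # crude initialisation: one narrow, one wide component around the sample mean
    mu = np.array([r.mean(), r.mean()])
    var = np.array([0.5 * r.var(), 2.0 * r.var()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each residual
        pdf = w / np.sqrt(2 * np.pi * var) * np.exp(-(r[:, None] - mu) ** 2 / (2 * var))
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted parameter updates
        nk = resp.sum(axis=0)
        w = nk / len(r)
        mu = (resp * r[:, None]).sum(axis=0) / nk
        var = (resp * (r[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var
```

The narrow component captures nominal pseudorange noise while the wide one absorbs NLOS/multipath outliers, which is what makes the mixture a better weighting model than a single Gaussian.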
Tang et al
This study presents a radar-optical fusion detection method for unmanned aerial vehicles (UAVs) in maritime environments. Radar and camera technologies are integrated to improve the detection capabilities of the platforms. The proposed method involves generating Regions of Interest (ROI) by projecting radar traces onto optical images through matrix transformation and geometric centroid registration. The generated ROI are matched with YOLO detection boxes using the intersection-over-union (IoU) algorithm, enabling radar-optical fusion detection. A modified algorithm, called SPN-YOLOv7-tiny, is developed to address the challenge of detecting small UAV targets that are easily missed in images. In this algorithm, the convolutional layers in the backbone network are replaced with a space-to-depth convolution (SPD-Conv), and a small object detection layer is added. In addition, the loss function is replaced with a normalized Wasserstein distance (NWD) loss function. Experimental results demonstrate that compared to the original YOLOv7-tiny method, SPN-YOLOv7-tiny improves the mAP@0.5 (mean average precision at an IoU threshold of 0.5) from 0.852 to 0.93, while maintaining a high frame rate of 135.1 frames per second. Moreover, the proposed radar-optical fusion detection method achieves an accuracy of 96.98%, surpassing the individual detection results of the radar and camera. The proposed method effectively addresses the detection challenges posed by closely spaced overlapping targets on a radar chart.
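The ROI-to-detection matching step can be sketched as follows; the greedy best-match strategy and the threshold value are illustrative assumptions, while the IoU computation itself is standard:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def match_roi_to_detections(rois, detections, threshold=0.5):
    """Pair each radar-derived ROI with the YOLO box of highest IoU,
    keeping only pairs above the threshold. Returns {roi_idx: det_idx}."""
    matches = {}
    for i, roi in enumerate(rois):
        scores = [iou(roi, det) for det in detections]
        if scores and max(scores) >= threshold:
            matches[i] = int(max(range(len(scores)), key=scores.__getitem__))
    return matches
```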
Yang et al
With the increasing demands for long-endurance and high-accuracy inertial navigation systems (INS), gravity disturbance has been identified as one of the major error sources with decisive effects on the performance of INS. To address this problem, this paper proposes an autonomous and high-accuracy gravity disturbance compensation scheme for rotary INS (RINS). The high-accuracy velocity measured by the laser Doppler velocimeter is fused with the angular velocity measured by the gyroscope to obtain navigation parameters, such as velocity, position and attitude, that are not affected by gravity disturbance. The navigation parameters independent of gravity disturbance are matched with the gravity disturbance-related navigation parameters output by RINS, and a measurement model containing gravity disturbance information is obtained. In addition, the intrinsic coupling relationship between gravity disturbance, gravity disturbance rate and gravity disturbance gradient is revealed, and a state-space model is established to accurately reflect the time-varying characteristics of gravity disturbance. Furthermore, the gravity disturbance is estimated and compensated in real time through the optimal estimation algorithm. The results of vehicle experiments indicate that the gravity disturbance estimation precision of the proposed scheme is better than 2.15 mGal (1σ), and its horizontal position accuracy is better than 50 m at a driving distance of 80 km.
Zhao et al
Quantitative measurement of smartphone screen scratches is crucial for pricing in the used smartphone market. Traditional manual visual inspection methods suffer from inherent limitations, namely being labor-intensive, subjective, and prone to inaccuracy. Hence, this study proposes a vision-based measurement method as a viable solution to overcome these challenges. The algorithm uses Hessian enhancement to extract scratch features, applies adaptive thresholding to distinguish features from the background, and employs morphological reconstruction to reconstruct complete scratches. A topological analysis splits and merges intersecting scratches, enabling individual segmentation. Finally, four metrics, namely length, brightness, contrast, and maximum width, are used to quantitatively characterize screen scratch damage. Experiments showed that the proposed algorithm outperforms other vision-based methods, with an accuracy of 99.6% in estimating the scratch length and a running time of 43.7 ms, which fully meets the efficiency and accuracy requirements of industrial application.
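The adaptive thresholding step can be sketched with a local-mean rule: a pixel is foreground when it exceeds the mean of its neighbourhood by a margin. The window size and offset below are illustrative, not the paper's values:

```python
import numpy as np

def adaptive_threshold(img, window=15, offset=2.0):
    """Mark a pixel as foreground when it exceeds the mean of its
    (window x window) neighbourhood by more than `offset`."""
    img = np.asarray(img, dtype=float)
    pad = window // 2
    padded = np.pad(img, pad, mode='edge')
    # integral image (summed-area table) for fast box means
    s = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = img.shape
    box_sum = (s[window:window + h, window:window + w]
               - s[:h, window:window + w]
               - s[window:window + h, :w]
               + s[:h, :w])
    mean = box_sum / (window * window)
    return img > mean + offset
```

The local mean adapts to uneven screen illumination, which is why a single global threshold tends to fail on faint scratches.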
Hamidreza Eivazi et al 2024 Meas. Sci. Technol. 35 075303
High-resolution reconstruction of flow-field data from low-resolution and noisy measurements is of interest due to the prevalence of such problems in experimental fluid mechanics, where the measurement data are in general sparse, incomplete and noisy. Deep-learning approaches have been shown suitable for such super-resolution tasks. However, a large number of high-resolution examples is needed, which may not be available for many cases. Moreover, the obtained predictions may fail to comply with physical principles, e.g. mass and momentum conservation. Physics-informed deep learning provides frameworks for integrating data and physical laws for learning. In this study, we apply physics-informed neural networks (PINNs) for super-resolution of flow-field data both in time and space from a limited set of noisy measurements without having any high-resolution reference data. Our objective is to obtain a continuous solution of the problem, providing a physically-consistent prediction at any point in the solution domain. We demonstrate the applicability of PINNs for the super-resolution of flow-field data in time and space through three canonical cases: Burgers' equation, two-dimensional vortex shedding behind a circular cylinder and the minimal turbulent channel flow. The robustness of the models is also investigated by adding synthetic Gaussian noise. Furthermore, we show the capabilities of PINNs to improve the resolution and reduce the noise in a real experimental dataset consisting of hot-wire-anemometry measurements. Our results show the adequate capabilities of PINNs in the context of data augmentation for experiments in fluid mechanics.
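PINNs penalize the residual of the governing PDE at collocation points; as a hedged stand-in for the autodiff residual inside the training loop (this is not the authors' network), the physics consistency of a candidate field can be checked by finite differences of the viscous Burgers equation, u_t + u·u_x = ν·u_xx:

```python
import numpy as np

def burgers_residual(u, dx, dt, nu):
    """Finite-difference residual of u_t + u*u_x - nu*u_xx on a
    (time, space) grid. A PINN penalises the analogous residual,
    computed by automatic differentiation, at its collocation points."""
    u_t = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2 * dt)
    u_x = (u[1:-1, 2:] - u[1:-1, :-2]) / (2 * dx)
    u_xx = (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dx ** 2
    return u_t + u[1:-1, 1:-1] * u_x - nu * u_xx
```

A field that satisfies the PDE yields a residual near zero, which is what the physics term of the PINN loss drives the network toward, even where no measurement data exist.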
Isaac Spotts et al 2024 Meas. Sci. Technol. 35 075208
To improve the temporal resolution in an optical delay system that uses a conventional mechanical delay stage, we integrate an in-line liquid crystal (LC) wave retarder. Previous implementations of LC optical delay methods are limited due to the small temporal window provided. Using a conventional mechanical delay stage system in series with the LC wave retarder, the temporal window is lengthened. Additionally, the limitation on temporal resolution resulting from the minimum optical path alteration (resolution of 400 nm) of the conventionally used mechanical delay stage is reduced via the in-line wave retarder (resolution of 50 nm). Interferometric autocorrelation measurements are conducted at multiple laser emission frequencies (349, 357, 375, 394, and 405 THz) using the in-line LC and conventional mechanical delay stage systems. The in-line LC system is compared to the conventional mechanical delay stage system to determine the improvements in temporal resolution relating to maximum resolvable frequency. This work demonstrates that the integration of the in-line LC system can extend the maximum resolvable frequency from 375 to 3000 THz. The in-line LC system is also applied for measurement of terahertz pulses.
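The quoted maximum resolvable frequencies follow from the Nyquist criterion applied to the optical path resolution, f_max = c / (2·Δpath), a relation inferred here from the numbers in the abstract (400 nm giving roughly 375 THz, 50 nm giving roughly 3000 THz):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def max_resolvable_frequency(path_resolution_m):
    """Nyquist limit for a delay-line measurement: at least two samples
    per optical period requires f_max = c / (2 * delta_path)."""
    return C / (2 * path_resolution_m)
```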
Yuvarajendra Anjaneya Reddy et al 2024 Meas. Sci. Technol.
Current optical flow-based neural networks for Particle Image Velocimetry (PIV) are largely trained on synthetic datasets emulating real-world scenarios. While synthetic datasets provide greater control and variation than can be achieved with experimental datasets for supervised learning, this requires a deeper understanding of which factors dictate the learning behavior of deep neural networks for PIV. In this study, we investigate the performance of the Recurrent All-Pairs Field Transforms (RAFT-PIV) network, the current state-of-the-art deep learning architecture for PIV, by testing it on unseen experimentally generated datasets. The results from RAFT-PIV are compared with a conventional cross-correlation-based method, Adaptive PIV. The experimental PIV datasets were generated for a typical scenario of flow past a circular cylinder in a rectangular channel. These test datasets encompassed variations in particle diameters, particle seeding densities, and flow speeds, all falling within the parameter range used for training RAFT-PIV. We also explore how different image pre-processing techniques can impact and potentially enhance the performance of RAFT-PIV on real-world datasets. Thorough testing with real-world experimental PIV datasets reveals the resilience of the optical flow-based method's variations to PIV hyperparameters, in contrast to the conventional PIV technique. The ensemble-averaged Root Mean Squared Errors (RMSE) between the RAFT-PIV and Adaptive PIV estimations generally range between 0.5 and 2 px and show a slight reduction as particle densities increase or Reynolds numbers decrease. Furthermore, findings indicate that employing image pre-processing techniques to enhance input particle image quality does not improve RAFT-PIV predictions; instead, it incurs higher computational costs and degrades estimations of small-scale structures.
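The conventional baseline (Adaptive PIV) rests on cross-correlation of interrogation windows between the two frames; a minimal FFT-based sketch, assuming integer-pixel shifts and omitting the subpixel refinement and window deformation that real PIV codes add:

```python
import numpy as np

def cross_correlation_shift(win_a, win_b):
    """Integer-pixel displacement between two interrogation windows via
    FFT-based circular cross-correlation (the core of conventional PIV)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    # wrap peaks beyond half the window size into negative displacements
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```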
A Spaett and B G Zagar 2024 Meas. Sci. Technol. 35 075013
Fully developed laser speckle patterns are, due to their high contrast and statistical nature, well suited to measuring strain and displacement via an appropriately designed measurement system. Laser speckle patterns are formed when a sufficiently coherent light source, such as a HeNe-laser, illuminates an optically rough surface. Therefore, methods based on laser speckle patterns can be applied to any surface scatterer with a minimum mean surface roughness of about a quarter of the laser's wavelength. This also includes materials such as thin natural and technical fibres as well as foils, for which the presented measurement system, including the digital signal processing, was designed. In order to achieve the best possible resolution of a speckle-based measurement system, combined with a sufficiently small measurement uncertainty, all available design parameters must be optimised. One of these parameters is the speckle size, which is dependent on the properties of the imaging optics. In this paper a subjective laser speckle-based measurement system based on a so-called 4f-optical setup is presented. This setup allows the speckle size to be controlled in both axial and lateral dimensions separately, which is achieved with the help of an aperture in the Fourier plane of the optics. It is shown that the optimal speckle size for the presented measurement system depends not only on the physical setup but also on the signal processing applied. The signal processing routine estimates displacements of the speckle pattern, leading to an estimate for the strain. Additionally, it is demonstrated that the optimal speckle size can be lower than the commonly reported optimum between two and five pixel pitches, necessary to circumvent aliasing in the image data. While this is shown for a measurement setup using 4f-optics, the results are of general importance to speckle-based strain or displacement measurement systems and should thus be taken into account.
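A common rule of thumb, not taken from the paper, relates the mean subjective speckle size to the diffraction-limited spot of the imaging optics; for a 4f system the Fourier-plane aperture diameter D and the second lens's focal length f set the speckle scale:

```python
def mean_speckle_size(wavelength, focal_length, aperture_diameter):
    """Approximate mean subjective speckle diameter for a 4f imaging
    system with a circular aperture of diameter D in the Fourier plane:
    d_speckle ~ 1.22 * lambda * f / D (the diffraction-limited spot size).
    This is a textbook estimate, not the paper's exact expression."""
    return 1.22 * wavelength * focal_length / aperture_diameter
```

Shrinking the aperture thus enlarges the speckles, which is how the 4f setup trades spatial bandwidth for speckle-size control.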
Ata Can Çorakçı et al 2024 Meas. Sci. Technol.
In this paper, application of a Two-Equations Two-Unknowns (2E-2U) method is described for calibration of hydrophones and projectors below 1 kHz in a laboratory test tank. At low frequencies, amplitude and phase measurements for the calibration of hydrophones and projectors in the test tank are difficult to perform, since the echo-free time of the laboratory test tank is not long enough due to transducer initial transients and tank wall boundary reflections. To overcome these difficulties, the 2E-2U method is applied to received (windowed) signals obtained during calibration measurements. Thus, calibration measurements become possible at frequencies down to 250 Hz. These measurements in the test tank are performed for a hydrophone and a newly developed flextensional projector. First, the receive sensitivities of the hydrophone are calculated and validated by comparison with a pressure calibration in a closed chamber. Good agreement is obtained between the two measurement platforms, with a maximum difference of 0.5 dB and an uncertainty of 1.3 dB. Then, the transmitting voltage response (TVR) of the flextensional projector is calculated and compared with the calibration data obtained from the method defined in the relevant standards. Good agreement is obtained between the two TVR datasets, with a maximum difference of 1.1 dB and an uncertainty of 1.7 dB.
S Soman et al 2024 Meas. Sci. Technol. 35 075905
Inspection of surface and nanostructure imperfections plays an important role in high-throughput manufacturing across various industries. This paper introduces a novel, parallelised version of the metrology and inspection technique Coherent Fourier scatterometry (CFS). The proposed strategy employs parallelisation with multiple probes, facilitated by a diffraction grating generating multiple optical beams and detection using an array of split detectors. The article details the optical setup and design considerations, and presents results including independent detection verification, calibration curves for different beams, and a data stitching process for composite scans. The study concludes with discussions of the system's limitations and potential avenues for future development, emphasizing the significance of enhancing scanning speed for the widespread adoption of CFS as a commercial metrology tool.
Zelin Zhou et al 2024 Meas. Sci. Technol. 35 076304
Global navigation satellite system (GNSS) positioning performance in dense urban environments deteriorates significantly due to frequent non-line-of-sight (NLOS) and multipath errors. An accurate weighting scheme is critical for positioning, especially in urban environments. Traditional methods for determining the weights of observations typically rely on the carrier-to-noise density ratio (C/N0) and the elevations from satellites to receivers. Nevertheless, the performance of these methods degrades in dense urban settings, as C/N0 and elevation measurements fail to fully capture the intricacies of NLOS and multipath errors. In this paper, a novel GNSS observation weighting scheme based on a Hopular GNSS signal classifier, which can accurately identify LOS/NLOS signals using a medium-sized training dataset, is proposed to improve the urban kinematic navigation solution in real-time kinematic positioning mode. Four GNSS features, namely C/N0, time-differenced code-minus-carrier, loss-of-lock indicator, and satellite elevation, are employed in training the Hopular-based signal classifier. The performance of the new method is validated using two urban kinematic datasets collected by a u-blox F9P receiver with a low-cost antenna in downtown Calgary. For the first testing dataset, the results show that the Hopular-based weighting scheme outperforms the three most commonly used GNSS observation weighting schemes: C/N0, elevation, and a combined C/N0-elevation approach. Approximately 10.089 m of horizontal root-mean-squared (RMS) positioning error and 12.592 m of vertical RMS error are achieved using the proposed method, with improvements of 78.83%, 46.82% and 43.27% in horizontal positioning accuracy and 54.00%, 47.51% and 49.69% in vertical positioning accuracy, compared to using the C/N0, elevation and combined C/N0-elevation weighting schemes, respectively.
For the second testing dataset, a similar performance is achieved with nearly 11.631 m of horizontal RMS error and 10.158 m of vertical RMS error; improvements of 64.58%, 32.90% and 22.40% on horizontal positioning accuracy and 71.99%, 65.24% and 55.88% on vertical positioning accuracy are achieved, compared to using C/N0, elevation and C/N0-elevation combined weighting schemes, respectively.
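The baseline weighting schemes the proposed classifier is compared against can be sketched as follows; these are textbook forms of elevation- and C/N0-dependent variance models, and the constants a, b, and k are illustrative placeholders, not values from the paper:

```python
import math

def elevation_weight(elev_deg, a=0.3, b=0.5):
    """Classic elevation-dependent weight: sigma^2 = a^2 + (b / sin(el))^2,
    so low-elevation satellites (more multipath) get down-weighted."""
    s = math.sin(math.radians(elev_deg))
    return 1.0 / (a ** 2 + (b / s) ** 2)

def cn0_weight(cn0_dbhz, k=1e4):
    """C/N0-based weight: sigma^2 proportional to 10^(-C/N0 / 10),
    so weaker signals get down-weighted."""
    return 1.0 / (k * 10 ** (-cn0_dbhz / 10.0))
```

A classifier-based scheme replaces these heuristics with per-signal LOS/NLOS decisions, which is why it can outperform both in urban canyons where strong, high-elevation signals may still be NLOS.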
Jakub Svatos and Jan Holub 2024 Meas. Sci. Technol. 35 076122
This paper analyses the efficiency of various frequency cepstral coefficients (FCC) in a non-speech application, specifically in classifying acoustic impulse events, namely gunshots. Various methods for such event identification are available, the majority based on time- or frequency-domain algorithms. However, both of these domains have their limitations and disadvantages. In this article, FCC, which combine the advantages of both the frequency and time domains, are presented and analyzed. These originally speech-oriented features have shown potential not only in speech-related applications but also in other acoustic applications. A comparison of the classification efficiency based on features obtained using four different FCC, namely mel-frequency cepstral coefficients (MFCC), inverse mel-frequency cepstral coefficients (IMFCC), linear-frequency cepstral coefficients (LFCC), and gammatone-frequency cepstral coefficients (GTCC), is presented. An optimal frame length for the FCC calculation is also explored. Various gunshots from handguns and rifles of different calibers, along with multiple acoustic impulse events similar to gunshots to represent false alarms, are used. More than 600 acoustic event records have been acquired and used for training and validation of two designed classifiers, a support vector machine and a neural network. Accuracy, recall, and the Matthews correlation coefficient measure the classification success rate. The results reveal the superiority of GTCC over the other analyzed methods.
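The FCC variants share one pipeline: window a frame, take the magnitude spectrum, apply a frequency warping filterbank, take the log, and decorrelate with a DCT. A hedged sketch of the linear-frequency case (LFCC-like), omitting the mel or gammatone filterbank that distinguishes the variants; the function name and coefficient count are assumptions:

```python
import numpy as np

def linear_cepstral_coefficients(frame, n_coeffs=13):
    """Simplified linear-frequency cepstral coefficients for one frame:
    Hann window -> |FFT| -> log -> DCT-II. A mel (MFCC) or gammatone
    (GTCC) variant would insert its filterbank before the log."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed)) + 1e-12  # avoid log(0)
    log_spec = np.log(spectrum)
    n = len(log_spec)
    k = np.arange(n)
    # DCT-II basis, truncated to the first n_coeffs rows
    basis = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * k + 1) / (2 * n))
    return basis @ log_spec
```

The log/DCT pair compresses the spectral envelope into a handful of decorrelated coefficients, which is what makes FCC compact inputs for SVM or neural-network classifiers.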
Geoffrey de Villiers et al 2024 Meas. Sci. Technol.
Gravity measurements have uses in a wide range of fields, including geological mapping and mine-shaft inspection. The specific application in question sets limits on the survey and the amount of information that can be obtained. For example, in a conventional gravity survey at the Earth's surface a gravimeter is translated on a two-dimensional planar grid, taking measurements of the vertical component of gravity. If, however, the survey points cannot be chosen so freely, for example if the gravimeter is constrained to operate in a tunnel where only a one-dimensional line of data could be taken, less information will be obtained. To address this situation, we investigate an alternative approach, in the form of an instrument which rotates around a central point, measuring the gravitational potential on the boundary of a sphere around the centre of the instrument. The ability to record additional components of gravity by rotating the gravimeter gives more information than is obtained with the single measurement traditionally taken at each point of a survey, consequently reducing ambiguities in interpretation. We term a device which measures the potential, or its radial derivatives, around the surface of a sphere a gravitational eye. In this article we explore ideas of resolution and propose a thought experiment for comparing the performance of diverse types of gravitational eye. We also discuss radial analytic continuation towards sources of gravity and the resulting resolution enhancement, before finally discussing the possibility of using cold-atom gravimetry and gradiometry to construct a gravitational eye. If realised, the gravitational eye will offer revolutionary capability, enabling the maximum information to be obtained about features in all directions around it.
Simon Burkhard and Alain Küng 2024 Meas. Sci. Technol. 35 075008
A method is presented for fitting the projected centres of spheres in cone beam x-ray imaging. By using a suitable coordinate system, the method allows direct and exact calculation of the sphere centre without fitting the projection shape with an ellipse and correcting from the ellipse centre to the sphere centre. Advantages in numerical implementation result from the number of unknown variables being reduced compared to ellipse fits. Additionally, the orientation of the detector relative to the x-ray source can be obtained from fitting the shapes of projections of multiple spheres without knowledge of the positions or dimensions of the spheres. The accuracy of the method is compared to other techniques using simulated x-ray projections.