Abstract: X-ray computed tomography is a powerful tool for volumetric imaging, in which three-dimensional (3D) images are generated from a large number of individual X-ray projection images. Collecting the required number of low-noise projection images is, however, time-consuming, so the technique is not currently applicable when spatial information needs to be collected with high temporal resolution, such as in the study of dynamic processes. In our previous work, inspired by stereo vision, we developed stereo X-ray imaging methods that operate with only two X-ray projection images, and showed how this allows point and line fiducial markers to be mapped into 3D space at significantly higher temporal resolution. In this paper, we make two further contributions. Firstly, instead of utilising internal fiducial markers, we demonstrate the applicability of the method to the 3D mapping of sharp object corners, a problem of interest in measuring the deformation of manufactured components under different loads. Secondly, we demonstrate how the approach can be applied to real stereo X-ray data, even in settings where we do not have the annotated real training data required to train our previous machine learning approach. This is achieved by replacing the real data with a relatively simple synthetic training dataset designed to mimic key aspects of the real data.
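The abstract above does not specify how the synthetic training data are constructed, so the following is only a hypothetical sketch of the general idea it describes: randomly placed point-like features are projected onto two detector views under an assumed parallel-beam geometry and rendered as noisy blobs, yielding image pairs with known ground-truth 2D feature locations. The function name synthetic_stereo_pair, all parameter values, and the geometry itself are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_stereo_pair(n_points=4, size=128, angle_deg=20.0, noise=0.01):
    """Generate one synthetic training example: two noisy projection images of
    randomly placed point features, plus their ground-truth 2D locations.

    All parameters (detector size, stereo angle, blob width, noise level) are
    illustrative placeholders.
    """
    # Random 3D point features inside a cube centred at the origin.
    pts = rng.uniform(-0.4, 0.4, size=(n_points, 3))

    def project(points, theta):
        # Parallel-beam projection onto a detector after rotating the object
        # by theta about the vertical (y) axis.
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
        rotated = points @ R.T
        # Drop the beam direction (z); keep (x, y) as detector coordinates,
        # mapped from object units to pixel coordinates.
        return (rotated[:, :2] + 0.5) * (size - 1)

    def render(uv):
        # Render each projected point as a small Gaussian blob, then add noise.
        yy, xx = np.mgrid[0:size, 0:size]
        img = np.zeros((size, size))
        for u, v in uv:
            img += np.exp(-((xx - u) ** 2 + (yy - v) ** 2) / (2 * 2.0 ** 2))
        return img + noise * rng.standard_normal((size, size))

    theta = np.deg2rad(angle_deg)
    uv1, uv2 = project(pts, -theta / 2), project(pts, +theta / 2)
    return (render(uv1), render(uv2)), (uv1, uv2)
```

Image pairs generated in this way, with their known feature locations as labels, could stand in for annotated real data, which is the role the abstract describes for the synthetic set.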
Abstract: X-ray tomography is a powerful volumetric imaging technique, but detailed three-dimensional (3D) imaging requires the acquisition of a large number of individual X-ray images, which is time-consuming. For applications where spatial information needs to be collected quickly, for example when studying dynamic processes, standard X-ray tomography is therefore not applicable. Inspired by stereo vision, in this paper we develop X-ray imaging methods that work with only two X-ray projection images. In this setting, without additional strong prior information, we no longer have enough information to fully recover the 3D tomographic image. However, up to a point, we are still able to extract the spatial locations of point and line features. From stereo vision it is well known that, for a known imaging geometry, once the same point is identified in two images taken from different directions, its location in 3D space is exactly specified. The challenge lies in matching points between the images. As X-ray transmission images are fundamentally different from the surface-reflection images used in standard computer vision, we develop a different feature identification and matching approach. Once point-like features are identified, if there are only a limited number of points in the image, they can often be matched exactly; by utilising a third observation from an appropriate direction, the matching becomes unique. Once matched, point locations in 3D space are easily computed using geometric considerations. Linear features with clear endpoints can be located using a similar approach.
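As a concrete illustration of the final geometric step, the sketch below uses standard linear (DLT) triangulation from computer vision to recover a 3D point from a matched pair of detector coordinates, given known 3x4 projection matrices. This is a generic sketch under the assumption of a projective (cone-beam-like) imaging geometry, not the authors' implementation; the names triangulate, P1, and P2 are hypothetical.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a single 3D point from two views.

    P1, P2 : (3, 4) projection matrices describing the two known geometries.
    x1, x2 : (2,) detector coordinates of the *matched* feature in each image.
    Returns the estimated 3D point as a length-3 array.
    """
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector of A associated
    # with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

if __name__ == "__main__":
    # Toy check: project a known point with two made-up geometries,
    # then recover it from the two matched projections.
    X_true = np.array([0.1, -0.2, 0.3, 1.0])
    P1 = np.hstack([np.eye(3), np.array([[0.0], [0.0], [2.0]])])
    R = np.array([[np.cos(0.3), 0.0, np.sin(0.3)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(0.3), 0.0, np.cos(0.3)]])
    P2 = np.hstack([R, np.array([[0.0], [0.0], [2.0]])])
    x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
    x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
    print(triangulate(P1, P2, x1, x2))  # approximately [0.1, -0.2, 0.3]
```

Because the SVD solves the homogeneous system in a least-squares sense, the same code tolerates small detection errors in the matched coordinates; linear features with clear endpoints can be handled by triangulating the two endpoints separately.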