To improve the spatial resolution of power Doppler (PD) imaging, we explored null subtraction imaging (NSI) as an alternative beamforming technique to delay-and-sum (DAS). NSI is a nonlinear beamforming approach that applies three different apodizations on receive and incoherently sums the beamformed envelopes. NSI exploits a null in the beam pattern to improve lateral resolution, which we apply here to improve PD spatial resolution both with and without contrast microbubbles. In this study, we combined NSI with singular value decomposition (SVD)-based clutter filtering and noise equalization to generate high-resolution PD images. An element sensitivity correction scheme was also applied to further improve the quality of the NSI-based PD images. First, a microbubble trace experiment was performed to quantitatively evaluate the performance of NSI-based PD. Then, both contrast-enhanced and contrast-free ultrasound data were collected from a rat brain. The NSI-based PD microvessel images showed higher spatial resolution and image quality than microvessel images generated by traditional DAS beamforming.
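As a rough illustration of the pipeline described above, the following Python sketch combines three receive apodizations into an NSI envelope image, applies an SVD clutter filter across the slow-time ensemble, and accumulates a power Doppler image. It is a minimal sketch, not the authors' code: the specific apodization weights, the averaging used as the incoherent summation, and the SVD cutoff are placeholder assumptions for illustration only.

```python
# Minimal sketch of an NSI-style power Doppler pipeline (illustrative placeholders,
# not the parameters or implementation used in the study).
import numpy as np
from scipy.signal import hilbert


def nsi_apodizations(n_elements, dc_offset=0.1):
    """Three example receive apodizations (assumed forms, not the paper's exact weights).

    w_a: antisymmetric weights plus a small DC offset (places a null on the beam axis),
    w_b: mirrored antisymmetric weights plus the same DC offset,
    w_c: the DC offset alone.
    """
    anti = np.ones(n_elements)
    anti[: n_elements // 2] = -1.0
    w_a = anti + dc_offset
    w_b = -anti + dc_offset
    w_c = np.full(n_elements, dc_offset)
    return w_a, w_b, w_c


def nsi_beamform(delayed_rf, weights_list):
    """Incoherently combine the envelopes obtained with each receive apodization.

    delayed_rf: (n_elements, n_depth, n_lines) channel data after receive delays.
    The mean of the three envelopes below stands in for the incoherent summation
    described in the abstract.
    """
    envelopes = []
    for w in weights_list:
        rf = np.tensordot(w, delayed_rf, axes=(0, 0))   # weighted sum over elements
        envelopes.append(np.abs(hilbert(rf, axis=0)))   # envelope along depth (fast time)
    return np.mean(envelopes, axis=0)


def svd_clutter_filter(frames, cutoff=10):
    """SVD clutter filter: discard the largest singular components (tissue clutter).

    frames: (n_depth, n_lines, n_frames) beamformed slow-time ensemble.
    cutoff: number of low-order singular values to zero out (illustrative value).
    """
    nz, nx, nt = frames.shape
    casorati = frames.reshape(nz * nx, nt)
    u, s, vh = np.linalg.svd(casorati, full_matrices=False)
    s_filtered = s.copy()
    s_filtered[:cutoff] = 0.0                 # suppress the tissue/clutter subspace
    blood = (u * s_filtered) @ vh
    return blood.reshape(nz, nx, nt)


def power_doppler(filtered_frames):
    """Power Doppler image: energy of the clutter-filtered signal over slow time."""
    return np.sum(np.abs(filtered_frames) ** 2, axis=-1)
```

In this sketch, `nsi_beamform` would be run once per frame to build the slow-time ensemble, which is then passed through `svd_clutter_filter` and `power_doppler`; noise equalization and element sensitivity correction are omitted.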
Semantic segmentation is important for scene understanding. To handle natural images captured under adverse illumination conditions, thermal infrared (TIR) images are introduced as a complementary modality. Most existing RGB-T semantic segmentation methods follow one of three cross-modal fusion paradigms, i.e., encoder fusion, decoder fusion, and feature fusion. Some methods, unfortunately, ignore the properties of RGB and TIR features or the properties of features at different levels. In this paper, we propose a novel feature fusion-based network for RGB-T semantic segmentation, named \emph{LASNet}, which follows three steps: location, activation, and sharpening. The highlight of LASNet is that we fully consider the characteristics of cross-modal features at different levels and accordingly propose three specific modules for better segmentation. Concretely, we propose a Collaborative Location Module (CLM) for high-level semantic features, aiming to locate all potential objects. We propose a Complementary Activation Module for middle-level features, aiming to activate the exact regions of different objects. We propose an Edge Sharpening Module (ESM) for low-level texture features, aiming to sharpen the edges of objects. Furthermore, in the training phase, we attach a location supervision and an edge supervision after CLM and ESM, respectively, and impose two semantic supervisions in the decoder part to facilitate network convergence. Experimental results on two public datasets demonstrate the superiority of our LASNet over relevant state-of-the-art methods. The code and results of our method are available at https://github.com/MathLee/LASNet.
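For a structural picture of the three-step design described in the abstract, the PyTorch sketch below wires a location module on high-level features, an activation module on middle-level features, and a sharpening module on low-level features, together with the auxiliary location, edge, and semantic heads. This is not the authors' implementation: the module internals (simple concatenate-and-convolve blocks), channel sizes, and class count are hypothetical placeholders; the real code is in the linked repository.

```python
# Schematic sketch of a location-activation-sharpening pipeline (placeholder modules,
# not the LASNet implementation; see https://github.com/MathLee/LASNet for the real code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def fuse_block(channels):
    """Placeholder cross-modal fusion: concatenate RGB and TIR features, then 3x3 conv."""
    return nn.Sequential(
        nn.Conv2d(2 * channels, channels, 3, padding=1),
        nn.BatchNorm2d(channels),
        nn.ReLU(inplace=True),
    )


class LASNetSketch(nn.Module):
    """Three feature levels, three fusion stages, plus auxiliary supervision heads."""

    def __init__(self, low_c=64, mid_c=256, high_c=512, n_classes=9):  # placeholder sizes
        super().__init__()
        self.clm = fuse_block(high_c)   # stands in for the Collaborative Location Module
        self.cam = fuse_block(mid_c)    # stands in for the Complementary Activation Module
        self.esm = fuse_block(low_c)    # stands in for the Edge Sharpening Module
        # Auxiliary heads: location map after the location stage, edge map after sharpening.
        self.loc_head = nn.Conv2d(high_c, 1, 1)
        self.edge_head = nn.Conv2d(low_c, 1, 1)
        # Simple top-down decoder with two semantic heads (two semantic supervisions).
        self.reduce_high = nn.Conv2d(high_c, mid_c, 1)
        self.reduce_mid = nn.Conv2d(mid_c, low_c, 1)
        self.sem_head_mid = nn.Conv2d(mid_c, n_classes, 1)
        self.sem_head_final = nn.Conv2d(low_c, n_classes, 1)

    def forward(self, rgb_feats, tir_feats):
        # rgb_feats / tir_feats: (low, mid, high) backbone features of the two modalities.
        r_low, r_mid, r_high = rgb_feats
        t_low, t_mid, t_high = tir_feats

        f_high = self.clm(torch.cat([r_high, t_high], dim=1))  # locate potential objects
        f_mid = self.cam(torch.cat([r_mid, t_mid], dim=1))     # activate object regions
        f_low = self.esm(torch.cat([r_low, t_low], dim=1))     # sharpen object edges

        loc_map = self.loc_head(f_high)     # target of the location supervision
        edge_map = self.edge_head(f_low)    # target of the edge supervision

        # Top-down decoding: high -> mid -> low, with a semantic head at each step.
        d_mid = self.reduce_high(F.interpolate(f_high, size=f_mid.shape[2:],
                                               mode="bilinear", align_corners=False)) + f_mid
        sem_mid = self.sem_head_mid(d_mid)

        d_low = self.reduce_mid(F.interpolate(d_mid, size=f_low.shape[2:],
                                              mode="bilinear", align_corners=False)) + f_low
        sem_final = self.sem_head_final(d_low)

        return sem_final, sem_mid, loc_map, edge_map
```

During training, the four outputs would be supervised by the segmentation labels (final and intermediate), a location map, and an edge map, matching the supervision scheme described in the abstract; at inference only `sem_final` is used.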