Lei Tian

MixTEA: Semi-supervised Entity Alignment with Mixture Teaching

Nov 08, 2023
Feng Xie, Xin Song, Xiang Zeng, Xuechen Zhao, Lei Tian, Bin Zhou, Yusong Tan

Semi-supervised entity alignment (EA) is a practical and challenging task because of the lack of adequate labeled mappings as training data. Most works address this problem by generating pseudo mappings for unlabeled entities. However, they either suffer from erroneous (noisy) pseudo mappings or largely ignore the uncertainty of pseudo mappings. In this paper, we propose a novel semi-supervised EA method, termed MixTEA, which guides model learning with an end-to-end mixture teaching of manually labeled mappings and probabilistic pseudo mappings. We first train a student model using the few labeled mappings as standard supervision. More importantly, for pseudo mapping learning, we propose a bi-directional voting (BDV) strategy that fuses the alignment decisions in different directions to estimate uncertainty via a joint matching confidence score. Meanwhile, we design a matching diversity-based rectification (MDR) module to adjust the pseudo mapping learning, thus reducing the negative influence of noisy mappings. Extensive experiments on benchmark datasets, together with further analyses, demonstrate the superiority and effectiveness of our proposed method.

* Findings of EMNLP 2023; 11 pages, 4 figures; code see https://github.com/Xiefeng69/MixTEA 
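The bi-directional voting idea can be sketched for intuition: given a similarity matrix between source and target entity embeddings, a softmax in each direction yields two alignment decision distributions, and their elementwise product serves as a joint matching confidence. The function name, temperature, and the mutual-nearest-neighbor filter below are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def bidirectional_voting(sim, temperature=0.1):
    """Fuse alignment decisions from both directions into a joint
    matching confidence score (illustrative sketch only).

    sim: (n_src, n_tgt) similarity matrix between entity embeddings.
    Returns pseudo mappings (i, j, confidence).
    """
    logits = sim / temperature
    # source -> target decision distribution (softmax over rows)
    p_st = np.exp(logits - logits.max(axis=1, keepdims=True))
    p_st /= p_st.sum(axis=1, keepdims=True)
    # target -> source decision distribution (softmax over columns)
    p_ts = np.exp(logits - logits.max(axis=0, keepdims=True))
    p_ts /= p_ts.sum(axis=0, keepdims=True)
    # joint confidence: agreement between the two directions
    joint = p_st * p_ts
    pseudo = []
    for i in range(sim.shape[0]):
        j = int(joint[i].argmax())
        # keep only mutually consistent (mutual nearest) pairs
        if joint[:, j].argmax() == i:
            pseudo.append((i, j, float(joint[i, j])))
    return pseudo

sim = np.array([[0.9, 0.1, 0.0],
                [0.2, 0.8, 0.1],
                [0.0, 0.2, 0.7]])
print(bidirectional_voting(sim))
```

On this toy matrix the three diagonal pairs survive the mutual-consistency check, each with a confidence near 1; noisier rows would yield low joint confidence and could be down-weighted during training.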

EventLFM: Event Camera integrated Fourier Light Field Microscopy for Ultrafast 3D imaging

Oct 01, 2023
Ruipeng Guo, Qianwan Yang, Andrew S. Chang, Guorong Hu, Joseph Greene, Christopher V. Gabel, Sixian You, Lei Tian

Ultrafast 3D imaging is indispensable for visualizing complex and dynamic biological processes. Conventional scanning-based techniques impose an inherent tradeoff between acquisition speed and space-bandwidth product (SBP). While single-shot 3D wide-field techniques have emerged as an attractive solution, they remain bottlenecked by the synchronous readout constraints of conventional CMOS architectures, which cap the frame rate attainable at a high SBP. Here, we present EventLFM, a straightforward and cost-effective system that circumvents these challenges by integrating an event camera with Fourier light field microscopy (LFM), a single-shot 3D wide-field imaging technique. The event camera operates on a novel asynchronous readout architecture, thereby bypassing the frame rate limitations intrinsic to conventional CMOS systems. We further develop a simple and robust event-driven LFM reconstruction algorithm that can reliably reconstruct 3D dynamics from the unique spatiotemporal measurements of EventLFM. We experimentally demonstrate that EventLFM can robustly image fast-moving and rapidly blinking 3D samples at kHz frame rates and, furthermore, showcase EventLFM's ability to achieve 3D tracking of GFP-labeled neurons in freely moving C. elegans. We believe that the combined ultrafast speed and large 3D SBP offered by EventLFM may open up new possibilities across many biomedical applications.
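As background, the asynchronous (t, x, y, polarity) event stream of an event camera is commonly binned into frame-like tensors before reconstruction; the helper below is a hypothetical preprocessing sketch, not the EventLFM pipeline itself:

```python
import numpy as np

def events_to_frames(events, shape, t_start, t_end, n_bins):
    """Accumulate asynchronous events into a stack of frames.
    events: array of (t, x, y, p) rows with polarity p in {-1, +1}.
    Illustrative only; the paper's reconstruction is more involved."""
    frames = np.zeros((n_bins, *shape), dtype=np.float32)
    bin_width = (t_end - t_start) / n_bins
    for t, x, y, p in events:
        b = int((t - t_start) / bin_width)  # which temporal bin
        if 0 <= b < n_bins:
            frames[b, int(y), int(x)] += p  # signed accumulation
    return frames

# four events within 1 ms, binned into 2 frames of 0.5 ms each
ev = np.array([[0.0001, 1, 2, +1],
               [0.0002, 1, 2, +1],
               [0.0006, 3, 0, -1],
               [0.0009, 3, 0, +1]])
stack = events_to_frames(ev, shape=(4, 4), t_start=0.0, t_end=0.001, n_bins=2)
print(stack[0, 2, 1], stack[1, 0, 3])  # 2.0 0.0
```

With 0.5 ms bins this already corresponds to a 2 kHz effective frame rate while the sensor reads out only the sparse changing pixels.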


Local Conditional Neural Fields for Versatile and Generalizable Large-Scale Reconstructions in Computational Imaging

Jul 22, 2023
Hao Wang, Jiabei Zhu, Yunzhe Li, QianWan Yang, Lei Tian


Deep learning has transformed computational imaging, but traditional pixel-based representations limit the ability of these models to capture continuous, multiscale details of objects. Here we introduce a novel Local Conditional Neural Fields (LCNF) framework, leveraging a continuous implicit neural representation to address this limitation. LCNF enables flexible object representation and facilitates the reconstruction of multiscale information. We demonstrate the capabilities of LCNF in solving the highly ill-posed inverse problem in Fourier ptychographic microscopy (FPM) with multiplexed measurements, achieving robust, scalable, and generalizable large-scale phase retrieval. Unlike traditional neural field frameworks, LCNF incorporates a local conditional representation that promotes model generalization, the learning of multiscale information, and efficient processing of large-scale imaging data. By combining an encoder and a decoder conditioned on a learned latent vector, LCNF achieves versatile continuous-domain super-resolution image reconstruction. We demonstrate accurate reconstruction of wide field-of-view, high-resolution phase images using only a few multiplexed measurements. LCNF robustly captures the continuous object priors and eliminates various phase artifacts, even when it is trained on imperfect datasets. The framework exhibits strong generalization, reconstructing diverse objects even with limited training data. Furthermore, LCNF can be trained on a physics simulator using natural images and successfully applied to experimental measurements on biological samples. Our results highlight the potential of LCNF for solving large-scale inverse problems in computational imaging, with broad applicability in various deep-learning-based techniques.
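The core idea of a conditional neural field, a decoder that maps a continuous coordinate plus a local latent code to a value, can be sketched in a few lines; the tiny numpy MLP, its dimensions, and the random weights below are illustrative assumptions, not the LCNF architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def decoder(coords, latents, W1, b1, W2, b2):
    """Minimal conditional-neural-field decoder: an MLP that maps a
    continuous coordinate plus a *local* latent vector (produced by an
    encoder from the measurements) to a reconstructed value."""
    x = np.concatenate([coords, latents], axis=-1)  # condition on latent
    h = np.maximum(0.0, x @ W1 + b1)                # ReLU hidden layer
    return h @ W2 + b2                              # predicted value

coord_dim, latent_dim, hidden = 2, 8, 16
W1 = rng.normal(size=(coord_dim + latent_dim, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, 1))
b2 = np.zeros(1)

# query the field at arbitrary continuous coordinates; because the
# input is continuous, the output grid is not tied to sensor pixels
coords = rng.uniform(0, 1, size=(5, coord_dim))
latents = rng.normal(size=(5, latent_dim))  # one local latent per query
out = decoder(coords, latents, W1, b1, W2, b2)
print(out.shape)  # (5, 1)
```

Conditioning on a local latent, rather than one global code per object, is what lets such a decoder generalize across objects and scale to large fields of view.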


Channel Measurement, Modeling, and Simulation for 6G: A Survey and Tutorial

May 26, 2023
Jianhua Zhang, Jiaxin Lin, Pan Tang, Yuxiang Zhang, Huixin Xu, Tianyang Gao, Haiyang Miao, Zeyong Chai, Zhengfu Zhou, Yi Li, Huiwen Gong, Yameng Liu, Zhiqiang Yuan, Ximan Liu, Lei Tian, Shaoshi Yang, Liang Xia, Guangyi Liu, Ping Zhang

Technology research and standardization work for the sixth generation (6G) has been carried out worldwide. Channel research is a prerequisite for 6G technology evaluation and optimization. This paper presents a survey and tutorial on channel measurement, modeling, and simulation for 6G. We first highlight the channel-related challenges facing 6G systems, including higher frequency bands, extremely large antenna arrays, new technology combinations, and diverse application scenarios. A review of channel measurement and modeling for four possible 6G enabling technologies is then presented: terahertz communication, massive multiple-input multiple-output communication, joint communication and sensing, and reconfigurable intelligent surfaces. Finally, we introduce a 6G channel simulation platform and provide examples of its implementation. The goal of this paper is to help both professionals and non-professionals follow the progress of 6G channel research, understand the 6G channel models, and use them for 6G simulation.

* 37 pages, 30 figures 

3GPP-Like THz Channel Modeling for Indoor Office and Urban Microcellular Scenarios

May 24, 2023
Zhaowei Chang, Jianhua Zhang, Pan Tang, Lei Tian, Yadong Yang, Jiaxin Lin, Guangyi Liu


Terahertz (THz) communication is envisioned as a candidate technology for the sixth-generation (6G) communication system. THz channel propagation characteristics are the basis for designing and evaluating THz communication systems. In this paper, THz channel measurements at 100 GHz and 132 GHz are conducted in an indoor office scenario and an urban microcellular (UMi) scenario, respectively. Based on the measurements, 3GPP-like channel parameters are extracted and analyzed. Moreover, the parameter models enable simulation of the channel impulse response with the geometry-based stochastic model (GBSM). Then, comparisons between the measurement-based parameter models and the 3rd Generation Partnership Project (3GPP) channel models are investigated. It is observed that path loss approaching the free-space value can occur in the NLoS scenario. Besides, the cluster numbers are 4 (LoS) and 5 (NLoS) in the indoor office and 4 (LoS) and 3 (NLoS) in the UMi scenario, which are much smaller than in the 3GPP models. The multipath components (MPCs) in the THz channel are distributed more simply and sparsely than in the 3GPP millimeter-wave (mm-wave) channel models. Furthermore, the ergodic capacities of the mm-wave and THz channels are evaluated by the proposed THz GBSM implementation framework. The THz measurement-based model predicts the smallest capacity, indicating that at high carrier frequencies propagation is largely limited to the single transmission mechanism of reflection, which reduces the cluster numbers and the ergodic capacity. Generally, these results are helpful for understanding and modeling the THz channel and applying the THz communication technique to 6G.

* 13 pages, 12 figures, 3 tables 
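For readers unfamiliar with the GBSM procedure mentioned above, a toy version of generating a cluster-based channel impulse response might look like the following; all parameter values and the exponential delay/power assumptions are illustrative, not the calibrated models extracted in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def gbsm_cir(n_clusters, rays_per_cluster, delay_spread, decay_db_per_ns):
    """Toy GBSM-style channel impulse response: clusters with
    exponentially distributed delays and exponentially decaying powers,
    each made of several rays with random phases."""
    taus = np.sort(rng.exponential(delay_spread, n_clusters))  # cluster delays (ns)
    powers = 10 ** (-decay_db_per_ns * taus / 10)              # power decays with delay
    powers /= powers.sum()                                     # normalize total power to 1
    delays, gains = [], []
    for tau, p in zip(taus, powers):
        for _ in range(rays_per_cluster):
            phase = rng.uniform(0, 2 * np.pi)                  # random ray phase
            delays.append(tau + rng.exponential(1.0))          # intra-cluster delay spread
            gains.append(np.sqrt(p / rays_per_cluster) * np.exp(1j * phase))
    return np.array(delays), np.array(gains)

delays, gains = gbsm_cir(n_clusters=4, rays_per_cluster=5,
                         delay_spread=30.0, decay_db_per_ns=0.2)
print(len(delays), round(float(np.sum(np.abs(gains) ** 2)), 6))  # 20 1.0
```

The measurement-driven part of the paper amounts to fitting the statistics that drive such a generator (cluster number, delay spread, power decay) at 100 and 132 GHz, which is why the small measured cluster counts directly shrink the simulated capacity.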

Robust single-shot 3D fluorescence imaging in scattering media with a simulator-trained neural network

Mar 22, 2023
Jeffrey Alido, Joseph Greene, Yujia Xue, Guorong Hu, Yunzhe Li, Kevin J. Monk, Brett T. DeBenedicts, Ian G. Davison, Lei Tian


Imaging through scattering is a pervasive and difficult problem in many biological applications. The high background and the exponentially attenuated target signals due to scattering fundamentally limit the imaging depth of fluorescence microscopy. Light-field systems are favorable for high-speed volumetric imaging, but the 2D-to-3D reconstruction is fundamentally ill-posed, and scattering further worsens the conditioning of the inverse problem. Here, we develop a scattering simulator that models low-contrast target signals buried in a strong, heterogeneous background. We then train a deep neural network solely on synthetic data to descatter and reconstruct a 3D volume from a single-shot light-field measurement with a low signal-to-background ratio (SBR). We apply this network to our previously developed Computational Miniature Mesoscope and demonstrate the robustness of our deep learning algorithm on a 75-micron-thick fixed mouse brain section and on bulk scattering phantoms with different scattering conditions. The network can robustly reconstruct emitters in 3D from a 2D measurement with an SBR as low as 1.05 and at depths up to a scattering length. We analyze fundamental tradeoffs based on network design factors and out-of-distribution data that affect the deep learning model's generalizability to real experimental data. Broadly, we believe that our simulator-based deep learning approach can be applied to a wide range of imaging-through-scattering techniques where experimental paired training data is lacking.
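For reference, the signal-to-background ratio can be computed from an image and a known emitter mask as follows; this is one common convention and may differ in detail from the paper's exact definition:

```python
import numpy as np

def signal_to_background(image, signal_mask):
    """Signal-to-background ratio: mean intensity on the (known)
    emitter pixels divided by the mean background intensity."""
    signal = image[signal_mask].mean()
    background = image[~signal_mask].mean()
    return signal / background

img = np.full((8, 8), 100.0)           # strong, roughly uniform background
mask = np.zeros((8, 8), dtype=bool)
mask[4, 4] = True
img[4, 4] = 105.0                      # emitter barely above background
print(round(signal_to_background(img, mask), 3))  # 1.05
```

An SBR of 1.05, as in this toy image, means the emitter is only 5% brighter than the scattered background, which is what makes the descattering problem so ill-conditioned.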


Roadmap on Deep Learning for Microscopy

Mar 07, 2023
Giovanni Volpe, Carolina Wählby, Lei Tian, Michael Hecht, Artur Yakimovich, Kristina Monakhova, Laura Waller, Ivo F. Sbalzarini, Christopher A. Metzler, Mingyang Xie, Kevin Zhang, Isaac C. D. Lenton, Halina Rubinsztein-Dunlop, Daniel Brunner, Bijie Bai, Aydogan Ozcan, Daniel Midtvedt, Hao Wang, Nataša Sladoje, Joakim Lindblad, Jason T. Smith, Marien Ochoa, Margarida Barroso, Xavier Intes, Tong Qiu, Li-Yu Yu, Sixian You, Yongtao Liu, Maxim A. Ziatdinov, Sergei V. Kalinin, Arlo Sheridan, Uri Manor, Elias Nehme, Ofri Goldenberg, Yoav Shechtman, Henrik K. Moberg, Christoph Langhammer, Barbora Špačková, Saga Helgadottir, Benjamin Midtvedt, Aykut Argun, Tobias Thalheim, Frank Cichos, Stefano Bo, Lars Hubatsch, Jesus Pineda, Carlo Manzo, Harshith Bachimanchi, Erik Selander, Antoni Homs-Corbera, Martin Fränzl, Kevin de Haan, Yair Rivenson, Zofia Korczak, Caroline Beck Adiels, Mite Mijalkov, Dániel Veréb, Yu-Wei Chang, Joana B. Pereira, Damian Matuszewski, Gustaf Kylberg, Ida-Maria Sintorn, Juan C. Caicedo, Beth A Cimini, Muyinatu A. Lediju Bell, Bruno M. Saraiva, Guillaume Jacquemet, Ricardo Henriques, Wei Ouyang, Trang Le, Estibaliz Gómez-de-Mariscal, Daniel Sage, Arrate Muñoz-Barrutia, Ebba Josefson Lindqvist, Johanna Bergman


Through digital imaging, microscopy has evolved from primarily being a means for visual observation of life at the micro- and nano-scale, to a quantitative tool with ever-increasing resolution and throughput. Artificial intelligence, deep neural networks, and machine learning are all niche terms describing computational methods that have gained a pivotal role in microscopy-based research over the past decade. This Roadmap is written collectively by prominent researchers and encompasses selected aspects of how machine learning is applied to microscopy image data, with the aim of gaining scientific knowledge by improved image quality, automated detection, segmentation, classification and tracking of objects, and efficient merging of information from multiple imaging modalities. We aim to give the reader an overview of the key developments and an understanding of possibilities and limitations of machine learning for microscopy. It will be of interest to a wide cross-disciplinary audience in the physical sciences and life sciences.


Channel Sparsity Variation and Model-Based Analysis on 6, 26, and 132 GHz Measurements

Feb 17, 2023
Ximan Liu, Jianhua Zhang, Pan Tang, Lei Tian, Harsh Tataria, Shu Sun, Mansoor Shafi


In this paper, the level of sparsity is examined at 6, 26, and 132 GHz carrier frequencies by conducting channel measurements in an indoor office environment. Using the Gini index (a value between 0 and 1) as a metric for characterizing sparsity, we show that increasing the carrier frequency leads to increased levels of sparsity. The measured channel impulse responses are used to derive a Third-Generation Partnership Project (3GPP)-style propagation model, which is then used to calculate the Gini index and compare the channel sparsity between measurement and 3GPP-based simulation. Our results show that the mean value of the Gini index in measurement is over twice the value in simulation, implying that the 3GPP channel model does not capture the effects of sparsity in the delay domain as frequency increases. In addition, a new intra-cluster power allocation model based on the measurements is proposed to characterize the effects of sparsity in the delay domain of the 3GPP channel model. The accuracy of the proposed model is analyzed using theoretical derivations and simulations. Using the derived intra-cluster power allocation model, the mean value of the Gini index is 0.97, while the spread of variability is restricted to 0.01, demonstrating that the proposed model is suitable for 3GPP-type channels. To the best of our knowledge, this paper is the first to perform measurements and analysis at three different frequencies for the evaluation of channel sparsity in the same environment.
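The Gini index of a power delay profile, the sparsity metric used here, has a standard closed form on the sorted tap powers; a minimal sketch:

```python
import numpy as np

def gini_index(powers):
    """Gini index of a power delay profile: 0 for perfectly even power
    across taps, approaching 1 when all power sits in a single tap."""
    x = np.sort(np.asarray(powers, dtype=float))  # ascending
    n = x.size
    i = np.arange(1, n + 1)
    return float(2 * np.sum(i * x) / (n * x.sum()) - (n + 1) / n)

print(gini_index([1, 1, 1, 1]))            # 0.0  : dense channel
print(round(gini_index([0, 0, 0, 1]), 2))  # 0.75 : sparse channel
```

With this convention, the paper's measured mean Gini of 0.97 indicates that almost all received power is concentrated in a handful of delay taps.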


Cross-Domain Label Propagation for Domain Adaptation with Discriminative Graph Self-Learning

Feb 17, 2023
Lei Tian, Yongqiang Tang, Liangchen Hu, Wensheng Zhang


Domain adaptation aims to transfer the knowledge of well-labeled source data to unlabeled target data. Many recent efforts focus on improving the prediction accuracy of target pseudo-labels to reduce conditional distribution shift. In this paper, we propose a novel domain adaptation method, which infers target pseudo-labels through cross-domain label propagation, such that the underlying manifold structure of the data in both domains can be exploited. Unlike existing cross-domain label propagation methods that separate domain-invariant feature learning, affinity matrix construction, and target label inference into three independent stages, we propose to integrate them into a unified optimization framework. In this way, the three parts can boost each other from an iterative optimization perspective, and thus more effective knowledge transfer can be achieved. Furthermore, to construct a high-quality affinity matrix, we propose a discriminative graph self-learning strategy, which can not only adaptively capture the inherent similarity of the data from the two domains but also effectively exploit the discriminative information contained in well-labeled source data and pseudo-labeled target data. An efficient iterative optimization algorithm is designed to solve the objective function of our proposal. Notably, the proposed method can be extended to semi-supervised domain adaptation in a simple but effective way, and the corresponding optimization problem can be solved with the same algorithm. Extensive experiments on six standard datasets verify the significant superiority of our proposal in both unsupervised and semi-supervised domain adaptation settings.
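The cross-domain label propagation step can be illustrated with the classic iteration F <- alpha*S*F + (1-alpha)*Y on a joint source-target affinity graph; this standalone sketch deliberately omits the unified feature-learning and graph self-learning objective that the paper integrates it with:

```python
import numpy as np

def label_propagation(S, Y, alpha=0.9, n_iter=100):
    """Generic label propagation on an affinity graph.
    S: row-normalized affinity matrix over source + target samples.
    Y: one-hot labels for source rows, zeros for target rows.
    Iterates F <- alpha * S @ F + (1 - alpha) * Y to convergence."""
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * Y
    return F

# 3 samples: two labeled source points, one unlabeled target point
# that has higher affinity to the class-0 source sample
S = np.array([[0.0, 0.2, 0.8],
              [0.2, 0.0, 0.8],
              [0.7, 0.3, 0.0]])
Y = np.array([[1.0, 0.0],    # source sample, class 0
              [0.0, 1.0],    # source sample, class 1
              [0.0, 0.0]])   # target sample, unlabeled
F = label_propagation(S, Y)
print(F[2].argmax(), F[2, 0] > F[2, 1])  # 0 True
```

The target row ends up with the largest score for class 0, since label mass flows preferentially along the stronger affinity edge; the paper's contribution is learning S and the features jointly rather than fixing them in advance.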
