Tingting Liu

Super-resolution imaging through a multimode fiber: the physical upsampling of speckle-driven

Jul 11, 2023
Chuncheng Zhang, Tingting Liu, Zhihua Xie, Yu Wang, Tong Liu, Qian Chen, Xiubao Sui

Following recent advancements in multimode fiber (MMF) technology, miniaturized imaging endoscopes have proven crucial for minimally invasive surgery in vivo. Recent progress in super-resolution imaging methods built on data-driven deep learning (DL) frameworks has balanced the trade-off between core size and resolution. However, most DL approaches pay little attention to the physical properties of the speckle, which are crucial for reconciling the magnification of super-resolution imaging with the quality of the reconstruction. In this paper, we find that the interferometric process of speckle formation is an essential basis for building DL models for super-resolution imaging: it physically realizes the upsampling of low-resolution (LR) images and enhances the perceptual capability of the models. Our experiments validate the role played by this speckle-driven physical upsampling, which effectively compensates for the information that purely data-driven approaches lack. Experimentally, we overcome the poor reconstruction quality at high magnification by feeding the model speckle patterns of the same size as the high-resolution (HR) image. The guidance our research offers for endoscopic imaging may accelerate the further development of minimally invasive surgery.
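
The "physical upsampling" idea can be illustrated with a toy sketch (not the authors' code; the random transmission matrix and all sizes are invented stand-ins): modeling the MMF as a complex transmission matrix, mode interference spreads a low-resolution object over a speckle pattern whose pixel grid already matches the HR target, so the network input requires no learned upsampling layer.

```python
import numpy as np

rng = np.random.default_rng(0)

hr_size = 32   # side length of the high-resolution target grid
lr_size = 8    # effective resolution the small fiber core can carry

# Toy object at low resolution (what the fiber can actually resolve).
obj = rng.random(lr_size * lr_size)

# Stand-in for the fiber's complex transmission matrix: interference of
# many modes maps every object pixel onto every speckle-camera pixel.
T = (rng.normal(size=(hr_size**2, lr_size**2))
     + 1j * rng.normal(size=(hr_size**2, lr_size**2)))

# The camera records speckle intensity; its size already matches the HR
# grid, so the interferometric process itself performs the upsampling.
speckle = np.abs(T @ obj) ** 2
speckle_img = speckle.reshape(hr_size, hr_size)

print(speckle_img.shape)  # (32, 32): network input size == HR output size
```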

Hyperspectral Image Super-Resolution via Dual-domain Network Based on Hybrid Convolution

Apr 20, 2023
Tingting Liu, Yuan Liu, Chuncheng Zhang, Yuan Liyin, Xiubao Sui, Qian Chen

Since the amount of incident energy is limited, it is difficult to directly acquire hyperspectral images (HSI) with high spatial resolution. Given the high dimensionality and inter-band correlation of HSI, super-resolution (SR) of HSI remains a challenge in the absence of auxiliary high-resolution images. It is therefore important to extract spatial features effectively and make full use of the spectral information. This paper proposes a novel HSI super-resolution algorithm, termed dual-domain network based on hybrid convolution (SRDNet). Specifically, a dual-domain network is designed to fully exploit the spatial-spectral and frequency information of the hyperspectral data. To capture inter-spectral self-similarity, a self-attention learning mechanism (HSL) is devised in the spatial domain. Meanwhile, a pyramid structure is applied to enlarge the receptive field of the attention, which further strengthens the feature representation ability of the network. Moreover, to improve the perceptual quality of the HSI, a frequency loss (HFL) is introduced to optimize the model in the frequency domain. A dynamic weighting mechanism drives the network to gradually refine the generated frequency components and to reduce the excessive smoothing caused by the spatial loss. Finally, to fully capture the mapping between the low-resolution and high-resolution spaces, a hybrid module of 2D and 3D units with a progressive upsampling strategy is employed. Experiments on a widely used benchmark dataset show that the proposed SRDNet enhances the texture information of HSI and outperforms state-of-the-art methods.
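
A frequency-domain loss of this kind can be sketched minimally as below (a stand-in, not the paper's HFL; the linear warm-up weight is an invented proxy for the dynamic weighting mechanism): the loss compares FFT spectra of prediction and target, penalizing the high-frequency detail that a purely spatial loss tends to smooth away.

```python
import numpy as np

def frequency_loss(pred, target):
    """L1 distance between 2-D FFT spectra of prediction and target."""
    return np.mean(np.abs(np.fft.fft2(pred) - np.fft.fft2(target)))

def total_loss(pred, target, step, warmup=1000):
    """Spatial L1 plus a frequency term whose weight grows with training
    (a simple stand-in for a dynamic weighting mechanism)."""
    w = min(1.0, step / warmup)
    spatial = np.mean(np.abs(pred - target))
    return spatial + w * frequency_loss(pred, target)

rng = np.random.default_rng(0)
hr = rng.random((16, 16))
# An over-smoothed guess, mimicking what a spatial-only loss produces.
blurred = (hr + np.roll(hr, 1, 0) + np.roll(hr, 1, 1)) / 3.0

print(total_loss(blurred, hr, step=500))
```

Early in training the spatial term dominates; as the weight ramps up, the frequency term increasingly penalizes the missing detail in the blurred estimate.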

Low-Light Image Enhancement by Learning Contrastive Representations in Spatial and Frequency Domains

Mar 23, 2023
Yi Huang, Xiaoguang Tu, Gui Fu, Tingting Liu, Bokai Liu, Ming Yang, Ziliang Feng

Images taken under low-light conditions tend to suffer from poor visibility, which degrades image quality and can even hurt the performance of downstream tasks. It is hard for a CNN-based method to learn generalized features that can recover normal images from those captured under various unknown low-light conditions. In this paper, we propose to incorporate contrastive learning into an illumination correction network, learning abstract representations that distinguish various low-light conditions in the representation space, with the goal of enhancing the generalizability of the network. Since lighting conditions change the frequency components of images, the representations are learned and compared in both the spatial and frequency domains to take full advantage of contrastive learning. The proposed method is evaluated on the LOL and LOL-V2 datasets; the results show that it achieves better qualitative and quantitative results than other state-of-the-art methods.
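
Contrasting representations in both domains can be sketched as follows (an illustrative toy, not the paper's network: the "encoder" here is just raw pixels and FFT magnitudes, where the real method would use learned encoders, and a standard InfoNCE loss stands in for their objective):

```python
import numpy as np

def l2_normalize(x):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)

def info_nce(anchor, positive, negatives, tau=0.1):
    """Standard InfoNCE: pull the positive pair together in representation
    space while pushing the negatives away."""
    a = l2_normalize(anchor)
    pos = np.exp(a @ l2_normalize(positive) / tau)
    neg = np.exp(a @ l2_normalize(negatives).T / tau).sum()
    return -np.log(pos / (pos + neg))

def dual_domain_features(img):
    """Toy dual-domain 'encoder': spatial features are the raw pixels,
    frequency features the FFT magnitudes."""
    return img.ravel(), np.abs(np.fft.fft2(img)).ravel()

rng = np.random.default_rng(0)
img = rng.random((8, 8))
dark = img * 0.2                  # same scene under lower illumination
others = rng.random((3, 8, 8))    # unrelated scenes as negatives

s_a, f_a = dual_domain_features(img)
s_p, f_p = dual_domain_features(dark)
s_n = np.stack([dual_domain_features(o)[0] for o in others])
f_n = np.stack([dual_domain_features(o)[1] for o in others])

# Contrast in both the spatial and the frequency domain.
loss = info_nce(s_a, s_p, s_n) + info_nce(f_a, f_p, f_n)
print(float(loss))
```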

Understanding Long Programming Languages with Structure-Aware Sparse Attention

May 27, 2022
Tingting Liu, Chengyu Wang, Cen Chen, Ming Gao, Aoying Zhou

Programming-based Pre-trained Language Models (PPLMs) such as CodeBERT have achieved great success in many downstream code-related tasks. Since the memory and computational complexity of self-attention in the Transformer grow quadratically with the sequence length, PPLMs typically limit the code length to 512 tokens. However, code in real-world applications, such as code search corpora, is generally long and cannot be processed efficiently by existing PPLMs. To solve this problem, we present SASA, a Structure-Aware Sparse Attention mechanism that reduces complexity and improves performance on long-code understanding tasks. The key components of SASA are top-$k$ sparse attention and Abstract Syntax Tree (AST)-based structure-aware attention. With top-$k$ sparse attention, the most crucial attention relations can be obtained at a lower computational cost. As the code structure represents the logic of the code statements and complements the sequential characteristics of code, we further introduce AST structures into attention. Extensive experiments on CodeXGLUE tasks show that SASA achieves better performance than the competing baselines.
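
The top-$k$ component can be sketched in a few lines (a dense toy for clarity, not the paper's implementation, which would avoid materializing the full score matrix; the AST-based pattern is only noted in a comment):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def topk_sparse_attention(Q, K, V, k):
    """Each query token attends only to its k highest-scoring keys; all
    other scores are masked to -inf before the softmax. SASA additionally
    combines this with an AST-based pattern so structurally related tokens
    also attend to each other."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    masked = np.full_like(scores, -np.inf)
    idx = np.argpartition(scores, -k, axis=-1)[:, -k:]  # top-k key indices
    rows = np.arange(scores.shape[0])[:, None]
    masked[rows, idx] = scores[rows, idx]
    return softmax(masked) @ V

rng = np.random.default_rng(0)
n, d = 16, 8                                  # toy sequence of 16 tokens
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))

sparse_out = topk_sparse_attention(Q, K, V, k=4)
dense_out = softmax(Q @ K.T / np.sqrt(d)) @ V  # full attention, for reference
print(sparse_out.shape)  # (16, 8)
```

With k equal to the sequence length, the sparse variant reduces to ordinary dense attention, which is a convenient sanity check.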

* SIGIR 2022 accepted; code will be available at https://github.com/alibaba/EasyNLP 
Empathic Conversations: A Multi-level Dataset of Contextualized Conversations

May 25, 2022
Damilola Omitaomu, Shabnam Tafreshi, Tingting Liu, Sven Buechel, Chris Callison-Burch, Johannes Eichstaedt, Lyle Ungar, João Sedoc

Empathy is a cognitive and emotional reaction to an observed situation of others. It has recently attracted interest because of its numerous applications in psychology and AI, but it is unclear how different forms of empathy (e.g., self-report vs. counterpart other-report, concern vs. distress) interact with other affective phenomena or with demographics such as gender and age. To better understand this, we created the {\it Empathic Conversations} dataset of annotated negative, empathy-eliciting dialogues in which pairs of participants converse about news articles. People differ in their perception of the empathy of others, and these differences are associated with characteristics such as personality and demographics. Hence, we collected detailed characterizations of the participants' traits, their self-reported empathic responses to news articles, their conversational partners' other-reports, and turn-by-turn third-party assessments of the level of self-disclosure, emotion, and empathy expressed. This dataset is the first to present empathy in multiple forms along with personal distress, emotion, personality characteristics, and person-level demographic information. We present baseline models for predicting some of these features from conversations.

* 21 pages 
Providing Location Information at Edge Networks: A Federated Learning-Based Approach

May 17, 2022
Xin Cheng, Tingting Liu, Feng Shu, Chuan Ma, Jun Li, Jiangzhou Wang

Recently, the development of mobile edge computing has enabled exciting edge artificial intelligence (AI) with fast response and low communication cost. The location information of edge devices is essential to support edge AI in many scenarios, such as smart homes, intelligent transportation systems, and integrated health care. Taking advantage of deep learning, centralized machine learning (ML)-based positioning techniques have received considerable attention from both academia and industry. However, potential issues such as location information leakage and huge data traffic limit their application. Fortunately, a newly emerging privacy-preserving distributed ML mechanism, federated learning (FL), is expected to alleviate these concerns. In this article, we illustrate a framework for an FL-based localization system and the entities involved at edge networks, and elaborate on the advantages of such a system. For its practical implementation, we investigate field-specific issues along with system-level solutions, which are further demonstrated on a real-world database. Finally, challenging open problems in this field are outlined.
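
The privacy argument can be made concrete with a minimal FedAvg-style sketch (an illustrative toy, not the article's system: the linear RSSI-to-position model, device counts, and training schedule are all invented): each device trains locally and shares only model weights, never its raw location measurements.

```python
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=20):
    """Each edge device fits a linear fingerprint->position model on its
    own data; only the weights (never the measurements) leave the device."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w = w - lr * grad
    return w

def fed_avg(w_locals, n_samples):
    """Server aggregates device models, weighted by local dataset size."""
    total = sum(n_samples)
    return sum(w * n / total for w, n in zip(w_locals, n_samples))

rng = np.random.default_rng(0)
w_true = np.array([0.5, -1.0, 2.0])   # hidden fingerprint-to-location map
devices = []
for _ in range(5):                    # 5 edge devices with private data
    X = rng.normal(size=(40, 3))
    y = X @ w_true + 0.01 * rng.normal(size=40)
    devices.append((X, y))

w_global = np.zeros(3)
for _ in range(10):                   # communication rounds
    w_locals = [local_step(w_global.copy(), X, y) for X, y in devices]
    w_global = fed_avg(w_locals, [len(X) for X, _ in devices])

print(np.round(w_global, 2))
```

After a few rounds the aggregated model recovers the shared mapping even though no device ever uploads its data.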

EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language Processing

Apr 30, 2022
Chengyu Wang, Minghui Qiu, Taolin Zhang, Tingting Liu, Lei Li, Jianing Wang, Ming Wang, Jun Huang, Wei Lin

The success of Pre-Trained Models (PTMs) has reshaped the development of Natural Language Processing (NLP). Yet, it is not easy for industrial practitioners to obtain high-performing models and deploy them online. To bridge this gap, EasyNLP is designed to make it easy to build NLP applications and supports a comprehensive suite of NLP algorithms. It further features knowledge-enhanced pre-training, knowledge distillation, and few-shot learning functionalities for large-scale PTMs, and provides a unified framework for model training, inference, and deployment in real-world applications. Currently, EasyNLP powers over ten business units within Alibaba Group and is seamlessly integrated into the Platform of AI (PAI) products on Alibaba Cloud. The source code of our EasyNLP toolkit is released on GitHub (https://github.com/alibaba/EasyNLP).

* 8 pages 
Cross-Platform Difference in Facebook and Text Messages Language Use: Illustrated by Depression Diagnosis

Feb 09, 2022
Tingting Liu, Salvatore Giorgi, Xiangyu Tao, Douglas Bellew, Brenda Curtis, Lyle Ungar

How does language differ between one's Facebook status updates and one's text messages (SMS)? In this study, we show how Facebook and SMS use differ in psycho-linguistic characteristics and how these differences drive downstream analyses, illustrated with depression diagnosis. We use a sample of consenting participants who shared Facebook status updates and SMS data and answered a standard psychological depression screener. We quantify domain differences using psychologically driven lexical methods and find that language on Facebook involves more personal concerns, experiences, and content features, while language in SMS contains more informal and style features. Next, we estimate depression from both text domains using a depression model trained on Facebook data, and find a drop in accuracy when predicting self-reported depression assessments from the SMS-based estimates. Finally, we evaluate a simple domain-adaptation correction based on the words driving the cross-platform differences and apply it to the SMS-derived depression estimates, yielding a significant improvement in prediction. Our work shows the difference between Facebook and SMS language use and suggests the necessity of cross-domain adaptation for text-based predictions.
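
The flavor of such a lexical correction can be sketched as follows (a toy, not the paper's method: the lexicon weights, marker words, and example message are all invented): a Facebook-trained word-weight model scores SMS text, and the correction discounts words that mainly mark the SMS register rather than mental state.

```python
from collections import Counter

# Hypothetical lexicon weights from a Facebook-trained depression model.
weights = {"alone": 0.9, "tired": 0.6, "happy": -0.7, "lol": -0.2, "u": 0.3}

def score(text):
    """Length-normalized weighted word count, as in lexical models."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return sum(weights.get(w, 0.0) * c for w, c in counts.items()) / total

# Words that mainly mark the SMS register (informal/style features)
# rather than the writer's mental state.
domain_markers = {"lol", "u"}

def corrected_score(text):
    """Drop the platform-marker words before scoring: a minimal stand-in
    for a cross-domain adaptation correction."""
    kept = [w for w in text.lower().split() if w not in domain_markers]
    return score(" ".join(kept))

sms = "u up lol i feel so alone and tired lol"
print(score(sms), corrected_score(sms))
```

In this toy example the register markers dilute and skew the raw score, and stripping them shifts the estimate toward what the content words alone imply.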

* 5 pages, 1 figure 
Federated Learning Based Proactive Handover in Millimeter-wave Vehicular Networks

Jan 18, 2021
Kaiqiang Qi, Tingting Liu, Chenyang Yang

Proactive handover can avoid frequent handovers and reduce handover delay, playing an important role in maintaining the quality of service (QoS) of mobile users in millimeter-wave vehicular networks. To reduce the communication cost of training the learning model for proactive handover, we propose a federated learning (FL) framework. The proposed framework accommodates the limited storage capacity of each user, increases the number of users who participate in FL, and adapts to dynamic mobility patterns. Simulation results validate the effectiveness of the proposed FL framework. Compared with reactive handover schemes, the proposed handover scheme reduces unnecessary handovers and improves the QoS of users simultaneously.
