Jie Zheng

Large Language Models for Automated Open-domain Scientific Hypotheses Discovery

Sep 06, 2023
Zonglin Yang, Xinya Du, Junxian Li, Jie Zheng, Soujanya Poria, Erik Cambria

Hypothetical induction is recognized as the main reasoning type when scientists make observations about the world and try to propose hypotheses to explain those observations. Past research on hypothetical induction is limited in that (1) the observation annotations in existing datasets are manually selected sentences rather than raw web corpora (resulting in a closed-domain setting), and (2) the ground-truth hypothesis annotations are mostly commonsense knowledge, making the task less challenging. In this work, we propose the first NLP dataset for social science academic hypothesis discovery, consisting of 50 recent papers published in top social science journals. The raw web corpora needed to develop the hypotheses in the published papers are also collected in the dataset, with the final goal of creating a system that automatically generates valid, novel, and helpful (to human researchers) hypotheses given only a pile of raw web corpora. The new dataset addresses the previous problems because it requires (1) using raw web corpora as observations and (2) proposing hypotheses that are new even to humanity. We develop a multi-module framework for the task, along with three different feedback mechanisms that empirically show performance gains over the base framework. Finally, our framework exhibits high performance in both GPT-4-based evaluation and social science expert evaluation.
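The multi-module framework with feedback described above can be sketched as a propose-critique-refine loop. The functions below are toy stand-ins (not the paper's GPT-4-based modules), just to show the control flow:

```python
# Sketch of a propose-critique-refine loop in the spirit of the paper's
# multi-module framework with feedback. The proposer and critic here are
# toy stand-ins, not the paper's LLM-based modules.

def propose(corpus_notes):
    # Toy proposer: joins two observations into a candidate hypothesis.
    return "higher " + corpus_notes[0] + " is associated with " + corpus_notes[1]

def critique(hypothesis):
    # Toy critic: returns a (possibly empty) list of feedback strings.
    return [] if "associated" in hypothesis else ["state an association, not causation"]

def refine(hypothesis, feedback):
    # Toy refiner: folds the feedback back into the hypothesis text.
    return hypothesis + " (" + "; ".join(feedback) + ")"

def discover(corpus_notes, max_rounds=3):
    hypothesis = propose(corpus_notes)
    for _ in range(max_rounds):
        feedback = critique(hypothesis)
        if not feedback:   # validity/novelty/helpfulness checks pass
            break
        hypothesis = refine(hypothesis, feedback)
    return hypothesis

print(discover(["social media use", "loneliness"]))
```

In the paper's setting the proposer consumes raw web corpora and the feedback comes from the three mechanisms evaluated against the base framework; here everything is deterministic string manipulation.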

Biological Factor Regulatory Neural Network

Apr 11, 2023
Xinnan Dai, Caihua Shan, Jie Zheng, Xiaoxiao Li, Dongsheng Li

Genes are fundamental for analyzing biological systems, and many recent works have proposed to utilize gene expression for various biological tasks via deep learning models. Despite their promising performance, deep neural networks struggle to provide biological insights for humans due to their black-box nature. Recently, some works have integrated biological knowledge with neural networks to improve the transparency and performance of their models. However, these methods can incorporate only partial biological knowledge, leading to suboptimal performance. In this paper, we propose the Biological Factor Regulatory Neural Network (BFReg-NN), a generic framework to model relations among biological factors in cell systems. BFReg-NN starts from gene expression data and is capable of merging most existing biological knowledge into the model, including the regulatory relations among genes or proteins (e.g., gene regulatory networks (GRN), protein-protein interaction networks (PPI)) and the hierarchical relations among genes, proteins and pathways (e.g., several genes/proteins are contained in a pathway). Moreover, BFReg-NN also has the ability to provide new biologically meaningful insights because of its white-box characteristics. Experimental results on different gene expression-based tasks verify the superiority of BFReg-NN compared with baselines. Our case studies also show that the key insights found by BFReg-NN are consistent with the biological literature.
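One way to read the core idea (our illustration, not the authors' code) is a layer whose weight matrix is masked by known regulatory relations, so information flows only along edges of, say, a gene regulatory network. The mask below is hypothetical:

```python
import numpy as np

# Sketch of a knowledge-masked layer in the spirit of BFReg-NN (our
# reading, not the authors' implementation): weights for biologically
# impossible gene->protein edges are zeroed out by a relation mask.

rng = np.random.default_rng(0)
n_genes, n_proteins = 4, 3

# Hypothetical relation mask: mask[i, j] = 1 iff gene i is known to
# regulate or encode protein j (e.g., from GRN/PPI databases).
mask = np.array([[1, 0, 0],
                 [1, 1, 0],
                 [0, 1, 1],
                 [0, 0, 1]])

weights = rng.normal(size=(n_genes, n_proteins))

def masked_layer(gene_expr):
    # Project gene expression to protein activity along allowed edges only.
    return np.maximum(gene_expr @ (weights * mask), 0.0)  # ReLU

protein_activity = masked_layer(rng.normal(size=(2, n_genes)))
print(protein_activity.shape)  # (2, 3)
```

Because each surviving weight corresponds to a known biological edge, the learned magnitudes can be inspected directly, which is the white-box property the abstract refers to.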

Constrained Reinforcement Learning for Short Video Recommendation

May 26, 2022
Qingpeng Cai, Ruohan Zhan, Chi Zhang, Jie Zheng, Guangwei Ding, Pinghua Gong, Dong Zheng, Peng Jiang

The wide popularity of short videos on social media poses new opportunities and challenges for optimizing recommender systems on video-sharing platforms. Users provide complex and multi-faceted responses to recommendations, including watch time and various types of interactions with videos. As a result, established recommendation algorithms that target a single objective are not adequate for optimizing comprehensive user experiences. In this paper, we formulate short video recommendation as a constrained Markov Decision Process (MDP), where platforms want to optimize the main goal of user watch time in the long term, under the constraint of accommodating auxiliary responses such as sharing/downloading videos. To solve the constrained MDP, we propose a two-stage reinforcement learning approach based on an actor-critic framework. At stage one, we learn individual policies to optimize each auxiliary response. At stage two, we learn a policy to (i) optimize the main response and (ii) stay close to the policies learned at the first stage, which effectively guarantees the performance of this main policy on the auxiliaries. Through extensive simulations, we demonstrate the effectiveness of our approach over alternatives in both optimizing the main goal and balancing the others. We further show the advantage of our approach in live experiments of short video recommendation, where it significantly outperforms other baselines in terms of watch time and interactions from video views. Our approach has been fully launched in the production system to optimize user experiences on the platform.
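The stage-two objective can be sketched as a policy-gradient term for the main response minus closeness penalties toward each stage-one auxiliary policy. This is our illustrative reading (with a KL penalty and a hypothetical coefficient `beta`), not the paper's exact loss:

```python
import numpy as np

# Sketch (not the paper's implementation) of the stage-two objective:
# maximize the main (watch-time) response while staying close, via a KL
# penalty, to the per-auxiliary policies learned at stage one.

def kl(p, q):
    # KL divergence between two discrete action distributions.
    return float(np.sum(p * np.log(p / q)))

def stage_two_loss(main_policy, aux_policies, main_advantage, beta=0.1):
    # Policy-gradient surrogate: reward term minus closeness penalties.
    pg_term = float(np.sum(main_policy * main_advantage))
    penalty = sum(kl(aux, main_policy) for aux in aux_policies)
    return -(pg_term - beta * penalty)

main = np.array([0.5, 0.3, 0.2])    # main policy's action probabilities
aux = [np.array([0.4, 0.4, 0.2])]   # stage-one policy for one auxiliary response
adv = np.array([1.0, -0.5, 0.1])    # advantage estimates for the main response
loss = stage_two_loss(main, aux, adv)
```

When the main policy coincides with every auxiliary policy, the penalty vanishes and only the main-response term drives the update; `beta` trades off the two goals.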

Exploring the Impact of Negative Samples of Contrastive Learning: A Case Study of Sentence Embedding

Mar 16, 2022
Rui Cao, Yihao Wang, Yuxin Liang, Ling Gao, Jie Zheng, Jie Ren, Zheng Wang

Contrastive learning is emerging as a powerful technique for extracting knowledge from unlabeled data. This technique requires a balanced mixture of two ingredients: positive (similar) and negative (dissimilar) samples. This is typically achieved by maintaining a queue of negative samples during training. Prior works in the area typically use a fixed-length negative sample queue, but how the negative sample size affects model performance remains unclear; this opaque impact motivated our in-depth exploration. This paper presents a momentum contrastive learning model with a negative sample queue for sentence embedding, namely MoCoSE. We add a prediction layer to the online branch to make the model asymmetric, which, together with the EMA update mechanism of the target branch, prevents the model from collapsing. We define a maximum traceable distance metric, through which we measure to what extent text contrastive learning benefits from the historical information of negative samples. Our experiments find that the best results are obtained when the maximum traceable distance lies within a certain range, demonstrating that there is an optimal range of historical information for a negative sample queue. We evaluate the proposed unsupervised MoCoSE on the semantic textual similarity (STS) task and obtain an average Spearman's correlation of $77.27\%$. Source code is available at https://github.com/xbdxwyh/mocose.
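The mechanics of a fixed-length negative queue can be sketched as follows (our illustration, not the MoCoSE code): embeddings from past batches serve as negatives in an InfoNCE loss, and the queue length bounds how much history the loss can see, which is the quantity the maximum traceable distance metric analyzes.

```python
import numpy as np
from collections import deque

# Sketch of a momentum-style negative sample queue with an InfoNCE loss.
# Dimensions, queue length, and temperature are illustrative choices.

rng = np.random.default_rng(0)
dim, queue_len = 8, 16
queue = deque(maxlen=queue_len)   # FIFO: oldest negatives fall out first

def l2norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def info_nce(anchor, positive, negatives, tau=0.05):
    # Cross-entropy with the positive at index 0 among all candidates.
    logits = np.concatenate([[anchor @ positive], anchor @ negatives.T]) / tau
    logits -= logits.max()        # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])

# Fill the queue with stale embeddings, then score one (anchor, positive) pair.
for _ in range(queue_len):
    queue.append(l2norm(rng.normal(size=dim)))
anchor = l2norm(rng.normal(size=dim))
positive = l2norm(anchor + 0.05 * rng.normal(size=dim))
loss = info_nce(anchor, positive, np.stack(queue))
```

Because `deque(maxlen=...)` silently evicts the oldest entry on append, growing or shrinking `queue_len` directly changes how far back in training history the negatives can originate.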

* 16 pages, 13 figures, accepted to appear in the Findings of ACL 2022 
ESOD: Edge-based Task Scheduling for Object Detection

Oct 20, 2021
Yihao Wang, Ling Gao, Jie Ren, Rui Cao, Hai Wang, Jie Zheng, Quanli Gao

Object detection on mobile systems is challenging in many respects. Many object detection models have been designed, and most of them concentrate on precision; however, their computational burden is unacceptable for mobile systems. Researchers have designed some lightweight networks for mobile devices by sacrificing precision. We present a novel edge-based task scheduling framework for object detection (termed ESOD). Specifically, we train a DNN model (termed the pre-model) that, given the physical characteristics of an image task (e.g., brightness, saturation), predicts which object detection model to use and which edge server to offload to. The results show that, compared with the SOTA DETR model, ESOD reduces latency and energy consumption by an average of 22.13% and 29.60%, respectively, and improves mAP to 45.8 (0.9 mAP better).
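The dispatch idea can be sketched with cheap image statistics deciding where a task is routed. The thresholds and model names below are hypothetical placeholders, not ESOD's trained pre-model:

```python
# Sketch of feature-based task scheduling in the spirit of ESOD:
# cheap image statistics (brightness, saturation) route each task to a
# detector variant. Thresholds and labels are illustrative only.

def image_features(pixels):
    # pixels: list of (r, g, b) tuples with channel values in [0, 255].
    brightness = sum(sum(p) / 3 for p in pixels) / len(pixels)
    saturation = sum(max(p) - min(p) for p in pixels) / len(pixels)
    return brightness, saturation

def schedule(pixels):
    brightness, saturation = image_features(pixels)
    # Easy (bright, saturated) images go to a lightweight on-device
    # model; hard ones are offloaded to a stronger edge-server model.
    if brightness > 80 and saturation > 30:
        return "lightweight-local"
    return "offload-edge"

print(schedule([(200, 120, 40), (180, 90, 30)]))  # prints "lightweight-local"
print(schedule([(20, 22, 25), (18, 20, 21)]))     # prints "offload-edge"
```

In the actual system the decision is made by a learned DNN rather than fixed thresholds, and the output also selects among multiple edge servers.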

* Accepted by The Ninth International Conference on Advanced Cloud and Big Data 
PASTO: Strategic Parameter Optimization in Recommendation Systems -- Probabilistic is Better than Deterministic

Aug 20, 2021
Weicong Ding, Hanlin Tang, Jingshuo Feng, Lei Yuan, Sen Yang, Guangxu Yang, Jie Zheng, Jing Wang, Qiang Su, Dong Zheng, Xuezhong Qiu, Yongqi Liu, Yuxuan Chen, Yang Liu, Chao Song, Dongying Kong, Kai Ren, Peng Jiang, Qiao Lian, Ji Liu

Real-world recommendation systems often consist of two phases. In the first phase, multiple predictive models produce the probability of different immediate user actions. In the second phase, these predictions are aggregated according to a set of 'strategic parameters' to meet a diverse set of business goals, such as longer user engagement, higher revenue potential, or more community/network interactions. In addition to building accurate predictive models, it is also crucial to optimize this set of 'strategic parameters' so that primary goals are optimized while secondary guardrails are not hurt. In this setting with multiple and constrained goals, this paper discovers that a probabilistic strategic parameter regime can achieve better value than the standard regime of finding a single deterministic parameter. The new probabilistic regime learns the best distribution over strategic parameter choices and samples one strategic parameter from the distribution when each user visits the platform. To pursue the optimal probabilistic solution, we formulate the problem as a stochastic compositional optimization problem, in which the unbiased stochastic gradient is unavailable. Our approach is applied in a popular social network platform with hundreds of millions of daily users and achieves a +0.22% lift in user engagement in a recommendation task and a +1.7% lift in revenue in an advertising optimization scenario, compared with using the best deterministic parameter strategy.
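The contrast between the two regimes can be sketched in a few lines (illustrative only, not PASTO's optimizer): instead of serving one fixed strategic parameter, keep a distribution over candidate parameters and sample per user visit. The candidate values and logits below are made up:

```python
import numpy as np

# Sketch of the probabilistic strategic-parameter regime: a learned
# softmax distribution over candidate parameter values, sampled once
# per user visit. Candidates and logits are hypothetical.

rng = np.random.default_rng(0)
candidates = np.array([0.2, 0.5, 0.8])  # candidate strategic-parameter values
logits = np.array([0.1, 1.2, 0.3])      # learned preference over candidates

def sample_parameter():
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax distribution
    return rng.choice(candidates, p=probs)

draws = [sample_parameter() for _ in range(1000)]
# The preferred middle candidate dominates, but all three keep appearing;
# the deterministic regime would always serve a single value.
```

In the paper the distribution itself is optimized, which leads to the stochastic compositional optimization formulation mentioned in the abstract; here the logits are simply fixed.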

Learning to Remove: Towards Isotropic Pre-trained BERT Embedding

Apr 12, 2021
Yuxin Liang, Rui Cao, Jie Zheng, Jie Ren, Ling Gao

Pre-trained language models such as BERT have become a common choice for natural language processing (NLP) tasks. Research on word representation shows that isotropic embeddings can significantly improve performance on downstream tasks. However, we measure and analyze the geometry of the pre-trained BERT embedding space and find that it is far from isotropic. The word vectors are not centered around the origin, and the average cosine similarity between two random words is much higher than zero, which indicates that the word vectors are distributed in a narrow cone and that this deteriorates the representation capacity of the embedding. We propose a simple yet effective method to fix this problem: remove several dominant directions of the BERT embedding with a set of learnable weights. We train the weights on word similarity tasks and show that the processed embedding is more isotropic. Our method is evaluated on three standardized tasks: word similarity, word analogy, and semantic textual similarity. In all tasks, the word embedding processed by our method consistently outperforms the original embedding (with average improvements of 13% on word analogy and 16% on semantic textual similarity) and two baseline methods. Our method also proves more robust to hyperparameter changes.
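The removal step can be sketched as centering the embeddings and subtracting their projections onto the top principal directions. The paper learns per-direction weights on word-similarity tasks; the sketch below fixes them to 1 and uses synthetic anisotropic data:

```python
import numpy as np

# Sketch of dominant-direction removal (fixed weights; the paper learns
# them). Synthetic embeddings are shifted off the origin to mimic the
# anisotropy measured in pre-trained BERT.

rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 32)) + 2.0  # narrow-cone geometry: high mean cosine

def remove_dominant(emb, k=3, weights=None):
    centered = emb - emb.mean(axis=0)          # re-center around the origin
    # Top-k principal directions via SVD of the centered embeddings.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    weights = np.ones(k) if weights is None else weights
    for i in range(k):
        d = vt[i]
        centered -= weights[i] * (centered @ d)[:, None] * d[None, :]
    return centered

processed = remove_dominant(emb)

def mean_cosine(x):
    # Average cosine similarity over all distinct pairs.
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    sims = x @ x.T
    n = len(x)
    return (sims.sum() - n) / (n * n - n)
```

After processing, the average cosine similarity between random vectors drops from well above zero toward zero, which is the isotropy improvement the abstract describes.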

To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference

Oct 21, 2018
Qing Qin, Jie Ren, Jialong Yu, Ling Gao, Hai Wang, Jie Zheng, Yansong Feng, Jianbin Fang, Zheng Wang

The recent advances in deep neural networks (DNNs) make them attractive for embedded systems. However, it can take a long time for DNNs to make an inference on resource-constrained computing devices. Model compression techniques can address the computation issue of deep inference on embedded devices. These techniques are highly attractive, as they do not rely on specialized hardware or on computation offloading, which is often infeasible due to privacy concerns or high latency. However, it remains unclear how model compression techniques perform across a wide range of DNNs. To design efficient embedded deep learning solutions, we need to understand their behaviors. This work develops a quantitative approach to characterize model compression techniques on a representative embedded deep learning architecture, the NVIDIA Jetson TX2. We perform extensive experiments on 11 influential neural network architectures from the image classification and natural language processing domains. We experimentally show how two mainstream compression techniques, data quantization and pruning, perform on these network architectures, and the implications of compression for model storage size, inference time, energy consumption, and performance metrics. We demonstrate that there are opportunities to achieve fast deep inference on embedded systems, but one must carefully choose the compression settings. Our results provide insights on when and how to apply model compression techniques, and guidelines for designing efficient embedded deep learning systems.
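The two techniques characterized in the paper can be sketched on a toy weight tensor (this is a generic illustration of magnitude pruning and symmetric int8 quantization, not the Jetson TX2 pipeline):

```python
import numpy as np

# Sketch of the two mainstream compression techniques the paper
# characterizes: magnitude pruning and symmetric 8-bit quantization.

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)

def magnitude_prune(w, sparsity=0.5):
    # Zero out the smallest-magnitude weights up to the target sparsity.
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

def quantize_int8(w):
    # Symmetric linear quantization: store int8 values plus one scale.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

pruned = magnitude_prune(w)        # ~50% zeros -> smaller stored model
q, scale = quantize_int8(w)        # 4x smaller than float32 storage
dequant = q.astype(np.float32) * scale  # reconstruction for inference
```

The storage-versus-accuracy trade-off the paper measures shows up even here: the int8 copy is a quarter of the float32 size, at the cost of a bounded reconstruction error per weight.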

* 8 pages, To appear in ISPA 2018 
SL$^2$MF: Predicting Synthetic Lethality in Human Cancers via Logistic Matrix Factorization

Oct 20, 2018
Yong Liu, Min Wu, Chenghao Liu, Xiao-Li Li, Jie Zheng

Synthetic lethality (SL) is a promising concept for the discovery of novel anti-cancer drug targets. However, wet-lab experiments for detecting SLs face various challenges, such as high cost and low consistency across platforms or cell lines. Therefore, computational prediction methods are needed to address these issues. This paper proposes a novel SL prediction method, named SL2MF, which employs logistic matrix factorization to learn latent representations of genes from the observed SL data. The probability that two genes form an SL pair is modeled from the linear combination of their gene latent vectors. As known SL pairs are more trustworthy than unknown pairs, we design importance weighting schemes that assign higher importance weights to known SL pairs and lower importance weights to unknown pairs in SL2MF. Moreover, we also incorporate biological knowledge about genes from protein-protein interaction (PPI) data and the Gene Ontology (GO). In particular, we calculate the similarity between genes based on their GO annotations and their topological properties in the PPI network. Extensive experiments on the SL interaction data from the SynLethDB database demonstrate the effectiveness of SL2MF.
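The logistic matrix factorization at the core of the method can be sketched with plain SGD on a log loss (our minimal illustration: no importance weights and no GO/PPI similarity terms, both of which the full method adds):

```python
import numpy as np

# Sketch of logistic matrix factorization for SL prediction: the
# probability that genes i and j form an SL pair is a sigmoid of the
# inner product of their latent vectors, trained by SGD on log loss.

rng = np.random.default_rng(0)
n_genes, rank = 20, 4
latent = 0.1 * rng.normal(size=(n_genes, rank))  # gene latent vectors

def sl_probability(i, j):
    # P(genes i and j form an SL pair).
    return 1.0 / (1.0 + np.exp(-latent[i] @ latent[j]))

def sgd_step(i, j, label, lr=0.5):
    # Gradient of the log loss w.r.t. each latent vector.
    grad = sl_probability(i, j) - label
    gi, gj = grad * latent[j], grad * latent[i]
    latent[i] -= lr * gi
    latent[j] -= lr * gj

# Train on one known SL pair (label 1) and one unknown pair (label 0).
for _ in range(500):
    sgd_step(0, 1, 1.0)
    sgd_step(2, 3, 0.0)
```

After training, the known pair scores near 1 and the unknown pair near 0; the full SL2MF additionally down-weights unknown pairs and regularizes the latent vectors with GO/PPI-based gene similarity.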
