Haotian Ye

Forward Laplacian: A New Computational Framework for Neural Network-based Variational Monte Carlo

Jul 17, 2023
Ruichen Li, Haotian Ye, Du Jiang, Xuelan Wen, Chuwei Wang, Zhe Li, Xiang Li, Di He, Ji Chen, Weiluo Ren, Liwei Wang

Neural network-based variational Monte Carlo (NN-VMC) has emerged as a promising technique for ab initio quantum chemistry. However, the high computational cost of existing approaches hinders their application to realistic chemistry problems. Here, we report the development of a new NN-VMC method that achieves a speed-up of more than one order of magnitude, thereby greatly extending the applicability of NN-VMC to larger systems. Our key design is a novel computational framework named Forward Laplacian, which computes the Laplacian associated with neural networks, the bottleneck of NN-VMC, through an efficient forward propagation process. We then demonstrate that Forward Laplacian is not only versatile but also facilitates the development of further acceleration methods, including optimizations for sparse derivative matrices and efficient neural network designs. Empirically, our approach enables NN-VMC to investigate a broader range of atoms, molecules and chemical reactions for the first time, providing valuable references for other ab initio methods. The results demonstrate the great potential of applying deep learning methods to general quantum mechanical problems.
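
The idea of computing the Laplacian by forward propagation can be illustrated with a toy pass that carries (value, Jacobian, Laplacian) triples through each layer. This is a minimal numpy sketch of forward-mode Laplacian propagation, not the paper's optimized implementation; `init_state`, `linear` and `tanh` are illustrative names:

```python
import numpy as np

def init_state(x):
    """Seed the forward pass with the value, the Jacobian w.r.t. x (identity),
    and the Laplacian (zero) of the input coordinates."""
    d = x.shape[0]
    return x, np.eye(d), np.zeros(d)

def linear(state, W, b):
    """Linear layers commute with both d/dx and the Laplacian."""
    v, J, L = state
    return W @ v + b, W @ J, W @ L

def tanh(state):
    """Elementwise activation: the Laplacian picks up a curvature term
    sigma''(u) * |grad u|^2 on top of the chain-rule term sigma'(u) * Lu."""
    v, J, L = state
    s1 = 1 - np.tanh(v) ** 2    # sigma'
    s2 = -2 * np.tanh(v) * s1   # sigma''
    return np.tanh(v), s1[:, None] * J, s2 * (J ** 2).sum(axis=1) + s1 * L
```

A single pass through the network then yields value, gradient and Laplacian together, instead of stacking reverse-mode differentiation passes.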

Towards Revealing the Mystery behind Chain of Thought: a Theoretical Perspective

May 24, 2023
Guhao Feng, Yuntian Gu, Bohang Zhang, Haotian Ye, Di He, Liwei Wang

Recent studies have discovered that Chain-of-Thought prompting (CoT) can dramatically improve the performance of Large Language Models (LLMs), particularly on complex tasks involving mathematics or reasoning. Despite this enormous empirical success, the underlying mechanisms behind CoT and how it unlocks the potential of LLMs remain elusive. In this paper, we take a first step towards theoretically answering these questions. Specifically, we examine the capacity of LLMs with CoT to solve fundamental mathematical and decision-making problems. We start with an impossibility result showing that no bounded-depth Transformer can directly output correct answers for basic arithmetic/equation tasks unless the model size grows super-polynomially with the input length. In contrast, we then prove by construction that autoregressive Transformers of constant size suffice to solve both tasks by generating CoT derivations in a commonly used math language format. Moreover, we show that LLMs with CoT can solve a general class of decision-making problems known as Dynamic Programming, justifying the power of CoT in tackling complex real-world tasks. Finally, extensive experiments on four tasks show that, while Transformers consistently fail to predict the answers directly, they can learn to generate correct solutions step-by-step given sufficient CoT demonstrations.
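
The connection between dynamic programming and CoT is easy to illustrate outside of a Transformer: each CoT step writes down one DP cell, and every cell depends only on cells already emitted, which is exactly the autoregressive generation pattern. A small sketch on longest increasing subsequence, a standard DP task chosen here for illustration:

```python
def lis_with_cot(nums):
    """Emit the DP table for longest increasing subsequence as CoT lines:
    each step depends only on steps already generated, so a model producing
    the steps left-to-right never has to look ahead."""
    steps, dp = [], []
    for i, x in enumerate(nums):
        dp.append(1 + max([dp[j] for j in range(i) if nums[j] < x], default=0))
        steps.append(f"dp[{i}] = {dp[i]} (longest chain ending at {x})")
    steps.append(f"answer: {max(dp)}")
    return steps
```

Predicting the final `answer:` line alone requires solving the whole problem in one shot; predicting each intermediate line requires only a bounded amount of local computation.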

* 33 pages 
A study of conceptual language similarity: comparison and evaluation

May 22, 2023
Haotian Ye, Yihong Liu, Hinrich Schütze

An interesting line of research in natural language processing (NLP) aims to incorporate linguistic typology to bridge linguistic diversity and support research on low-resource languages. While most works construct linguistic similarity measures from lexical or typological features, such as word order and verbal inflection, recent work has introduced a novel approach that defines language similarity by how languages represent basic concepts, which is complementary to existing similarity measures. In this work, we study conceptual similarity in detail and evaluate it extensively on a binary classification task.
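
One way to frame such a binary evaluation: score each similarity measure by how well a thresholded similarity recovers gold relatedness judgments for language pairs. A hypothetical sketch (the measure, threshold, and gold labels here are placeholders, not the paper's setup):

```python
def binary_eval(sim, pairs, threshold=0.5):
    """sim: a function mapping two language codes to a similarity in [0, 1].
    pairs: (lang_a, lang_b, related) gold judgments, e.g. related = True
    for same-family pairs. Returns the accuracy of thresholded similarity."""
    pairs = list(pairs)
    correct = sum((sim(a, b) >= threshold) == related for a, b, related in pairs)
    return correct / len(pairs)
```

Any of the similarity measures discussed (lexical, typological, or conceptual) can be plugged in as `sim`, which makes the comparison between them direct.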

Crosslingual Transfer Learning for Low-Resource Languages Based on Multilingual Colexification Graphs

May 22, 2023
Yihong Liu, Haotian Ye, Leonie Weissweiler, Hinrich Schütze

Colexification in comparative linguistics refers to the phenomenon of a lexical form conveying two or more distinct meanings. In this paper, we propose simple and effective methods to build multilingual graphs from colexification patterns: ColexNet and ColexNet+. ColexNet's nodes are concepts and its edges are colexifications. In ColexNet+, concept nodes are additionally linked through intermediate nodes, each representing an ngram in one of 1,334 languages. We use ColexNet+ to train high-quality multilingual embeddings $\overrightarrow{\mbox{ColexNet+}}$ that are well-suited for transfer learning scenarios. Existing work on colexification patterns relies on annotated word lists, which limits scalability and usefulness in NLP. In contrast, we identify colexification patterns for more than 2,000 concepts across 1,335 languages directly from an unannotated parallel corpus. In our experiments, we first show that ColexNet has high recall on CLICS, a dataset of crosslingual colexifications. We then evaluate $\overrightarrow{\mbox{ColexNet+}}$ on roundtrip translation, verse retrieval and verse classification, and show that our embeddings surpass several baselines in a transfer learning setting. This demonstrates the benefits of colexification for multilingual NLP.
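
The two graph structures can be sketched in a few lines: concepts are linked through intermediate (language, ngram) nodes, and collapsing those two-hop paths yields concept-concept colexification edges. A toy sketch, not the released implementation (the example triples are illustrative):

```python
from collections import defaultdict

def build_colexnet(occurrences):
    """occurrences: iterable of (language, ngram, concept) triples.
    ColexNet+-style structure: concept nodes linked through intermediate
    (language, ngram) nodes. Collapsing the two-hop paths gives weighted
    ColexNet edges between concepts colexified by some form."""
    ngram_to_concepts = defaultdict(set)
    for lang, ngram, concept in occurrences:
        ngram_to_concepts[(lang, ngram)].add(concept)
    edges = defaultdict(int)
    for concepts in ngram_to_concepts.values():
        for a in concepts:
            for b in concepts:
                if a < b:  # count each unordered concept pair once
                    edges[(a, b)] += 1
    return dict(ngram_to_concepts), dict(edges)
```

An edge weight then counts how many (language, form) pairs colexify the two concepts, which is the kind of signal the embeddings are trained on.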

Taxi1500: A Multilingual Dataset for Text Classification in 1500 Languages

May 15, 2023
Chunlan Ma, Ayyoob ImaniGooghari, Haotian Ye, Ehsaneddin Asgari, Hinrich Schütze

While natural language processing tools have been developed extensively for some of the world's languages, a significant portion of the world's more than 7,000 languages is still neglected. One reason is that evaluation datasets do not yet cover a wide range of languages, including low-resource and endangered ones. We aim to address this issue by creating a text classification dataset that encompasses a large number of languages, many of which currently have little to no annotated data available. We leverage parallel translations of the Bible to construct such a dataset: we first develop applicable topics and employ a crowdsourcing tool to collect annotated data. By annotating the English side of the data and projecting the labels onto other languages through aligned verses, we generate text classification datasets for more than 1,500 languages. We extensively benchmark several existing multilingual language models using our dataset. To facilitate the advancement of research in this area, we will release our dataset and code.
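
The label-projection step is mechanically simple because parallel Bible translations share verse identifiers. A minimal sketch, assuming dictionary-shaped inputs; the verse IDs and topic names below are hypothetical, not the dataset's actual label set:

```python
def project_labels(english_labels, target_verses):
    """english_labels: {verse_id: topic} annotated on the English side.
    target_verses: {verse_id: text} for one of the other languages.
    Labels transfer through the shared verse alignment, so no
    target-language annotation is needed."""
    return {vid: (text, english_labels[vid])
            for vid, text in target_verses.items()
            if vid in english_labels}
```

Running this once per language yields a labeled text classification set for every language with an aligned translation.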

A Crosslingual Investigation of Conceptualization in 1335 Languages

May 15, 2023
Yihong Liu, Haotian Ye, Leonie Weissweiler, Philipp Wicke, Renhao Pei, Robert Zangenfeind, Hinrich Schütze

Languages differ in how they divide up the world into concepts and words; e.g., in contrast to English, Swahili has a single concept for `belly' and `womb'. We investigate these differences in conceptualization across 1,335 languages by aligning concepts in a parallel corpus. To this end, we propose Conceptualizer, a method that creates a bipartite directed alignment graph between source language concepts and sets of target language strings. In a detailed linguistic analysis across all languages for one concept (`bird') and an evaluation on gold standard data for 32 Swadesh concepts, we show that Conceptualizer has good alignment accuracy. We demonstrate the potential of research on conceptualization in NLP with two experiments. (1) We define crosslingual stability of a concept as the degree to which it has 1-1 correspondences across languages, and show that concreteness predicts stability. (2) We represent each language by its conceptualization pattern for 83 concepts, and define a similarity measure on these representations. The resulting measure for the conceptual similarity of two languages is complementary to standard genealogical, typological, and surface similarity measures. For four out of six language families, we can assign languages to their correct family based on conceptual similarity with accuracy between 54\% and 87\%.
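
The crosslingual stability measure from experiment (1) reduces to a simple statistic over the alignment graph. A sketch under the stated definition (the alignment sets below are placeholders, not gold data):

```python
def crosslingual_stability(alignments):
    """alignments: {language: set of target-language strings aligned to one
    source concept}. Stability = share of languages in which the concept has
    a 1-1 correspondence, i.e. exactly one aligned string."""
    return sum(len(strings) == 1 for strings in alignments.values()) / len(alignments)
```

Concrete concepts, being more uniformly lexicalized, should score closer to 1 under this measure, which is the concreteness-predicts-stability finding.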

* ACL 2023 
Discovering Latent Knowledge in Language Models Without Supervision

Dec 07, 2022
Collin Burns, Haotian Ye, Dan Klein, Jacob Steinhardt

Existing techniques for training language models can be misaligned with the truth: if we train models with imitation learning, they may reproduce errors that humans make; if we train them to generate text that humans rate highly, they may output errors that human evaluators can't detect. We propose circumventing this issue by directly finding latent knowledge inside the internal activations of a language model in a purely unsupervised way. Specifically, we introduce a method for accurately answering yes-no questions given only unlabeled model activations. It works by finding a direction in activation space that satisfies logical consistency properties, such as that a statement and its negation have opposite truth values. We show that despite using no supervision and no model outputs, our method can recover diverse knowledge represented in large language models: across 6 models and 10 question-answering datasets, it outperforms zero-shot accuracy by 4\% on average. We also find that it cuts prompt sensitivity in half and continues to maintain high accuracy even when models are prompted to generate incorrect answers. Our results provide an initial step toward discovering what language models know, distinct from what they say, even when we don't have access to explicit ground truth labels.
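
The consistency idea can be sketched as an unsupervised objective on activation pairs: a statement and its negation should receive complementary probabilities, and the probe should not sit on the fence. A minimal numpy sketch; the linear probe form and the toy finite-difference optimizer are simplifications, not the paper's exact method:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def ccs_loss(params, xp, xn):
    """xp/xn: activations for a statement and its negation (rows paired).
    Consistency: p(x+) should equal 1 - p(x-); confidence: the probe should
    not answer 0.5 on both."""
    w, b = params[:-1], params[-1]
    pp, pn = sigmoid(xp @ w + b), sigmoid(xn @ w + b)
    return np.mean((pp - (1 - pn)) ** 2) + np.mean(np.minimum(pp, pn) ** 2)

def fit_probe(xp, xn, steps=800, lr=0.2, eps=1e-5, seed=0):
    """Toy optimizer: finite-difference gradient descent on the probe
    parameters, workable only at this tiny scale."""
    rng = np.random.default_rng(seed)
    params = 0.1 * rng.standard_normal(xp.shape[1] + 1)
    for _ in range(steps):
        grad = np.array([
            (ccs_loss(params + eps * e, xp, xn)
             - ccs_loss(params - eps * e, xp, xn)) / (2 * eps)
            for e in np.eye(params.size)])
        params -= lr * grad
    return params
```

Nothing in the objective references labels or model outputs: the logical relation between a statement and its negation is the only supervision signal.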

On the Power of Pre-training for Generalization in RL: Provable Benefits and Hardness

Oct 19, 2022
Haotian Ye, Xiaoyu Chen, Liwei Wang, Simon S. Du

Generalization in Reinforcement Learning (RL) aims to train an agent that generalizes to a target environment. This paper studies RL generalization from a theoretical perspective: how much can we expect pre-training over training environments to help? When interaction with the target environment is not allowed, we certify that the best we can obtain is a near-optimal policy in an average sense, and we design an algorithm that achieves this goal. Furthermore, when the agent is allowed to interact with the target environment, we give a surprising result showing that, asymptotically, the improvement from pre-training is at most a constant factor. On the other hand, in the non-asymptotic regime, we design an efficient algorithm and prove a distribution-based regret bound in the target environment that is independent of the size of the state-action space.
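
To make "near-optimal in an average sense" concrete: one simple way to obtain a policy that is good on average over the training distribution is to plan in the weighted-average MDP. A tabular sketch via value iteration, intended as an illustration of the interaction-free setting rather than the paper's algorithm:

```python
import numpy as np

def average_mdp_policy(Ps, Rs, weights, gamma=0.9, iters=200):
    """Ps: per-environment transition tensors of shape (S, A, S).
    Rs: per-environment reward tables of shape (S, A).
    weights: prior over training environments.
    Plans in the weighted-average MDP with value iteration."""
    P = sum(w * Pi for w, Pi in zip(weights, Ps))
    R = sum(w * Ri for w, Ri in zip(weights, Rs))
    V = np.zeros(R.shape[0])
    for _ in range(iters):
        V = (R + gamma * P @ V).max(axis=1)  # P @ V contracts the last axis
    return (R + gamma * P @ V).argmax(axis=1)
```

Without any target-environment interaction, this is the kind of average-sense guarantee one can hope for; the paper's point is that interaction only improves on it by a constant factor asymptotically.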

Towards a Theoretical Framework of Out-of-Distribution Generalization

Jun 13, 2021
Haotian Ye, Chuanlong Xie, Tianle Cai, Ruichen Li, Zhenguo Li, Liwei Wang

Generalization to out-of-distribution (OOD) data, or domain generalization, is one of the central problems in modern machine learning. Recently, there has been a surge of attempts to propose algorithms for OOD generalization, most of which build on the idea of extracting invariant features. Although intuitively reasonable, theoretical understanding of what kind of invariance can guarantee OOD generalization is still limited, and generalization to arbitrary out-of-distribution data is clearly impossible. In this work, we take the first step towards rigorous and quantitative definitions of 1) what an OOD problem is; and 2) what it means for an OOD problem to be learnable. We also introduce a new concept, the expansion function, which characterizes the extent to which variance is amplified in the test domains relative to the training domains, and therefore gives quantitative meaning to invariant features. Based on these, we prove OOD generalization error bounds. It turns out that OOD generalization largely depends on the expansion function. As recently pointed out by Gulrajani and Lopez-Paz (2020), any OOD learning algorithm without a model selection module is incomplete. Our theory naturally induces a model selection criterion, and extensive experiments on benchmark OOD datasets demonstrate that it has a significant advantage over baselines.
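
The intuition behind the expansion function can be illustrated with a crude empirical stand-in: compare how much a feature varies across test domains versus training domains. This sketch is illustrative only, not the paper's formal definition:

```python
import numpy as np

def variation(feature_values_by_domain):
    """Largest per-domain variance of a feature -- a crude empirical proxy
    for the variation quantity the framework works with."""
    return max(np.var(np.asarray(v)) for v in feature_values_by_domain)

def expansion_ratio(train_domains, test_domains):
    """How much a feature's variation is amplified from training to test
    domains; invariant features keep this ratio close to 1, spurious
    features can blow it up."""
    return variation(test_domains) / max(variation(train_domains), 1e-12)
```

A feature that looks invariant on the training domains but has a large expansion ratio is exactly the failure mode the error bounds have to account for.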
