Vy A. Vo

Scope is all you need: Transforming LLMs for HPC Code

Aug 18, 2023
Tal Kadosh, Niranjan Hasabnis, Vy A. Vo, Nadav Schneider, Neva Krien, Abdul Wasay, Nesreen Ahmed, Ted Willke, Guy Tamir, Yuval Pinter, Timothy Mattson, Gal Oren

With easier access to powerful compute resources, there is a growing trend in the field of AI for software development to develop larger and larger language models (LLMs) to address a variety of programming tasks. Even LLMs applied to tasks from the high-performance computing (HPC) domain are huge in size (e.g., billions of parameters) and demand expensive compute resources for training. We find this design choice confusing: why do we need large LLMs trained on natural languages and programming languages unrelated to HPC for HPC-specific tasks? In this line of work, we aim to question the design choices made by existing LLMs by developing smaller LLMs for specific domains, which we call domain-specific LLMs. Specifically, we start with HPC as a domain and propose a novel tokenizer named Tokompiler, designed specifically for preprocessing code in HPC and compilation-centric tasks. Tokompiler leverages knowledge of language primitives to generate language-oriented tokens, providing a context-aware understanding of code structure while completely avoiding the human semantics attributed to code constructs. We apply Tokompiler to pre-train two state-of-the-art models, SPT-Code and Polycoder, on a Fortran code corpus mined from GitHub, and evaluate their performance against conventional LLMs. Results demonstrate that Tokompiler significantly enhances code completion accuracy and semantic understanding compared to traditional tokenizers in normalized-perplexity tests, reaching a normalized perplexity of ~1. This research opens avenues for further advancements in domain-specific LLMs, catering to the unique demands of HPC and compilation tasks.
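
To make the preprocessing idea above concrete, the short Python sketch below anonymizes identifiers in a Fortran-like snippet so that only structural tokens remain. It is an illustrative assumption of what such a tokenizer could do, not the paper's actual Tokompiler implementation; the keyword list, the anonymize_identifiers helper, and the var_N placeholder scheme are all invented for this example.

    import re

    # Hypothetical sketch (not the authors' Tokompiler): replace human-chosen
    # identifiers in a Fortran-like snippet with structural placeholders, so a
    # model sees code structure rather than naming semantics.

    FORTRAN_KEYWORDS = {
        "program", "end", "do", "if", "then", "else", "integer", "real",
        "subroutine", "function", "call", "implicit", "none", "print",
    }

    def anonymize_identifiers(code: str) -> str:
        """Map each distinct non-keyword identifier to a numbered placeholder."""
        mapping = {}

        def replace(match: re.Match) -> str:
            name = match.group(0)
            if name.lower() in FORTRAN_KEYWORDS:
                return name
            if name not in mapping:
                mapping[name] = f"var_{len(mapping)}"
            return mapping[name]

        return re.sub(r"[A-Za-z_][A-Za-z0-9_]*", replace, code)

    snippet = """
    do i = 1, n
       total = total + a(i) * b(i)
    end do
    """
    print(anonymize_identifiers(snippet))
    # do var_0 = 1, var_1
    #    var_2 = var_2 + var_3(var_0) * var_4(var_0)
    # end do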

Brain encoding models based on multimodal transformers can transfer across language and vision

May 20, 2023
Jerry Tang, Meng Du, Vy A. Vo, Vasudev Lal, Alexander G. Huth

Encoding models have been used to assess how the human brain represents concepts in language and vision. While language and vision rely on similar concept representations, current encoding models are typically trained and tested on brain responses to each modality in isolation. Recent advances in multimodal pretraining have produced transformers that can extract aligned representations of concepts in language and vision. In this work, we used representations from multimodal transformers to train encoding models that can transfer across fMRI responses to stories and movies. We found that encoding models trained on brain responses to one modality can successfully predict brain responses to the other modality, particularly in cortical regions that represent conceptual meaning. Further analysis of these encoding models revealed shared semantic dimensions that underlie concept representations in language and vision. Comparing encoding models trained using representations from multimodal and unimodal transformers, we found that multimodal transformers learn more aligned representations of concepts in language and vision. Our results demonstrate how multimodal transformers can provide insights into the brain's capacity for multimodal processing.
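
As a rough illustration of the cross-modal encoding-model workflow described above, the sketch below fits a ridge regression from transformer features to fMRI responses for one modality and tests it on the other. The array shapes, the ridge penalty, and the random stand-in data are assumptions for demonstration only; the authors' actual feature extraction and evaluation pipeline is not reproduced here.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n_story_trs, n_movie_trs, n_features, n_voxels = 500, 300, 768, 1000

    # Stand-ins for multimodal-transformer features aligned to fMRI time points.
    story_features = rng.standard_normal((n_story_trs, n_features))
    movie_features = rng.standard_normal((n_movie_trs, n_features))

    # Stand-ins for measured BOLD responses (TRs x voxels).
    story_bold = rng.standard_normal((n_story_trs, n_voxels))
    movie_bold = rng.standard_normal((n_movie_trs, n_voxels))

    # Fit a linear encoding model on one modality (language)...
    model = Ridge(alpha=100.0).fit(story_features, story_bold)

    # ...then test transfer by predicting responses to the other modality (vision).
    predicted = model.predict(movie_features)
    transfer_r = np.array([
        np.corrcoef(predicted[:, v], movie_bold[:, v])[0, 1]
        for v in range(n_voxels)
    ])
    print("mean cross-modal prediction r:", transfer_r.mean())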

Memory in humans and deep language models: Linking hypotheses for model augmentation

Oct 07, 2022
Omri Raccah, Phoebe Chen, Ted L. Willke, David Poeppel, Vy A. Vo

The computational complexity of the self-attention mechanism in Transformer models significantly limits their ability to generalize over long temporal durations. Memory augmentation, or the explicit storing of past information in an external memory for use in subsequent predictions, has become a constructive avenue for mitigating this limitation. We argue that memory-augmented Transformers can benefit substantially from considering insights from the human memory literature. We detail an approach to integrating evidence from the human memory system through the specification of cross-domain linking hypotheses. We then provide an empirical demonstration evaluating the use of surprisal as a linking hypothesis, and identify the limitations of this approach to inform future research.

* 5 figures 
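
The sketch below gives a minimal illustration of the surprisal linking hypothesis evaluated in the paper: the surprisal of a token is its negative log-probability under a language model, given the preceding context. GPT-2 is used here purely as a convenient assumption for illustration; it is not necessarily the model used in the paper.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    text = "The cat sat on the mat"
    ids = tokenizer(text, return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(ids).logits  # (1, seq_len, vocab)

    # Surprisal of token t is -log p(token_t | tokens_<t).
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = ids[:, 1:]
    surprisal = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)

    for tok, s in zip(tokenizer.convert_ids_to_tokens(targets[0].tolist()), surprisal[0]):
        print(f"{tok!r}: {s.item():.2f} nats")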

Slower is Better: Revisiting the Forgetting Mechanism in LSTM for Slower Information Decay

May 12, 2021
Hsiang-Yun Sherry Chien, Javier S. Turek, Nicole Beckage, Vy A. Vo, Christopher J. Honey, Ted L. Willke

Sequential information contains short- to long-range dependencies; however, learning long-timescale information has been a challenge for recurrent neural networks. Despite improvements in long short-term memory (LSTM) networks, the forgetting mechanism results in the exponential decay of information, limiting their capacity to capture long-timescale information. Here, we propose a power law forget gate, which instead learns to forget information along a slower power law decay function. Specifically, the new gate learns to control the power law decay factor, p, allowing the network to adjust the information decay rate according to task demands. Our experiments show that an LSTM with power law forget gates (pLSTM) can effectively capture long-range dependencies beyond hundreds of elements on image classification, language modeling, and categorization tasks, improving performance over the vanilla LSTM. We also inspected the revised forget gate by varying the initialization of p, setting p to a fixed value, and ablating cells in the pLSTM network. The results show that the information decay can be controlled by the learnable decay factor p, which allows the pLSTM to achieve its superior performance. Altogether, we found that an LSTM with the proposed forget gate can learn long-term dependencies, outperforming other recurrent networks in multiple domains; such a gating mechanism can be integrated into other architectures to improve the learning of long-timescale information in recurrent neural networks.

* 16 pages, 10 figures 
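
To illustrate why a power law forget schedule retains information longer than the standard gate, the sketch below compares the retention curve of a fixed forget-gate value (exponential decay, f**t) with a power law schedule ((t + 1)**(-p)). The specific values of f and p, and the (t + 1)**(-p) form itself, are simplifying assumptions used for illustration rather than the paper's exact pLSTM formulation.

    import numpy as np

    t = np.arange(1, 1001)          # time steps since the memory was written
    f = 0.9                         # typical fixed forget-gate value
    p = 0.5                         # power law decay factor (learnable in pLSTM)

    exponential_retention = f ** t
    power_law_retention = (t + 1.0) ** (-p)

    for step in (10, 100, 1000):
        print(
            f"after {step:4d} steps: "
            f"exponential={exponential_retention[step - 1]:.2e}, "
            f"power law={power_law_retention[step - 1]:.2e}"
        )
    # The exponential trace is effectively zero after a few hundred steps,
    # whereas the power law trace still retains a measurable fraction.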

Multi-timescale representation learning in LSTM Language Models

Sep 27, 2020
Shivangi Mahto, Vy A. Vo, Javier S. Turek, Alexander G. Huth

Although neural language models are effective at capturing statistics of natural language, their representations are challenging to interpret. In particular, it is unclear how these models retain information over multiple timescales. In this work, we construct explicitly multi-timescale language models by manipulating the input and forget gate biases in a long short-term memory (LSTM) network. The distribution of timescales is selected to approximate the power law statistics of natural language through a combination of exponentially decaying memory cells. We then empirically analyze the timescale of information routed through each part of the model using word ablation experiments and forget gate visualizations. These experiments show that the multi-timescale model successfully learns representations at the desired timescales, and that the distribution includes longer timescales than a standard LSTM. Further, information about high-, mid-, and low-frequency words is routed preferentially through units with the appropriate timescales. Thus, we show how to construct language models with interpretable representations of different information timescales.
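
A hedged sketch of the bias-setting idea described above: if a unit's forget gate sits near a constant value f, its memory decays with characteristic timescale T = -1 / ln(f), so fixing the forget-gate bias assigns the unit a timescale, and drawing timescales from a heavy-tailed distribution yields a mixture of exponential decays that approximates a power law. The Pareto draw and the specific constants below are assumptions for illustration, not the authors' exact initialization procedure.

    import numpy as np

    rng = np.random.default_rng(0)
    n_units = 8

    # Assumed heavy-tailed draw of per-unit timescales (in time steps).
    timescales = np.sort(1.0 + rng.pareto(1.5, size=n_units) * 5.0)

    # Invert T = -1 / ln(f): the forget value and the sigmoid bias producing it.
    forget_values = np.exp(-1.0 / timescales)
    forget_biases = np.log(forget_values / (1.0 - forget_values))  # logit(f)

    for T, f, b in zip(timescales, forget_values, forget_biases):
        print(f"timescale {T:7.1f} steps -> forget gate {f:.3f}, bias {b:+.2f}")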
