Xingyi Cheng

MSAGPT: Neural Prompting Protein Structure Prediction via MSA Generative Pre-Training

Jun 11, 2024

xTrimoPGLM: Unified 100B-Scale Pre-trained Transformer for Deciphering the Language of Protein

Jan 11, 2024

xTrimoGene: An Efficient and Scalable Representation Learner for Single-Cell RNA-Seq Data

Nov 26, 2023

Won't Get Fooled Again: Answering Questions with False Premises

Jul 05, 2023

Revisiting Out-of-distribution Robustness in NLP: Benchmark, Analysis, and LLMs Evaluations

Jun 07, 2023

K-AID: Enhancing Pre-trained Language Models with Domain Knowledge for Question Answering

Sep 22, 2021

Dual-View Distilled BERT for Sentence Embedding

Apr 18, 2021

Question Directed Graph Attention Network for Numerical Reasoning over Text

Sep 16, 2020

SpellGCN: Incorporating Phonological and Visual Similarities into Language Models for Chinese Spelling Check

May 13, 2020

Symmetric Regularization based BERT for Pair-wise Semantic Reasoning

Sep 08, 2019