Yequan Wang

Not all Layers of LLMs are Necessary during Inference

Mar 04, 2024
Siqi Fan, Xin Jiang, Xiang Li, Xuying Meng, Peng Han, Shuo Shang, Aixin Sun, Yequan Wang, Zhongyuan Wang

Discerning and Resolving Knowledge Conflicts through Adaptive Decoding with Contextual Information-Entropy Constraint

Feb 19, 2024
Xiaowei Yuan, Zhao Yang, Yequan Wang, Shengping Liu, Jun Zhao, Kang Liu

Spectral-based Graph Neural Networks for Complementary Item Recommendation

Jan 11, 2024
Haitong Luo, Xuying Meng, Suhang Wang, Hanyun Cao, Weiyao Zhang, Yequan Wang, Yujun Zhang

BiPFT: Binary Pre-trained Foundation Transformer with Low-rank Estimation of Binarization Residual Polynomials

Dec 14, 2023
Xingrun Xing, Li Du, Xinyuan Wang, Xianlin Zeng, Yequan Wang, Zheng Zhang, Jiajun Zhang

FLM-101B: An Open LLM and How to Train It with $100K Budget

Sep 17, 2023
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang

Quantifying and Attributing the Hallucination of Large Language Models via Association Analysis

Sep 11, 2023
Li Du, Yequan Wang, Xingrun Xing, Yiqun Yao, Xiang Li, Xin Jiang, Xuezhi Fang

Rethinking Document-Level Relation Extraction: A Reality Check

Jun 15, 2023
Jing Li, Yequan Wang, Shuai Zhang, Min Zhang

2x Faster Language Model Pre-training via Masked Structural Growth

May 04, 2023
Yiqun Yao, Zheng Zhang, Jing Li, Yequan Wang
