Hongxia Jin

MossNet: Mixture of State-Space Experts is a Multi-Head Attention
Oct 30, 2025

SimuRA: Towards General Goal-Oriented Agent via Simulative Reasoning Architecture with LLM-Based World Model
Jul 31, 2025

RestoreGrad: Signal Restoration Using Conditional Denoising Diffusion Models with Jointly Learned Prior
Feb 19, 2025

Dynamic Noise Preference Optimization for LLM Self-Improvement via Synthetic Data
Feb 08, 2025

FlexiGPT: Pruning and Extending Large Language Models with Low-Rank Weight Sharing
Jan 24, 2025

DISP-LLM: Dimension-Independent Structural Pruning for Large Language Models
Oct 15, 2024

MoDeGPT: Modular Decomposition for Large Language Model Compression
Aug 20, 2024

Explicit Diversity Conditions for Effective Question Answer Generation with Large Language Models
Jun 26, 2024

DynaMo: Accelerating Language Model Inference with Dynamic Multi-Token Sampling
May 01, 2024

Compositional Generalization in Spoken Language Understanding
Dec 25, 2023