Chen Zhang

SenseTime Research

XTraffic: A Dataset Where Traffic Meets Incidents with Explainability and More

Jul 16, 2024

Unlocking the Potential of Model Merging for Low-Resource Languages

Jul 04, 2024

DynaThink: Fast or Slow? A Dynamic Decision-Making Framework for Large Language Models

Jul 01, 2024

Extracting thin film structures of energy materials using transformers

Jun 24, 2024

RefXVC: Cross-Lingual Voice Conversion with Enhanced Reference Leveraging

Jun 24, 2024

Harvesting Efficient On-Demand Order Pooling from Skilled Couriers: Enhancing Graph Representation Learning for Refining Real-time Many-to-One Assignments

Jun 20, 2024

Understanding the RoPE Extensions of Long-Context LLMs: An Attention Perspective

Jun 19, 2024

Advancing DRL Agents in Commercial Fighting Games: Training, Integration, and Agent-Human Alignment

Jun 03, 2024

TS-Align: A Teacher-Student Collaborative Framework for Scalable Iterative Finetuning of Large Language Models

May 30, 2024

Functional Programming Paradigm of Python for Scientific Computation Pipeline Integration

May 27, 2024