
Yin Zhang

PIN: A Knowledge-Intensive Dataset for Paired and Interleaved Multimodal Documents

Jun 20, 2024

Physical formula enhanced multi-task learning for pharmacokinetics prediction

Apr 16, 2024

Domain Adaptive Detection of MAVs: A Benchmark and Noise Suppression Network

Mar 25, 2024

Effective Two-Stage Knowledge Transfer for Multi-Entity Cross-Domain Recommendation

Feb 29, 2024

A Bearing-Angle Approach for Unknown Target Motion Analysis Based on Visual Measurements

Jan 30, 2024

TeleChat Technical Report

Jan 08, 2024

Global-Local MAV Detection under Challenging Conditions based on Appearance and Motion

Dec 18, 2023

Why "classic" Transformers are shallow and how to make them go deep

Dec 11, 2023

HierarchicalContrast: A Coarse-to-Fine Contrastive Learning Framework for Cross-Domain Zero-Shot Slot Filling

Oct 20, 2023

Large Language Models Are Also Good Prototypical Commonsense Reasoners

Sep 22, 2023