Dongsheng Zhu

Intern-S1-Pro: Scientific Multimodal Foundation Model at Trillion Scale

Mar 26, 2026

How Brittle is Agent Safety? Rethinking Agent Risk under Intent Concealment and Task Complexity

Nov 11, 2025

VisLingInstruct: Elevating Zero-Shot Learning in Multi-Modal Language Models with Autonomous Instruction Optimization

Feb 12, 2024

SDA: Simple Discrete Augmentation for Contrastive Sentence Representation Learning

Oct 08, 2022

What Makes Pre-trained Language Models Better Zero/Few-shot Learners?

Sep 30, 2022