Zhengyang Wang

CS-Bench: A Comprehensive Benchmark for Large Language Models towards Computer Science Mastery

Jun 12, 2024
Towards Unified Multi-Modal Personalization: Large Vision-Language Models for Generative Recommendation and Beyond

Mar 27, 2024
Knowledge Editing on Black-box Large Language Models

Feb 17, 2024
Enhancing User Intent Capture in Session-Based Recommendation with Attribute Patterns

Dec 23, 2023
Language Models As Semantic Indexers

Oct 11, 2023
Towards Robust and Generalizable Training: An Empirical Study of Noisy Slot Filling for Input Perturbations

Oct 05, 2023
Amazon-M2: A Multilingual Multi-locale Shopping Session Dataset for Recommendation and Text Generation

Jul 19, 2023
Concept2Box: Joint Geometric Embeddings for Learning Two-View Knowledge Graphs

Jul 04, 2023
A Unified Framework of Graph Information Bottleneck for Robustness and Membership Privacy

Jun 14, 2023
SCOTT: Self-Consistent Chain-of-Thought Distillation

May 03, 2023