
Zhiqiang Tang

Learning to Generate Answers with Citations via Factual Consistency Models

Jun 19, 2024

AutoGluon-Multimodal (AutoMM): Supercharging Multimodal AutoML with Foundation Models

Apr 30, 2024

Convolution Meets LoRA: Parameter Efficient Finetuning for Segment Anything Model

Jan 31, 2024

Learning Multimodal Data Augmentation in Feature Space

Dec 29, 2022

Are Multimodal Models Robust to Image and Text Perturbations?

Dec 15, 2022

Visual Prompt Tuning for Test-time Domain Adaptation

Oct 10, 2022

Enabling Data Diversity: Efficient Automatic Augmentation via Regularized Adversarial Training

Mar 30, 2021

SelfNorm and CrossNorm for Out-of-Distribution Robustness

Feb 04, 2021

OnlineAugment: Online Data Augmentation with Less Domain Knowledge

Aug 22, 2020

Learning where to look: Semantic-Guided Multi-Attention Localization for Zero-Shot Learning

Mar 01, 2019