Mu Li

Removing Batch Normalization Boosts Adversarial Training
Jul 04, 2022

MixGen: A New Multi-Modal Data Augmentation
Jun 16, 2022

Modeling Multi-Granularity Hierarchical Features for Relation Extraction
Apr 09, 2022

BigDetection: A Large-scale Benchmark for Improved Object Detector Pre-training
Mar 24, 2022

Task-guided Disentangled Tuning for Pretrained Language Models
Mar 22, 2022

Learning Confidence for Transformer-based Neural Machine Translation
Mar 22, 2022

Pseudocylindrical Convolutions for Learned Omnidirectional Image Compression
Dec 25, 2021

Benchmarking Multimodal AutoML for Tabular Data with Text Fields
Nov 04, 2021

Blending Anti-Aliasing into Vision Transformer
Oct 28, 2021

Distiller: A Systematic Study of Model Distillation Methods in Natural Language Processing
Sep 23, 2021