Mingi Ji

LayerMerge: Neural Network Depth Compression through Layer Pruning and Merging
Jun 18, 2024

Unknown-Aware Domain Adversarial Learning for Open-Set Domain Adaptation
Jun 15, 2022

BROS: A Pre-trained Language Model Focusing on Text and Layout for Better Key Information Extraction from Documents
Sep 10, 2021

Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation
Mar 15, 2021

Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching
Feb 05, 2021

Sequential Recommendation with Relation-Aware Kernelized Self-Attention
Nov 15, 2019

Hierarchical Context enabled Recurrent Neural Network for Recommendation
Apr 26, 2019

Adversarial Dropout for Recurrent Neural Networks
Apr 22, 2019