"Information": models, code, and papers

Recurrence-in-Recurrence Networks for Video Deblurring

Mar 12, 2022
Joonkyu Park, Seungjun Nah, Kyoung Mu Lee

Information-Theoretic Segmentation by Inpainting Error Maximization

Dec 14, 2020
Pedro Savarese, Sunnie S. Y. Kim, Michael Maire, Greg Shakhnarovich, David McAllester

Democracy Does Matter: Comprehensive Feature Mining for Co-Salient Object Detection

Mar 11, 2022
Siyue Yu, Jimin Xiao, Bingfeng Zhang, Eng Gee Lim

Multi-Sample $ζ$-mixup: Richer, More Realistic Synthetic Samples from a $p$-Series Interpolant

Apr 07, 2022
Kumar Abhishek, Colin J. Brown, Ghassan Hamarneh

Category-Aware Transformer Network for Better Human-Object Interaction Detection

Apr 11, 2022
Leizhen Dong, Zhimin Li, Kunlun Xu, Zhijun Zhang, Luxin Yan, Sheng Zhong, Xu Zou

On statistic alignment for domain adaptation in structural health monitoring

May 24, 2022
Jack Poole, Paul Gardner, Nikolaos Dervilis, Lawrence Bull, Keith Worden

Effect of Gender, Pose and Camera Distance on Human Body Dimensions Estimation

May 24, 2022
Yansel González Tejeda, Helmut A. Mayer

Transformer Language Models with LSTM-based Cross-utterance Information Representation

Feb 12, 2021
G. Sun, C. Zhang, P. C. Woodland

A Linear Comb Filter for Event Flicker Removal

May 17, 2022
Ziwei Wang, Dingran Yuan, Yonhon Ng, Robert Mahony

Residual Q-Networks for Value Function Factorizing in Multi-Agent Reinforcement Learning

May 30, 2022
Rafael Pina, Varuna De Silva, Joosep Hook, Ahmet Kondoz
