"Information": models, code, and papers

Mutual information neural estimation for unsupervised multi-modal registration of brain images

Jan 25, 2022
Gerard Snaauw, Michele Sasdelli, Gabriel Maicas, Stephan Lau, Johan Verjans, Mark Jenkinson, Gustavo Carneiro

Dialog Acts for Task-Driven Embodied Agents

Sep 26, 2022
Spandana Gella, Aishwarya Padmakumar, Patrick Lange, Dilek Hakkani-Tur

Improving Micro-video Recommendation by Controlling Position Bias

Aug 09, 2022
Yisong Yu, Beihong Jin, Jiageng Song, Beibei Li, Yiyuan Zheng, Wei Zhu

TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations

Sep 15, 2022
Xinyang Zhang, Yury Malkov, Omar Florez, Serim Park, Brian McWilliams, Jiawei Han, Ahmed El-Kishky

Center Feature Fusion: Selective Multi-Sensor Fusion of Center-based Objects

Sep 26, 2022
Philip Jacobson, Yiyang Zhou, Wei Zhan, Masayoshi Tomizuka, Ming C. Wu

Paging with Succinct Predictions

Oct 06, 2022
Antonios Antoniadis, Joan Boyar, Marek Eliáš, Lene M. Favrholdt, Ruben Hoeksma, Kim S. Larsen, Adam Polak, Bertrand Simon

Large-Scale Multi-Document Summarization with Information Extraction and Compression

May 01, 2022
Ning Wang, Han Liu, Diego Klabjan

SVNet: Where SO(3) Equivariance Meets Binarization on Point Cloud Representation

Sep 20, 2022
Zhuo Su, Max Welling, Matti Pietikäinen, Li Liu

Physics-based Digital Twins for Autonomous Thermal Food Processing: Efficient, Non-intrusive Reduced-order Modeling

Sep 07, 2022
Maximilian Kannapinn, Minh Khang Pham, Michael Schäfer

Neighborhood-aware Scalable Temporal Network Representation Learning

Sep 05, 2022
Yuhong Luo, Pan Li
