"Information": models, code, and papers

FCM: A Fine-grained Comparison Model for Multi-turn Dialogue Reasoning

Sep 23, 2021
Xu Wang, Hainan Zhang, Shuai Zhao, Yanyan Zou, Hongshen Chen, Zhuoye Ding, Bo Cheng, Yanyan Lan

Co-learning: Learning from Noisy Labels with Self-supervision

Aug 30, 2021
Cheng Tan, Jun Xia, Lirong Wu, Stan Z. Li

Spectral Temporal Graph Neural Network for Trajectory Prediction

Jun 05, 2021
Defu Cao, Jiachen Li, Hengbo Ma, Masayoshi Tomizuka

Semantic Extractor-Paraphraser based Abstractive Summarization

May 04, 2021
Anubhav Jangra, Raghav Jain, Vaibhav Mavi, Sriparna Saha, Pushpak Bhattacharyya

SSC: Semantic Scan Context for Large-Scale Place Recognition

Jul 01, 2021
Lin Li, Xin Kong, Xiangrui Zhao, Tianxin Huang, Yong Liu

Graph Pooling via Coarsened Graph Infomax

May 04, 2021
Yunsheng Pang, Yunxiang Zhao, Dongsheng Li

Sketches for Time-Dependent Machine Learning

Aug 26, 2021
Jesus Antonanzas, Marta Arias, Albert Bifet

Denoising ECG by Adaptive Filter with Empirical Mode Decomposition

Aug 18, 2021
Bingze Dai, Wen Bai

Shatter: An Efficient Transformer Encoder with Single-Headed Self-Attention and Relative Sequence Partitioning

Aug 30, 2021
Ran Tian, Joshua Maynez, Ankur P. Parikh

Quantifying multivariate redundancy with maximum entropy decompositions of mutual information

Apr 03, 2018
Daniel Chicharro
