"Information": models, code, and papers

Looking for Out-of-Distribution Environments in Critical Care: A case study with the eICU Database

May 26, 2022
Dimitris Spathis, Stephanie L. Hyland

PLAID: An Efficient Engine for Late Interaction Retrieval

May 19, 2022
Keshav Santhanam, Omar Khattab, Christopher Potts, Matei Zaharia

A Span Extraction Approach for Information Extraction on Visually-Rich Documents

Jun 02, 2021
Tuan-Anh D. Nguyen, Hieu M. Vu, Nguyen Hong Son, Minh-Tien Nguyen

Neural Processes with Stochastic Attention: Paying more attention to the context dataset

Apr 11, 2022
Mingyu Kim, Kyeongryeol Go, Se-Young Yun

ActiveMLP: An MLP-like Architecture with Active Token Mixer

Mar 11, 2022
Guoqiang Wei, Zhizheng Zhang, Cuiling Lan, Yan Lu, Zhibo Chen

Flow-based Recurrent Belief State Learning for POMDPs

May 23, 2022
Xiaoyu Chen, Yao Mu, Ping Luo, Shengbo Li, Jianyu Chen

Recurrent Encoder-Decoder Networks for Vessel Trajectory Prediction with Uncertainty Estimation

May 11, 2022
Samuele Capobianco, Nicola Forti, Leonardo M. Millefiori, Paolo Braca, Peter Willett

On the Evolution of Syntactic Information Encoded by BERT's Contextualized Representations

Jan 27, 2021
Laura Perez-Mayos, Roberto Carlini, Miguel Ballesteros, Leo Wanner

Vision Transformer with Cross-attention by Temporal Shift for Efficient Action Recognition

Apr 01, 2022
Ryota Hashiguchi, Toru Tamaki

Geometric Synthesis: A Free lunch for Large-scale Palmprint Recognition Model Pretraining

Apr 11, 2022
Kai Zhao, Lei Shen, Yingyi Zhang, Chuhan Zhou, Tao Wang, Ruixin Zhang, Shouhong Ding, Wei Jia, Wei Shen
