"Information": models, code, and papers

Named Entity Recognition and Classification on Historical Documents: A Survey

Sep 23, 2021
Maud Ehrmann, Ahmed Hamdi, Elvys Linhares Pontes, Matteo Romanello, Antoine Doucet

(4 figures)

A Self-adaptive Weighted Differential Evolution Approach for Large-scale Feature Selection

Oct 27, 2021
Xubin Wang, Yunhe Wang, Ka-Chun Wong, Xiangtao Li

(4 figures)

Polarized skylight orientation determination artificial neural network

Jul 06, 2021
Huaju Liang, Hongyang Bai, Ke Hu, Xinbo Lv

(4 figures)

Revisiting Self-Training for Few-Shot Learning of Language Model

Oct 04, 2021
Yiming Chen, Yan Zhang, Chen Zhang, Grandee Lee, Ran Cheng, Haizhou Li

(4 figures)

VATEX Captioning Challenge 2019: Multi-modal Information Fusion and Multi-stage Training Strategy for Video Captioning

Oct 13, 2019
Ziqi Zhang, Yaya Shi, Jiutong Wei, Chunfeng Yuan, Bing Li, Weiming Hu

(2 figures)

Incorporating Reachability Knowledge into a Multi-Spatial Graph Convolution Based Seq2Seq Model for Traffic Forecasting

Jul 04, 2021
Jiexia Ye, Furong Zheng, Juanjuan Zhao, Kejiang Ye, Chengzhong Xu

(4 figures)

PAENet: A Progressive Attention-Enhanced Network for 3D to 2D Retinal Vessel Segmentation

Aug 26, 2021
Zhuojie Wu, Muyi Sun

(4 figures)

Alleviating the transit timing variation bias in transit surveys. I. RIVERS: Method and detection of a pair of resonant super-Earths around Kepler-1705

Nov 12, 2021
A. Leleu, G. Chatel, S. Udry, Y. Alibert, J. -B. Delisle, R. Mardling

(4 figures)

Calibrating the Dice loss to handle neural network overconfidence for biomedical image segmentation

Oct 31, 2021
Michael Yeung, Leonardo Rundo, Yang Nan, Evis Sala, Carola-Bibiane Schönlieb, Guang Yang

(4 figures)

Multiplicative Position-aware Transformer Models for Language Understanding

Sep 27, 2021
Zhiheng Huang, Davis Liang, Peng Xu, Bing Xiang

(4 figures)