"Information": models, code, and papers

Towards Olfactory Information Extraction from Text: A Case Study on Detecting Smell Experiences in Novels

Nov 17, 2020
Ryan Brate, Paul Groth, Marieke van Erp

Focused Attention Improves Document-Grounded Generation

Apr 26, 2021
Shrimai Prabhumoye, Kazuma Hashimoto, Yingbo Zhou, Alan W Black, Ruslan Salakhutdinov

On the long-term learning ability of LSTM LMs

Jun 16, 2021
Wim Boes, Robbe Van Rompaey, Lyan Verwimp, Joris Pelemans, Hugo Van hamme, Patrick Wambacq

A Topological-Framework to Improve Analysis of Machine Learning Model Performance

Jul 09, 2021
Henry Kvinge, Colby Wight, Sarah Akers, Scott Howland, Woongjo Choi, Xiaolong Ma, Luke Gosink, Elizabeth Jurrus, Keerti Kappagantula, Tegan H. Emerson

Should Answer Immediately or Wait for Further Information? A Novel Wait-or-Answer Task and Its Predictive Approach

May 27, 2020
Zehao Lin, Shaobo Cui, Xiaoming Kang, Guodun Li, Feng Ji, Haiqing Chen, Yin Zhang

Generating Diversified Comments via Reader-Aware Topic Modeling and Saliency Detection

Feb 13, 2021
Wei Wang, Piji Li, Hai-Tao Zheng

Do Syntactic Probes Probe Syntax? Experiments with Jabberwocky Probing

Jun 04, 2021
Rowan Hall Maudslay, Ryan Cotterell

MST: Masked Self-Supervised Transformer for Visual Representation

Jun 10, 2021
Zhaowen Li, Zhiyang Chen, Fan Yang, Wei Li, Yousong Zhu, Chaoyang Zhao, Rui Deng, Liwei Wu, Rui Zhao, Ming Tang, Jinqiao Wang

Deep Learning-based Biological Anatomical Landmark Detection in Colonoscopy Videos

Aug 06, 2021
Kaiwei Che, Chengwei Ye, Yibing Yao, Nachuan Ma, Ruo Zhang, Jiankun Wang, Max Q. -H. Meng

Dual-Attention Enhanced BDense-UNet for Liver Lesion Segmentation

Jul 24, 2021
Wenming Cao, Philip L. H. Yu, Gilbert C. S. Lui, Keith W. H. Chiu, Ho-Ming Cheng, Yanwen Fang, Man-Fung Yuen, Wai-Kay Seto
