"Information": models, code, and papers

An Efficient FPGA Accelerator for Point Cloud

Oct 14, 2022
Zilun Wang, Wendong Mao, Peixiang Yang, Zhongfeng Wang, Jun Lin

HashFormers: Towards Vocabulary-independent Pre-trained Transformers

Oct 14, 2022
Huiyin Xue, Nikolaos Aletras

The Surprisingly Straightforward Scene Text Removal Method With Gated Attention and Region of Interest Generation: A Comprehensive Prominent Model Analysis

Oct 14, 2022
Hyeonsu Lee, Chankyu Choi

BioIE: Biomedical Information Extraction with Multi-head Attention Enhanced Graph Convolutional Network

Oct 26, 2021
Jialun Wu, Yang Liu, Zeyu Gao, Tieliang Gong, Chunbao Wang, Chen Li

Cost-effective photonic super-resolution millimeter-wave joint radar-communication system using self-coherent detection

Oct 09, 2022
Wenlin Bai, Peixuan Li, Xihua Zou, Ningyuan Zhong, Wei Pan, Lianshan Yan, Bin Luo

Let Images Give You More: Point Cloud Cross-Modal Training for Shape Analysis

Oct 09, 2022
Xu Yan, Heshen Zhan, Chaoda Zheng, Jiantao Gao, Ruimao Zhang, Shuguang Cui, Zhen Li

KSAT: Knowledge-infused Self Attention Transformer -- Integrating Multiple Domain-Specific Contexts

Oct 09, 2022
Kaushik Roy, Yuxin Zi, Vignesh Narayanan, Manas Gaur, Amit Sheth

STAR: Zero-Shot Chinese Character Recognition with Stroke- and Radical-Level Decompositions

Oct 16, 2022
Jinshan Zeng, Ruiying Xu, Yu Wu, Hongwei Li, Jiaxing Lu

InTEn-LOAM: Intensity and Temporal Enhanced LiDAR Odometry and Mapping

Sep 13, 2022
Shuaixin Li, Bin Tian, Zhu Xiaozhou, Gui Jianjun, Yao Wen, Guangyun Li

Probing Cross-modal Semantics Alignment Capability from the Textual Perspective

Oct 18, 2022
Zheng Ma, Shi Zong, Mianzhi Pan, Jianbing Zhang, Shujian Huang, Xinyu Dai, Jiajun Chen
