"Information": models, code, and papers

Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance

Jan 25, 2023
Guijin Son, Hanwool Lee, Nahyeon Kang, Moonjeong Hahm

Figures 1–4

The Linear Capacity of Single-Server Individually-Private Information Retrieval with Side Information

Feb 24, 2022
Anoosheh Heidarzadeh, Alex Sprintson


Learning Transformations To Reduce the Geometric Shift in Object Detection

Jan 13, 2023
Vidit Vidit, Martin Engilberge, Mathieu Salzmann

Figures 1–4

MASTER: Multi-task Pre-trained Bottlenecked Masked Autoencoders are Better Dense Retrievers

Dec 15, 2022
Kun Zhou, Xiao Liu, Yeyun Gong, Wayne Xin Zhao, Daxin Jiang, Nan Duan, Ji-Rong Wen

Figures 1–4

ROIFormer: Semantic-Aware Region of Interest Transformer for Efficient Self-Supervised Monocular Depth Estimation

Dec 16, 2022
Daitao Xing, Jinglin Shen, Chiuman Ho, Anthony Tzes

Figures 1–4

SPTS v2: Single-Point Scene Text Spotting

Jan 04, 2023
Yuliang Liu, Jiaxin Zhang, Dezhi Peng, Mingxin Huang, Xinyu Wang, Jingqun Tang, Can Huang, Dahua Lin, Chunhua Shen, Xiang Bai, Lianwen Jin

Figures 1–4

Fully Complex-valued Fully Convolutional Multi-feature Fusion Network (FC2MFN) for Building Segmentation of InSAR images

Dec 14, 2022
Aniruddh Sikdar, Sumanth Udupa, Suresh Sundaram, Narasimhan Sundararajan

Figures 1–4

Edge-Assisted V2X Motion Planning and Power Control Under Channel Uncertainty

Dec 13, 2022
Zongze Li, Shuai Wang, Shiyao Zhang, Miaowen Wen, Kejiang Ye, Yik-Chung Wu, Derrick Wing Kwan Ng

Figures 1–4

Improving Performance of Object Detection using the Mechanisms of Visual Recognition in Humans

Jan 23, 2023
Amir Ghasemi, Fatemeh Mottaghian, Akram Bayat

Figures 1–4

Triplet Contrastive Learning for Unsupervised Vehicle Re-identification

Jan 23, 2023
Fei Shen, Xiaoyu Du, Liyan Zhang, Jinhui Tang

Figures 1–4