"Information": models, code, and papers

DOMINO: Domain-invariant Hyperdimensional Classification for Multi-Sensor Time Series Data

Aug 18, 2023
Junyao Wang, Luke Chen, Mohammad Abdullah Al Faruque

Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models

Aug 18, 2023
Dohwan Ko, Ji Soo Lee, Miso Choi, Jaewon Chu, Jihwan Park, Hyunwoo J. Kim

NAPA-VQ: Neighborhood Aware Prototype Augmentation with Vector Quantization for Continual Learning

Aug 18, 2023
Tamasha Malepathirana, Damith Senanayake, Saman Halgamuge

Refining Human-Centered Autonomy Using Side Information

May 09, 2023
Adam J. Thorpe

PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization

Aug 15, 2023
Junhyeong Cho, Gilhyun Nam, Sungyeon Kim, Hunmin Yang, Suha Kwak

Teach LLMs to Personalize -- An Approach inspired by Writing Education

Aug 15, 2023
Cheng Li, Mingyang Zhang, Qiaozhu Mei, Yaqing Wang, Spurthi Amba Hombaiah, Yi Liang, Michael Bendersky

Freshness or Accuracy, Why Not Both? Addressing Delayed Feedback via Dynamic Graph Neural Networks

Aug 15, 2023
Xiaolin Zheng, Zhongyu Wang, Chaochao Chen, Feng Zhu, Jiashu Qian

CFN-ESA: A Cross-Modal Fusion Network with Emotion-Shift Awareness for Dialogue Emotion Recognition

Jul 28, 2023
Jiang Li, Yingjian Liu, Xiaoping Wang, Zhigang Zeng

Beyond First Impressions: Integrating Joint Multi-modal Cues for Comprehensive 3D Representation

Aug 06, 2023
Haowei Wang, Jiji Tang, Jiayi Ji, Xiaoshuai Sun, Rongsheng Zhang, Yiwei Ma, Minda Zhao, Lincheng Li, Zeng Zhao, Tangjie Lv, Rongrong Ji

Auditory Representation Effective for Estimating Vocal Tract Information

Jun 02, 2023
Toshio Irino, Shintaro Doan