
Georgios Chochlakis

Signal Analysis and Interpretation Lab and Information Sciences Institute, University of Southern California

The Strong Pull of Prior Knowledge in Large Language Models and Its Impact on Emotion Recognition

Mar 25, 2024

A Multi-Perspective Machine Learning Approach to Evaluate Police-Driver Interaction in Los Angeles

Feb 08, 2024

Using Emotion Embeddings to Transfer Knowledge Between Emotions, Languages, and Annotation Formats

Oct 31, 2022

Leveraging Label Correlations in a Multi-label Setting: A Case Study in Emotion

Oct 28, 2022

VAuLT: Augmenting the Vision-and-Language Transformer with the Propagation of Deep Language Representations

Aug 18, 2022

CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks

Jun 18, 2022

End-to-end Generative Zero-shot Learning via Few-shot Learning

Feb 08, 2021