Georgios Chochlakis

Signal Analysis and Interpretation Lab, University of Southern California; Information Sciences Institute, University of Southern California

A Multi-Perspective Machine Learning Approach to Evaluate Police-Driver Interaction in Los Angeles

Feb 08, 2024
Benjamin A. T. Graham, Lauren Brown, Georgios Chochlakis, Morteza Dehghani, Raquel Delerme, Brittany Friedman, Ellie Graeden, Preni Golazizian, Rajat Hebbar, Parsa Hejabi, Aditya Kommineni, Mayagüez Salinas, Michael Sierra-Arévalo, Jackson Trager, Nicholas Weller, Shrikanth Narayanan

Using Emotion Embeddings to Transfer Knowledge Between Emotions, Languages, and Annotation Formats

Oct 31, 2022
Georgios Chochlakis, Gireesh Mahajan, Sabyasachee Baruah, Keith Burghardt, Kristina Lerman, Shrikanth Narayanan

Leveraging Label Correlations in a Multi-label Setting: A Case Study in Emotion

Oct 28, 2022
Georgios Chochlakis, Gireesh Mahajan, Sabyasachee Baruah, Keith Burghardt, Kristina Lerman, Shrikanth Narayanan

VAuLT: Augmenting the Vision-and-Language Transformer with the Propagation of Deep Language Representations

Aug 18, 2022
Georgios Chochlakis, Tejas Srinivasan, Jesse Thomason, Shrikanth Narayanan

CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks

Jun 18, 2022
Tejas Srinivasan, Ting-Yun Chang, Leticia Leonor Pinto Alva, Georgios Chochlakis, Mohammad Rostami, Jesse Thomason

End-to-end Generative Zero-shot Learning via Few-shot Learning

Feb 08, 2021
Georgios Chochlakis, Efthymios Georgiou, Alexandros Potamianos
