Sunit Bhattacharya

Unveiling Multilinguality in Transformer Models: Exploring Language Specificity in Feed-Forward Networks

Oct 24, 2023
Sunit Bhattacharya, Ondrej Bojar

Multimodal Shannon Game with Images

Mar 20, 2023
Vilém Zouhar, Sunit Bhattacharya, Ondřej Bojar

Sentence Ambiguity, Grammaticality and Complexity Probes

Oct 15, 2022
Sunit Bhattacharya, Vilém Zouhar, Ondřej Bojar

Team ÚFAL at CMCL 2022 Shared Task: Figuring out the correct recipe for predicting Eye-Tracking features using Pretrained Language Models

Apr 11, 2022
Sunit Bhattacharya, Rishu Kumar, Ondrej Bojar

EMMT: A simultaneous eye-tracking, 4-electrode EEG and audio corpus for multi-modal reading and translation scenarios

Apr 06, 2022
Sunit Bhattacharya, Věra Kloudová, Vilém Zouhar, Ondřej Bojar
