
Nickil Maveli

Can LLMs Compress (and Decompress)? Evaluating Code Understanding and Execution via Invertibility

Jan 19, 2026

What can Large Language Models Capture about Code Functional Equivalence?

Aug 20, 2024

Co-training an Unsupervised Constituency Parser with Weak Supervision

Oct 05, 2021

EdinburghNLP at WNUT-2020 Task 2: Leveraging Transformers with Generalized Augmentation for Identifying Informativeness in COVID-19 Tweets

Oct 08, 2020