Reto Gubelmann

University of St.Gallen

Uncovering More Shallow Heuristics: Probing the Natural Language Inference Capacities of Transformer-Based Pre-Trained Language Models Using Syllogistic Patterns

Jan 19, 2022
Reto Gubelmann, Siegfried Handschuh

Exploring the Promises of Transformer-Based LMs for the Representation of Normative Claims in the Legal Domain

Aug 25, 2021
Reto Gubelmann, Peter Hongler, Siegfried Handschuh
