The hierarchical structure of syntax is fundamental to the systematic nature of human language. This study investigates the premise that language models, specifically their attention distributions, can encode syntactic dependencies. We introduce Dynamic Syntax Mapping (DSM), an approach for inducing these structures without the predefined annotation schemata on which traditional syntax models rely. Instead, we exploit a core property of dependency relations: syntactic substitutability, the interchangeability of words of the same syntactic category at either end of a dependency. Leveraging this property, we generate a collection of syntactically invariant sentences, which serve as the foundation for our parsing framework. Our findings reveal that using a larger number of substitutions notably improves parsing accuracy on natural language data, and that on long-distance subject-verb agreement DSM substantially outperforms prior methods. DSM's adaptability is further demonstrated by its successful application to varied parsing scenarios, underscoring its broad applicability.
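As a rough illustration of the substitutability idea, the sketch below averages attention maps over a sentence and its aligned substitution variants and reads off an unlabeled tree as a maximum spanning tree. The function name, the averaging step, and the MST decoding are our own illustrative assumptions, not the published DSM procedure.

```python
import numpy as np

def induce_dependency_tree(attention_maps):
    """Average attention over aligned sentence variants, then extract an
    undirected maximum spanning tree as the predicted (unlabeled) parse."""
    scores = np.mean(attention_maps, axis=0)   # (n, n) averaged attention
    scores = (scores + scores.T) / 2.0         # symmetrize: undirected arcs

    n = scores.shape[0]
    in_tree = {0}                              # grow the tree from token 0
    edges = []
    while len(in_tree) < n:                    # Prim's algorithm, maximizing
        best = None
        for i in in_tree:
            for j in range(n):
                if j not in in_tree and (
                    best is None or scores[i, j] > scores[best[0], best[1]]
                ):
                    best = (i, j)
        edges.append(best)
        in_tree.add(best[1])
    return edges                               # (token_i, token_j) arc pairs

# toy usage: attention maps for a 4-token sentence and two substitution variants
rng = np.random.default_rng(0)
maps = [rng.random((4, 4)) for _ in range(3)]
print(induce_dependency_tree(maps))
```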
Aspect-based sentiment analysis (ABSA) seeks to determine the sentiment expressed toward specific aspect terms in text. Traditional approaches often overlook or inadequately model the explicit syntactic structure of sentences, which is crucial for identifying aspect terms and determining their sentiment. Addressing this gap, we introduce the Syntactic Dependency Enhanced Multi-Task Interaction Architecture (SDEMTIA) for comprehensive ABSA. Our approach exploits syntactic knowledge (dependency relations and their types) through a specialized Syntactic Dependency Embedded Interactive Network (SDEIN), and incorporates a novel, efficient message-passing mechanism within a multi-task learning framework to improve learning efficacy. Extensive experiments on benchmark datasets show that our model significantly surpasses existing methods; incorporating BERT as an auxiliary feature extractor further improves performance.
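A minimal sketch of the kind of relation-aware message passing the abstract describes is shown below; it follows the R-GCN pattern in which each dependency relation type has its own transformation. The class name, dimensions, and edge encoding are illustrative assumptions, not the published SDEIN architecture.

```python
import torch
import torch.nn as nn

class RelationAwareLayer(nn.Module):
    """One message-passing step in which each dependency relation type
    (nsubj, amod, ...) gets its own weight matrix, so messages carry
    relation-type information (an R-GCN-style approximation)."""
    def __init__(self, dim: int, num_relations: int):
        super().__init__()
        self.rel_weights = nn.Parameter(torch.randn(num_relations, dim, dim) * 0.02)
        self.self_loop = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor, edges: list) -> torch.Tensor:
        # h: (n, dim) token states; edges: (head, dependent, relation_id) triples
        heads = torch.tensor([e[0] for e in edges])
        deps = torch.tensor([e[1] for e in edges])
        rels = torch.tensor([e[2] for e in edges])
        # one message per arc, transformed by its relation-specific matrix
        msgs = torch.einsum('ed,edk->ek', h[heads], self.rel_weights[rels])
        agg = torch.zeros_like(h).index_add(0, deps, msgs)
        return torch.relu(self.self_loop(h) + agg)

# toy usage: 5 tokens, 3 relation types, arcs taken from a dependency parse
layer = RelationAwareLayer(dim=16, num_relations=3)
h = torch.randn(5, 16)
edges = [(1, 0, 0), (1, 3, 1), (3, 4, 2)]
print(layer(h, edges).shape)  # torch.Size([5, 16])
```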
Recent progress in aspect-level sentiment classification has been propelled by graph neural networks (GNNs) that leverage syntactic structure, particularly dependency trees. The performance of these models, however, is often limited by the inherent inaccuracies of parsing algorithms. To mitigate this, we introduce SynthFusion, a graph ensemble method that merges the predictions of multiple parsers into a single graph before applying a GNN, improving robustness to parsing errors without additional computational cost. By enriching graph connectivity rather than stacking GNN layers, SynthFusion avoids overparameterization and reduces the risk of overfitting. Empirical evaluations on the SemEval14 and Twitter14 datasets show that SynthFusion outperforms both models built on a single dependency tree and alternative ensemble techniques, without any increase in model complexity.
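One simple way to realize such a pre-GNN ensemble, assuming agreement-weighted arcs (our assumption; the paper's exact fusion rule may differ), is to merge the head predictions of several parsers into a single weighted adjacency matrix:

```python
import numpy as np

def ensemble_graph(parses, n_tokens):
    """Merge head indices predicted by several parsers into one adjacency
    matrix: each arc is weighted by the fraction of parsers proposing it."""
    adj = np.zeros((n_tokens, n_tokens))
    for heads in parses:                 # heads[i] = head of token i (-1 = root)
        for dep, head in enumerate(heads):
            if head >= 0:
                adj[head, dep] += 1.0
                adj[dep, head] += 1.0    # GNNs typically use symmetric edges
    adj /= len(parses)
    np.fill_diagonal(adj, 1.0)           # self-loops, as in standard GCNs
    return adj

# toy usage: three parsers' head predictions for a 5-token sentence
parses = [[1, -1, 1, 1, 3], [1, -1, 1, 1, 1], [1, -1, 3, 1, 3]]
print(ensemble_graph(parses, 5))
```

Arcs on which the parsers agree receive full weight, while disputed arcs are down-weighted rather than discarded, which is what makes the merged graph more robust to any single parser's errors.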