Xiang Hu

Generative Pretrained Structured Transformers: Unsupervised Syntactic Language Models at Scale

Mar 18, 2024
Xiang Hu, Pengyu Ji, Qingyang Zhu, Wei Wu, Kewei Tu

Augmenting transformers with recursively composed multi-grained representations

Sep 28, 2023
Xiang Hu, Qingyang Zhu, Kewei Tu, Wei Wu

A Multi-Grained Self-Interpretable Symbolic-Neural Model For Single/Multi-Labeled Text Classification

Mar 06, 2023
Xiang Hu, Xinyu Kong, Kewei Tu

Fast-R2D2: A Pretrained Recursive Neural Network based on Pruned CKY for Grammar Induction and Text Representation

Mar 01, 2022
Xiang Hu, Haitao Mi, Liang Li, Gerard de Melo

R2D2: Recursive Transformer based on Differentiable Tree for Interpretable Hierarchical Language Modeling

Jul 02, 2021
Xiang Hu, Haitao Mi, Zujie Wen, Yafang Wang, Yi Su, Jing Zheng, Gerard de Melo

Interactive Question Clarification in Dialogue via Reinforcement Learning

Dec 17, 2020
Xiang Hu, Zujie Wen, Yafang Wang, Xiaolong Li, Gerard de Melo