Qiao Liang

ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases

Jun 08, 2023
Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han, Qiao Liang, Le Sun

A Language Agnostic Multilingual Streaming On-Device ASR System

Aug 29, 2022
Bo Li, Tara N. Sainath, Ruoming Pang, Shuo-yiin Chang, Qiumin Xu, Trevor Strohman, Vince Chen, Qiao Liang, Heguang Liu, Yanzhang He, Parisa Haghani, Sameer Bidichandani

Streaming Intended Query Detection using E2E Modeling for Continued Conversation

Aug 29, 2022
Shuo-yiin Chang, Guru Prakash, Zelin Wu, Qiao Liang, Tara N. Sainath, Bo Li, Adam Stambler, Shyam Upadhyay, Manaal Faruqui, Trevor Strohman

Turn-Taking Prediction for Natural Conversational Speech

Aug 29, 2022
Shuo-yiin Chang, Bo Li, Tara N. Sainath, Chao Zhang, Trevor Strohman, Qiao Liang, Yanzhang He

TRIE++: Towards End-to-End Information Extraction from Visually Rich Documents

Jul 14, 2022
Zhanzhan Cheng, Peng Zhang, Can Li, Qiao Liang, Yunlu Xu, Pengfei Li, Shiliang Pu, Yi Niu, Fei Wu

A Unified Cascaded Encoder ASR Model for Dynamic Model Sizes

Apr 20, 2022
Shaojin Ding, Weiran Wang, Ding Zhao, Tara N. Sainath, Yanzhang He, Robert David, Rami Botros, Xin Wang, Rina Panigrahy, Qiao Liang, Dongseong Hwang, Ian McGraw, Rohit Prabhavalkar, Trevor Strohman

Personal VAD 2.0: Optimizing Personal Voice Activity Detection for On-Device Speech Recognition

Apr 13, 2022
Shaojin Ding, Rajeev Rikhye, Qiao Liang, Yanzhang He, Quan Wang, Arun Narayanan, Tom O'Malley, Ian McGraw

Closing the Gap between Single-User and Multi-User VoiceFilter-Lite

Feb 24, 2022
Rajeev Rikhye, Quan Wang, Qiao Liang, Yanzhang He, Ian McGraw

Multi-user VoiceFilter-Lite via Attentive Speaker Embedding

Jul 02, 2021
Rajeev Rikhye, Quan Wang, Qiao Liang, Yanzhang He, Ian McGraw
