Kuan-Chieh Wang
Variational Model Inversion Attacks

Jan 26, 2022
Kuan-Chieh Wang, Yan Fu, Ke Li, Ashish Khisti, Richard Zemel, Alireza Makhzani


Disentanglement and Generalization Under Correlation Shifts

Dec 29, 2021
Christina M. Funke, Paul Vicol, Kuan-Chieh Wang, Matthias Kümmerer, Richard Zemel, Matthias Bethge


Flexible Few-Shot Learning with Contextual Similarity

Dec 10, 2020
Mengye Ren, Eleni Triantafillou, Kuan-Chieh Wang, James Lucas, Jake Snell, Xaq Pitkow, Andreas S. Tolias, Richard Zemel


Understanding and mitigating exploding inverses in invertible neural networks

Jun 16, 2020
Jens Behrmann, Paul Vicol, Kuan-Chieh Wang, Roger Grosse, Jörn-Henrik Jacobsen


Cutting out the Middle-Man: Training and Evaluating Energy-Based Models without Sampling

Feb 14, 2020
Will Grathwohl, Kuan-Chieh Wang, Jörn-Henrik Jacobsen, David Duvenaud, Richard Zemel


Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One

Dec 11, 2019
Will Grathwohl, Kuan-Chieh Wang, Jörn-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, Kevin Swersky


Lingvo: a Modular and Scalable Framework for Sequence-to-Sequence Modeling

Feb 21, 2019
Jonathan Shen, Patrick Nguyen, Yonghui Wu, Zhifeng Chen, Mia X. Chen, Ye Jia, Anjuli Kannan, Tara Sainath, Yuan Cao, Chung-Cheng Chiu, Yanzhang He, Jan Chorowski, Smit Hinsu, Stella Laurenzo, James Qin, Orhan Firat, Wolfgang Macherey, Suyog Gupta, Ankur Bapna, Shuyuan Zhang, Ruoming Pang, Ron J. Weiss, Rohit Prabhavalkar, Qiao Liang, Benoit Jacob, Bowen Liang, HyoukJoong Lee, Ciprian Chelba, Sébastien Jean, Bo Li, Melvin Johnson, Rohan Anil, Rajat Tibrewal, Xiaobing Liu, Akiko Eriguchi, Navdeep Jaitly, Naveen Ari, Colin Cherry, Parisa Haghani, Otavio Good, Youlong Cheng, Raziel Alvarez, Isaac Caswell, Wei-Ning Hsu, Zongheng Yang, Kuan-Chieh Wang, Ekaterina Gonina, Katrin Tomanek, Ben Vanik, Zelin Wu, Llion Jones, Mike Schuster, Yanping Huang, Dehao Chen, Kazuki Irie, George Foster, John Richardson, Klaus Macherey, Antoine Bruguier, Heiga Zen, Colin Raffel, Shankar Kumar, Kanishka Rao, David Rybach, Matthew Murray, Vijayaditya Peddinti, Maxim Krikun, Michiel A. U. Bacchiani, Thomas B. Jablin, Rob Suderman, Ian Williams, Benjamin Lee, Deepti Bhatia, Justin Carlson, Semih Yavuz, Yu Zhang, Ian McGraw, Max Galkin, Qi Ge, Golan Pundak, Chad Whipkey, Todd Wang, Uri Alon, Dmitry Lepikhin, Ye Tian, Sara Sabour, William Chan, Shubham Toshniwal, Baohua Liao, Michael Nirschl, Pat Rondon


Centroid-based deep metric learning for speaker recognition

Feb 06, 2019
Jixuan Wang, Kuan-Chieh Wang, Marc Law, Frank Rudzicz, Michael Brudno
