Ed H. Chi

Improving Training Stability for Multitask Ranking Models in Recommender Systems

Feb 17, 2023
Jiaxi Tang, Yoel Drori, Daryl Chang, Maheswaran Sathiamoorthy, Justin Gilmer, Li Wei, Xinyang Yi, Lichan Hong, Ed H. Chi

Latent User Intent Modeling for Sequential Recommenders

Nov 17, 2022
Bo Chang, Alexandros Karatzoglou, Yuyan Wang, Can Xu, Ed H. Chi, Minmin Chen

Empowering Long-tail Item Recommendation through Cross Decoupling Network (CDN)

Oct 25, 2022
Yin Zhang, Ruoxi Wang, Derek Zhiyuan Cheng, Tiansheng Yao, Xinyang Yi, Lichan Hong, James Caverlee, Ed H. Chi

Scaling Instruction-Finetuned Language Models

Oct 20, 2022
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, Jason Wei

Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them

Oct 17, 2022
Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, Jason Wei

Simpson's Paradox in Recommender Fairness: Reconciling differences between per-user and aggregated evaluations

Oct 14, 2022
Flavien Prost, Ben Packer, Jilin Chen, Li Wei, Pierre Kremp, Nicholas Blumm, Susan Wang, Tulsee Doshi, Tonia Osadebe, Lukasz Heldt, Ed H. Chi, Alex Beutel

Reward Shaping for User Satisfaction in a REINFORCE Recommender

Sep 30, 2022
Konstantina Christakopoulou, Can Xu, Sai Zhang, Sriraj Badam, Trevor Potter, Daniel Li, Hao Wan, Xinyang Yi, Ya Le, Chris Berg, Eric Bencomo Dixon, Ed H. Chi, Minmin Chen

Emergent Abilities of Large Language Models

Jun 15, 2022
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, William Fedus

Improving Multi-Task Generalization via Regularizing Spurious Correlation

May 19, 2022
Ziniu Hu, Zhe Zhao, Xinyang Yi, Tiansheng Yao, Lichan Hong, Yizhou Sun, Ed H. Chi

Learning to Augment for Casual User Recommendation

Apr 02, 2022
Jianling Wang, Ya Le, Bo Chang, Yuyan Wang, Ed H. Chi, Minmin Chen
