Zixian Ma

m&m's: A Benchmark to Evaluate Tool-Use for multi-step multi-modal Tasks

Mar 21, 2024
Zixian Ma, Weikai Huang, Jieyu Zhang, Tanmay Gupta, Ranjay Krishna


SugarCrepe: Fixing Hackable Benchmarks for Vision-Language Compositionality

Jun 26, 2023
Cheng-Yu Hsieh, Jieyu Zhang, Zixian Ma, Aniruddha Kembhavi, Ranjay Krishna


Model Sketching: Centering Concepts in Early-Stage Machine Learning Model Design

Mar 06, 2023
Michelle S. Lam, Zixian Ma, Anne Li, Izequiel Freitas, Dakuo Wang, James A. Landay, Michael S. Bernstein


CREPE: Can Vision-Language Foundation Models Reason Compositionally?

Dec 13, 2022
Zixian Ma, Jerry Hong, Mustafa Omer Gul, Mona Gandhi, Irena Gao, Ranjay Krishna


ELIGN: Expectation Alignment as a Multi-Agent Intrinsic Reward

Oct 09, 2022
Zixian Ma, Rose Wang, Li Fei-Fei, Michael Bernstein, Ranjay Krishna


MobilePhys: Personalized Mobile Camera-Based Contactless Physiological Sensing

Jan 11, 2022
Xin Liu, Yuntao Wang, Sinan Xie, Xiaoyu Zhang, Zixian Ma, Daniel McDuff, Shwetak Patel
