Ruoxi Jia

The Mirrored Influence Hypothesis: Efficient Data Influence Estimation by Harnessing Forward Passes

Feb 14, 2024
Myeongseob Ko, Feiyang Kang, Weiyan Shi, Ming Jin, Zhou Yu, Ruoxi Jia

How Johnny Can Persuade LLMs to Jailbreak Them: Rethinking Persuasion to Challenge AI Safety by Humanizing LLMs

Jan 23, 2024
Yi Zeng, Hongpeng Lin, Jingwen Zhang, Diyi Yang, Ruoxi Jia, Weiyan Shi

Efficient Data Shapley for Weighted Nearest Neighbor Algorithms

Jan 20, 2024
Jiachen T. Wang, Prateek Mittal, Ruoxi Jia

Data Acquisition: A New Frontier in Data-centric AI

Nov 22, 2023
Lingjiao Chen, Bilge Acun, Newsha Ardalani, Yifan Sun, Feiyang Kang, Hanrui Lyu, Yongchan Kwon, Ruoxi Jia, Carole-Jean Wu, Matei Zaharia, James Zou

Learning to Rank for Active Learning via Multi-Task Bilevel Optimization

Oct 25, 2023
Zixin Ding, Si Chen, Ruoxi Jia, Yuxin Chen

Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!

Oct 05, 2023
Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, Peter Henderson

Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study

Sep 29, 2023
Myeongseob Ko, Ming Jin, Chenguang Wang, Ruoxi Jia

Threshold KNN-Shapley: A Linear-Time and Privacy-Friendly Approach to Data Valuation

Aug 30, 2023
Jiachen T. Wang, Yuqing Zhu, Yu-Xiang Wang, Ruoxi Jia, Prateek Mittal

Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models

Aug 20, 2023
Bilgehan Sel, Ahmad Al-Tawaha, Vanshaj Khattar, Lu Wang, Ruoxi Jia, Ming Jin
