
Philip S. Yu

University of Illinois at Chicago

A Survey of Graph Neural Networks in Real world: Imbalance, Noise, Privacy and OOD Challenges

Mar 07, 2024

Against Filter Bubbles: Diversified Music Recommendation via Weighted Hypergraph Embedding Learning

Feb 26, 2024

Overcoming Pitfalls in Graph Contrastive Learning Evaluation: Toward Comprehensive Benchmarks

Feb 24, 2024

Rethinking the Roles of Large Language Models in Chinese Grammatical Error Correction

Feb 18, 2024

Disclosure and Mitigation of Gender Bias in LLMs

Feb 17, 2024

When LLMs Meet Cunning Questions: A Fallacy Understanding Benchmark for Large Language Models

Feb 16, 2024

Confidence-aware Fine-tuning of Sequential Recommendation Systems via Conformal Prediction

Feb 14, 2024

Rec-GPT4V: Multimodal Recommendation with Large Vision-Language Models

Feb 13, 2024

TrustLLM: Trustworthiness in Large Language Models

Jan 25, 2024

Multitask Active Learning for Graph Anomaly Detection

Jan 24, 2024