
Jizhong Han

RecBundle: A Next-Generation Geometric Paradigm for Explainable Recommender Systems

Mar 17, 2026

A Cognitive Distribution and Behavior-Consistent Framework for Black-Box Attacks on Recommender Systems

Feb 12, 2026

Re-Align: Structured Reasoning-guided Alignment for In-Context Image Generation and Editing

Jan 08, 2026

Exploiting Synergistic Cognitive Biases to Bypass Safety in LLMs

Jul 30, 2025

Paper Summary Attack: Jailbreaking LLMs through LLM Safety Papers

Jul 17, 2025

Align Beyond Prompts: Evaluating World Knowledge Alignment in Text-to-Image Generation

May 24, 2025

LyapLock: Bounded Knowledge Preservation in Sequential Large Language Model Editing

May 21, 2025

Anchor3DLane++: 3D Lane Detection via Sample-Adaptive Sparse 3D Anchor Regression

Dec 22, 2024

Multimodal Music Generation with Explicit Bridges and Retrieval Augmentation

Dec 12, 2024

The Dark Side of Trust: Authority Citation-Driven Jailbreak Attacks on Large Language Models

Nov 18, 2024