
Martin Kuo

T2S-Bench & Structure-of-Thought: Benchmarking and Prompting Comprehensive Text-to-Structure Reasoning

Mar 04, 2026

CoreMatching: A Co-adaptive Sparse Inference Framework with Token and Neuron Pruning for Comprehensive Acceleration of Vision-Language Models

May 25, 2025

Proactive Privacy Amnesia for Large Language Models: Safeguarding PII with Negligible Impact on Model Utility

Feb 24, 2025

H-CoT: Hijacking the Chain-of-Thought Safety Reasoning Mechanism to Jailbreak Large Reasoning Models, Including OpenAI o1/o3, DeepSeek-R1, and Gemini 2.0 Flash Thinking

Feb 18, 2025

Min-K%++: Improved Baseline for Detecting Pre-Training Data from Large Language Models

Apr 03, 2024

DACBERT: Leveraging Dependency Agreement for Cost-Efficient Bert Pretraining

Nov 08, 2023

Towards Building the Federated GPT: Federated Instruction Tuning

May 09, 2023

Tag and Correct: Question aware Open Information Extraction with Two-stage Decoding

Sep 16, 2020