Yi Zeng

TEFormer: Structured Bidirectional Temporal Enhancement Modeling in Spiking Transformers

Jan 26, 2026

CogToM: A Comprehensive Theory of Mind Benchmark inspired by Human Cognition for Large Language Models

Jan 22, 2026

TiMem: Temporal-Hierarchical Memory Consolidation for Long-Horizon Conversational Agents

Jan 06, 2026

Towards Reliable Evaluation of Adversarial Robustness for Spiking Neural Networks

Dec 27, 2025

Efficient LLM Safety Evaluation through Multi-Agent Debate

Nov 09, 2025

MVPBench: A Benchmark and Fine-Tuning Framework for Aligning Large Language Models with Diverse Human Values

Sep 09, 2025

The Singapore Consensus on Global AI Safety Research Priorities

Jun 25, 2025

PandaGuard: Systematic Evaluation of LLM Safety against Jailbreaking Attacks

May 22, 2025

STEP: A Unified Spiking Transformer Evaluation Platform for Fair and Reproducible Benchmarking

May 16, 2025

Incorporating brain-inspired mechanisms for multimodal learning in artificial intelligence

May 15, 2025