
Xing Hu

CodeV: Empowering LLMs for Verilog Generation through Multi-Level Summarization

Jul 16, 2024

TensorTEE: Unifying Heterogeneous TEE Granularity for Efficient Secure Collaborative Tensor Computing

Jul 12, 2024

InverseCoder: Unleashing the Power of Instruction-Tuned Code LLMs with Inverse-Instruct

Jul 08, 2024

NLPerturbator: Studying the Robustness of Code LLMs to Natural Language Variations

Jun 28, 2024

Adversarial Contrastive Decoding: Boosting Safety Alignment of Large Language Models via Opposite Prompt Optimization

Jun 24, 2024

Prompt-based Visual Alignment for Zero-shot Policy Transfer

Jun 05, 2024

Enhancing Repository-Level Code Generation with Integrated Contextual Information

Jun 05, 2024

PillarHist: A Quantization-aware Pillar Feature Encoder based on Height-aware Histogram

May 29, 2024

I-LLM: Efficient Integer-Only Inference for Fully-Quantized Low-Bit Large Language Models

May 28, 2024

Luban: Building Open-Ended Creative Agents via Autonomous Embodied Verification

May 24, 2024