Wei Wang

School of Physics and Astronomy, State Key Laboratory of Dark Matter Physics, and Tsung-Dao Lee Institute, Shanghai Jiao Tong University

Arbitrary Reading Order Scene Text Spotter with Local Semantics Guidance

Dec 13, 2024

ScaleOT: Privacy-utility-scalable Offsite-tuning with Dynamic LayerReplace and Selective Rank Compression

Dec 13, 2024

Protecting Confidentiality, Privacy and Integrity in Collaborative Learning

Dec 11, 2024

Detecting Conversational Mental Manipulation with Intent-Aware Prompting

Dec 11, 2024

A High Energy-Efficiency Multi-core Neuromorphic Architecture for Deep SNN Training

Dec 10, 2024

TT-MPD: Test Time Model Pruning and Distillation

Dec 10, 2024

Fully Open Source Moxin-7B Technical Report

Dec 08, 2024

IMPACT: InMemory ComPuting Architecture Based on Y-FlAsh Technology for Coalesced Tsetlin Machine Inference

Dec 04, 2024

Measure Anything: Real-time, Multi-stage Vision-based Dimensional Measurement using Segment Anything

Dec 04, 2024

Does Few-Shot Learning Help LLM Performance in Code Synthesis?

Dec 03, 2024