Bo Dong

GraphPub: Generation of Differential Privacy Graph with High Availability

Mar 05, 2024
Wanghan Xu, Bin Shi, Ao Liu, Jiqiang Zhang, Bo Dong

Homography Initialization and Dynamic Weighting Algorithm Based on a Downward-Looking Camera and IMU

Nov 16, 2023
Bo Dong, Yongkang Tao, Deng Peng, Zhigang Fu

Efficient LLM Inference on CPUs

Nov 01, 2023
Haihao Shen, Hanwen Chang, Bo Dong, Yu Luo, Hengyu Meng

A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks

Oct 10, 2023
Yang Wang, Bo Dong, Ke Xu, Haiyin Piao, Yufei Ding, Baocai Yin, Xin Yang

Compressing Context to Enhance Inference Efficiency of Large Language Models

Oct 09, 2023
Yucheng Li, Bo Dong, Chenghua Lin, Frank Guerin

In the Blink of an Eye: Event-based Emotion Recognition

Oct 06, 2023
Haiwei Zhang, Jiqing Zhang, Bo Dong, Pieter Peers, Wenwei Wu, Xiaopeng Wei, Felix Heide, Xin Yang

Event-Enhanced Multi-Modal Spiking Neural Network for Dynamic Obstacle Avoidance

Oct 03, 2023
Yang Wang, Bo Dong, Yuji Zhang, Yunduo Zhou, Haiyang Mei, Ziqi Wei, Xin Yang

A Unified Query-based Paradigm for Camouflaged Instance Segmentation

Aug 29, 2023
Bo Dong, Jialun Pei, Rongrong Gao, Tian-Zhu Xiang, Shuo Wang, Huan Xiong

An Efficient Sparse Inference Software Accelerator for Transformer-based Language Models on CPUs

Jun 28, 2023
Haihao Shen, Hengyu Meng, Bo Dong, Zhe Wang, Ofir Zafrir, Yi Ding, Yu Luo, Hanwen Chang, Qun Gao, Ziheng Wang, Guy Boudoukh, Moshe Wasserblat

A Low-rank Matching Attention based Cross-modal Feature Fusion Method for Conversational Emotion Recognition

Jun 16, 2023
Yuntao Shou, Xiangyong Cao, Deyu Meng, Bo Dong, Qinghua Zheng
