Haojie Zhang

A Unified Label-Aware Contrastive Learning Framework for Few-Shot Named Entity Recognition

Apr 26, 2024

Samsung Research China-Beijing at SemEval-2024 Task 3: A multi-stage framework for Emotion-Cause Pair Extraction in Conversations

Apr 25, 2024

Sur2f: A Hybrid Representation for High-Quality and Efficient Surface Reconstruction from Multi-view Images

Jan 08, 2024

Improving the Generalization of Segmentation Foundation Model under Distribution Shift via Weakly Supervised Adaptation

Dec 06, 2023

Fine-Tuning Pre-Trained Language Models Effectively by Optimizing Subnetworks Adaptively

Nov 03, 2022

Improve Transformer Pre-Training with Decoupled Directional Relative Position Encoding and Representation Differentiations

Oct 09, 2022

Hybrid Robotic-assisted Frameworks for Endomicroscopy Scanning in Retinal Surgeries

Sep 15, 2019