
Liqun Chen

Powerful Lossy Compression for Noisy Images

Mar 26, 2024

Jointly Optimizing Image Compression with Low-light Image Enhancement

May 24, 2023

Understanding and Constructing Latent Modality Structures in Multi-modal Representation Learning

Mar 10, 2023

High-Fidelity Variable-Rate Image Compression via Invertible Activation Transformation

Sep 12, 2022

Vision-Language Pre-Training with Triple Contrastive Learning

Mar 28, 2022

Multi-modal Alignment using Representation Codebook

Mar 28, 2022

Learning Oriented Remote Sensing Object Detection via Naive Geometric Computing

Dec 01, 2021

Simpler, Faster, Stronger: Breaking The log-K Curse On Contrastive Learners With FlatNCE

Jul 02, 2021

Securing emergent behaviour in swarm robotics

Feb 05, 2021

Wasserstein Contrastive Representation Distillation

Dec 15, 2020