VQ VAE


The vector-quantized variational autoencoder (VQ-VAE) is a generative model that learns discrete latent representations via vector quantization: the encoder's continuous output is mapped to the nearest entry in a learned codebook, and gradients are passed through the non-differentiable lookup with a straight-through estimator.
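The quantization step described above can be sketched as follows. This is a minimal, hypothetical illustration (not any specific paper's implementation): each latent vector is replaced by its nearest codebook entry under squared L2 distance, yielding a discrete code index per latent.

```python
import numpy as np

def quantize(z, codebook):
    """Map each row of z (N, D) to its nearest codebook vector (K, D).

    Returns the quantized latents and the chosen discrete code indices.
    """
    # squared L2 distance between every latent and every codebook entry
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)   # (N,) discrete codes
    z_q = codebook[indices]          # (N, D) quantized latents
    return z_q, indices

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # K=8 codes of dimension D=4
z = rng.normal(size=(3, 4))          # 3 encoder output vectors
z_q, idx = quantize(z, codebook)
```

During training, the lookup itself has no gradient, so VQ-VAE copies gradients from the quantized latents back to the encoder (in autograd frameworks, `z_q = z + stop_gradient(z_q - z)`) and adds codebook and commitment loss terms to pull the codebook and encoder outputs toward each other.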

Learning Enhanced Structural Representations with Block-Based Uncertainties for Ocean Floor Mapping

Apr 19, 2025

Hierarchical Vector Quantized Graph Autoencoder with Annealing-Based Code Selection

Apr 17, 2025

RadarLLM: Empowering Large Language Models to Understand Human Motion from Millimeter-wave Point Cloud Sequence

Apr 14, 2025

Synthetic Aircraft Trajectory Generation Using Time-Based VQ-VAE

Apr 12, 2025

Ego4o: Egocentric Human Motion Capture and Understanding from Multi-Modal Input

Apr 11, 2025

Instruction-Guided Autoregressive Neural Network Parameter Generation

Apr 02, 2025

MuTri: Multi-view Tri-alignment for OCT to OCTA 3D Image Translation

Apr 02, 2025

Arch-LLM: Taming LLMs for Neural Architecture Generation via Unsupervised Discrete Representation Learning

Mar 28, 2025

Make Some Noise: Towards LLM audio reasoning and generation using sound tokens

Mar 28, 2025

HOIGPT: Learning Long Sequence Hand-Object Interaction with Language Models

Mar 24, 2025