Ngai Wong

ASMR: Activation-sharing Multi-resolution Coordinate Networks For Efficient Inference

May 20, 2024

Nonparametric Teaching of Implicit Neural Representations

May 17, 2024

Poisoning-based Backdoor Attacks for Arbitrary Target Label with Positive Triggers

May 09, 2024

Stochastic Multivariate Universal-Radix Finite-State Machine: a Theoretically and Practically Elegant Nonlinear Function Approximator

May 03, 2024

Rethinking Kullback-Leibler Divergence in Knowledge Distillation for Large Language Models

Apr 03, 2024

Taming Lookup Tables for Efficient Image Retouching

Mar 28, 2024

APTQ: Attention-aware Post-Training Mixed-Precision Quantization for Large Language Models

Feb 21, 2024

LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models

Feb 18, 2024

Learning Spatially Collaged Fourier Bases for Implicit Neural Representation

Dec 28, 2023

A Unifying Tensor View for Lightweight CNNs

Dec 15, 2023