Visual Keyword Spotting
VE-KWS: Visual Modality Enhanced End-to-End Keyword Spotting

Mar 14, 2023

LipLearner: Customizable Silent Speech Interactions on Mobile Devices

Feb 14, 2023

Visual Keyword Spotting with Attention

Oct 29, 2021

T-RECX: Tiny-Resource Efficient Convolutional Neural Networks with Early-Exit

Jul 14, 2022

Depth Pruning with Auxiliary Networks for TinyML

Apr 22, 2022

MLPerf Tiny Benchmark

Jun 28, 2021

AnalogNets: ML-HW Co-Design of Noise-robust TinyML Models and Always-On Analog Compute-in-Memory Accelerator

Nov 10, 2021

Seeing wake words: Audio-visual Keyword Spotting

Sep 02, 2020

A Twitter-Driven Deep Learning Mechanism for the Determination of Vehicle Hijacking Spots in Cities

Aug 11, 2022

Keyword localisation in untranscribed speech using visually grounded speech models

Feb 02, 2022