Keshab K. Parhi

A Survey of Attacks on Large Language Models
May 18, 2025

SpikePipe: Accelerated Training of Spiking Neural Networks via Inter-Layer Pipelining and Multiprocessor Scheduling
Jun 11, 2024

Robust Clustering using Hyperdimensional Computing
Dec 05, 2023
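
The entry above gives only the title. As a hedged, generic sketch of what hyperdimensional (HD) clustering typically looks like, and not the paper's algorithm, the NumPy example below encodes each sample into a high-dimensional bipolar hypervector by a random projection and then clusters by nearest-centroid assignment with bundled (summed and re-binarized) centroids. The dimensionality D = 10,000, the projection scheme, and the toy data are all illustrative assumptions.

```python
# Generic illustration of hyperdimensional clustering (not the paper's method).
import numpy as np

rng = np.random.default_rng(0)
D = 10_000            # hypervector dimensionality (assumed; a typical HDC choice)
n, d, k = 300, 16, 3  # samples, input features, clusters (toy values)

X = rng.normal(size=(n, d))                 # toy feature vectors
P = rng.choice([-1.0, 1.0], size=(d, D))    # random bipolar projection matrix
H = np.sign(X @ P)                          # one bipolar hypervector per sample

# Initialize centroids from random samples, then iterate assign / bundle.
centroids = H[rng.choice(n, size=k, replace=False)].copy()
for _ in range(10):
    sims = H @ centroids.T                  # dot product ~ cosine (equal norms)
    labels = sims.argmax(axis=1)            # nearest-centroid assignment
    for c in range(k):
        members = H[labels == c]
        if len(members):
            centroids[c] = np.sign(members.sum(axis=0))  # bundle, re-binarize

print(np.bincount(labels, minlength=k))     # cluster sizes
```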

Quantum Circuits for Stabilizer Error Correcting Codes: A Tutorial
Sep 21, 2023

Systematic Design and Optimization of Quantum Circuits for Stabilizer Codes
Sep 21, 2023

A Low-Latency FFT-IFFT Cascade Architecture
Sep 16, 2023
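
As a generic sketch of the signal flow an FFT-IFFT cascade implements (frequency-domain processing, here fast circular convolution), and not the low-latency hardware architecture the title refers to, the short NumPy example below transforms a block, multiplies pointwise by a filter response, and transforms back. The block length N = 64 and the random filter are illustrative assumptions.

```python
# Generic FFT -> frequency-domain multiply -> IFFT cascade (fast circular convolution).
import numpy as np

rng = np.random.default_rng(1)
N = 64
x = rng.normal(size=N)          # input block
h = rng.normal(size=N)          # filter impulse response (length N)

X = np.fft.fft(x)               # forward FFT
Y = X * np.fft.fft(h)           # pointwise multiply in the frequency domain
y = np.fft.ifft(Y).real         # inverse FFT back to the time domain

# Check against direct circular convolution.
y_ref = np.array([sum(x[k] * h[(m - k) % N] for k in range(N)) for m in range(N)])
assert np.allclose(y, y_ref)
```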

NTT-Based Polynomial Modular Multiplication for Homomorphic Encryption: A Tutorial
Jun 21, 2023
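
Since only the tutorial title is listed, the sketch below illustrates the underlying operation in plain Python: negacyclic polynomial multiplication mod (X^n + 1, q) via a number-theoretic transform, the core primitive in RLWE-based homomorphic encryption. The toy parameters (n = 8, q = 257, generator 3) and the naive O(n^2) transform are assumptions chosen for clarity; they do not reflect the paper's presentation or parameter choices.

```python
# Generic NTT-based negacyclic polynomial multiplication mod (X^n + 1, q).
# Toy parameters; assumes 3 is a primitive root modulo the Fermat prime 257.

q, n = 257, 8
psi = pow(3, (q - 1) // (2 * n), q)   # primitive 2n-th root of unity mod q
omega = pow(psi, 2, q)                # primitive n-th root of unity mod q

def ntt(a, root):
    """Naive O(n^2) number-theoretic transform (clarity over speed)."""
    return [sum(a[j] * pow(root, i * j, q) for j in range(n)) % q for i in range(n)]

def poly_mul_ntt(a, b):
    # Twist by powers of psi so the cyclic NTT realizes negacyclic convolution.
    at = [a[i] * pow(psi, i, q) % q for i in range(n)]
    bt = [b[i] * pow(psi, i, q) % q for i in range(n)]
    C = [x * y % q for x, y in zip(ntt(at, omega), ntt(bt, omega))]
    ct = ntt(C, pow(omega, -1, q))                     # inverse NTT (unscaled)
    n_inv, psi_inv = pow(n, -1, q), pow(psi, -1, q)
    return [c * n_inv % q * pow(psi_inv, i, q) % q for i, c in enumerate(ct)]

def poly_mul_schoolbook(a, b):
    c = [0] * n
    for i in range(n):
        for j in range(n):
            s = a[i] * b[j]
            if i + j < n:
                c[i + j] = (c[i + j] + s) % q
            else:
                c[i + j - n] = (c[i + j - n] - s) % q  # wrap with sign flip: X^n = -1
    return c

a = [1, 2, 3, 4, 5, 6, 7, 8]
b = [8, 7, 6, 5, 4, 3, 2, 1]
assert poly_mul_ntt(a, b) == poly_mul_schoolbook(a, b)
print(poly_mul_ntt(a, b))
```

Practical schemes use much larger n and q and O(n log n) butterfly NTTs; the naive transform here only demonstrates the algebra.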

Tensor Decomposition for Model Reduction in Neural Networks: A Review
Apr 26, 2023
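
As a hedged illustration of the model-reduction idea such a review surveys, the sketch below compresses a single dense layer with a truncated SVD, the matrix special case of tensor decomposition (CP, Tucker, and tensor-train factor higher-order weight tensors analogously). The layer sizes and rank are arbitrary assumptions, not values from the paper.

```python
# Low-rank model reduction of one dense layer via truncated SVD (generic example).
import numpy as np

rng = np.random.default_rng(2)
d_out, d_in, rank = 256, 512, 32

W = rng.normal(size=(d_out, d_in)) / np.sqrt(d_in)   # original layer weight
U, s, Vt = np.linalg.svd(W, full_matrices=False)

# Keep the top-`rank` singular triplets: W ~= A @ B with A (d_out x r), B (r x d_in).
A = U[:, :rank] * s[:rank]
B = Vt[:rank, :]

x = rng.normal(size=d_in)
y_full = W @ x                      # original layer
y_low  = A @ (B @ x)                # two thin layers: fewer parameters and MACs

rel_err = np.linalg.norm(y_full - y_low) / np.linalg.norm(y_full)
print(f"params: {W.size} -> {A.size + B.size}, relative output error {rel_err:.3f}")
```

The rank controls the trade-off: smaller ranks shrink parameter and multiply-accumulate counts further at the cost of a larger approximation error.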

Multi-Channel FFT Architectures Designed via Folding and Interleaving
Feb 19, 2022

LayerPipe: Accelerating Deep Neural Network Training by Intra-Layer and Inter-Layer Gradient Pipelining and Multiprocessor Scheduling
Aug 14, 2021