
Niraj K. Jha

Princeton University, Princeton, USA

Learning Interpretable Differentiable Logic Networks

Jul 04, 2024

METRIK: Measurement-Efficient Randomized Controlled Trials using Transformers with Input Masking

Jun 24, 2024

CONFINE: Conformal Prediction for Interpretable Neural Networks

Jun 01, 2024

Attention-Driven Training-Free Efficiency Enhancement of Diffusion Models

May 08, 2024

DynaMo: Accelerating Language Model Inference with Dynamic Multi-Token Sampling

May 01, 2024

PAGE: Domain-Incremental Adaptation with Past-Agnostic Generative Replay for Smart Healthcare

Mar 13, 2024

TAD-SIE: Sample Size Estimation for Clinical Randomized Controlled Trials using a Trend-Adaptive Design with a Synthetic-Intervention-Based Estimator

Jan 08, 2024

BREATHE: Second-Order Gradients and Heteroscedastic Emulation based Design Space Exploration

Aug 16, 2023

Zero-TPrune: Zero-Shot Token Pruning through Leveraging of the Attention Graph in Pre-Trained Transformers

May 27, 2023

Im-Promptu: In-Context Composition from Image Prompts

May 26, 2023