Katarzyna Bozek

Understanding Cell Fate Decisions with Temporal Attention

Mar 17, 2026

Self-Supervised ImageNet Representations for In Vivo Confocal Microscopy: Tortuosity Grading without Segmentation Maps

Mar 16, 2026

Context-aware Skin Cancer Epithelial Cell Classification with Scalable Graph Transformers

Feb 17, 2026

AMAP-APP: Efficient Segmentation and Morphometry Quantification of Fluorescent Microscopy Images of Podocytes

Feb 16, 2026

ActiTect: A Generalizable Machine Learning Pipeline for REM Sleep Behavior Disorder Screening through Standardized Actigraphy

Nov 12, 2025

Histo-Miner: Deep Learning based Tissue Features Extraction Pipeline from H&E Whole Slide Images of Cutaneous Squamous Cell Carcinoma

May 07, 2025

KPIs 2024 Challenge: Advancing Glomerular Segmentation from Patch- to Slide-Level

Feb 11, 2025

Fine-tuning a Multiple Instance Learning Feature Extractor with Masked Context Modelling and Knowledge Distillation

Mar 08, 2024

Language models are good pathologists: using attention-based sequence reduction and text-pretrained transformers for efficient WSI classification

Nov 14, 2022

Pixel personality for dense object tracking in a 2D honeybee hive

Dec 31, 2018