
Katarzyna Bozek

ActiTect: A Generalizable Machine Learning Pipeline for REM Sleep Behavior Disorder Screening through Standardized Actigraphy (Nov 12, 2025)

Histo-Miner: Deep Learning based Tissue Features Extraction Pipeline from H&E Whole Slide Images of Cutaneous Squamous Cell Carcinoma (May 07, 2025)

KPIs 2024 Challenge: Advancing Glomerular Segmentation from Patch- to Slide-Level (Feb 11, 2025)

Fine-tuning a Multiple Instance Learning Feature Extractor with Masked Context Modelling and Knowledge Distillation (Mar 08, 2024)

Language models are good pathologists: using attention-based sequence reduction and text-pretrained transformers for efficient WSI classification (Nov 14, 2022)

Pixel personality for dense object tracking in a 2D honeybee hive (Dec 31, 2018)

Towards dense object tracking in a 2D honeybee hive (Dec 22, 2017)