Juan I. Pisula
Fine-tuning a Multiple Instance Learning Feature Extractor with Masked Context Modelling and Knowledge Distillation

Mar 08, 2024
Juan I. Pisula, Katarzyna Bozek


Language models are good pathologists: using attention-based sequence reduction and text-pretrained transformers for efficient WSI classification

Nov 14, 2022
Juan I. Pisula, Katarzyna Bozek
