Hongkun Yu

Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI, USA

ResNCT: A Deep Learning Model for the Synthesis of Nephrographic Phase Images in CT Urography

May 07, 2024

Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context

Mar 08, 2024

Multitask Multilingual Model Adaptation with Featurized Low-Rank Mixtures

Feb 27, 2024

Multi-step Problem Solving Through a Verifier: An Empirical Analysis on Model-induced Process Supervision

Feb 05, 2024

Gemini: A Family of Highly Capable Multimodal Models

Dec 19, 2023

Enable Language Models to Implicitly Learn Self-Improvement From Data

Oct 05, 2023

Flan-MoE: Scaling Instruction-Finetuned Language Models with Sparse Mixture of Experts

May 24, 2023

Scaling Instruction-Finetuned Language Models

Oct 20, 2022

EncT5: Fine-tuning T5 Encoder for Non-autoregressive Tasks

Oct 16, 2021

On the Transformer Growth for Progressive BERT Training

Oct 23, 2020