Miriam Cha

Bidirectional Captioning for Clinically Accurate and Interpretable Models

Oct 30, 2023
Keegan Quigley, Miriam Cha, Josh Barua, Geeticka Chauhan, Seth Berkowitz, Steven Horng, Polina Golland


MultiEarth 2023 -- Multimodal Learning for Earth and Environment Workshop and Challenge

Jun 07, 2023
Miriam Cha, Gregory Angelides, Mark Hamilton, Andy Soszynski, Brandon Swenson, Nathaniel Maidel, Phillip Isola, Taylor Perron, Bill Freeman


RadTex: Learning Efficient Radiograph Representations from Text Reports

Aug 05, 2022
Keegan Quigley, Miriam Cha, Ruizhi Liao, Geeticka Chauhan, Steven Horng, Seth Berkowitz, Polina Golland


SAR-to-EO Image Translation with Multi-Conditional Adversarial Networks

Jul 26, 2022
Armando Cabrera, Miriam Cha, Prafull Sharma, Michael Newey


Developing a Series of AI Challenges for the United States Department of the Air Force

Jul 14, 2022
Vijay Gadepally, Gregory Angelides, Andrei Barbu, Andrew Bowne, Laura J. Brattain, Tamara Broderick, Armando Cabrera, Glenn Carl, Ronisha Carter, Miriam Cha, Emilie Cowen, Jesse Cummings, Bill Freeman, James Glass, Sam Goldberg, Mark Hamilton, Thomas Heldt, Kuan Wei Huang, Phillip Isola, Boris Katz, Jamie Koerner, Yen-Chen Lin, David Mayo, Kyle McAlpin, Taylor Perron, Jean Piou, Hrishikesh M. Rao, Hayley Reynolds, Kaira Samuel, Siddharth Samsi, Morgan Schmidt, Leslie Shing, Olga Simek, Brandon Swenson, Vivienne Sze, Jonathan Taylor, Paul Tylkin, Mark Veillette, Matthew L Weiss, Allan Wollaber, Sophia Yuditskaya, Jeremy Kepner


MultiEarth 2022 -- Multimodal Learning for Earth and Environment Workshop and Challenge

Apr 27, 2022
Miriam Cha, Kuan Wei Huang, Morgan Schmidt, Gregory Angelides, Mark Hamilton, Sam Goldberg, Armando Cabrera, Phillip Isola, Taylor Perron, Bill Freeman, Yen-Chen Lin, Brandon Swenson, Jean Piou


Multimodal Representation Learning via Maximization of Local Mutual Information

Mar 08, 2021
Ruizhi Liao, Daniel Moyer, Miriam Cha, Keegan Quigley, Seth Berkowitz, Steven Horng, Polina Golland, William M. Wells


Adversarial Learning of Semantic Relevance in Text to Image Synthesis

Dec 12, 2018
Miriam Cha, Youngjune L. Gwon, H. T. Kung


Language Modeling by Clustering with Word Embeddings for Text Readability Assessment

Sep 05, 2017
Miriam Cha, Youngjune Gwon, H. T. Kung


Adversarial nets with perceptual losses for text-to-image synthesis

Aug 30, 2017
Miriam Cha, Youngjune Gwon, H. T. Kung
