Ishaan Preetam Chandratreya

SURFSUP: Learning Fluid Simulation for Novel Surfaces

Apr 13, 2023
Arjun Mani, Ishaan Preetam Chandratreya, Elliot Creager, Carl Vondrick, Richard Zemel

Modeling the mechanics of fluids in complex scenes is vital to applications in design, graphics, and robotics. Learning-based methods provide fast and differentiable fluid simulators; however, most prior work is unable to accurately model how fluids interact with genuinely novel surfaces not seen during training. We introduce SURFSUP, a framework that represents objects implicitly using signed distance functions (SDFs) rather than an explicit representation of meshes or particles. This continuous representation of geometry enables more accurate simulation of fluid-object interactions over long time periods while simultaneously making computation more efficient. Moreover, SURFSUP trained on simple shape primitives generalizes considerably out-of-distribution, even to complex real-world scenes and objects. Finally, we show that we can invert our model to design simple objects that manipulate fluid flow.

* Website: https://surfsup.cs.columbia.edu/ 
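
The abstract describes representing obstacles as signed distance functions instead of meshes or particles. As a rough illustration only (not the SURFSUP implementation), the sketch below shows how a simple analytic SDF, here an assumed sphere primitive, can be queried at particle positions to obtain per-particle distance-to-surface and surface-normal features that a learned simulator could consume; the function names are hypothetical.

```python
# Minimal sketch (illustrative, not the SURFSUP code) of querying an SDF-represented
# object from a particle-based fluid simulator.
import torch

def sphere_sdf(points, center, radius):
    """Signed distance from each point to a sphere: negative inside, positive outside."""
    return torch.linalg.norm(points - center, dim=-1) - radius

def boundary_features(points, sdf_fn):
    """Per-particle distance to the surface and outward normal (the SDF gradient).
    A learned simulator can consume these instead of an explicit mesh."""
    points = points.detach().requires_grad_(True)
    d = sdf_fn(points)
    (normals,) = torch.autograd.grad(d.sum(), points)  # gradient of an SDF points along the surface normal
    normals = normals / normals.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    return d.detach(), normals.detach()

# Example: query 1,000 fluid particles against a unit sphere at the origin.
particles = torch.randn(1000, 3)
dist, normal = boundary_features(particles, lambda p: sphere_sdf(p, torch.zeros(3), 1.0))
print(dist.shape, normal.shape)  # torch.Size([1000]), torch.Size([1000, 3])
```

Because an SDF is continuous and can be composed (e.g., by taking a pointwise minimum over per-object SDFs), the same query applies to arbitrary geometry, which is consistent with the abstract's claim that training on simple primitives can transfer to novel surfaces.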

Task Bias in Vision-Language Models

Dec 08, 2022
Sachit Menon, Ishaan Preetam Chandratreya, Carl Vondrick

Incidental supervision from language has become a popular approach for learning generic visual representations that can be prompted to perform many recognition tasks in computer vision. We conduct an in-depth exploration of the CLIP model and show that its visual representation is often strongly biased towards solving some tasks more than others. Moreover, which task the representation will be biased towards is unpredictable, with little consistency across images. To resolve this task bias, we show how to learn a visual prompt that guides the representation towards features relevant to the task of interest. Our results show that these visual prompts can be independent of the input image and still effectively provide a conditioning mechanism to steer visual representations towards the desired task.

* First two authors contributed equally 
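
The abstract describes learning an input-independent visual prompt that steers a frozen visual representation toward a desired task. The sketch below is a minimal, hedged illustration of that general recipe: a single learnable pixel-space tensor added to every image before a frozen encoder, optimized to align image embeddings with a task's text embedding. The tiny encoder and random text embedding are stand-ins (assumptions), not CLIP or the paper's actual training setup.

```python
# Minimal sketch of an input-independent visual prompt trained against a frozen encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for a frozen image encoder (assumption: 224x224 RGB -> 512-d embedding).
encoder = nn.Sequential(nn.Conv2d(3, 8, 7, stride=4), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 512))
for p in encoder.parameters():
    p.requires_grad_(False)

prompt = nn.Parameter(torch.zeros(1, 3, 224, 224))     # one prompt shared across all images
task_text_emb = F.normalize(torch.randn(512), dim=0)   # placeholder for the task's text embedding
opt = torch.optim.Adam([prompt], lr=1e-2)

for step in range(100):
    images = torch.rand(16, 3, 224, 224)               # placeholder image batch
    img_emb = F.normalize(encoder(images + prompt), dim=-1)
    loss = 1.0 - (img_emb @ task_text_emb).mean()      # pull image embeddings toward the task text
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Only the prompt receives gradients; the encoder stays fixed, which mirrors the abstract's point that the conditioning signal can be independent of the input image.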

There is a Time and Place for Reasoning Beyond the Image

Mar 28, 2022
Xingyu Fu, Ben Zhou, Ishaan Preetam Chandratreya, Carl Vondrick, Dan Roth

To human eyes, images often convey more than their pixels alone, as we can infer, associate, and reason with contextual information from other sources to establish a more complete picture. For example, in Figure 1, we can find a way to identify the news articles related to the picture through segment-wise understanding of the signs, the buildings, the crowds, and more. This reasoning could provide the time and place the image was taken, which helps in subsequent tasks such as automatic storyline construction, correction of image source in intended effect photographs, and upper-stream processing such as image clustering for a certain location or time. In this work, we formulate this problem and introduce TARA: a dataset of 16k images with their associated news, time, and location, automatically extracted from the New York Times, along with an additional 61k examples as distant supervision from WIT. On top of the extractions, we present a crowdsourced subset in which we believe it is possible to find the images' spatio-temporal information, for evaluation purposes. We show that there exists a 70% gap between a state-of-the-art joint model and human performance, which is slightly narrowed by our proposed model that uses segment-wise reasoning, motivating higher-level vision-language joint models that can conduct open-ended reasoning with world knowledge. The data and code are publicly available at https://github.com/zeyofu/TARA.

* Article accepted to the ACL 2022 Main conference 
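
For a concrete picture of the task, the sketch below shows one plausible way an example in a spatio-temporal dataset like TARA could be structured and scored with exact match over time and location. The field names and metric are illustrative assumptions; the actual data format and evaluation are in the repository linked above.

```python
# Hypothetical example structure and exact-match scoring for spatio-temporal grounding.
from dataclasses import dataclass

@dataclass
class Example:
    image_path: str   # news photograph
    article: str      # associated news text providing context for reasoning
    time: str         # gold time label, e.g. "2019-10"
    location: str     # gold location label, e.g. "Hong Kong"

def exact_match_accuracy(examples, predict):
    """Fraction of examples where both predicted time and location match the gold labels."""
    correct = 0
    for ex in examples:
        pred_time, pred_location = predict(ex.image_path)
        correct += int(pred_time == ex.time and pred_location == ex.location)
    return correct / max(len(examples), 1)

# Usage with a trivial baseline that always guesses the same labels.
data = [Example("img_001.jpg", "...", "2019-10", "Hong Kong")]
print(exact_match_accuracy(data, lambda img: ("2019-10", "Hong Kong")))  # 1.0 on this toy set
```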