Andrew Owens

Visual Anagrams: Generating Multi-View Optical Illusions with Diffusion Models

Nov 29, 2023
Daniel Geng, Inbum Park, Andrew Owens

We address the problem of synthesizing multi-view optical illusions: images that change appearance upon a transformation, such as a flip or rotation. We propose a simple, zero-shot method for obtaining these illusions from off-the-shelf text-to-image diffusion models. During the reverse diffusion process, we estimate the noise from different views of a noisy image. We then combine these noise estimates together and denoise the image. A theoretical analysis suggests that this method works precisely for views that can be written as orthogonal transformations, of which permutations are a subset. This leads to the idea of a visual anagram--an image that changes appearance under some rearrangement of pixels. This includes rotations and flips, but also more exotic pixel permutations such as a jigsaw rearrangement. Our approach also naturally extends to illusions with more than two views. We provide both qualitative and quantitative results demonstrating the effectiveness and flexibility of our method. Please see our project webpage for additional visualizations and results: https://dangeng.github.io/visual_anagrams/
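
As a rough illustration of the noise-combination step described in the abstract, here is a minimal PyTorch sketch. The `denoiser`, the prompts, and the view functions are placeholders, not the authors' implementation.

```python
import torch

def combined_noise_estimate(x_t, t, denoiser, prompts, views, inv_views):
    # For each (prompt, view) pair: transform the noisy image, estimate its noise,
    # then map the estimate back to the base orientation before averaging.
    eps_list = []
    for prompt, view, inv in zip(prompts, views, inv_views):
        eps_list.append(inv(denoiser(view(x_t), t, prompt)))
    return torch.stack(eps_list).mean(dim=0)

# Toy usage with a stand-in denoiser and a 180-degree rotation view.
denoiser = lambda x, t, p: torch.randn_like(x)     # placeholder for a text-conditioned UNet
identity = lambda x: x
rot180 = lambda x: torch.rot90(x, 2, dims=(1, 2))  # an orthogonal pixel permutation
x_t = torch.randn(3, 64, 64)
eps = combined_noise_estimate(
    x_t, t=500, denoiser=denoiser,
    prompts=["an oil painting of a duck", "an oil painting of a rabbit"],
    views=[identity, rot180], inv_views=[identity, rot180],
)
print(eps.shape)  # torch.Size([3, 64, 64])
```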


Self-Supervised Motion Magnification by Backpropagating Through Optical Flow

Nov 28, 2023
Zhaoying Pan, Daniel Geng, Andrew Owens

This paper presents a simple, self-supervised method for magnifying subtle motions in video: given an input video and a magnification factor, we manipulate the video such that its new optical flow is scaled by the desired amount. To train our model, we propose a loss function that estimates the optical flow of the generated video and penalizes how far it deviates from the given magnification factor. Thus, training involves differentiating through a pretrained optical flow network. Since our model is self-supervised, we can further improve its performance through test-time adaptation, by finetuning it on the input video. It can also be easily extended to magnify the motions of only user-selected objects. Our approach avoids the need for synthetic magnification datasets that have been used to train prior learning-based approaches. Instead, it leverages the existing capabilities of off-the-shelf motion estimators. We demonstrate the effectiveness of our method through evaluations of both visual quality and quantitative metrics on a range of real-world and synthetic videos, and we show our method works for both supervised and unsupervised optical flow methods.

* Thirty-seventh Conference on Neural Information Processing Systems (2023)  
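
A minimal sketch of the flow-scaling objective described above, assuming a frozen pretrained optical flow network `flow_net`; the names and the exact loss are illustrative, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def magnification_loss(frame_a, frame_b, generated_b, flow_net, alpha):
    """Penalize deviation of the generated video's flow from the scaled input flow.

    flow_net(f1, f2) -> dense optical flow (B, 2, H, W). The flow network is
    pretrained and frozen, but gradients pass through it to the generator that
    produced `generated_b` (the magnified second frame).
    """
    with torch.no_grad():
        target_flow = alpha * flow_net(frame_a, frame_b)   # desired, magnified motion
    pred_flow = flow_net(frame_a, generated_b)             # motion actually present in the output
    return F.l1_loss(pred_flow, target_flow)
```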

Generating Visual Scenes from Touch

Sep 26, 2023
Fengyu Yang, Jiacheng Zhang, Andrew Owens

An emerging line of work has sought to generate plausible imagery from touch. Existing approaches, however, tackle only narrow aspects of the visuo-tactile synthesis problem, and lag significantly behind the quality of cross-modal synthesis methods in other domains. We draw on recent advances in latent diffusion to create a model for synthesizing images from tactile signals (and vice versa) and apply it to a number of visuo-tactile synthesis tasks. Using this model, we significantly outperform prior work on the tactile-driven stylization problem, i.e., manipulating an image to match a touch signal, and we are the first to successfully generate images from touch without additional sources of information about the scene. We also successfully use our model to address two novel synthesis problems: generating images that do not contain the touch sensor or the hand holding it, and estimating an image's shading from its reflectance and touch.

* ICCV 2023; Project site: https://fredfyyang.github.io/vision-from-touch/ 
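
For intuition, a generic sketch of one training step of a touch-conditioned latent diffusion model, in the spirit of the abstract; `vae_encode`, `touch_encoder`, `unet`, and the noise schedule are all placeholders rather than the paper's architecture.

```python
import torch
import torch.nn.functional as F

def touch_conditioned_diffusion_step(image, touch, vae_encode, touch_encoder, unet, alphas_bar):
    """One epsilon-prediction step for a latent diffusion model conditioned on touch."""
    z0 = vae_encode(image)                                   # clean image latent
    cond = touch_encoder(touch)                              # tactile conditioning signal
    t = torch.randint(0, alphas_bar.shape[0], (z0.shape[0],), device=z0.device)
    noise = torch.randn_like(z0)
    a = alphas_bar[t].view(-1, 1, 1, 1)
    z_t = a.sqrt() * z0 + (1.0 - a).sqrt() * noise           # forward (noising) process
    return F.mse_loss(unet(z_t, t, cond), noise)             # standard denoising loss
```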

Conditional Generation of Audio from Video via Foley Analogies

Apr 17, 2023
Yuexi Du, Ziyang Chen, Justin Salamon, Bryan Russell, Andrew Owens

The sound effects that designers add to videos are designed to convey a particular artistic effect and, thus, may be quite different from a scene's true sound. Inspired by the challenges of creating a soundtrack for a video that differs from its true sound, but that nonetheless matches the actions occurring on screen, we propose the problem of conditional Foley. We present the following contributions to address this problem. First, we propose a pretext task for training our model to predict sound for an input video clip using a conditional audio-visual clip sampled from another time within the same source video. Second, we propose a model for generating a soundtrack for a silent input video, given a user-supplied example that specifies what the video should "sound like". We show through human studies and automated evaluation metrics that our model successfully generates sound from video, while varying its output according to the content of a supplied example. Project site: https://xypb.github.io/CondFoleyGen/

* CVPR 2023 
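
A small sketch of how the pretext-task pairs described above could be sampled from a single source video; the function and its parameters are illustrative assumptions, not the authors' data pipeline.

```python
import random

def sample_foley_pretext_pair(num_frames, clip_len, min_gap):
    """Pick a conditional clip and a target clip from different times of one video.

    The conditional clip keeps both audio and video (the "what it sounds like"
    example); the target clip keeps only video, and its audio is the prediction
    target during training.
    """
    assert num_frames >= 2 * clip_len + min_gap, "video too short for two clips"
    cond_start = random.randint(0, num_frames - 2 * clip_len - min_gap)
    target_start = random.randint(cond_start + clip_len + min_gap, num_frames - clip_len)
    return (cond_start, cond_start + clip_len), (target_start, target_start + clip_len)
```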

Sound to Visual Scene Generation by Audio-to-Visual Latent Alignment

Mar 30, 2023
Kim Sung-Bin, Arda Senocak, Hyunwoo Ha, Andrew Owens, Tae-Hyun Oh

How does audio describe the world around us? In this paper, we propose a method for generating an image of a scene from sound. Our method addresses the challenges of dealing with the large gaps that often exist between sight and sound. We design a model that works by scheduling the learning procedure of each model component to associate audio-visual modalities despite their information gaps. The key idea is to enrich the audio features with visual information by learning to align audio to visual latent space. We translate the input audio to visual features, then use a pre-trained generator to produce an image. To further improve the quality of our generated images, we use sound source localization to select the audio-visual pairs that have strong cross-modal correlations. We obtain substantially better results on the VEGAS and VGGSound datasets than prior approaches. We also show that we can control our model's predictions by applying simple manipulations to the input waveform, or to the latent space.

* CVPR 2023 
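
A bare-bones sketch of the audio-to-visual latent alignment and the generation path described above; the encoders, the frozen generator, and the choice of distance are placeholder assumptions (the scheduling of components and the sound-source-localization filtering mentioned in the abstract are omitted here).

```python
import torch
import torch.nn.functional as F

def audio_to_visual_alignment_loss(waveform, image, audio_encoder, visual_encoder):
    """Pull the audio embedding toward the latent of the paired image."""
    with torch.no_grad():
        v = visual_encoder(image)          # target latent from a frozen image encoder
    a = audio_encoder(waveform)            # audio translated into the visual latent space
    return F.mse_loss(a, v)

def generate_image_from_audio(waveform, audio_encoder, generator):
    # Inference: map audio into the visual latent space, then decode the latent
    # with a pretrained image generator.
    return generator(audio_encoder(waveform))
```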

Text2Room: Extracting Textured 3D Meshes from 2D Text-to-Image Models

Mar 21, 2023
Lukas Höllein, Ang Cao, Andrew Owens, Justin Johnson, Matthias Nießner

We present Text2Room, a method for generating room-scale textured 3D meshes from a given text prompt as input. To this end, we leverage pre-trained 2D text-to-image models to synthesize a sequence of images from different poses. In order to lift these outputs into a consistent 3D scene representation, we combine monocular depth estimation with a text-conditioned inpainting model. The core idea of our approach is a tailored viewpoint selection such that the content of each image can be fused into a seamless, textured 3D mesh. More specifically, we propose a continuous alignment strategy that iteratively fuses scene frames with the existing geometry to create a seamless mesh. Unlike existing works that focus on generating single objects or zoom-out trajectories from text, our method generates complete 3D scenes with multiple objects and explicit 3D geometry. We evaluate our approach using qualitative and quantitative metrics, demonstrating it as the first method to generate room-scale 3D geometry with compelling textures from only text as input.

* video: https://youtu.be/fjRnFL91EZc project page: https://lukashoel.github.io/text-to-room/ code: https://github.com/lukasHoel/text2room 
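
A schematic sketch of the iterative render-inpaint-fuse loop outlined above; every function passed in (`render`, `inpaint`, `estimate_depth`, `fuse_into_mesh`) is a placeholder for the corresponding component, not the released code.

```python
def iterative_scene_generation(prompt, poses, render, inpaint, estimate_depth, fuse_into_mesh, mesh=None):
    """Grow a textured mesh by repeatedly completing novel views with a 2D model.

    render(mesh, pose) -> (rgb, mask) where mask marks pixels the current mesh
    does not explain; inpaint(rgb, mask, prompt) -> completed image;
    estimate_depth(image) -> monocular depth; fuse_into_mesh(...) -> updated mesh.
    """
    for pose in poses:                                   # tailored viewpoint trajectory
        rgb, mask = render(mesh, pose)                   # reproject existing geometry
        image = inpaint(rgb, mask, prompt)               # text-conditioned completion of unseen regions
        depth = estimate_depth(image)                    # lift the completed view to 3D
        mesh = fuse_into_mesh(mesh, image, depth, pose)  # align and merge with the scene so far
    return mesh
```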

Sound Localization from Motion: Jointly Learning Sound Direction and Camera Rotation

Mar 20, 2023
Ziyang Chen, Shengyi Qian, Andrew Owens

The images and sounds that we perceive undergo subtle but geometrically consistent changes as we rotate our heads. In this paper, we use these cues to solve a problem we call Sound Localization from Motion (SLfM): jointly estimating camera rotation and localizing sound sources. We learn to solve these tasks solely through self-supervision. A visual model predicts camera rotation from a pair of images, while an audio model predicts the direction of sound sources from binaural sounds. We train these models to generate predictions that agree with one another. At test time, the models can be deployed independently. To obtain a feature representation that is well-suited to solving this challenging problem, we also propose a method for learning an audio-visual representation through cross-view binauralization: estimating binaural sound from one view, given images and sound from another. Our model can successfully estimate accurate rotations on both real and synthetic scenes, and localize sound sources with accuracy competitive with state-of-the-art self-supervised approaches. Project site: https://ificl.github.io/SLfM/

* Project site: https://ificl.github.io/SLfM/ 
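
A toy sketch of the cross-modal agreement idea from the abstract, assuming a static source and a simple angle parameterization; `vision_model`, `audio_model`, and the sign convention are illustrative assumptions.

```python
import torch.nn.functional as F

def rotation_agreement_loss(image_pair, binaural_a, binaural_b, vision_model, audio_model):
    """Make the visual rotation estimate agree with the shift in sound direction.

    vision_model(image_pair) -> rotation angle between the two views (radians);
    audio_model(binaural) -> direction-of-arrival angle of the source (radians).
    For a static source, turning the head by theta shifts the apparent sound
    direction by -theta in head-centric coordinates.
    """
    rot = vision_model(image_pair)
    doa_before = audio_model(binaural_a)
    doa_after = audio_model(binaural_b)
    return F.l1_loss(doa_after - doa_before, -rot)
```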

EXIF as Language: Learning Cross-Modal Associations Between Images and Camera Metadata

Jan 11, 2023
Chenhao Zheng, Ayush Shrivastava, Andrew Owens

We learn a visual representation that captures information about the camera that recorded a given photo. To do this, we train a multimodal embedding between image patches and the EXIF metadata that cameras automatically insert into image files. Our model represents this metadata by simply converting it to text and then processing it with a transformer. The features that we learn significantly outperform other self-supervised and supervised features on downstream image forensics and calibration tasks. In particular, we successfully localize spliced image regions "zero shot" by clustering the visual embeddings for all of the patches within an image.

* Project link: http://hellomuffin.github.io/exif-as-language 
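
Two illustrative fragments for the ideas above: serializing EXIF tags to text, and a zero-shot splice map obtained by 2-means clustering of per-patch embeddings. The clustering details and function names are assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def exif_to_text(exif: dict) -> str:
    # Serialize camera metadata into plain text for a text transformer,
    # e.g. "Model: iPhone 7 FocalLength: 3.99 ISO: 100".
    return " ".join(f"{k}: {v}" for k, v in exif.items())

def splice_map_by_clustering(patch_emb, iters=10):
    """Cluster an image's patch embeddings into two groups.

    patch_emb: (N, D) embeddings of the N patches of one image. Patches whose
    embeddings disagree with the majority cluster are splice candidates.
    """
    emb = F.normalize(patch_emb, dim=-1)
    centers = emb[torch.randperm(emb.shape[0])[:2]].clone()   # random 2-means init
    for _ in range(iters):
        assign = torch.cdist(emb, centers).argmin(dim=1)      # nearest-center assignment
        for k in range(2):
            if (assign == k).any():
                centers[k] = emb[assign == k].mean(dim=0)     # recompute cluster centers
    return assign
```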

Self-Supervised Video Forensics by Audio-Visual Anomaly Detection

Jan 04, 2023
Chao Feng, Ziyang Chen, Andrew Owens

Manipulated videos often contain subtle inconsistencies between their visual and audio signals. We propose a video forensics method, based on anomaly detection, that can identify these inconsistencies, and that can be trained solely using real, unlabeled data. We train an autoregressive model to generate sequences of audio-visual features, using feature sets that capture the temporal synchronization between video frames and sound. At test time, we then flag videos that the model assigns low probability. Despite being trained entirely on real videos, our model obtains strong performance on the task of detecting manipulated speech videos. Project site: https://cfeng16.github.io/audio-visual-forensics
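
A small sketch of the anomaly-scoring step described above, assuming an autoregressive model that exposes per-step log-probabilities of the audio-visual feature sequence; the interface (`ar_model.log_prob`) and the threshold are assumptions for illustration.

```python
def anomaly_score(features, ar_model):
    """Score a video by how poorly the real-data model predicts its features.

    features: (T, D) audio-visual synchronization features for one video;
    ar_model.log_prob(features) -> (T,) log p(x_t | x_<t).
    Higher score means more anomalous.
    """
    return -ar_model.log_prob(features).mean().item()

def flag_as_manipulated(features, ar_model, threshold):
    # Videos assigned low probability by the model (trained only on real data)
    # are flagged as potentially manipulated.
    return anomaly_score(features, ar_model) > threshold
```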


Touch and Go: Learning from Human-Collected Vision and Touch

Nov 29, 2022
Fengyu Yang, Chenyang Ma, Jiacheng Zhang, Jing Zhu, Wenzhen Yuan, Andrew Owens

The ability to associate touch with sight is essential for tasks that require physically interacting with objects in the world. We propose a dataset with paired visual and tactile data called Touch and Go, in which human data collectors probe objects in natural environments using tactile sensors, while simultaneously recording egocentric video. In contrast to previous efforts, which have largely been confined to lab settings or simulated environments, our dataset spans a large number of "in the wild" objects and scenes. To demonstrate our dataset's effectiveness, we successfully apply it to a variety of tasks: 1) self-supervised visuo-tactile feature learning, 2) tactile-driven image stylization, i.e., making the visual appearance of an object more consistent with a given tactile signal, and 3) predicting future frames of a tactile signal from visuo-tactile inputs.

* Accepted by NeurIPS 2022 Track of Datasets and Benchmarks 
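
As one illustration of task (1) above, a generic visuo-tactile contrastive pretraining step on paired (egocentric frame, tactile reading) data; the encoders and the InfoNCE-style objective are standard placeholders, not necessarily the objective used in the paper.

```python
import torch
import torch.nn.functional as F

def visuo_tactile_contrastive_step(frames, touch, image_encoder, touch_encoder, temperature=0.07):
    """Pull embeddings of co-occurring frames and tactile readings together."""
    v = F.normalize(image_encoder(frames), dim=-1)   # (B, D) visual embeddings
    t = F.normalize(touch_encoder(touch), dim=-1)    # (B, D) tactile embeddings
    logits = v @ t.t() / temperature                 # pairwise similarities
    labels = torch.arange(v.shape[0], device=v.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))
```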