"photo": models, code, and papers

AI-supported Framework of Semi-Automatic Monoplotting for Monocular Oblique Visual Data Analysis

Nov 28, 2021
Behzad Golparvar, Ruo-Qian Wang

In recent decades, the development of smartphones, drones, aerial patrols, and digital cameras has made high-quality photography available to large populations, providing an opportunity to collect massive data on nature and society with global coverage. However, the imagery collected with these new tools is usually oblique: it is difficult to georeference, and huge amounts of data often go unused. Georeferencing oblique imagery can be addressed by a technique called monoplotting, which requires only a single image and a Digital Elevation Model (DEM). In traditional monoplotting, a human user manually chooses a series of ground control point (GCP) pairs in the image and the DEM, then determines the extrinsic and intrinsic parameters of the camera to establish a pixel-level correspondence between the photo and the DEM, enabling the mapping and georeferencing of objects in the photo. This traditional method is difficult to scale due to several challenges, including labor-intensive inputs, the need for rich experience to identify well-defined GCPs, and limitations in camera pose estimation. As a result, existing monoplotting methods are rarely used for analyzing large-scale databases or in near-real-time warning systems. In this paper, we propose and demonstrate a novel semi-automatic monoplotting framework that provides pixel-level correspondence between photos and DEMs with minimal human intervention. A pipeline of analyses was developed, including key point detection in images and DEM rasters, retrieval of georeferenced 3D DEM GCPs, regularized gradient-based optimization, pose estimation, ray tracing, and identification of the correspondence between image pixels and real-world coordinates. Two numerical experiments show that the framework is effective in georeferencing visual data in 3D coordinates, paving the way toward a fully automatic monoplotting methodology.
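
To make the pose-estimation step concrete, the sketch below recovers a camera pose from matched GCP pairs with OpenCV's PnP solver; the GCP coordinates, pixel locations, and intrinsics are fabricated for illustration and this is not the paper's implementation.

```python
# Fabricated data for illustration only; not the authors' implementation.
import numpy as np
import cv2

# Matched GCP pairs: local DEM coordinates (easting, northing, elevation, m)
# with a constant map offset removed to keep the solver well conditioned.
dem_gcps = np.array([
    [310.0, 250.0, 812.0],
    [480.0, 190.0, 795.0],
    [150.0, 400.0, 840.0],
    [620.0, 330.0, 788.0],
    [395.0, 510.0, 861.0],
    [270.0, 120.0, 803.0],
], dtype=np.float64)

# Corresponding pixel locations (u, v) detected in the oblique photo.
pixels = np.array([
    [412.0, 655.0], [988.0, 701.0], [205.0, 330.0],
    [1304.0, 512.0], [720.0, 198.0], [330.0, 840.0],
], dtype=np.float64)

# Assumed pinhole intrinsics; the paper optimizes intrinsics jointly instead.
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(dem_gcps, pixels, K, None)
R, _ = cv2.Rodrigues(rvec)        # 3x3 rotation matrix
camera_pos = -R.T @ tvec          # camera centre in DEM coordinates
print(ok, camera_pos.ravel())
```

With the pose in hand, each remaining pixel can be back-projected as a ray and intersected with the DEM surface to obtain its real-world coordinates.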


Deferred Neural Rendering: Image Synthesis using Neural Textures

Apr 28, 2019
Justus Thies, Michael Zollhöfer, Matthias Nießner

The modern computer graphics pipeline can synthesize images at remarkable visual quality; however, it requires well-defined, high-quality 3D content as input. In this work, we explore the use of imperfect 3D content, for instance, obtained from photo-metric reconstructions with noisy and incomplete surface geometry, while still aiming to produce photo-realistic (re-)renderings. To address this challenging problem, we introduce Deferred Neural Rendering, a new paradigm for image synthesis that combines the traditional graphics pipeline with learnable components. Specifically, we propose Neural Textures, which are learned feature maps that are trained as part of the scene capture process. Similar to traditional textures, neural textures are stored as maps on top of 3D mesh proxies; however, the high-dimensional feature maps contain significantly more information, which can be interpreted by our new deferred neural rendering pipeline. Both neural textures and deferred neural renderer are trained end-to-end, enabling us to synthesize photo-realistic images even when the original 3D content was imperfect. In contrast to traditional, black-box 2D generative neural networks, our 3D representation gives us explicit control over the generated output, and allows for a wide range of application domains. For instance, we can synthesize temporally-consistent video re-renderings of recorded 3D scenes as our representation is inherently embedded in 3D space. This way, neural textures can be utilized to coherently re-render or manipulate existing video content in both static and dynamic environments at real-time rates. We show the effectiveness of our approach in several experiments on novel view synthesis, scene editing, and facial reenactment, and compare to state-of-the-art approaches that leverage the standard graphics pipeline as well as conventional generative neural networks.
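
As a rough illustration of the core idea (a simplification, not the authors' pipeline), the sketch below stores a learnable feature texture, samples it at the UV coordinates produced by rasterizing the mesh proxy, and decodes the sampled features with a small network; the channel count and decoder are our arbitrary choices.

```python
# Minimal sketch of the neural-texture idea: a learnable feature map is
# sampled at rasterized UV coordinates, then decoded to RGB.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralTexture(nn.Module):
    def __init__(self, channels=16, size=512):
        super().__init__()
        # High-dimensional learnable texture stored on the mesh proxy.
        self.texture = nn.Parameter(torch.randn(1, channels, size, size) * 0.01)
        # Tiny deferred renderer; the paper uses a far richer network.
        self.decoder = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, uv):
        # uv: (1, H, W, 2) rasterized texture coordinates in [-1, 1].
        feats = F.grid_sample(self.texture, uv, align_corners=True)
        return self.decoder(feats)

uv = torch.rand(1, 256, 256, 2) * 2 - 1   # stand-in for a rasterized UV map
model = NeuralTexture()
rgb = model(uv)                            # (1, 3, 256, 256)
print(rgb.shape)
```

Because both the texture and the decoder are parameters of one module, the end-to-end training described in the abstract falls out naturally.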

* Video: SIGGRAPH 2019 

Good Artists Copy, Great Artists Steal: Model Extraction Attacks Against Image Translation Generative Adversarial Networks

Apr 26, 2021
Sebastian Szyller, Vasisht Duddu, Tommi Gröndahl, N. Asokan

Machine learning models are typically made available to potential client users via inference APIs. Model extraction attacks occur when a malicious client uses information gleaned from queries to the inference API of a victim model $F_V$ to build a surrogate model $F_A$ with comparable functionality. Recent research has shown successful model extraction attacks against image classification and NLP models. In this paper, we show the first model extraction attack against real-world generative adversarial network (GAN) image translation models. We present a framework for conducting model extraction attacks against image translation models and show that the adversary can successfully extract functional surrogate models. The adversary is not required to know $F_V$'s architecture or any other information about it beyond its intended image translation task, and queries $F_V$'s inference interface using data drawn from the same domain as $F_V$'s training data. We evaluate the effectiveness of our attacks using three different instances of two popular categories of image translation: (1) Selfie-to-Anime and (2) Monet-to-Photo (image style transfer), and (3) Super-Resolution. Using standard performance metrics for GANs, we show that our attacks are effective in each of the three cases; the differences between $F_V$ and $F_A$, relative to the target, fall in the following ranges: Selfie-to-Anime: FID $13.36-68.66$, Monet-to-Photo: FID $3.57-4.40$, and Super-Resolution: SSIM $0.06-0.08$ and PSNR $1.43-4.46$. Furthermore, we conducted a large-scale (125-participant) user study on Selfie-to-Anime and Monet-to-Photo showing that human perception of the images produced by the victim and surrogate models can be considered equivalent, within an equivalence bound of Cohen's $d=0.3$.
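
The generic extraction loop this framework implies can be sketched as follows; the victim stub, the surrogate architecture, and the plain L1 objective here are our placeholders, since a real attack would train a full image-translation GAN on the collected query-response pairs.

```python
# Hedged sketch of a model extraction loop against a black-box translation API.
import torch
import torch.nn as nn

def victim_api(x):
    # Stand-in for the black-box inference API of the victim model F_V.
    return x.flip(-1)  # placeholder "translation"

surrogate = nn.Sequential(               # placeholder F_A; a real attack
    nn.Conv2d(3, 32, 3, padding=1),      # would use a full image-translation
    nn.ReLU(),                           # GAN (generator + discriminator).
    nn.Conv2d(32, 3, 3, padding=1),
)
opt = torch.optim.Adam(surrogate.parameters(), lr=2e-4)
l1 = nn.L1Loss()

for step in range(100):
    x = torch.rand(8, 3, 64, 64)         # queries drawn from F_V's input domain
    with torch.no_grad():
        y = victim_api(x)                # outputs gleaned from the inference API
    loss = l1(surrogate(x), y)           # a real attack adds an adversarial term
    opt.zero_grad(); loss.backward(); opt.step()
```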

* 9 pages, 7 figures 

Robust Image Captioning

Dec 06, 2020
Daniel Yarnell, Xian Wang

Automated photo captioning is a task that combines the challenges of image analysis and text generation. One essential aspect of captioning is attention: deciding what to describe and in what order. In this study, we leverage Object Relation with an adversarial robust-cut algorithm, which builds on this approach by explicitly embedding knowledge about the spatial associations between input elements through a graph representation. Our experiments demonstrate the promising performance of the proposed method for image captioning.
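
One plausible way to embed spatial associations between detected objects in a graph, offered as our sketch rather than the paper's exact algorithm, is the relative-geometry edge feature common in object-relation modules for captioning:

```python
# Illustrative spatial-relation graph over detected objects: nodes are
# bounding boxes, edges carry simple relative-geometry features.
import numpy as np

def box_relations(boxes):
    """boxes: (N, 4) array of [x1, y1, x2, y2]; returns (N, N, 4) edge features."""
    boxes = np.asarray(boxes, dtype=np.float64)
    cx = (boxes[:, 0] + boxes[:, 2]) / 2
    cy = (boxes[:, 1] + boxes[:, 3]) / 2
    w = boxes[:, 2] - boxes[:, 0]
    h = boxes[:, 3] - boxes[:, 1]
    rel = np.zeros((len(boxes), len(boxes), 4))
    for i in range(len(boxes)):
        # Normalized centre offsets and log size ratios between box i
        # and every other box.
        rel[i, :, 0] = (cx - cx[i]) / w[i]
        rel[i, :, 1] = (cy - cy[i]) / h[i]
        rel[i, :, 2] = np.log(w / w[i])
        rel[i, :, 3] = np.log(h / h[i])
    return rel

edges = box_relations([[10, 20, 110, 220], [150, 40, 260, 180], [30, 200, 90, 300]])
print(edges.shape)  # (3, 3, 4)
```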


Aesthetics and neural network image representations

Sep 16, 2021
Romuald A. Janik

We analyze the spaces of images encoded by generative networks of the BigGAN architecture. We find that generic multiplicative perturbations away from the photo-realistic point often lead to images which appear as "artistic renditions" of the corresponding objects. This demonstrates an emergence of aesthetic properties directly from the structure of the photo-realistic environment coupled with its neural network parametrization. Moreover, modifying a deep semantic part of the neural network encoding leads to the appearance of symbolic visual representations.
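
A minimal sketch of what a generic multiplicative perturbation might look like, assuming the pytorch_pretrained_biggan package and with the perturbed layer and noise scale chosen arbitrarily by us:

```python
# Hedged sketch: multiplicatively perturb one layer of a pretrained BigGAN
# away from the photo-realistic point; layer choice and scale are arbitrary.
import torch
from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names,
                                       truncated_noise_sample)

model = BigGAN.from_pretrained('biggan-deep-256')
noise = torch.from_numpy(truncated_noise_sample(truncation=0.4, batch_size=1))
label = torch.from_numpy(one_hot_from_names(['golden retriever'], batch_size=1))

with torch.no_grad():
    baseline = model(noise, label, 0.4)          # photo-realistic sample
    for name, p in model.named_parameters():
        if 'conv' in name and p.dim() == 4:
            # Multiplicative noise on a single convolutional layer's weights.
            p.mul_(1.0 + 0.3 * torch.randn_like(p))
            break
    perturbed = model(noise, label, 0.4)         # often an "artistic rendition"
```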

* 10 pages, 4 figures 

A Novel Method for Vectorization

Mar 04, 2014
Tolga Birdal, Emrah Bala

Vectorization of images is a key concern uniting the computer graphics and computer vision communities. In this paper we present a novel idea for efficient, customizable vectorization of raster images based on Catmull-Rom spline fitting. The algorithm maintains a good balance between photo-realism and photo abstraction, and is therefore suitable both for applications with artistic concerns and for applications where minimal information loss is crucial. The resulting algorithm is fast, parallelizable, and can satisfy soft real-time requirements. Moreover, the smoothness of the vectorized images aesthetically outperforms the output of many polygon-based methods.
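
The building block being fitted is the standard uniform Catmull-Rom segment; the evaluation below is textbook, while the paper's contour extraction and fitting strategy are not reproduced here.

```python
# Standard uniform Catmull-Rom segment evaluation.
import numpy as np

def catmull_rom(p0, p1, p2, p3, n=20):
    """Interpolate between p1 and p2, with p0 and p3 shaping the tangents."""
    p0, p1, p2, p3 = (np.asarray(p, dtype=np.float64) for p in (p0, p1, p2, p3))
    t = np.linspace(0.0, 1.0, n)[:, None]
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t**2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t**3)

# A smooth curve through four raster contour points.
curve = catmull_rom([0, 0], [1, 2], [3, 3], [4, 1])
print(curve[0], curve[-1])   # starts at p1=[1, 2], ends at p2=[3, 3]
```

Because the spline interpolates its control points exactly, chaining segments along a traced contour yields a smooth curve through every sample, which is what gives the vectorized output its smoothness advantage over polygonal approximations.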

* Prepared in Siggraph format, not published in a conference, 7 pages, 9 figures 

Let's Talk! Striking Up Conversations via Conversational Visual Question Generation

May 19, 2022
Shih-Han Chan, Tsai-Lun Yang, Yun-Wei Chu, Chi-Yang Hsu, Ting-Hao Huang, Yu-Shian Chiu, Lun-Wei Ku

An engaging and provocative question can open up a great conversation. In this work, we explore a novel scenario: a conversational agent views a set of the user's photos (for example, from social media platforms) and asks an engaging question to initiate a conversation with the user. Existing vision-to-question models mostly generate tedious and obvious questions, which might not be ideal conversation starters. This paper introduces a two-phase framework that first generates a visual story for the photo set and then uses the story to produce an interesting question. Human evaluation shows that our framework generates more response-provoking questions for starting conversations than other vision-to-question baselines.
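
Structurally, the two-phase framework reduces to a simple composition; in the sketch below both phases are stubs, since the paper's actual storytelling and question-generation models are not reproduced here.

```python
# Abstract sketch of the two-phase pipeline; both models are stand-ins.
def generate_visual_story(photos):
    # Phase 1 stand-in: a visual-storytelling model maps the photo set
    # to a short narrative.
    return "We hiked all day and reached the summit at sunset."

def story_to_question(story):
    # Phase 2 stand-in: a question-generation model turns the story
    # into an engaging conversation starter.
    return "What was going through your mind when you reached the top?"

photos = ["photo1.jpg", "photo2.jpg", "photo3.jpg"]   # hypothetical photo set
print(story_to_question(generate_visual_story(photos)))
```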

* Accepted as a full talk paper at AAAI-DEEPDIAL'21 

Understanding Aesthetics in Photography using Deep Convolutional Neural Networks

Aug 08, 2017
Maciej Suchecki, Tomasz Trzcinski

Evaluating the aesthetic value of digital photographs is a challenging task, mainly due to the numerous factors that need to be taken into account and the subjective nature of the process. In this paper, we propose to approach this problem using deep convolutional neural networks. Using a dataset of over 1.7 million photos collected from Flickr, we train and evaluate a deep learning model whose goal is to classify input images by analysing their aesthetic value. The result of this work is a publicly available Web-based application that can be used in several real-life settings, e.g. to improve the workflow of professional photographers by pre-selecting the best photos.
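
A minimal sketch of the modelling step, assuming a pretrained CNN fine-tuned with a binary aesthetic head; the architecture, optimizer, and stand-in data are our choices, not necessarily the paper's.

```python
# Hedged sketch: fine-tune a pretrained CNN as a binary aesthetic classifier.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # aesthetic vs. not

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Stand-in batch; in practice, images and labels come from the Flickr dataset.
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
loss = criterion(model(images), labels)
opt.zero_grad(); loss.backward(); opt.step()
```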


High Diversity Attribute Guided Face Generation with GANs

Jun 28, 2018
Evgeny Izutov

In this work we focus on a GAN-based solution for attribute-guided face synthesis. Previous works exploited GANs for the generation of photo-realistic face images but paid little attention to the diversity of the resulting images. The proposed solution, which introduces a novel latent space of unit complex numbers, is able to provide diversity, as measured by the "birthday paradox" test, three times higher than the size of the training dataset. It is important to emphasize that our result is achieved on a relatively small dataset (20k samples vs. 200k) while preserving the photo-realistic properties of the generated faces at a significantly higher resolution (128x128, compared to 32x32 in previous works).
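
One way such a latent space might be parameterized, offered as our guess rather than the paper's exact construction, is to sample each complex latent coordinate as a phase angle on the unit circle:

```python
# Guessed parameterization of a unit-complex latent space.
import numpy as np

def sample_unit_complex_latent(batch_size, dim, seed=0):
    rng = np.random.default_rng(seed)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(batch_size, dim))
    z = np.exp(1j * phases)               # |z_k| = 1 for every component
    # Real-valued view for feeding a standard generator: (batch, 2*dim).
    return np.concatenate([z.real, z.imag], axis=1)

z = sample_unit_complex_latent(4, 64)
print(z.shape)                                         # (4, 128)
print(np.allclose(z[:, :64]**2 + z[:, 64:]**2, 1.0))   # unit modulus
```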


From Culture to Clothing: Discovering the World Events Behind A Century of Fashion Images

Feb 02, 2021
Wei-Lin Hsiao, Kristen Grauman

Fashion is intertwined with external cultural factors, but identifying these links remains a manual process limited to only the most salient phenomena. We propose a data-driven approach to identify specific cultural factors affecting the clothes people wear. Using large-scale datasets of news articles and vintage photos spanning a century, we introduce a multi-modal statistical model to detect influence relationships between happenings in the world and people's choice of clothing. Furthermore, we apply our model to improve the concrete vision tasks of visual style forecasting and photo timestamping on two datasets. Our work is a first step towards a computational, scalable, and easily refreshable approach to link culture to clothing.
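
As a toy stand-in for the influence-detection idea (the paper's multi-modal statistical model is far richer), one can test whether a news-topic time series leads a clothing-style frequency series via lagged correlation:

```python
# Toy illustration only: does a news-topic series lead a style series?
import numpy as np

rng = np.random.default_rng(1)
years = 100
topic = rng.normal(size=years)                 # e.g. yearly coverage of a topic
style = np.roll(topic, 2) + 0.5 * rng.normal(size=years)  # style lags ~2 years

def lagged_corr(x, y, lag):
    """Correlation between x[t] and y[t + lag]."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

for lag in range(5):
    print(lag, round(lagged_corr(topic, style, lag), 2))  # peaks near lag 2
```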

* Technical report 